WO2017152848A1 - Method and apparatus for editing a facial model of a person - Google Patents

Method and apparatus for editing a facial model of a person

Info

Publication number
WO2017152848A1
WO2017152848A1 (PCT/CN2017/076029)
Authority
WO
WIPO (PCT)
Prior art keywords
face
operated
editing
model
person
Prior art date
Application number
PCT/CN2017/076029
Other languages
English (en)
French (fr)
Inventor
李小猛
王强
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to JP2018545644A, patent JP6661780B2 (ja)
Priority to KR1020187025694A, patent KR102089473B1 (ko)
Publication of WO2017152848A1 (zh)
Priority to US16/111,922, patent US10628984B2 (en)

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04812 Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/22 Cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2012 Colour editing, changing, or manipulating; Use of colour codes

Definitions

  • the present application relates to the field of computers, and in particular to a method and apparatus for editing a facial model of a person.
  • the embodiments of the present application provide a method and an apparatus for editing a facial model of a person, so as to at least solve the technical problem of the high complexity of editing operations caused by existing facial-model editing methods.
  • according to one aspect, a method for editing a facial model of a person is provided, including: detecting the position of a cursor in the displayed facial model of a person, where the displayed facial model includes multiple face parts; determining, according to the position, the face part to be operated among the multiple face parts; detecting a selection operation on the face part to be operated; editing the face part to be operated in response to an acquired editing operation on it; and
  • displaying the edited face part to be operated in the facial model of the person.
  • according to another aspect, an apparatus for editing a facial model of a person is provided, including:
  • a first detecting unit, configured to detect the position of the cursor in the displayed facial model of a person, where the displayed facial model includes multiple face parts;
  • a determining unit, configured to determine, according to the position, the face part to be operated among the multiple face parts;
  • a second detecting unit, configured to detect a selection operation on the face part to be operated;
  • an editing unit, configured to edit the face part to be operated in response to an acquired editing operation on the face part to be operated; and
  • a first display unit, configured to display the edited face part to be operated in the facial model.
  • in the embodiments of the present application, the face part to be operated among the multiple face parts of the facial model is determined according to the position of the cursor detected in the facial model displayed by the terminal; after a selection operation on the part is detected, the part is edited in response to an acquired editing operation, so that the edited part is displayed in the facial model.
  • that is, by detecting the position of the cursor, the selected face part among the multiple face parts is determined, so that editing can be completed directly on that part without dragging a slider corresponding to it in a separate control list; the user can pick and edit face parts directly on the facial model, which simplifies the editing of the facial model and overcomes the high operational complexity caused by switching between the facial model and the sliders in a control list.
  • further, by editing face parts directly on the facial model, the user can intuitively see the editing operation displayed by the terminal and the resulting change to the edited part, achieving a WYSIWYG effect and bringing the editing operation closer to user needs.
  • FIG. 1 is a schematic diagram of an application environment of a method for editing a face model of a person according to an embodiment of the present application
  • FIG. 2 is a flowchart of a method for editing a character face model according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of application of an editing method of a character face model according to an embodiment of the present application
  • FIG. 4 is a schematic diagram of application of another method for editing a character face model according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of application of another method for editing a character face model according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of application of still another method for editing a character face model according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an editing apparatus for a face model of a person according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a hardware structure of an editing terminal of a character face model according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of application of another method for editing a character face model according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of application of another method for editing a character face model according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of an application of a method for editing a character face model according to an embodiment of the present application.
  • an embodiment of a method for editing a character face model is provided.
  • the editing method of the character face model may be, but is not limited to, applied to an application environment as shown in FIG. 1 .
  • as shown in FIG. 1, after acquiring the facial model of a person from the server 106 through the network 104, the terminal 102 detects the position of the cursor in the facial model displayed by the terminal 102, where the displayed facial model includes multiple face parts; determines, according to the position, the face part to be operated among the multiple face parts; and, after detecting a selection operation on that part, edits it in response to an acquired editing operation, so that
  • the edited face part to be operated is displayed in the facial model.
  • the foregoing terminal may include, but is not limited to, at least one of the following: a mobile phone, a tablet computer, a notebook computer, and a PC.
  • according to an embodiment of the present application, a method for editing a facial model of a person is provided. As shown in FIG. 2, the method includes: S202, detecting the position of the cursor in the displayed facial model, where the displayed facial model includes multiple face parts; S204, determining, according to the position, the face part to be operated among the multiple face parts; S206, detecting a selection operation on the face part to be operated; S208, editing the face part to be operated in response to an acquired editing operation on it; and S210, displaying the edited face part in the facial model.
  • the editing method of the above facial model may be, but is not limited to being, applied to the character creation process in a terminal application, to edit the corresponding facial model for a character.
  • for example, taking a game application as an example, when creating a character for a player, the face part to be operated can be determined by detecting the position of the cursor among the multiple face parts of the displayed facial model; after a selection operation on that part is detected, the corresponding edit is performed in response to the acquired editing operation.
  • FIG. 3 is a schematic diagram of an application of a method for editing a face model of a person according to an embodiment of the present application.
  • as shown in FIG. 3, the eyebrow part of the facial model is selected by the position of the cursor (the dotted box on the left side of FIG. 3); after an editing operation on the part (such as a rotation) is acquired, the eyebrow is edited, and the edited eyebrow is displayed in the facial model (the dotted box on the right side of FIG. 3).
  • this simplifies the editing of the character's facial model and quickly and accurately displays the edited character to the player.
  • the above is merely an example; the editing method described in this embodiment is applicable to any type of terminal application.
  • in this embodiment, the face part to be operated is edited in response to the acquired editing operation, so that the edited part is displayed in the facial model. That is, by detecting the position of the cursor, the selected face part among the multiple face parts is determined, so that editing can be completed directly on that part without dragging a slider corresponding to it in a separate control list.
  • the user can thus pick and edit face parts directly on the facial model, which simplifies the editing operation and overcomes the high operational complexity caused by switching between the facial model and the sliders in a control list.
  • further, by editing face parts directly on the facial model, the user can intuitively see the editing operation displayed by the terminal and the resulting change to the edited part, achieving WYSIWYG and bringing the editing operation closer to user needs.
  • in this embodiment, the location of the cursor may be, but is not limited to, used to indicate the corresponding position of the mouse displayed on the screen.
  • when a selection operation on the face part to be operated is detected, the part may be, but is not limited to being, displayed in a special manner, for example highlighted, or shown with a shadow.
  • the editing may include at least one of the following: moving the face part to be operated; rotating the face part to be operated; enlarging the face part to be operated; and reducing the face part to be operated.
  • determining, according to the location of the cursor, the face part to be operated among the multiple face parts includes: acquiring the color value of the pixel at the location; and
  • determining the face part, among the multiple face parts, that corresponds to the color value.
  • acquiring the color value of the pixel at the location may include, but is not limited to, acquiring the color value of the pixel corresponding to the location in a mask map, where the mask map is fitted over the facial model.
  • the mask map includes multiple mask regions in one-to-one correspondence with the multiple face parts, each mask region corresponding to one face part; the color value of a pixel may include one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
  • each mask region on the mask map fitted over the facial model corresponds to one face part of the model. That is, by selecting a mask region on the mask map with the cursor, the corresponding face part of the facial model is selected, so that the part can be edited directly on the model, simplifying the editing operation.
  • for example, the mask map can be constructed, but is not limited to being constructed, by the following HLSL code:

float4 maskColor = tex2D(FacialMaskSampler, param.uv0);
float maskR = maskColor.r * 255.0f;
float alpha = 0;
if (IsValEqual(SelectedAreaColor, maskR))
{
    alpha += FacialMaskParams.y;
}
float4 finalColor = float4(0, 2, 5, alpha * 2.5f * maskColor.b);

  • where maskColor.r is used to indicate the red channel and maskColor.b is used to indicate the blue channel.
  • the mapping relationship between each mask region in the mask map and the color value of the corresponding pixels may be, but is not limited to being, set in advance.
  • for example, in the facial model shown in Table 1, the nose includes six parts, and a red color value (denoted the R color value) is set for each part.
  • to avoid errors, the values may, but are not limited to, differ from one another by at least 10 units.
  • determining the face part corresponding to the color value among the multiple face parts may include, but is not limited to: after acquiring the color value of the pixel corresponding to the location in the mask map, obtaining the mask region corresponding to that color value by looking up the mapping relationship, and then obtaining the face part to be operated corresponding to that mask region, as sketched below.
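  • a minimal C++ sketch of that lookup, assuming hypothetical table values (the real R values come from the mask-map design, spaced at least 10 units apart):

#include <cstdlib>
#include <string>
#include <vector>

struct MaskRegion {
    int r;             // red color value assigned to this mask region
    std::string part;  // face part bound to the region
};

// Hypothetical entries modeled on Table 1; one entry per mask region.
static const std::vector<MaskRegion> kRegions = {
    {200, "nose bridge"}, {210, "nose tip"}, {220, "nose wing"},
};

// Tolerate small sampling error: accept the entry within +/-5 units.
const MaskRegion* FindPartByRed(int sampledR) {
    for (const MaskRegion& region : kRegions)
        if (std::abs(region.r - sampledR) <= 5)
            return &region;
    return nullptr;  // cursor is not over any mask region
}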
  • in this embodiment, the position of the cursor in the displayed facial model may be detected, but is not limited to being detected, using the pixel picking technique for face picking.
  • pixel picking is a picking technique at the granularity of model objects; it is a method of detecting which virtual object on the display screen is selected or clicked by the cursor and interacting with that object.
  • the virtual object may be, but is not limited to, a face part of the corresponding facial model.
  • an identification (ID) is set for each virtual object (that is, each face part); all pickable virtual objects are then drawn onto one render target, and the IDs are passed in through a constant register.
  • finally, GetRenderTarget() (a function in Direct3D) is used to obtain the ID of the part at the current cursor position.
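  • a hedged Direct3D 9 C++ sketch of reading the picked pixel back from such a render target (GetRenderTarget() retrieves the surface; reading it on the CPU goes through a system-memory copy); the API calls are standard D3D9, while the surrounding structure is an assumption for illustration:

#include <d3d9.h>

// Read the 32-bit pixel value under the cursor from the pick render target.
// Returns 0 on failure; the part ID/color is encoded in the returned value.
DWORD ReadPickPixel(IDirect3DDevice9* device, IDirect3DSurface9* pickRT,
                    int cursorX, int cursorY) {
    D3DSURFACE_DESC desc;
    pickRT->GetDesc(&desc);

    // Copy the GPU render target into a CPU-readable system-memory surface.
    IDirect3DSurface9* sysmem = nullptr;
    if (FAILED(device->CreateOffscreenPlainSurface(
            desc.Width, desc.Height, desc.Format, D3DPOOL_SYSTEMMEM,
            &sysmem, nullptr)))
        return 0;
    if (FAILED(device->GetRenderTargetData(pickRT, sysmem))) {
        sysmem->Release();
        return 0;
    }

    // Lock only the pixel under the cursor and read its value.
    RECT px = { cursorX, cursorY, cursorX + 1, cursorY + 1 };
    D3DLOCKED_RECT lr;
    DWORD value = 0;
    if (SUCCEEDED(sysmem->LockRect(&lr, &px, D3DLOCK_READONLY))) {
        value = *reinterpret_cast<const DWORD*>(lr.pBits);
        sysmem->UnlockRect();
    }
    sysmem->Release();
    return value;  // e.g. for X8R8G8B8, red channel = (value >> 16) & 0xff
}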
  • before detecting the position of the cursor in the displayed facial model, the method may further include: displaying the facial model of the person and the generated mask map, where the mask map is set to fit over the facial model.
  • the process of obtaining a facial model of a person can be as follows:
  • the face picking system is in fact a picking method that is sensitive to texture color; the core HLSL code is:

result.color0 = tex2D(FacialMaskSampler, param.uv0);
result.color0.gb = param.uv0.xy;
float depth = EncodeFloatToRGBA8(param.homoDepth.x / param.homoDepth.y);
result.color1 = float4(1.0f, 1.0f, 0, depth); // magic code marking for facial mask map

  • a verification code is set for each pixel on the screen, and the code is used to verify whether the pixel belongs to the facial model. Only pixels that match the verification code (equal to (1.0, 1.0, 0)) are processed as part of the facial model:

EncodingColorData(queryData, maskCode);
if (maskCode.r == 0xff &&
    maskCode.g == 0xff &&
    maskCode.b == 0)
{
    mDepthSelted = maskCode.a;
    mIsPickingVaild = true;
}
  • the pixel selected at the user's cursor position is calculated from the position of the cursor on the screen and the current screen resolution.
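  • a minimal sketch of that calculation, assuming the pick render target and the screen may differ in resolution (the struct and names are illustrative):

struct Point { int x, y; };

// Map window-client cursor coordinates to pick render-target coordinates.
Point CursorToRenderTarget(Point cursor, int screenW, int screenH,
                           int rtW, int rtH) {
    Point p;
    p.x = cursor.x * rtW / screenW;  // scale by the horizontal resolution ratio
    p.y = cursor.y * rtH / screenH;  // scale by the vertical resolution ratio
    return p;
}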
  • in the embodiment provided by the present application, the face part to be operated among the multiple face parts of the facial model is determined according to the position of the cursor detected in the facial model displayed by the terminal; after a selection operation on the part is detected,
  • the part is edited in response to the acquired editing operation, so that the edited part is displayed in the facial model. That is, by detecting the position of the cursor, the selected face part is determined, so that editing can be completed directly on that part without dragging a slider in a separate control list;
  • the user can pick and edit face parts directly on the facial model, which simplifies the editing operation and overcomes the high operational complexity caused by switching between the facial model and the sliders in a control list.
  • in the embodiments of the present application, determining, according to the location, the face part to be operated among the multiple face parts includes: acquiring the color value of the pixel at the location; and
  • determining the face part, among the multiple face parts, that corresponds to the color value.
  • the color value of the pixel may be, but is not limited to, the color value of the pixel corresponding to the cursor location in the mask map.
  • the mask map is fitted over the facial model and includes multiple mask regions in one-to-one correspondence with the multiple face parts, each mask region corresponding to one face part.
  • the color values of the pixel corresponding to the cursor position in the mask map may include, but are not limited to, the color value of the red channel and the color value of the blue channel.
  • the color value of a pixel may include, but is not limited to, one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
  • the character face formed by the color values of the red channel can be as shown in FIG. 4 (the image should be displayed in red); the different shadings indicate different degrees of red: the left-diagonal filled areas are brighter than the dotted areas,
  • and the dotted areas are brighter than the horizontal-diagonal filled areas.
  • the character face formed by the color values of the blue channel can be as shown in FIG. 5 (the image should be displayed in blue).
  • the mapping relationship between each mask region in the mask map and the color value of the corresponding pixels may be, but is not limited to being, set in advance.
  • in this way, the mask region corresponding to the color value of the pixel at the cursor position is determined, and thereby the face part to be operated among the multiple face parts.
  • for example, as shown in Table 1, when the R color value of the pixel at the cursor position is 200, the corresponding mask region can be determined by looking up the preset mapping relationship, and the face part corresponding to that mask region is then obtained:
  • the face part to be operated is the "nose bridge".
  • in the embodiment provided by the present application, the face part corresponding to the color value among the multiple face parts is determined from the acquired color value of the pixel at the cursor position. That is, the face part to be operated is determined by the color value of the pixel under the cursor, so that face parts in the facial model can be edited directly, simplifying the editing operation.
  • in the embodiments of the present application, acquiring the color value of the pixel at the location includes:
  • acquiring the color value of the pixel corresponding to the location in the mask map, where the mask map is fitted over the facial model and includes multiple mask regions in one-to-one correspondence with the multiple face parts, each mask region corresponding to one face part;
  • where the color value of the pixel includes one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
  • specifically, according to human anatomy, the muscles that the 48 bones can affect are classified to obtain a muscle part control list, and a red color value (denoted the R color value) is set for each part; to avoid errors, the values differ from one another by at least 10 units.
  • further, according to the distribution of these parts on the face, the mask map corresponding to the facial model can be obtained from the color values of the pixels corresponding to the parts, as shown in Table 2 (selected parts):
  • that is, from the color values of the pixels in the above mapping relationship, a mask map corresponding to the facial model can be drawn; the mask map is fitted over the facial model and includes multiple mask regions corresponding one-to-one to the face parts.
  • in the embodiment provided by the present application, the color value of the corresponding pixel is obtained through the mask map fitted over the facial model, so that the color value of the pixel at the cursor position is obtained accurately and the corresponding face part to be operated can be acquired from that color value.
  • in the embodiments of the present application, before detecting the position of the cursor in the displayed facial model, the method further includes:
  • displaying the facial model and the generated mask map, where the mask map is set to fit over the facial model.
  • for example, before the position of the cursor is detected, the facial model and the generated mask map, with the effect shown in FIG. 6, are displayed on the terminal screen, the mask map being fitted over the facial model.
  • in the embodiment provided by the present application, the combined image of the facial model and the generated mask map is displayed in advance, so that when the cursor position is detected, the corresponding location can be obtained directly through the mask map, and the face part to be operated among the multiple face parts can be acquired accurately, improving editing efficiency.
  • in the embodiments of the present application, when the selection operation on the face part to be operated is detected, the method further includes:
  • highlighting the face part to be operated in the facial model.
  • when the selection operation is detected, the part may be, but is not limited to being, displayed in a special manner, for example highlighted, or shown with a shadow.
  • by highlighting the face part to be operated, the user can intuitively see the editing operation performed on the part and the resulting change, achieving WYSIWYG and bringing the editing operation closer to user needs.
  • editing the face part to be operated in response to the acquired editing operation includes at least one of the following: moving the face part to be operated; rotating the face part to be operated; enlarging the face part to be operated; and reducing the face part to be operated.
  • the operation mode for implementing the above editing may be, but is not limited to, at least one of the following: clicking and dragging. That is, through combinations of these operation modes, at least one of the following edits can be applied to the face part to be operated: moving, rotating, enlarging, and reducing.
  • for example, as shown in FIG. 3, by clicking to select the part to be operated and performing edits such as rotation, reduction, and movement, the editing process from the left side to the right side of FIG. 3 can be realized; a minimal sketch of dispatching such input to the edit operations is given below.
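  • the sketch assumes a simple transform per face part and illustrative drag deltas; an actual engine would route input through its own event system:

enum class EditOp { Move, Rotate, Enlarge, Reduce };

struct FacePartTransform {
    float x = 0, y = 0;  // position offset of the part on the face model
    float angle = 0;     // rotation in degrees
    float scale = 1.0f;  // uniform scale factor
};

// Apply one increment of the selected edit, driven by drag deltas (dx, dy).
void ApplyEdit(FacePartTransform& part, EditOp op, float dx, float dy) {
    switch (op) {
        case EditOp::Move:    part.x += dx; part.y += dy; break;
        case EditOp::Rotate:  part.angle += dx; break;    // horizontal drag rotates
        case EditOp::Enlarge: part.scale *= 1.05f; break; // one zoom-in step per click
        case EditOp::Reduce:  part.scale /= 1.05f; break; // one zoom-out step per click
    }
}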
  • through the description of the above implementations, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is the better implementation.
  • based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk,
  • or an optical disc), including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
  • according to an embodiment of the present application, an apparatus for editing a facial model of a person, configured to implement the above editing method, is also provided. As shown in FIG. 7,
  • the apparatus includes:
  • a first detecting unit 702, configured to detect the position of the cursor in the displayed facial model, where the displayed facial model includes multiple face parts;
  • a determining unit 704, configured to determine, according to the position, the face part to be operated among the multiple face parts;
  • a second detecting unit 706, configured to detect a selection operation on the face part to be operated;
  • an editing unit 708, configured to edit the face part to be operated in response to an acquired editing operation on the face part to be operated; and
  • a first display unit 710, configured to display the edited face part to be operated in the facial model.
  • the editing apparatus may be, but is not limited to being, applied to the character creation process in a terminal application, to edit the corresponding facial model for a character.
  • for example, the face part to be operated may be determined by detecting the position of the cursor among the multiple face parts of the displayed facial model; after a selection operation on that part is detected, the corresponding edit is performed in response to the acquired editing operation.
  • as shown in FIG. 3, the eyebrow part of the facial model is selected by the position of the cursor (the dotted box on the left side of FIG. 3); after an editing operation on the part (such as a rotation) is acquired, the eyebrow is edited, and the edited eyebrow is displayed in the facial model (the dotted box on the right side of FIG. 3).
  • this simplifies the editing of the character's facial model and quickly and accurately displays the edited character to the player.
  • in this embodiment, the face part to be operated is edited in response to the acquired editing operation, so that the edited part is displayed in the facial model. That is, by detecting the position of the cursor, the selected face part among the multiple face parts is determined, so that editing can be completed directly on that part without dragging a slider corresponding to it in a separate control list.
  • the user can pick and edit face parts directly on the facial model, which simplifies the editing operation and overcomes the high operational complexity caused by switching between the facial model and the sliders in a control list. Further, by editing face parts directly on the facial model, the user can intuitively see the editing operation displayed by the terminal and the resulting change to the edited part, achieving WYSIWYG, bringing the editing operation closer to user needs, and improving the user experience.
  • the location of the cursor may be, but is not limited to, used to indicate the corresponding position of the mouse displayed on the screen.
  • when a selection operation on the face part to be operated is detected, the part may be, but is not limited to being, displayed in a special manner, for example highlighted, or shown with a shadow.
  • the editing may include at least one of the following: moving the face part to be operated; rotating the face part to be operated; enlarging the face part to be operated; and reducing the face part to be operated.
  • determining, according to the location of the cursor, the face part to be operated among the multiple face parts includes: acquiring the color value of the pixel at the location; and
  • determining the face part, among the multiple face parts, that corresponds to the color value.
  • acquiring the color value of the pixel at the location may include, but is not limited to, acquiring the color value of the pixel corresponding to the location in the mask map, where the mask map is fitted over the facial model.
  • the mask map includes multiple mask regions in one-to-one correspondence with the multiple face parts, each mask region corresponding to one face part; the color value of a pixel may include one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
  • each mask region on the mask map fitted over the facial model corresponds to one face part of the model. That is, by selecting a mask region on the mask map with the cursor,
  • the corresponding face part of the facial model is selected, so that the part can be edited directly on the model, simplifying the editing operation.
  • the mapping relationship between each mask region in the mask map and the color value of the corresponding pixels may be, but is not limited to being, set in advance.
  • for example, the nose includes six parts, and a red color value (denoted the R color value) is set for each part.
  • to avoid errors, the values may, but are not limited to, differ from one another by at least 10 units.
  • determining the face part corresponding to the color value among the multiple face parts may include, but is not limited to: after acquiring the RGB color value of the pixel corresponding to the location in the mask map, obtaining the mask region corresponding to that color value, and then the face part to be operated corresponding to that mask region.
  • the position of the cursor in the displayed facial model may be detected, but is not limited to being detected, using the pixel picking technique for face picking.
  • pixel picking is a picking technique at the granularity of model objects; it is a method of detecting which virtual object on the display screen is selected or clicked by the cursor and interacting with that object.
  • the virtual object may be, but is not limited to, a face part of the corresponding facial model.
  • an identification (ID) is set for each virtual object (that is, each face part); all pickable virtual objects are then drawn onto one render target, and the IDs are passed in through a constant register.
  • finally, GetRenderTarget() (a function in Direct3D) is used to obtain the ID of the part at the current cursor position.
  • before detecting the position of the cursor in the displayed facial model, the method may further include: displaying the facial model and the generated mask map, where the mask map is set to fit over the facial model.
  • the process of obtaining a facial model of a person can be as follows:
  • the face picking system is in fact a picking method that is sensitive to texture color; the core HLSL code is:

result.color0 = tex2D(FacialMaskSampler, param.uv0);
result.color0.gb = param.uv0.xy;
float depth = EncodeFloatToRGBA8(param.homoDepth.x / param.homoDepth.y);
result.color1 = float4(1.0f, 1.0f, 0, depth); // magic code marking for facial mask map

  • a verification code is set for each pixel on the screen, and the code is used to verify whether the pixel belongs to the facial model. Only pixels that match the verification code (equal to (1.0, 1.0, 0)) are processed as part of the facial model:

EncodingColorData(queryData, maskCode);
if (maskCode.r == 0xff &&
    maskCode.g == 0xff &&
    maskCode.b == 0)
{
    mDepthSelted = maskCode.a;
    mIsPickingVaild = true;
}
  • the pixel selected at the user's cursor position is calculated from the position of the cursor on the screen and the current screen resolution.
  • in the embodiment provided by the present application, the face part to be operated among the multiple face parts of the facial model is determined according to the position of the cursor detected in the facial model displayed by the terminal; after a selection operation on the part is detected,
  • the part is edited in response to the acquired editing operation, so that the edited part is displayed in the facial model. That is, by detecting the position of the cursor, the selected face part is determined, so that editing can be completed directly on that part without dragging a slider in a separate control list;
  • the user can pick and edit face parts directly on the facial model, which simplifies the editing operation and overcomes the high operational complexity caused by switching between the facial model and the sliders in a control list.
  • in the embodiments of the present application, the determining unit includes:
  • an acquiring module, configured to acquire the color value of the pixel at the position; and
  • a determining module, configured to determine the face part, among the multiple face parts, that corresponds to the color value.
  • the color value of the pixel may be, but is not limited to, the color value of the pixel corresponding to the cursor location in the mask map.
  • the mask map is fitted over the facial model and includes multiple mask regions in one-to-one correspondence with the multiple face parts, each mask region corresponding to one face part.
  • the color values of the pixel corresponding to the cursor position in the mask map may include, but are not limited to, the color value of the red channel and the color value of the blue channel.
  • the color value of a pixel may include, but is not limited to, one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
  • the character face formed by the color values of the red channel can be as shown in FIG. 4 (the image should be displayed in red); the different shadings indicate different degrees of red: the left-diagonal filled areas are brighter than the dotted areas,
  • and the dotted areas are brighter than the horizontal-diagonal filled areas.
  • the character face formed by the color values of the blue channel can be as shown in FIG. 5 (the image should be displayed in blue).
  • the mapping relationship between each mask region in the mask map and the color value of the corresponding pixels may be, but is not limited to being, set in advance.
  • in this way, the mask region corresponding to the color value of the pixel at the cursor position is determined, and thereby the face part to be operated among the multiple face parts.
  • for example, when the R color value of the pixel at the cursor position is 200, the corresponding mask region can be determined by looking up the preset mapping relationship, and the corresponding face part is then obtained:
  • the face part to be operated is the "nose bridge".
  • in the embodiment provided by the present application, the face part corresponding to the color value among the multiple face parts is determined from the acquired color value of the pixel at the cursor position. That is, the face part to be operated is determined by the color value of the pixel under the cursor, so that face parts in the facial model can be edited directly, simplifying the editing operation.
  • in the embodiments of the present application, the acquiring module includes:
  • an acquiring sub-module, configured to acquire the color value of the pixel corresponding to the position in the mask map, where the mask map is fitted over the facial model and includes multiple mask regions in one-to-one correspondence with the multiple face parts, each mask region corresponding to one face part;
  • where the color value of the pixel includes one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
  • specifically, according to human anatomy, the muscles that the 48 bones can affect are classified to obtain a muscle part control list, and a red color value (denoted the R color value) is set for each part; to avoid errors, the values differ from one another by at least 10 units.
  • further, according to the distribution of these parts on the face, the mask map corresponding to the facial model can be obtained from the color values of the pixels corresponding to the parts, as shown in Table 4 (selected parts):
  • that is, a mask map corresponding to the facial model can be drawn; the mask map is fitted over the facial model and includes multiple mask regions corresponding one-to-one to the face parts.
  • in the embodiment provided by the present application, the color value of the corresponding pixel is obtained through the mask map fitted over the facial model, so that the color value of the pixel at the cursor position is obtained accurately and the corresponding face part to be operated can be acquired from that color value.
  • in the embodiments of the present application, the apparatus further includes:
  • a second display unit, configured to display the facial model and the generated mask map before the position of the cursor in the displayed facial model is detected, where the mask map is set to fit over the facial model.
  • for example, before the position of the cursor is detected, the facial model and the generated mask map, with the effect shown in FIG. 6, are displayed on the terminal screen, the mask map being fitted over the facial model.
  • in the embodiment provided by the present application, the combined image of the facial model and the generated mask map is displayed in advance, so that when the cursor position is detected,
  • the corresponding location can be obtained directly and quickly through the mask map, and the face part to be operated among the multiple face parts can be acquired accurately, improving editing efficiency.
  • in the embodiments of the present application, the apparatus further includes:
  • a third display unit, configured to highlight the face part to be operated in the facial model when a selection operation on the part is detected.
  • when the selection operation is detected, the part may be, but is not limited to being, displayed in a special manner, for example highlighted, or shown with a shadow.
  • by highlighting the face part to be operated, the user can intuitively see the editing operation performed on the part and the resulting change, achieving WYSIWYG, bringing the editing operation closer to user needs, and improving the user experience.
  • in the embodiments of the present application, the editing unit includes at least one of the following:
  • a first editing module, configured to move the face part to be operated;
  • a second editing module, configured to rotate the face part to be operated;
  • a third editing module, configured to enlarge the face part to be operated; and
  • a fourth editing module, configured to reduce the face part to be operated.
  • the operation mode for implementing the above editing may be, but is not limited to, at least one of the following: clicking and dragging. That is, through combinations of these operation modes, at least one of the following edits can be applied to the face part to be operated: moving, rotating, enlarging, and reducing.
  • for example, as shown in FIG. 3, by clicking to select the part to be operated and performing edits such as rotation, reduction, and movement, the editing process from the left side to the right side of FIG. 3 can be realized.
  • according to an embodiment of the present application, an editing terminal for implementing the above method for editing a facial model of a person is further provided. As shown in FIG. 8,
  • the terminal includes:
  • a communication interface 802, configured to acquire a facial model of a person, where the facial model includes multiple face parts;
  • a memory 804, connected to the communication interface 802 and configured to store the facial model; and
  • a processor 806, connected to the communication interface 802 and the memory 804, and configured to detect the position of the cursor in the displayed facial model; determine, according to the position, the face part to be operated among the multiple face parts; detect a selection operation on the face part to be operated; edit the face part in response to an acquired editing operation on it; and display the edited face part in the facial model.
  • the memory 804 may be a non-transitory computer readable storage medium for storing machine readable instructions, including a first detection instruction, a determination instruction, a second detection instruction, an editing instruction, and a first display instruction.
  • the machine readable instructions further include a second display instruction and a third display instruction.
  • the processor 806 is configured to read machine readable instructions stored in the memory 804 to implement the steps of the method of editing the face model of the person in the above embodiment and the functions of the units in the editing device of the face model.
  • the embodiment of the present application also provides an application scenario for implementing the above method for editing a facial model of a person.
  • the application environment of this embodiment is the same as that of the above embodiments of the editing method and apparatus.
  • optionally, the above editing method may be applied to the makeup process of a game character's face or to the face-customization process of a game character.
  • the face of the game character may be edited using the editing method provided in the above embodiments, achieving the purpose of improving the fineness of the game character's face displayed by the client.
  • the following takes the makeup process applied to the face of a game character as an example.
  • the effective face region of the game character in UV space is cropped, and the above editing method is applied to the cropped face region to create different facial models, for example designing eye makeup of various styles for the cropped eye region.
  • further, different makeup replacements can be selected for each partial region of the face base map (shown on the right side of FIG. 9); for example, among the provided
  • makeup crops for the eye part (in FIG. 9, the eyebrows; shown in the middle of FIG. 9), assuming the makeup shown in the solid box is chosen, the final face image of the game character is obtained, as shown on the left side of FIG. 9.
  • in this way, the makeup of game characters can be more diversified, enriching the characters' images.
  • specifically, the DiffuseMap, SpecularMap, and NormalMap are cropped to sizes that are powers of 2 (16*16, 32*32, 32*64, 64*128), and the coordinate positions are recorded;
  • the registry configuration can be, for example:
  • taking the cropped object as an eye part (such as an eyebrow), the corresponding content in the DiffuseMap, SpecularMap, and NormalMap is cropped; with an original size of 1024*1024, the content is
  • cropped to a size of 512*128. A sketch of such a crop record follows.
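  • a minimal sketch of recording a crop region, assuming a hypothetical struct; the 512*128 eyebrow rectangle is the example from the text, while the coordinates are illustrative:

// One power-of-two crop rectangle, applied identically to the diffuse,
// specular and normal maps so the three stay aligned.
struct CropRegion {
    const char* name;   // e.g. "eyebrow"
    int x, y;           // top-left corner recorded in the configuration
    int width, height;  // both powers of two, e.g. 512*128
};

bool IsPowerOfTwo(int v) { return v > 0 && (v & (v - 1)) == 0; }

static const CropRegion kEyebrowCrop = { "eyebrow", 256, 128, 512, 128 };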
  • further, the game character's face may be constructed by, but is not limited to being constructed by, directly clicking to select the desired cropped images. As shown in FIG. 11, cropped images for "eye makeup", "lip makeup", "skin", and the like are selected and replaced to obtain the desired face image; a sketch of compositing a selected crop back into the base map follows.
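  • a minimal compositing sketch, assuming a hypothetical PixelBuffer type standing in for the engine's texture API:

#include <vector>

struct PixelBuffer {
    int width = 0, height = 0;
    std::vector<unsigned int> pixels;  // one 32-bit texel per pixel
};

// Copy a selected makeup crop (e.g. one eyebrow style) into the base
// face map at the coordinates recorded when the crop was cut.
void BlitCrop(PixelBuffer& base, const PixelBuffer& crop, int dstX, int dstY) {
    for (int y = 0; y < crop.height; ++y)
        for (int x = 0; x < crop.width; ++x)
            base.pixels[(dstY + y) * base.width + (dstX + x)] =
                crop.pixels[y * crop.width + x];
}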
  • the prior art generally uses a tedious and complicated slider approach for face editing.
  • the editing method provided in this embodiment can not only meet the needs of different users in a game application, but also ensure, while the application is running, that users can achieve diversified face editing with smooth performance, greatly reducing system consumption and enriching the character image.
  • Embodiments of the present application also provide a storage medium, which is a non-volatile storage medium.
  • the storage medium is arranged to store program code for performing the following steps:
  • detecting the position of the cursor in the displayed facial model of a person, where the displayed facial model includes multiple face parts;
  • determining, according to the position, the face part to be operated among the multiple face parts;
  • detecting a selection operation on the face part to be operated;
  • editing the face part to be operated in response to an acquired editing operation on it; and
  • displaying the edited face part to be operated in the facial model.
  • optionally, in this embodiment, the foregoing storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or any other medium that can store program code.
  • for specific examples of this embodiment, reference may be made to the above embodiments of the method and apparatus for editing a facial model of a person.
  • if the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in the above computer-readable storage medium.
  • based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium,
  • including a number of instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • the disclosed client may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a division by logical function; in actual implementation there may be other divisions, for example,
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.

Abstract

A method and apparatus for editing a facial model of a person. The method includes: detecting the position of a cursor in a displayed facial model of a person, where the displayed facial model includes multiple face parts (S202); determining, according to the position, the face part to be operated among the multiple face parts (S204); detecting a selection operation on the face part to be operated (S206); editing the face part to be operated in response to an acquired editing operation on it (S208); and displaying the edited face part to be operated in the facial model (S210).

Description

Method and apparatus for editing a facial model of a person
This application claims priority to Chinese Patent Application No. 201610136300.1, entitled "Method and Apparatus for Editing a Facial Model of a Person" and filed with the Chinese Patent Office on March 10, 2016, which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of computers, and in particular to a method and apparatus for editing a facial model of a person.
Background
At present, existing facial-model editing is usually carried out by adjusting the positions of different sliders of a controller on progress bars. For example, to edit the eye part of a facial model, the user often has to drag the various sliders in a very long control list corresponding to the eye part, adjusting their positions one by one, in order to complete all edits to the eyes. The player must not only make sure the cursor selects the slider corresponding to the part being adjusted, but also confirm, at the same time, the actual effect of the changes to the face during editing.
In other words, with the existing facial-model editing approach, the user must constantly switch between the face and the sliders; the editing process is not intuitive, the editing operations are cumbersome, and both operational complexity and development difficulty are increased.
Summary
Embodiments of the present application provide a method and apparatus for editing a facial model of a person, so as to at least solve the technical problem of the high complexity of editing operations caused by existing facial-model editing methods.
According to one aspect of the embodiments of the present application, a method for editing a facial model of a person is provided, including:
detecting the position of a cursor in a displayed facial model of a person, where the displayed facial model includes multiple face parts;
determining, according to the position, the face part to be operated among the multiple face parts;
detecting a selection operation on the face part to be operated;
editing the face part to be operated in response to an acquired editing operation on the face part to be operated; and
displaying the edited face part to be operated in the facial model.
According to another aspect of the embodiments of the present application, an apparatus for editing a facial model of a person is further provided, including:
a first detecting unit, configured to detect the position of a cursor in a displayed facial model of a person, where the displayed facial model includes multiple face parts;
a determining unit, configured to determine, according to the position, the face part to be operated among the multiple face parts;
a second detecting unit, configured to detect a selection operation on the face part to be operated;
an editing unit, configured to edit the face part to be operated in response to an acquired editing operation on the face part to be operated; and
a first display unit, configured to display the edited face part to be operated in the facial model.
In the embodiments of the present application, the face part to be operated among the multiple face parts of the facial model is determined according to the position of the cursor detected in the facial model displayed by the terminal; after a selection operation on the part is detected, the part is edited in response to an acquired editing operation, so that the edited part is displayed in the facial model. That is, by detecting the position of the cursor, the selected face part among the multiple face parts is determined, so that editing can be completed directly on that part without dragging a slider corresponding to it in a separate control list. The user can pick and edit face parts directly on the facial model, which simplifies the editing of the facial model and overcomes the high operational complexity caused by switching between the facial model and the sliders in a control list.
Further, by performing editing operations directly on the face parts of the facial model, the user can intuitively see the editing operation displayed by the terminal and the resulting change to the edited part, achieving a WYSIWYG effect and bringing the editing operation closer to user needs.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present application and form a part of it; the exemplary embodiments of the present application and their descriptions are used to explain the application and do not constitute an improper limitation of it. In the drawings:
FIG. 1 is a schematic diagram of the application environment of a method for editing a facial model of a person according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for editing a facial model of a person according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the application of a method for editing a facial model of a person according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the application of another method for editing a facial model of a person according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the application of still another method for editing a facial model of a person according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the application of still another method for editing a facial model of a person according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an apparatus for editing a facial model of a person according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the hardware structure of an editing terminal for a facial model of a person according to an embodiment of the present application;
FIG. 9 is a schematic diagram of the application of still another method for editing a facial model of a person according to an embodiment of the present application;
FIG. 10 is a schematic diagram of the application of still another method for editing a facial model of a person according to an embodiment of the present application; and
FIG. 11 is a schematic diagram of the application of still another method for editing a facial model of a person according to an embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the specification, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments described here can be implemented in orders other than those illustrated or described. In addition, the terms "include" and "have" and any variations of them are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product, or device.
According to the embodiments of the present application, an embodiment of a method for editing a facial model of a person is provided. The method may be, but is not limited to being, applied to the application environment shown in FIG. 1. As shown in FIG. 1, after acquiring the facial model from the server 106 through the network 104, the terminal 102 detects the position of the cursor in the facial model displayed by the terminal 102, where the displayed facial model includes multiple face parts; determines, according to the position, the face part to be operated among the multiple face parts; and, after detecting a selection operation on the part, edits it in response to an acquired editing operation, so that the edited part is displayed in the facial model. In this embodiment, the terminal may include, but is not limited to, at least one of the following: a mobile phone, a tablet computer, a notebook computer, and a PC.
According to an embodiment of the present application, a method for editing a facial model of a person is provided. As shown in FIG. 2, the method includes:
S202: detecting the position of the cursor in a displayed facial model of a person, where the displayed facial model includes multiple face parts;
S204: determining, according to the position, the face part to be operated among the multiple face parts;
S206: detecting a selection operation on the face part to be operated;
S208: editing the face part to be operated in response to an acquired editing operation on the face part to be operated; and
S210: displaying the edited face part to be operated in the facial model.
In this embodiment, the editing method may be, but is not limited to being, applied to the character creation process in a terminal application, to edit the corresponding facial model for a character. For example, taking a game application as an example, when creating a character for a player, the face part to be operated can be determined by detecting the position of the cursor among the multiple face parts of the displayed facial model; after a selection operation on the part is detected, the corresponding edit is performed in response to the acquired editing operation.
FIG. 3 is a schematic diagram of the application of a method for editing a facial model of a person according to an embodiment of the present application. As shown in FIG. 3, the eyebrow part of the facial model is selected by the position of the cursor (the dotted box on the left side of FIG. 3); after an editing operation on the part (such as a rotation) is acquired, the eyebrow can be edited, and the edited eyebrow is displayed in the facial model (the dotted box on the right side of FIG. 3). This simplifies the editing of the character's facial model and quickly and accurately displays the edited character to the player. The above is merely an example; the editing method described in this embodiment is applicable to any type of terminal application.
In this embodiment, the face part to be operated among the multiple face parts of the facial model is determined according to the position of the cursor detected in the facial model displayed by the terminal; after a selection operation on the part is detected, the part is edited in response to an acquired editing operation, so that the edited part is displayed in the facial model. That is, by detecting the position of the cursor, the selected face part is determined, so that editing can be completed directly on that part without dragging a slider corresponding to it in a separate control list; the user can pick and edit face parts directly on the facial model, which simplifies the editing operation and overcomes the high operational complexity caused by switching between the facial model and the sliders in a control list. Further, by editing face parts directly on the facial model, the user can intuitively see the editing operation displayed by the terminal and the resulting change to the edited part, achieving WYSIWYG and bringing the editing operation closer to user needs.
In this embodiment, the location of the cursor may be, but is not limited to, used to indicate the corresponding position of the mouse displayed on the screen.
In this embodiment, when a selection operation on the face part to be operated is detected, the part may be, but is not limited to being, displayed in a special manner, for example highlighted, or shown with a shadow.
In this embodiment, the editing may include at least one of the following: moving the face part to be operated; rotating the face part to be operated; enlarging the face part to be operated; and reducing the face part to be operated.
In this embodiment, determining, according to the cursor location, the face part to be operated among the multiple face parts includes:
acquiring the color value of the pixel at the location; and
determining the face part, among the multiple face parts, that corresponds to the color value.
In this embodiment, acquiring the color value of the pixel at the location may include, but is not limited to: acquiring the color value of the pixel corresponding to the location in a mask map, where the mask map is fitted over the facial model and includes multiple mask regions in one-to-one correspondence with the multiple face parts, each mask region corresponding to one face part; the color value of a pixel may include one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
In this embodiment, each mask region on the mask map fitted over the facial model corresponds to one face part of the model. That is, by selecting a mask region on the mask map with the cursor, the corresponding face part of the facial model is selected, so that the part can be edited directly on the model, simplifying the editing operation.
For example, in this embodiment, the mask map can be constructed, but is not limited to being constructed, by the following code:
float4 maskColor = tex2D(FacialMaskSampler, param.uv0);
float maskR = maskColor.r * 255.0f;
float alpha = 0;
if (IsValEqual(SelectedAreaColor, maskR))
{
    alpha += FacialMaskParams.y;
}
float4 finalColor = float4(0, 2, 5, alpha * 2.5f * maskColor.b);
where maskColor.r is used to indicate the red channel and maskColor.b is used to indicate the blue channel.
In addition, in this embodiment, the mapping relationship between each mask region in the mask map and the color value of the corresponding pixels may be, but is not limited to being, set in advance. For example, in the facial model shown in Table 1, the nose includes six parts, and a red color value (denoted the R color value) is set for each part; to avoid errors, the values may, but are not limited to, differ from one another by at least 10 units.
Table 1
[Table 1 is provided as an image in the original: Figure PCTCN2017076029-appb-000001]
That is, determining the face part corresponding to the color value among the multiple face parts may include, but is not limited to: after acquiring the color value of the pixel corresponding to the location in the mask map, obtaining the mask region corresponding to that color value by looking up the mapping relationship, and then obtaining the face part to be operated corresponding to that mask region.
在本实施例中,可以但不限于使用像素拾取(pixel picking)的面部拾取技术来检测光标在显示的人物面部模型中的位置。其中,Pixel picking是以模型物件为单位的拾取技术,是一种通过检测光标在显示屏幕上选择或点击的某个虚拟物件并与之交互的方法。其中,上述虚拟物件可以但不限于对应人物面部模型中的面部部位。
在本实施例中,对每个虚拟物件(即面部部位)设置一个标识ID,然后将所有可拾取虚拟物件绘制在一个render target上,ID通过常量寄存器传入。最后通过GetRenderTarget()(即Direct3D里的一个函数)来获取当前光标所在位置的部位的ID。
In this embodiment, before the position of the cursor on the displayed character facial model is detected, the method may further include: displaying the character facial model and the generated mask map, the mask map being set to fit over the character facial model.
The character facial model may be obtained as follows.
The face-picking system is essentially a picking method that is sensitive to texture color; the core HLSL code of the implementation is:
result.color0 = tex2D(FacialMaskSampler, param.uv0);
result.color0.gb = param.uv0.xy;                  // store the UV coordinates
float depth = EncodeFloatToRGBA8(param.homoDepth.x / param.homoDepth.y);
result.color1 = float4(1.0f, 1.0f, 0, depth);     // magic code marking for facial mask map
The character facial model displayed on the screen is obtained with the following code:
tex2D(FacialMaskSampler, param.uv0);
result.color1 = float4(1.0f, 1.0f, 0, depth);
A verification code is set for every pixel on the screen and is used to verify whether the pixel belongs to the character facial model. Only pixels that match the verification code (equal to (1.0, 1.0, 0)) are processed as belonging to the character facial model:
EncodingColorData(queryData, maskCode);
if (maskCode.r == 0xff &&
    maskCode.g == 0xff &&
    maskCode.b == 0)
{
    mDepthSelted = maskCode.a;       // depth of the picked pixel
    mIsPickingVaild = true;          // the pick hit the facial model
}
The pixel selected at the position of the user's cursor is then calculated from the position of the cursor on the screen and the current screen resolution.
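This mapping can be sketched as follows (hypothetical C++; the picking buffer may be rendered at a resolution different from the screen's, hence the scaling):

struct Point { int x = 0, y = 0; };

// Maps a cursor position in screen coordinates to the corresponding
// pixel in the picking buffer, scaling by the current screen resolution.
Point CursorToBufferPixel(Point cursor, int screenW, int screenH,
                          int bufferW, int bufferH) {
    return { cursor.x * bufferW / screenW,
             cursor.y * bufferH / screenH };
}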
In the embodiments provided in the present application, the facial part to be operated on among the multiple facial parts of the character facial model is determined according to the detected position of the cursor on the model displayed by the terminal, and after a selection operation on that part is detected, the part is edited in response to the obtained editing operation, so that the edited part is displayed in the model. In other words, by detecting the position of the cursor, the facial part selected among the multiple facial parts is determined, so that editing can be completed directly on that part without dragging a corresponding slider in an additional control list; the user can perform face-picking editing directly on the model, which simplifies the editing operation and overcomes the high operational complexity caused by switching between the model and sliders in a control list.
In this embodiment of the present application, determining, according to the position, the facial part to be operated on among the multiple facial parts includes:
obtaining a color value of a pixel at the position; and
determining, among the multiple facial parts, the facial part to be operated on that corresponds to the color value.
In this embodiment, the color value of the pixel may be, but is not limited to, the color value of the pixel in the mask map corresponding to the cursor position. The mask map fits over the character facial model and includes multiple mask areas in one-to-one correspondence with the multiple facial parts, each mask area corresponding to one facial part.
In this embodiment, the color value of the pixel in the mask map corresponding to the cursor position may include, but is not limited to, a color value in the red channel and a color value in the blue channel. The color value of the pixel may include, but is not limited to, one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
In this embodiment, the face formed by the red-channel color values may be as shown in FIG. 4 (the image should be displayed in red); different shadings represent different intensities of red: the area filled with left-diagonal hatching is brighter than the dotted area, and the dotted area is brighter than the area filled with horizontal-diagonal hatching. The face formed by the blue-channel color values may be as shown in FIG. 5 (the image should be displayed in blue).
In this embodiment, the mapping between each mask area in the mask map and the color value of the corresponding pixel may be, but is not limited to being, configured in advance. The mask area corresponding to the color value of the pixel at the cursor position can thus be determined, and in turn the facial part to be operated on among the multiple facial parts.
For example, with reference to Table 1, when the R color value of the pixel at the cursor position is 200, the corresponding mask area can be determined by looking up the preconfigured mapping, and the facial part to be operated on corresponding to that mask area is obtained as the "nose bridge".
In the embodiments provided in the present application, the facial part to be operated on that corresponds to the color value is determined from the obtained color value of the pixel at the cursor position. In other words, the facial part is determined using the color value of the pixel at the cursor position, so that facial parts of the character facial model can be edited directly, simplifying the editing operation.
In this embodiment of the present application, obtaining the color value of the pixel at the position includes:
obtaining the color value of the pixel corresponding to the position in a mask map, where the mask map fits over the character facial model and includes multiple mask areas in one-to-one correspondence with the multiple facial parts, each mask area corresponding to one facial part;
where the color value of the pixel includes one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
This is described with the following example. According to human anatomy, muscles are classified by the 48 bones that can affect them, yielding a control list of muscle parts, and a red color value (denoted as an R color value) is set for each part; to avoid errors, the values differ from one another by at least 10 units. Further, based on the distribution of these parts on the character's face, the color values of the pixels corresponding to these parts yield the mask map corresponding to the character facial model, as shown in Table 2 (partial list; a sketch of building such a table follows the table):
Table 2
[Table 2 appears as images in the original publication. It maps facial muscle parts to the R color values assigned to them, spaced at least 10 units apart.]
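As noted above, building such a table can be sketched in C++ as follows (AssignRegionColors is an illustrative name, not from the original implementation). One 8-bit channel holds only about two dozen values spaced 10 units apart, so a full set of 48 muscle parts would have to be split across channels; this is consistent with the embodiment's use of both the red and the blue channels:

#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Assigns each part an R value, keeping at least 10 units between
// consecutive values so that sampled colors resolve unambiguously.
std::map<std::string, uint8_t> AssignRegionColors(const std::vector<std::string>& parts) {
    assert(parts.size() <= 24 && "one 8-bit channel holds ~24 regions at this spacing");
    std::map<std::string, uint8_t> table;
    uint8_t next = 10;               // start above 0, which marks "no region"
    for (const std::string& name : parts) {
        table[name] = next;
        next += 10;
    }
    return table;
}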
In other words, the mask map corresponding to the character facial model can be drawn from the pixel color values in the above mapping; the mask map fits over the model, and its multiple mask areas are in one-to-one correspondence with the multiple facial parts.
In the embodiments provided in the present application, the color value of the corresponding pixel is obtained through the mask map fitted over the character facial model, so that the color value of the pixel at the cursor position is obtained accurately, and the corresponding facial part to be operated on can then be obtained from that color value.
In this embodiment of the present application, before the position of the cursor on the displayed character facial model is detected, the method further includes:
displaying the character facial model and the generated mask map, the mask map being set to fit over the character facial model.
This is described with the following example. Before the position of the cursor on the displayed character facial model is detected, the character facial model and the generated mask map, with the effect shown in FIG. 6, are displayed on the terminal screen, the mask map being set to fit over the model.
In the embodiments provided in the present application, by displaying the combined image of the character facial model and the generated mask map before the cursor position is detected, the corresponding position can be obtained directly and quickly through the mask map when the cursor position is detected, and the facial part to be operated on among the multiple facial parts can then be obtained accurately, improving editing efficiency.
In this embodiment of the present application, when a selection operation on the facial part to be operated on is detected, the method further includes:
highlighting the facial part to be operated on in the character facial model.
In this embodiment, when a selection operation on the facial part to be operated on is detected, the method may include, but is not limited to, displaying the part in a special way, for example, highlighting the part or displaying a shadow on it.
In the embodiments provided in the present application, by highlighting the facial part to be operated on, the user can intuitively see the editing operation performed on the facial part and the resulting change in the edited part, achieving what-you-see-is-what-you-get, so that the editing operation can better match the user's needs.
In this embodiment of the present application, editing the facial part to be operated on in response to the obtained editing operation includes at least one of the following:
moving the facial part to be operated on;
rotating the facial part to be operated on;
enlarging the facial part to be operated on; and
shrinking the facial part to be operated on.
In this embodiment, the operations implementing the above editing may be, but are not limited to, at least one of the following: clicking and dragging. In other words, combinations of different operations can implement at least one of the following edits on the facial part to be operated on: moving, rotating, enlarging, and shrinking.
For example, as shown in FIG. 3, by clicking to select the part to be operated on and then rotating, shrinking, moving, and so on, the editing process from the left side of FIG. 3 to the right side of FIG. 3 is accomplished.
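These operations amount to applying a transform to the selected part. The following minimal 2D sketch in C++ is purely illustrative (the actual model is three-dimensional, and all names here are hypothetical):

#include <cmath>

struct Vec2 { float x = 0, y = 0; };

struct Edit {
    Vec2 translate;        // move
    float angle = 0.0f;    // rotate, in radians
    float scale = 1.0f;    // > 1 enlarges, < 1 shrinks
};

// Applies the edit to one vertex of the selected part, rotating and
// scaling about the part's pivot before translating.
Vec2 ApplyEdit(Vec2 v, Vec2 pivot, const Edit& e) {
    float dx = (v.x - pivot.x) * e.scale;
    float dy = (v.y - pivot.y) * e.scale;
    float c = std::cos(e.angle), s = std::sin(e.angle);
    return { pivot.x + dx * c - dy * s + e.translate.x,
             pivot.y + dx * s + dy * c + e.translate.y };
}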
In the embodiments provided in the present application, performing different edits on the facial part directly on the character facial model simplifies the editing operation, improves editing efficiency, and overcomes the high operational complexity of the existing technology.
For brevity, the foregoing method embodiments are described as series of action combinations. However, persons skilled in the art should know that the present application is not limited by the described order of actions, because some steps may be performed in other orders or simultaneously according to the present application. In addition, persons skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
From the description of the foregoing implementations, persons skilled in the art can clearly understand that the methods of the foregoing embodiments may be implemented by software plus a necessary general-purpose hardware platform, or certainly by hardware, although the former is the better implementation in many cases. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the existing technology, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for instructing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
According to an embodiment of the present application, an apparatus for editing a character facial model, configured to implement the foregoing editing method, is further provided. As shown in FIG. 7, the apparatus includes:
a first detection unit 702, configured to detect a position of a cursor on a displayed character facial model, the displayed character facial model including multiple facial parts;
a determining unit 704, configured to determine, according to the position, a facial part to be operated on among the multiple facial parts;
a second detection unit 706, configured to detect a selection operation on the facial part to be operated on;
an editing unit 708, configured to edit the facial part to be operated on in response to an obtained editing operation on the facial part to be operated on; and
a first display unit 710, configured to display the edited facial part to be operated on in the character facial model.
In this embodiment, the apparatus for editing a character facial model may be applied, but is not limited, to a character creation process in a terminal application, to edit the character facial model corresponding to a game character. For example, taking a game application, when a game character is created for a player, the facial part to be operated on can be determined by detecting the position of the cursor among the multiple facial parts of the displayed character facial model, and after a selection operation on that part is detected, the corresponding editing is performed in response to the obtained editing operation.
As shown in FIG. 3, the eyebrow part of the character facial model is selected through the position of the cursor (the dashed box on the left of FIG. 3); after an editing operation on this part (for example, a rotation) is obtained, the eyebrow part can be edited, and the edited eyebrow part is displayed in the character facial model (the dashed box on the right of FIG. 3). This simplifies the operation of editing the character facial model of a game character and quickly and accurately displays the edited character to the player. The above is merely an example; the apparatus for editing a character facial model described in this embodiment is applicable to any type of terminal application.
In this embodiment, the facial part to be operated on among the multiple facial parts of the character facial model is determined according to the detected position of the cursor on the model displayed by the terminal, and after a selection operation on that part is detected, the part is edited in response to the obtained editing operation, so that the edited part is displayed in the model. In other words, by detecting the position of the cursor, the facial part selected among the multiple facial parts is determined, so that editing can be completed directly on that part without dragging a corresponding slider in an additional control list; the user can perform face-picking editing directly on the model, which simplifies the editing operation and overcomes the high operational complexity caused by switching between the model and sliders in a control list. Further, because the facial parts of the model are edited directly, the user can intuitively see the editing operation displayed by the terminal and the resulting change in the edited part, achieving what-you-see-is-what-you-get, so that the editing operation can better match the user's needs and the user experience is improved.
In this embodiment, the cursor position may be used, but is not limited, to indicate the position of the mouse displayed on the screen.
In this embodiment, when a selection operation on the facial part to be operated on is detected, the apparatus may, but is not limited to, display the part in a special way, for example, highlight the part or display a shadow on it.
In this embodiment, the editing may include at least one of the following: moving the facial part to be operated on; rotating the facial part to be operated on; enlarging the facial part to be operated on; and shrinking the facial part to be operated on.
In this embodiment, determining, according to the cursor position, the facial part to be operated on among the multiple facial parts includes:
obtaining a color value of a pixel at the position; and
determining, among the multiple facial parts, the facial part to be operated on that corresponds to the color value.
In this embodiment, obtaining the color value of the pixel at the position may include, but is not limited to, obtaining the color value of the pixel corresponding to the position in a mask map, where the mask map fits over the character facial model and includes multiple mask areas in one-to-one correspondence with the multiple facial parts, each mask area corresponding to one facial part. The color value of the pixel may include one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
In this embodiment, each mask area on the mask map fitted over the character facial model corresponds to one facial part of the model. In other words, by selecting a mask area on the mask map fitted over the model with the cursor, the corresponding facial part of the model is selected, enabling direct editing of facial parts on the model and thereby simplifying the editing operation.
In addition, in this embodiment, the mapping between each mask area in the mask map and the color value of the corresponding pixel may be, but is not limited to being, configured in advance. For example, in the character facial model shown in Table 3, the nose includes six sub-parts, and a red color value (denoted as an R color value) is set for each sub-part; to avoid errors, the values may, but need not, differ from one another by at least 10 units.
Table 3
[Table 3 appears as an image in the original publication. Like Table 1, it lists the six sub-parts of the nose, each assigned an R color value, with adjacent values differing by at least 10 units.]
In other words, determining the facial part to be operated on that corresponds to the color value may include, but is not limited to: after the RGB color value of the pixel corresponding to the position is obtained from the mask map, querying the mapping to obtain the mask area corresponding to that color value, and then obtaining the facial part to be operated on that corresponds to that mask area.
In this embodiment, a face-picking technique based on pixel picking may be used, but is not limited to being used, to detect the position of the cursor on the displayed character facial model. Pixel picking is a picking technique whose unit is a model object; it is a method of detecting and interacting with a virtual object that the cursor selects or clicks on the display screen. The virtual object may correspond, but is not limited, to a facial part of the character facial model.
In this embodiment, an identifier (ID) is set for each virtual object (that is, each facial part), all pickable virtual objects are drawn onto one render target, and the IDs are passed in through constant registers. Finally, GetRenderTarget() (a function in Direct3D) is used to obtain the ID of the part at the current cursor position.
In this embodiment, before the position of the cursor on the displayed character facial model is detected, the method may further include: displaying the character facial model and the generated mask map, the mask map being set to fit over the character facial model.
The character facial model may be obtained as follows.
The face-picking system is essentially a picking method that is sensitive to texture color; the core HLSL code of the implementation is:
result.color0 = tex2D(FacialMaskSampler, param.uv0);
result.color0.gb = param.uv0.xy;                  // store the UV coordinates
float depth = EncodeFloatToRGBA8(param.homoDepth.x / param.homoDepth.y);
result.color1 = float4(1.0f, 1.0f, 0, depth);     // magic code marking for facial mask map
The character facial model displayed on the screen is obtained with the following code:
tex2D(FacialMaskSampler, param.uv0);
result.color1 = float4(1.0f, 1.0f, 0, depth);
A verification code is set for every pixel on the screen and is used to verify whether the pixel belongs to the character facial model. Only pixels that match the verification code (equal to (1.0, 1.0, 0)) are processed as belonging to the character facial model:
EncodingColorData(queryData, maskCode);
if (maskCode.r == 0xff &&
    maskCode.g == 0xff &&
    maskCode.b == 0)
{
    mDepthSelted = maskCode.a;       // depth of the picked pixel
    mIsPickingVaild = true;          // the pick hit the facial model
}
The pixel selected at the position of the user's cursor is then calculated from the position of the cursor on the screen and the current screen resolution.
In the embodiments provided in the present application, the facial part to be operated on among the multiple facial parts of the character facial model is determined according to the detected position of the cursor on the model displayed by the terminal, and after a selection operation on that part is detected, the part is edited in response to the obtained editing operation, so that the edited part is displayed in the model. In other words, by detecting the position of the cursor, the facial part selected among the multiple facial parts is determined, so that editing can be completed directly on that part without dragging a corresponding slider in an additional control list; the user can perform face-picking editing directly on the model, which simplifies the editing operation and overcomes the high operational complexity caused by switching between the model and sliders in a control list.
In this embodiment of the present application, the determining unit includes:
an obtaining module, configured to obtain a color value of a pixel at the position; and
a determining module, configured to determine, among the multiple facial parts, the facial part to be operated on that corresponds to the color value.
In this embodiment, the color value of the pixel may be, but is not limited to, the color value of the pixel in the mask map corresponding to the cursor position. The mask map fits over the character facial model and includes multiple mask areas in one-to-one correspondence with the multiple facial parts, each mask area corresponding to one facial part.
In this embodiment, the color value of the pixel in the mask map corresponding to the cursor position may include, but is not limited to, a color value in the red channel and a color value in the blue channel. The color value of the pixel may include, but is not limited to, one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
In this embodiment, the face formed by the red-channel color values may be as shown in FIG. 4 (the image should be displayed in red); different shadings represent different intensities of red: the area filled with left-diagonal hatching is brighter than the dotted area, and the dotted area is brighter than the area filled with horizontal-diagonal hatching. The face formed by the blue-channel color values may be as shown in FIG. 5 (the image should be displayed in blue).
In this embodiment, the mapping between each mask area in the mask map and the color value of the corresponding pixel may be, but is not limited to being, configured in advance. The mask area corresponding to the color value of the pixel at the cursor position can thus be determined, and in turn the facial part to be operated on among the multiple facial parts.
For example, with reference to Table 3, when the R color value of the pixel at the cursor position is 200, the corresponding mask area can be determined by looking up the preconfigured mapping, and the facial part to be operated on corresponding to that mask area is obtained as the "nose bridge".
In the embodiments provided in the present application, the facial part to be operated on that corresponds to the color value is determined from the obtained color value of the pixel at the cursor position. In other words, the facial part is determined using the color value of the pixel at the cursor position, so that facial parts of the character facial model can be edited directly, simplifying the editing operation.
In this embodiment of the present application, the obtaining module includes:
an obtaining submodule, configured to obtain the color value of the pixel corresponding to the position in a mask map, where the mask map fits over the character facial model and includes multiple mask areas in one-to-one correspondence with the multiple facial parts, each mask area corresponding to one facial part;
where the color value of the pixel includes one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
This is described with the following example. According to human anatomy, muscles are classified by the 48 bones that can affect them, yielding a control list of muscle parts, and a red color value (denoted as an R color value) is set for each part; to avoid errors, the values differ from one another by at least 10 units. Further, based on the distribution of these parts on the character's face, the color values of the pixels corresponding to these parts yield the mask map corresponding to the character facial model, as shown in Table 4 (partial list):
Table 4
[Table 4 appears as images in the original publication. Like Table 2, it maps facial muscle parts to the R color values assigned to them, spaced at least 10 units apart.]
In other words, the mask map corresponding to the character facial model can be drawn from the pixel color values in the above mapping; the mask map fits over the model, and its multiple mask areas are in one-to-one correspondence with the multiple facial parts.
In the embodiments provided in the present application, the color value of the corresponding pixel is obtained through the mask map fitted over the character facial model, so that the color value of the pixel at the cursor position is obtained accurately, and the corresponding facial part to be operated on can then be obtained from that color value.
In this embodiment of the present application, the apparatus further includes:
a second display unit, configured to display, before the position of the cursor on the displayed character facial model is detected, the character facial model and the generated mask map, the mask map being set to fit over the character facial model.
This is described with the following example. Before the position of the cursor on the displayed character facial model is detected, the character facial model and the generated mask map, with the effect shown in FIG. 6, are displayed on the terminal screen, the mask map being set to fit over the model.
In the embodiments provided in the present application, by displaying the combined image of the character facial model and the generated mask map before the cursor position is detected, the corresponding position can be obtained directly and quickly through the mask map when the cursor position is detected, and the facial part to be operated on among the multiple facial parts can then be obtained accurately, improving editing efficiency.
In this embodiment of the present application, the apparatus further includes:
a third display unit, configured to highlight, when a selection operation on the facial part to be operated on is detected, the facial part to be operated on in the character facial model.
In this embodiment, when a selection operation on the facial part to be operated on is detected, the apparatus may, but is not limited to, display the part in a special way, for example, highlight the part or display a shadow on it.
In the embodiments provided in the present application, by highlighting the facial part to be operated on, the user can intuitively see the editing operation performed on the facial part and the resulting change in the edited part, achieving what-you-see-is-what-you-get, so that the editing operation can better match the user's needs and the user experience is improved.
In this embodiment of the present application, the editing unit includes at least one of the following:
a first editing module, configured to move the facial part to be operated on;
a second editing module, configured to rotate the facial part to be operated on;
a third editing module, configured to enlarge the facial part to be operated on; and
a fourth editing module, configured to shrink the facial part to be operated on.
In this embodiment, the operations implementing the above editing may be, but are not limited to, at least one of the following: clicking and dragging. In other words, combinations of different operations can implement at least one of the following edits on the facial part to be operated on: moving, rotating, enlarging, and shrinking.
For example, as shown in FIG. 3, by clicking to select the part to be operated on and then rotating, shrinking, moving, and so on, the editing process from the left side of FIG. 3 to the right side of FIG. 3 is accomplished.
In the embodiments provided in the present application, performing different edits on the facial part directly on the character facial model simplifies the editing operation, improves editing efficiency, and overcomes the high operational complexity of the existing technology.
According to an embodiment of the present application, a terminal for editing a character facial model, configured to implement the foregoing editing method, is further provided. As shown in FIG. 8, the terminal includes:
a communication interface 802, configured to obtain a character facial model, the character facial model including multiple facial parts;
a memory 804, connected to the communication interface 802 and configured to store the character facial model; and
a processor 806, connected to the communication interface 802 and the memory 804 and configured to detect a position of a cursor on the displayed character facial model; determine, according to the position, a facial part to be operated on among the multiple facial parts; detect a selection operation on the facial part to be operated on; edit the facial part to be operated on in response to an obtained editing operation on it; and display the edited facial part in the character facial model.
In this embodiment of the present application, the memory 804 may be a non-volatile computer-readable storage medium configured to store machine-readable instructions, including a first detection instruction, a determining instruction, a second detection instruction, an editing instruction, and a first display instruction. In another embodiment of the present application, the machine-readable instructions further include a second display instruction and a third display instruction.
The processor 806 is configured to read the machine-readable instructions stored in the memory 804, so as to implement the steps of the method for editing a character facial model and the functions of the units of the apparatus for editing a character facial model in the foregoing embodiments.
For specific examples in this embodiment, refer to the examples described in the foregoing embodiments of the method and apparatus for editing a character facial model.
An embodiment of the present application provides an application scenario for implementing the foregoing method for editing a character facial model; the application environment of this embodiment is the same as that of the foregoing embodiments of the method and apparatus.
In this embodiment, the editing method may be applied, but is not limited, to applying makeup to a game character's face or to reshaping a game character's face. For example, to make a game character's face more refined and attractive, the face can be edited with the editing method provided in the foregoing embodiments, improving the facial detail of the game character displayed on the client.
In this embodiment of the present application, applying makeup to a game character's face is taken as an example. The valid portion of the game character in UV space is cut out, and the editing method is applied to the cropped facial region to build different character facial models; for example, multiple styles of eye makeup are designed for the cropped eye region. Then, as shown in FIG. 9, when the facial appearance of the game character is constructed, different makeup replacements can be chosen for each local region on the facial base texture (shown on the right of FIG. 9); for example, a choice is made among the provided replaceable resources, the makeup cutouts for the eye region (eyebrows in FIG. 9, shown in the middle of FIG. 9). Assuming the makeup in the solid box is chosen, the final facial appearance of the game character is obtained, as shown on the left of FIG. 9. With the method provided in the embodiments of the present application, the makeup of game characters can be more diversified, enriching the characters' appearance.
The specific process of obtaining the cutouts for the local regions may be as follows:
The valid portions of the DiffuseMap, SpecularMap, and NormalMap are cut out at power-of-two sizes (16*16, 32*32, 32*64, 64*128), and the coordinate positions are recorded.
Registry configuration:
mMakeupID="171" (art resource ID of the makeup)
<mTexSizeBias v="512"/><mTexSizeBias v="128"/> (size of the cropped texture)
<mTexSizeBias v="512"/><mTexSizeBias v="436"/> (index position of this texture within the original full-size texture)
The registry configuration may specifically be:
<mFacialMakeupParts mMakeupID="171" mMakeupID="EMP_Eyebrow" mSexualType="1" mDiffuseFilePath="F_Eyebrow_0001_D.dds" mSpecFilePath="F_Eyebrow_0001_S.dds" mNormalFilePath="F_Eyebrow_0001_N.dds" mIconImage="F_Eyebrow_0001">
  <mTexSizeBias IsStaticArray="1" ArrayCount="4" [XlsProp]="0">
    <mTexSizeBias v="512"/>
    <mTexSizeBias v="128"/>
    <mTexSizeBias v="512"/>
    <mTexSizeBias v="436"/>
  </mTexSizeBias>
</mFacialMakeupParts>
For example, as shown in FIG. 10, after the eye region (for example, the eyebrows) is chosen as the cropping target, the corresponding content in the DiffuseMap, SpecularMap, and NormalMap is cut out, cropping content of the original size 1024*1024 to a cropped size of 512*128.
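The power-of-two crop size can be derived by rounding each dimension of the valid region up, as in this hypothetical C++ sketch (NextPow2 and CropToPow2 are illustrative names, not from the original implementation):

#include <cstdint>

// Smallest power of two that is >= n (for n >= 1).
uint32_t NextPow2(uint32_t n) {
    uint32_t p = 1;
    while (p < n) p <<= 1;
    return p;
}

struct CropSize { uint32_t w = 0, h = 0; };

// E.g. a valid eyebrow region of 500*120 pixels is cropped at 512*128.
CropSize CropToPow2(uint32_t regionW, uint32_t regionH) {
    return { NextPow2(regionW), NextPow2(regionH) };
}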
Further, after the different cutouts of each local region are obtained by cropping, the game character's face can be constructed during character creation by, but not limited to, clicking to directly select the desired cutouts. As shown in FIG. 11, the cutouts for "eye makeup", "lip makeup", "skin", and so on are selected and replaced to obtain the desired facial appearance.
The existing technology typically performs face editing with tedious and complicated sliders. The character-face editing method provided in this embodiment, by contrast, not only meets the needs of different users in game applications, but also achieves diversified face editing while the application runs smoothly, greatly reducing system consumption and enriching the characters' appearance.
An embodiment of the present application further provides a storage medium, the storage medium being a non-volatile storage medium. In this embodiment, the storage medium is configured to store program code for performing the following steps:
detecting a position of a cursor on a displayed character facial model, the displayed character facial model including multiple facial parts;
determining, according to the position, a facial part to be operated on among the multiple facial parts;
detecting a selection operation on the facial part to be operated on;
editing the facial part to be operated on in response to an obtained editing operation on the facial part to be operated on; and
displaying the edited facial part to be operated on in the character facial model.
In this embodiment, the storage medium may include, but is not limited to, any medium that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
For specific examples in this embodiment, refer to the foregoing embodiments of the method and apparatus for editing a character facial model.
The sequence numbers of the foregoing embodiments of the present application are merely for description and do not imply any preference among the embodiments.
When the integrated units in the foregoing embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in the foregoing computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the existing technology, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing one or more computer devices (which may be personal computers, servers, network devices, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application.
In the foregoing embodiments of the present application, the descriptions of the embodiments have their respective focuses; for a part that is not described in detail in one embodiment, refer to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a division by logical function, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces; the indirect couplings or communication connections between units or modules may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware or in the form of software functional units.
The foregoing descriptions are merely preferred implementations of the present application. It should be noted that persons of ordinary skill in the art may further make several improvements and refinements without departing from the principles of the present application, and such improvements and refinements shall also fall within the protection scope of the present application.

Claims (12)

  1. A method for editing a character facial model, comprising:
    detecting a position of a cursor on a displayed character facial model, wherein the displayed character facial model comprises multiple facial parts;
    determining, according to the position, a facial part to be operated on among the multiple facial parts;
    detecting a selection operation on the facial part to be operated on;
    editing the facial part to be operated on in response to an obtained editing operation on the facial part to be operated on; and
    displaying the edited facial part to be operated on in the character facial model.
  2. The method according to claim 1, wherein determining, according to the position, the facial part to be operated on among the multiple facial parts comprises:
    obtaining a color value of a pixel at the position; and
    determining, among the multiple facial parts, the facial part to be operated on that corresponds to the color value.
  3. The method according to claim 2, wherein obtaining the color value of the pixel at the position comprises:
    obtaining a color value of a pixel corresponding to the position in a mask map, wherein the mask map fits over the character facial model and comprises multiple mask areas in one-to-one correspondence with the multiple facial parts, each mask area corresponding to one facial part;
    wherein the color value of the pixel comprises one of the following: a red color value of the pixel, a green color value of the pixel, and a blue color value of the pixel.
  4. The method according to claim 3, before detecting the position of the cursor on the displayed character facial model, further comprising:
    displaying the character facial model and the generated mask map, wherein the mask map is set to fit over the character facial model.
  5. The method according to claim 1, when the selection operation on the facial part to be operated on is detected, further comprising:
    highlighting the facial part to be operated on in the character facial model.
  6. The method according to any one of claims 1 to 5, wherein editing the facial part to be operated on in response to the obtained editing operation on the facial part to be operated on comprises at least one of the following:
    moving the facial part to be operated on;
    rotating the facial part to be operated on;
    enlarging the facial part to be operated on; and
    shrinking the facial part to be operated on.
  7. An apparatus for editing a character facial model, comprising:
    a first detection unit, configured to detect a position of a cursor on a displayed character facial model, wherein the displayed character facial model comprises multiple facial parts;
    a determining unit, configured to determine, according to the position, a facial part to be operated on among the multiple facial parts;
    a second detection unit, configured to detect a selection operation on the facial part to be operated on;
    an editing unit, configured to edit the facial part to be operated on in response to an obtained editing operation on the facial part to be operated on; and
    a first display unit, configured to display the edited facial part to be operated on in the character facial model.
  8. The apparatus according to claim 7, wherein the determining unit comprises:
    an obtaining module, configured to obtain a color value of a pixel at the position; and
    a determining module, configured to determine, among the multiple facial parts, the facial part to be operated on that corresponds to the color value.
  9. The apparatus according to claim 8, wherein the obtaining module comprises:
    an obtaining submodule, configured to obtain a color value of a pixel corresponding to the position in a mask map, wherein the mask map fits over the character facial model and comprises multiple mask areas in one-to-one correspondence with the multiple facial parts, each mask area corresponding to one facial part;
    wherein the color value of the pixel comprises one of the following: a red color value of the pixel, a green color value of the pixel, and a blue color value of the pixel.
  10. The apparatus according to claim 9, further comprising:
    a second display unit, configured to display, before the position of the cursor on the displayed character facial model is detected, the character facial model and the generated mask map, wherein the mask map is set to fit over the character facial model.
  11. The apparatus according to claim 7, further comprising:
    a third display unit, configured to highlight, when the selection operation on the facial part to be operated on is detected, the facial part to be operated on in the character facial model.
  12. The apparatus according to any one of claims 7 to 11, wherein the editing unit comprises at least one of the following:
    a first editing module, configured to move the facial part to be operated on;
    a second editing module, configured to rotate the facial part to be operated on;
    a third editing module, configured to enlarge the facial part to be operated on; and
    a fourth editing module, configured to shrink the facial part to be operated on.
PCT/CN2017/076029 2016-03-10 2017-03-09 Method and apparatus for editing a character facial model WO2017152848A1

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2018545644A 2016-03-10 2017-03-09 Face model editing method and apparatus
KR1020187025694A 2016-03-10 2017-03-09 Facial model editing method and apparatus
US16/111,922 US10628984B2 (en) 2016-03-10 2018-08-24 Facial model editing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610136300.1A 2016-03-10 2016-03-10 Method and apparatus for editing a character facial model
CN201610136300.1 2016-03-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/111,922 Continuation US10628984B2 (en) 2016-03-10 2018-08-24 Facial model editing method and apparatus

Publications (1)

Publication Number Publication Date
WO2017152848A1

Family

ID=59788998

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/076029 WO2017152848A1 Method and apparatus for editing a character facial model

Country Status (5)

Country Link
US (1) US10628984B2 (zh)
JP (1) JP6661780B2 (zh)
KR (1) KR102089473B1 (zh)
CN (1) CN107180453B (zh)
WO (1) WO2017152848A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113101668A * 2018-04-27 2021-07-13 网易(杭州)网络有限公司 Virtual scene generation method and apparatus, storage medium, and electronic device
CN109285209B * 2018-09-14 2023-05-26 网易(杭州)网络有限公司 Method and apparatus, processor, and terminal for processing a facial model of a game character
CN110111417B * 2019-05-15 2021-04-27 浙江商汤科技开发有限公司 Method, apparatus, and device for generating a three-dimensional partial human body model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1545068A * 2003-11-11 2004-11-10 易连科技股份有限公司 Method for quickly building a planar model of a human face image
CN102834843A * 2010-04-02 2012-12-19 诺基亚公司 Method and apparatus for face detection
CN103392180A * 2011-02-24 2013-11-13 西门子产品生命周期管理软件公司 Global deformation for a modeled object
CN104103090A * 2013-04-03 2014-10-15 北京三星通信技术研究有限公司 Image processing method, personalized human body display method, and image processing system thereof

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5687259A (en) 1995-03-17 1997-11-11 Virtual Eyes, Incorporated Aesthetic imaging system
US6362829B1 (en) * 1996-03-07 2002-03-26 Agfa Corporation Method for modifying a digital image
KR20010056965A * 1999-12-17 2001-07-04 박희완 Method for generating a person character by partial image synthesis
US7593603B1 (en) * 2004-11-30 2009-09-22 Adobe Systems Incorporated Multi-behavior image correction tool
GB2451050B (en) * 2006-05-05 2011-08-31 Parham Aarabi Method, system and computer program product for automatic and semiautomatic modification of digital images of faces
CN101021943A * 2007-04-06 2007-08-22 北京中星微电子有限公司 Image adjustment method and system
US20090231356A1 (en) * 2008-03-17 2009-09-17 Photometria, Inc. Graphical user interface for selection of options from option groups and methods relating to same
JP2012004719A * 2010-06-15 2012-01-05 Nikon Corp Image processing apparatus, program, and electronic camera
CN103207745B * 2012-01-16 2016-04-13 上海那里信息科技有限公司 Virtual avatar interaction system and method
CN102999929A * 2012-11-08 2013-03-27 大连理工大学 Face-thinning method for person images based on triangular meshing
US9747716B1 (en) * 2013-03-15 2017-08-29 Lucasfilm Entertainment Company Ltd. Facial animation models
CN104380339B * 2013-04-08 2018-11-30 松下电器(美国)知识产权公司 Image processing apparatus, image processing method, and medium
JP6171635B2 * 2013-07-04 2017-08-02 ティアック株式会社 Editing processing apparatus and editing processing program
CN104156912B * 2014-08-18 2018-11-06 厦门美图之家科技有限公司 Image processing method for heightening a portrait
EP3186788A1 (en) * 2014-08-29 2017-07-05 Thomson Licensing Method and device for editing a facial image
CN105389835B * 2014-09-03 2019-07-12 腾讯科技(深圳)有限公司 Image processing method, apparatus, and terminal
CN104616330A * 2015-02-10 2015-05-13 广州视源电子科技股份有限公司 Picture generation method and apparatus
US10796480B2 (en) * 2015-08-14 2020-10-06 Metail Limited Methods of generating personalized 3D head models or 3D body models
US10417738B2 (en) * 2017-01-05 2019-09-17 Perfect Corp. System and method for displaying graphical effects based on determined facial positions

Also Published As

Publication number Publication date
KR20180108799A (ko) 2018-10-04
JP6661780B2 (ja) 2020-03-11
JP2019512141A (ja) 2019-05-09
US10628984B2 (en) 2020-04-21
CN107180453B (zh) 2019-08-16
CN107180453A (zh) 2017-09-19
KR102089473B1 (ko) 2020-03-16
US20180365878A1 (en) 2018-12-20

Legal Events

Date Code Title Description
ENP Entry into the national phase: Ref document number: 2018545644; Country of ref document: JP; Kind code of ref document: A
ENP Entry into the national phase: Ref document number: 20187025694; Country of ref document: KR; Kind code of ref document: A
WWE Wipo information: entry into national phase: Ref document number: 1020187025694; Country of ref document: KR
NENP Non-entry into the national phase: Ref country code: DE
121 Ep: the epo has been informed by wipo that ep was designated in this application: Ref document number: 17762534; Country of ref document: EP; Kind code of ref document: A1
122 Ep: pct application non-entry in european phase: Ref document number: 17762534; Country of ref document: EP; Kind code of ref document: A1