WO2017152848A1 - Method and apparatus for editing a facial model of a person - Google Patents
Method and apparatus for editing a facial model of a person
- Publication number
- WO2017152848A1 (application PCT/CN2017/076029)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- operated
- editing
- model
- person
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04812—Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/22—Cropping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2012—Colour editing, changing, or manipulating; Use of colour codes
Definitions
- the present application relates to the field of computers, and in particular to a method and apparatus for editing a facial model of a person.
- the embodiment of the present application provides a method and an apparatus for editing a facial model of a person, so as to at least solve the technical problem that existing methods for editing a person's facial model make the editing operation highly complex.
- a method for editing a facial model of a person, including: detecting the position of a cursor in a displayed facial model of a person, wherein the displayed facial model includes a plurality of face parts; determining, according to the position, the face part to be operated among the plurality of face parts; detecting a selection operation on the face part to be operated; and editing the face part to be operated in response to an acquired editing operation on it;
- the edited face part to be operated is displayed in the above-described facial model of the person.
- an apparatus for editing a facial model of a person including:
- a first detecting unit configured to detect a position of the cursor in the displayed facial model of the person, wherein the displayed facial model of the person includes a plurality of facial parts;
- a determining unit configured to determine, according to the position, the face part to be operated among the plurality of face parts;
- a second detecting unit configured to detect a selection operation on the face part to be operated;
- an editing unit configured to edit the face part to be operated in response to the acquired editing operation on the face part to be operated;
- a first display unit configured to display the edited face portion to be operated in the character face model.
- the part to be operated in the plurality of face parts of the face model of the person is determined according to the position of the cursor detected in the face model displayed by the terminal, and the selection of the part to be operated is detected.
- the face portion to be operated is edited in response to the acquired editing operation of the face portion to be operated to display the edited face portion to be operated in the face model of the person.
- that is, by detecting the position of the cursor, the selected face part among the plurality of face parts of the person's facial model is determined, so that editing can be performed directly on that face part, with no need to drag a slider corresponding to the part in a separate control list. The user can thus pick and edit face parts directly on the facial model, which simplifies the editing operation on the model.
- this overcomes the problem of high operational complexity caused by switching between the person's facial model and the sliders in a control list.
- furthermore, by editing the face part directly on the model, the user can intuitively see the editing operation displayed by the terminal and the effect of the change to the edited face part, realizing WYSIWYG, so that the editing operation can be closer to user needs.
- FIG. 1 is a schematic diagram of an application environment of a method for editing a face model of a person according to an embodiment of the present application
- FIG. 2 is a flowchart of a method for editing a character face model according to an embodiment of the present application
- FIG. 3 is a schematic diagram of application of an editing method of a character face model according to an embodiment of the present application
- FIG. 4 is a schematic diagram of application of another method for editing a character face model according to an embodiment of the present application.
- FIG. 5 is a schematic diagram of application of another method for editing a character face model according to an embodiment of the present application.
- FIG. 6 is a schematic diagram of an application of still another method for editing a character face model according to an embodiment of the present application.
- FIG. 7 is a schematic structural diagram of an editing apparatus for a face model of a person according to an embodiment of the present application.
- FIG. 8 is a schematic diagram of a hardware structure of an editing terminal of a character face model according to an embodiment of the present application.
- FIG. 9 is a schematic diagram of application of another method for editing a character face model according to an embodiment of the present application.
- FIG. 10 is a schematic diagram of application of another method for editing a character face model according to an embodiment of the present application.
- FIG. 11 is a schematic diagram of an application of a method for editing a character face model according to an embodiment of the present application.
- an embodiment of a method for editing a character face model is provided.
- the editing method of the character face model may be, but is not limited to, applied to an application environment as shown in FIG. 1 .
- after acquiring the facial model of the person from the server 106 through the network 104, the terminal 102 detects the position of the cursor in the facial model displayed by the terminal 102, wherein the displayed facial model includes a plurality of face parts; determines, according to the position, the face part to be operated among the plurality of face parts; and, after detecting the selection operation on the part to be operated, edits the face part in response to the acquired editing operation.
- the edited face part to be operated is then displayed in the facial model of the person.
- the foregoing terminal may include, but is not limited to, at least one of the following: a mobile phone, a tablet computer, a notebook computer, and a PC.
- a method for editing a face model of a person includes:
- the editing method of the above-described person's face model may be, but is not limited to, being applied to the persona creation process in the terminal application, and editing the corresponding person's face model for the persona character.
- in a game application, when a game character is created for a player, the face part to be operated can be determined by detecting the position of the cursor among the plurality of face parts of the displayed facial model; after the selection of that part is detected, the corresponding editing is performed in response to the acquired editing operation.
- FIG. 3 is a schematic diagram of an application of a method for editing a face model of a person according to an embodiment of the present application.
- the eyebrow part in the facial model is selected by the position of the cursor (as shown by the dotted line on the left side of FIG. 3); after the editing operation on the part (such as a rotation) is acquired, the eyebrow is edited and the edited eyebrow is displayed in the facial model (as shown by the dotted line on the right side of FIG. 3).
- in this way, the editing of the game character's facial model is simplified, and the edited character is displayed quickly and accurately for the player.
- the above example is merely an example, and the editing method for the face model of the person described in the embodiment is applicable to any type of terminal application.
- the face part to be operated is edited in response to the acquired editing operation, and the edited part is displayed in the facial model of the person. That is, by detecting the position of the cursor, the selected face part among the plurality of face parts is determined, so that editing can be completed directly on that part, without dragging the slider corresponding to the part in a separate control list.
- the user can therefore pick and edit face parts directly on the facial model, which simplifies the editing operation and overcomes the high operational complexity caused by switching between the facial model and the sliders in a control list.
- the user can intuitively see the editing operation displayed by the terminal and the effect of the change to the edited face part, realizing WYSIWYG, so that the editing operation can be closer to user needs.
- the location of the cursor may be, but is not limited to, the corresponding position at which the mouse is displayed on the screen.
- when the selection operation on the face part to be operated is detected, the part may be, but is not limited to being, displayed specially, for example highlighted, or shown with a shadow or the like.
- the editing may include at least one of: moving the face portion to be operated; rotating the face portion to be operated; zooming in on the face portion to be operated; and reducing the face portion to be operated.
- determining the face part to be operated among the plurality of face parts according to the location of the cursor includes: acquiring the color value of the pixel at the position;
- determining, among the plurality of face parts, the face part to be operated that corresponds to the color value.
- acquiring the color value of the pixel at the position may include, but is not limited to, acquiring the color value of the pixel corresponding to the position in the mask texture, wherein the mask texture is fitted over the character facial model.
- the mask map includes a plurality of mask regions corresponding to the plurality of face parts, each mask region corresponding to one face part; the color value of the pixel may include one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
- each mask region on the mask map fitted over the facial model corresponds to a face part on the model. That is, by using the cursor to select a mask region on the mask map, the corresponding face part in the facial model can be selected, so that the part can be edited directly on the model, simplifying the editing operation.
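As an illustrative sketch of this mask-region picking idea (function and variable names are hypothetical, and the patent implements this on the GPU rather than on the CPU):

```python
# Hypothetical sketch: pick a face part by sampling the mask texture that is
# fitted over the character face model. Each mask region is painted with a
# distinct red color value, so the red channel alone identifies the region.

def pick_face_part(mask_pixels, width, cursor_x, cursor_y, part_of_red):
    """mask_pixels: flat row-major list of (r, g, b) tuples;
    part_of_red: maps a region's red color value to a face-part name."""
    r, g, b = mask_pixels[cursor_y * width + cursor_x]
    return part_of_red.get(r)

# A 2x1 mask with two regions: red value 200 -> eyebrow, 180 -> eyelid.
mask = [(200, 0, 0), (180, 0, 0)]
parts = {200: "eyebrow", 180: "eyelid"}
assert pick_face_part(mask, 2, 0, 0, parts) == "eyebrow"
```

The lookup touches only the pixel under the cursor, which is why a mask texture makes face picking cheap regardless of model complexity.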
- the mask map can be constructed by, but is not limited to, code such as the following:
- float4 finalColor = float4(0, 2, 5, alpha * 2.5f * maskColor.b);
- maskColor.r is used to indicate the red channel;
- maskColor.b is used to indicate the blue channel.
- the mapping relationship between the mask regions in the mask map and the color values of the corresponding pixels may be, but is not limited to being, set in advance.
- the nose specifically includes six parts, and each part is respectively provided with a red color value (indicated by the R color value).
- the values may differ from one another by, but are not limited to, a minimum of 10 units.
- determining the face part to be operated that corresponds to the color value may include, but is not limited to: obtaining the color value of the pixel corresponding to the position in the mask texture, obtaining the mask region corresponding to that color value, and then obtaining the face part to be operated that corresponds to the mask region.
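Because the registered red values differ by at least 10 units, a sampled value can tolerate small deviations by snapping to the nearest registered value. A sketch with illustrative values (the patent does not disclose the actual table):

```python
# Hypothetical sketch: red color values registered for the six nose parts,
# spaced at least 10 units apart so a slightly perturbed sampled value can
# still be resolved unambiguously. The values and names are illustrative.

NOSE_PARTS = {
    10: "nose bridge",
    20: "nose tip",
    30: "left nostril",
    40: "right nostril",
    50: "left nose wing",
    60: "right nose wing",
}

def part_for_red_value(sampled_r):
    # Snap the sampled red value to the nearest registered value.
    nearest = min(NOSE_PARTS, key=lambda v: abs(v - sampled_r))
    return NOSE_PARTS[nearest]

assert part_for_red_value(12) == "nose bridge"  # small sampling error tolerated
```

The 10-unit spacing is what makes this nearest-value snap safe: any perturbation smaller than half the spacing still resolves to the intended part.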
- the position of the cursor in the displayed facial model of the person may be detected by, but not limited to, a pixel picking technique.
- pixel picking is a picking technique that operates on model objects: by detecting the cursor, it determines and interacts with the virtual object selected or clicked on the display screen.
- the virtual object may be, but not limited to, a face part in a corresponding facial model of a person.
- an identification ID is set for each virtual object (i.e., each face part); all pickable virtual objects are then drawn to a render target, with the ID passed in through a constant register.
- GetRenderTarget() (a function in Direct3D) is used to get the ID of the part where the current cursor is located.
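A minimal CPU-side sketch of this ID-based picking scheme (a real implementation renders on the GPU and reads the render target back, e.g. via GetRenderTarget in Direct3D; all names here are illustrative):

```python
# Hypothetical sketch: draw each pickable face part into an off-screen ID
# buffer with its own identification ID, then pick by reading the ID at the
# cursor pixel. Parts are simplified to axis-aligned rectangles.

def rasterize_ids(parts, width, height):
    """parts: list of (part_id, (x0, y0, x1, y1)) rectangles."""
    id_buffer = [[0] * width for _ in range(height)]  # 0 = background
    for part_id, (x0, y0, x1, y1) in parts:
        for y in range(y0, y1):
            for x in range(x0, x1):
                id_buffer[y][x] = part_id
    return id_buffer

def pick_id(id_buffer, cursor_x, cursor_y):
    return id_buffer[cursor_y][cursor_x]

ids = rasterize_ids([(1, (0, 0, 4, 4)), (2, (4, 0, 8, 4))], 8, 4)
assert pick_id(ids, 5, 2) == 2  # cursor over the second part
```

The design choice mirrors the patent's description: picking cost is a single buffer read, independent of how many face parts are pickable.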
- before detecting the position of the cursor in the displayed facial model of the person, the method may further include: displaying the facial model of the person and the generated mask map, wherein the mask map is set to fit over the facial model.
- the process of obtaining a facial model of a person can be as follows:
- the face picking system is in essence a picking method that is sensitive to the color of the texture; the core HLSL code is as follows:
- float depth = EncodeFloatToRGBA8(param.homoDepth.x / param.homoDepth.y);
- Result.color1 = float4(1.0f, 1.0f, 0, depth); // magic code marking for the facial mask map
- mDepthSelted = maskCode.a;
- the pixel selected at the user's cursor position is calculated from the position of the cursor on the screen and the current resolution of the screen.
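The calculation can be sketched as follows, assuming the cursor position is reported in normalized 0..1 screen coordinates (the patent does not specify the coordinate convention):

```python
# Sketch: map a normalized cursor position to the pixel addressed at the
# current screen resolution, clamping to the last valid pixel at the edges.

def cursor_to_pixel(norm_x, norm_y, res_w, res_h):
    px = min(int(norm_x * res_w), res_w - 1)
    py = min(int(norm_y * res_h), res_h - 1)
    return px, py

assert cursor_to_pixel(0.5, 0.5, 1920, 1080) == (960, 540)
```

The clamp keeps a cursor at the extreme right or bottom edge inside the buffer rather than one pixel past it.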
- the face part to be operated among the plurality of face parts is determined according to the position of the cursor detected in the facial model displayed by the terminal, and the selection operation on that part is detected.
- the face part to be operated is edited in response to the acquired editing operation, so that the edited part is displayed in the facial model. That is, by detecting the position of the cursor, the selected face part is determined, so that editing can be completed directly on that part without dragging a slider in a separate control list.
- the user can thus pick and edit face parts directly on the facial model, simplifying the editing operation and overcoming the high complexity caused by switching between the facial model and the sliders in a control list.
- determining the face part to be operated among the plurality of face parts according to the location includes: acquiring the color value of the pixel at the location;
- determining, among the plurality of face parts, the face part to be operated that corresponds to the color value.
- the color value of the pixel point may be, but is not limited to, a color value of a pixel point corresponding to the position of the cursor in the mask texture.
- the mask map is attached to the face model of the person, and the mask map includes a plurality of mask regions corresponding to the plurality of face portions, and each mask region corresponds to one face portion.
- the color values of the pixel points corresponding to the cursor position in the mask map may include, but are not limited to, a color value of the red channel and a color value of the blue channel.
- the color value of the pixel may include, but is not limited to, one of the following: a red color value of the pixel point, a green color value of the pixel point, and a blue color value of the pixel point.
- the character face formed by the color values of the red channel can be as shown in FIG. 4 (the image should be displayed in red); different shades indicate different degrees of red: the color of the left-diagonal-filled area is brighter than that of the wave-point-filled area, and the color of the wave-point-filled area is brighter than that of the horizontal-diagonal-filled area.
- the face of the person composed of the color values of the blue channel can be as shown in Figure 5 (the image should be displayed in blue).
- the mapping relationship between the mask regions in the mask map and the color values of the corresponding pixels may be, but is not limited to being, set in advance.
- the mask value corresponding to the facial model is determined from the color value of the pixel at the cursor position, thereby determining the face part to be operated among the plurality of face parts.
- the corresponding mask region can be determined by searching the preset mapping relationship, and the corresponding face part is then obtained.
- for example, the face part to be operated is determined to be the "nose bridge".
- the face part to be operated that corresponds to the color value among the plurality of face parts is determined from the acquired color value of the pixel at the cursor position. That is, the face part to be operated is determined by the color value of the pixel at the cursor position, so that the part can be edited directly in the facial model, simplifying the editing operation.
- obtaining the color value of the pixel at the location includes:
- obtaining the color value of the pixel corresponding to the location in the mask texture, wherein the mask texture is fitted over the character facial model, and the mask map includes a plurality of mask regions corresponding to the plurality of face parts, each mask region corresponding to one face part;
- the color value of the pixel includes one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
- a muscle part control list is obtained, and a red color value (indicated by the R color value) is set for each part; to avoid errors, the values differ by at least 10 units.
- the mask values corresponding to the face model of the person can be obtained by using the color values of the pixels corresponding to the parts, as shown in Table 2 (partial parts):
- a mask map corresponding to the facial model can then be drawn and fitted over the facial model; the mask map includes multiple mask regions.
- the color value of the corresponding pixel is obtained through the mask texture fitted over the facial model, so that the color value of the pixel at the cursor position is obtained accurately and the corresponding face part to be operated is acquired based on that value.
- before detecting the position of the cursor in the displayed facial model of the person, the method further includes:
- a face model and a generated mask map are displayed, wherein the mask map is set to fit over the person's face model.
- the character face model and the generated mask map, as shown in FIG. 6, are displayed on the terminal screen, wherein the mask map is set to fit on top of the character's face model.
- the image combining the generated facial model and the generated mask texture is displayed in advance, so that when the position of the cursor is detected, the corresponding position can be obtained directly through the mask map, and the face part to be operated among the plurality of face parts of the facial model can be obtained accurately, improving editing efficiency.
- when the selection operation on the face part to be operated is detected, the method further includes:
- the face to be manipulated is highlighted in the face model of the person.
- when the selection operation on the face part to be operated is detected, the part may be, but is not limited to being, displayed specially, for example highlighted, or shown with a shadow or the like.
- the user can intuitively see the editing operation on the face part in the facial model and the effect of the change after editing, realizing WYSIWYG, so that editing operations can be closer to user needs.
- the editing of the face portion to be operated in response to the acquired editing operation of the face portion to be operated includes at least one of the following:
- the operation mode for implementing the above editing may be, but is not limited to, at least one of the following: clicking and dragging. That is to say, at least one of the following edits can be made to the face portion to be operated by a combination of different operation modes: moving, rotating, enlarging, and reducing.
- the editing process of the content shown from the left side of FIG. 3 to the right side of FIG. 3 can be realized by clicking the selected part to be operated and performing editing such as rotation, reduction, and movement.
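The four editing operations the embodiment lists (moving, rotating, enlarging, reducing) amount to simple transforms applied about the selected part. A 2D sketch with illustrative control points (the patent operates on the actual model geometry, and these function names are hypothetical):

```python
# Hypothetical sketch: move / rotate / scale a selected face part, modeled
# here as a list of 2D control points transformed about a pivot (cx, cy).
import math

def move(points, dx, dy):
    return [(x + dx, y + dy) for x, y in points]

def scale(points, factor, cx, cy):
    # factor > 1 enlarges the part, factor < 1 reduces it
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor) for x, y in points]

def rotate(points, angle, cx, cy):
    c, s = math.cos(angle), math.sin(angle)
    return [(cx + (x - cx) * c - (y - cy) * s,
             cy + (x - cx) * s + (y - cy) * c) for x, y in points]

eyebrow = [(0.0, 0.0), (2.0, 0.0)]
assert move(eyebrow, 1.0, 0.5) == [(1.0, 0.5), (3.0, 0.5)]
assert scale(eyebrow, 2.0, 1.0, 0.0) == [(-1.0, 0.0), (3.0, 0.0)]
```

Transforming about the part's own pivot is what keeps, for example, an enlarged eyebrow centered where it was rather than drifting across the face.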
- the method according to the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, or by hardware; in many cases, the former is the better implementation.
- the technical solution of the present application, or the part of it that contributes over the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc), including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present application.
- an editing apparatus for a human face model is provided for implementing the above-described editing method of the character facial model.
- the apparatus includes:
- a first detecting unit 702 configured to detect a position of the cursor in the displayed human face model, wherein the displayed human face model includes a plurality of facial parts;
- a determining unit 704 configured to determine, according to the position, the face part to be operated among the plurality of face parts;
- a second detecting unit 706 configured to detect the selection operation on the face part to be operated;
- an editing unit 708 configured to edit the face part to be operated in response to the acquired editing operation on the face part to be operated;
- the first display unit 710 is configured to display the edited face part to be operated in the character face model.
- the editing device of the human face model may be, but is not limited to, applied to the character creation process in the terminal application, and the corresponding human face model is edited for the character character.
- the face part to be operated may be determined by detecting the position of the cursor among the plurality of face parts of the displayed facial model; after the selection operation on that part is detected, the corresponding editing is performed in response to the acquired editing operation.
- the eyebrow part in the facial model is selected by the position of the cursor (as shown by the dotted line on the left side of FIG. 3); after the editing operation on the part (such as a rotation) is acquired, the eyebrow is edited and the edited eyebrow is displayed in the facial model (as shown by the dotted line on the right side of FIG. 3).
- in this way, the editing of the game character's facial model is simplified, and the edited character is displayed quickly and accurately for the player.
- the face part to be operated is edited in response to the acquired editing operation, and the edited part is displayed in the facial model. That is, by detecting the position of the cursor, the selected face part among the plurality of face parts is determined, so that editing can be completed directly on that part without dragging the slider corresponding to the part in a separate control list.
- the user can thus pick and edit face parts directly on the facial model, simplifying the editing operation and overcoming the problem of high complexity caused by switching between the facial model and the sliders in a control list. Further, by editing the face part directly on the model, the user can intuitively see the editing operation displayed by the terminal and the effect of the change after editing, realizing WYSIWYG, bringing the editing operation closer to user needs and improving the user experience.
- the location of the cursor may be, but is not limited to, the corresponding position at which the mouse is displayed on the screen.
- when the selection operation on the face part to be operated is detected, the part may be, but is not limited to being, displayed specially, for example highlighted, or shown with a shadow or the like.
- the editing may include at least one of: moving the face portion to be operated; rotating the face portion to be operated; zooming in on the face portion to be operated; and reducing the face portion to be operated.
- determining the face part to be operated among the plurality of face parts according to the location of the cursor includes: acquiring the color value of the pixel at the position;
- determining, among the plurality of face parts, the face part to be operated that corresponds to the color value.
- acquiring the color value of the pixel at the position may include, but is not limited to, acquiring the color value of the pixel corresponding to the position in the mask texture, wherein the mask texture is fitted over the character facial model.
- the mask map includes a plurality of mask regions corresponding to the plurality of face parts, each mask region corresponding to one face part; the color value of the pixel may include one of the following: the red color value of the pixel, the green color value of the pixel, and the blue color value of the pixel.
- each mask region on the mask map fitted over the character face model corresponds to one face part on the character face model. That is, by using the cursor to select a mask region on the mask map, the corresponding face part in the character face model is selected, which enables direct editing of face parts on the model and thereby simplifies the editing operation.
- the mapping relationship between the mask regions in the mask map and the color values of their corresponding pixels may be, but is not limited to being, set in advance.
- for example, the nose specifically includes six parts, and each part is assigned a red color value (denoted the R color value). To avoid errors, the values may differ by, but are not limited to, a minimum of 10 units.
- determining the face part to be operated that corresponds to the color value may include, but is not limited to: obtaining the RGB color value of the pixel corresponding to the position in the mask texture, obtaining the mask region corresponding to that color value, and then obtaining the face part to be operated that corresponds to the mask region.
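The steps above can be sketched as follows. This is a minimal illustration only: the part names and R values are illustrative assumptions, not values taken from the patent, and the mask texture is simplified to a 2D array of RGB tuples.

```python
# Hypothetical lookup table: R channel value of a mask region -> face part.
# Values are spaced at least 10 units apart, as the description suggests,
# so each region's value is unambiguous.
NOSE_PART_BY_R = {
    10: "nose bridge",
    20: "nose tip",
    30: "left nostril",
    40: "right nostril",
    50: "left nose wing",
    60: "right nose wing",
}

def pick_face_part(mask_texture, x, y):
    """Return the face part under cursor position (x, y), or None.

    mask_texture is a 2D array of (R, G, B) tuples, one per pixel,
    fitted over the character face model.
    """
    r, g, b = mask_texture[y][x]
    # Look up which mask region (and hence face part) this R value marks.
    return NOSE_PART_BY_R.get(r)

# A one-pixel "texture" whose single pixel belongs to the nose-bridge region.
texture = [[(10, 0, 0)]]
print(pick_face_part(texture, 0, 0))  # nose bridge
```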
- the position of the cursor in the displayed character face model may be detected using, but not limited to, a pixel picking technique.
- pixel picking is a picking technique for model objects: it detects which virtual object on the display screen the cursor has selected or clicked, and interacts with that object.
- the virtual object may be, but is not limited to, a face part in the character face model.
- an identification (ID) is set for each virtual object (i.e., each face part), all pickable virtual objects are then drawn onto one render target, and the ID is passed in through a constant register. Finally, GetRenderTarget() (a function in Direct3D) is used to obtain the ID of the part at the current cursor position.
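This ID-based picking can be sketched as below, with the assumption that a software-filled 2D ID buffer stands in for the Direct3D render target and GetRenderTarget() read-back, and that parts are approximated by axis-aligned rectangles rather than real mesh geometry.

```python
def draw_parts(width, height, parts):
    """Draw pickable parts into an offscreen ID buffer.

    parts: list of (part_id, (x0, y0, x1, y1)) rectangles, back to front.
    0 is reserved for the background (no part).
    """
    id_buffer = [[0] * width for _ in range(height)]
    for part_id, (x0, y0, x1, y1) in parts:
        for y in range(y0, y1):
            for x in range(x0, x1):
                id_buffer[y][x] = part_id
    return id_buffer

def pick(id_buffer, cursor_x, cursor_y):
    """Read back the ID of the part under the cursor (0 means background)."""
    return id_buffer[cursor_y][cursor_x]

buf = draw_parts(8, 8, [(1, (0, 0, 4, 4)),   # e.g. an eye part
                        (2, (4, 4, 8, 8))])  # e.g. a mouth part
print(pick(buf, 1, 1), pick(buf, 5, 5), pick(buf, 1, 6))  # 1 2 0
```

Drawing every pickable object once into an ID target makes picking a single read-back, which is why this approach is cheap compared with ray-casting against each part's mesh.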
- before the position of the cursor in the displayed character face model is detected, the method further includes: displaying the character face model and the generated mask map, where the mask map is set to fit over the character face model.
- the process of obtaining a facial model of a person can be as follows:
- the face picking system is essentially a picking method that is sensitive to texture color; the core HLSL code is implemented as follows:
- float depth = EncodeFloatToRGBA8(param.homoDepth.x / param.homoDepth.y);
- Result.color1 = float4(1.0f, 1.0f, 0, depth); // magic code marking for the facial mask map
- a verification code is set for each pixel on the screen, and the verification code is used to check whether the pixel belongs to the character face model. Only pixels whose verification code matches (i.e., equals (1.0, 1.0, 0)) are processed as belonging to the character face model: mDepthSelted = maskCode.a;
- the pixel selected by the user's cursor is calculated from the position of the cursor on the screen and the current screen resolution.
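A small sketch of that calculation, under the assumption (not stated in the patent) that the cursor position is reported in normalized window coordinates in the range 0..1; clamping keeps the index valid at the right and bottom edges.

```python
def cursor_to_pixel(norm_x, norm_y, width, height):
    """Map a normalized cursor position to the selected screen pixel."""
    px = min(int(norm_x * width), width - 1)
    py = min(int(norm_y * height), height - 1)
    return px, py

print(cursor_to_pixel(0.5, 0.25, 1920, 1080))  # (960, 270)
print(cursor_to_pixel(1.0, 1.0, 1920, 1080))   # (1919, 1079)
```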
- the face part to be operated among the plurality of face parts of the character face model is determined according to the position of the cursor detected in the character face model displayed by the terminal, and a selection operation on that face part is detected.
- the face part to be operated is edited in response to the acquired editing operation, and the edited face part is displayed in the character face model. That is, by detecting the position of the cursor, the selected face part to be operated among the plurality of face parts of the character face model is determined, so that editing can be completed directly on the face part without dragging a slider in an additional control list.
- this enables the user to directly perform face picking and editing on the character face model, thereby simplifying the editing operation on the character face model and overcoming the high operational complexity caused by switching between the character face model and the sliders in a control list.
- the determining unit includes:
- an acquiring module configured to acquire the color value of the pixel at the position; and
- a determining module configured to determine, among the plurality of face parts, the face part to be operated that corresponds to the color value.
- the color value of the pixel point may be, but is not limited to, a color value of a pixel point corresponding to the position of the cursor in the mask texture.
- the mask map is attached to the face model of the person, and the mask map includes a plurality of mask regions corresponding to the plurality of face portions, and each mask region corresponds to one face portion.
- the color values of the pixel points corresponding to the cursor position in the mask map may include, but are not limited to, a color value of the red channel and a color value of the blue channel.
- the color value of the pixel may include, but is not limited to, one of the following: a red color value of the pixel point, a green color value of the pixel point, and a blue color value of the pixel point.
- the character face formed by the red-channel color values can be as shown in FIG. 4 (the image would be displayed in red); the different hatchings indicate different intensities of red: the left-diagonal-hatched areas are brighter than the wave-dot-filled areas, and the wave-dot-filled areas are brighter than the horizontal-diagonal-hatched areas.
- the face of the person composed of the color values of the blue channel can be as shown in Figure 5 (the image should be displayed in blue).
- the mapping relationship between the mask regions in the mask map and the color values of their corresponding pixels may be, but is not limited to being, set in advance.
- the mask region corresponding to the color value of the pixel at the cursor position is determined by looking up a preset mapping relationship, and the face part to be operated among the plurality of face parts is thereby determined; for example, the face part to be operated is the "nose bridge".
- that is, the face part to be operated that corresponds to the color value is determined from the acquired color value of the pixel at the cursor position, so that the face part can be edited directly in the character face model, thereby simplifying the editing operation.
- the obtaining module includes:
- an acquiring sub-module configured to acquire the color value of the pixel corresponding to the position in the mask texture, where the mask texture is fitted over the character face model and includes a plurality of mask regions in one-to-one correspondence with the plurality of face parts, each mask region corresponding to one face part;
- the color value of the pixel includes one of the following: a red color value of the pixel point, a green color value of the pixel point, and a blue color value of the pixel point.
- a muscle-part control list is obtained, and a red color value (denoted the R color value) is set for each part; to avoid errors, the values differ by at least 10 units.
- the mask values corresponding to the face model of the person can be obtained by using the color values of the pixels corresponding to the parts, as shown in Table 4 (partial parts):
- a mask map corresponding to the character face model can be drawn and fitted over the character face model; the mask map includes multiple mask regions.
- the color value of the corresponding pixel is obtained from the mask texture fitted over the character face model, so that the color value of the pixel at the cursor position is obtained accurately and the corresponding face part to be operated is acquired based on that color value.
- the device further includes:
- a second display unit configured to display the character face model and the generated mask map before the position of the cursor in the displayed character face model is detected, where the mask map is set to fit over the character face model.
- the character face model and the generated mask map, as shown in FIG. 6, are displayed on the terminal screen, with the mask map fitted over the character face model.
- the combined image of the character face model and the generated mask map is displayed in advance, which facilitates detecting the position of the cursor; the corresponding position is obtained directly and quickly through the mask map, so that the face part to be operated among the plurality of face parts of the character face model is acquired accurately, improving editing efficiency.
- the device further includes:
- a third display unit configured to highlight the face part to be operated in the character face model when a selection operation on that face part is detected.
- when a selection operation on the face part to be operated is detected, the face part may be, but is not limited to being, displayed in a special manner; for example, the face part is highlighted, or a shadow is displayed on it.
- the user can intuitively see the editing operation on the face part in the character face model and the resulting change to the edited face part, realizing WYSIWYG, so that editing operations match user needs more closely and the user experience is improved.
- the editing unit includes at least one of the following:
- a first editing module configured to move the face part to be operated;
- a second editing module configured to rotate the face part to be operated;
- a third editing module configured to enlarge the face part to be operated;
- a fourth editing module configured to reduce the face part to be operated.
- the operation mode for implementing the above editing may be, but is not limited to, at least one of clicking and dragging. That is, by combining different operation modes, at least one of the following edits can be applied to the face part to be operated: moving, rotating, enlarging, and reducing.
- the editing process from the content shown on the left side of FIG. 3 to that shown on the right side of FIG. 3 can be realized by clicking to select the face part to be operated and performing edits such as rotation, reduction, and movement.
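The four edits above can be sketched as transforms on a face part, assuming for illustration that a part is represented by 2D control points transformed about their own centroid; this is a simplification of the actual mesh editing, and the point data is invented.

```python
import math

def _centroid(points):
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return cx, cy

def move(points, dx, dy):
    """Translate the face part."""
    return [(x + dx, y + dy) for x, y in points]

def scale(points, factor):
    """Enlarge (factor > 1) or reduce (factor < 1) about the part's centroid."""
    cx, cy = _centroid(points)
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor) for x, y in points]

def rotate(points, radians):
    """Rotate the face part about its centroid."""
    cx, cy = _centroid(points)
    c, s = math.cos(radians), math.sin(radians)
    return [(cx + (x - cx) * c - (y - cy) * s,
             cy + (x - cx) * s + (y - cy) * c) for x, y in points]

part = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
print(move(part, 1, 0)[0])   # (1.0, 0.0)
print(scale(part, 2.0)[0])   # (-1.0, -1.0)
```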
- an editing terminal for a character face model, configured to implement the editing method of the character face model described above, is further provided.
- the terminal includes:
- a communication interface 802 configured to acquire a facial model of a person, wherein the facial model of the person includes a plurality of facial portions;
- the memory 804 is connected to the communication interface 802 and configured to store the character face model;
- the processor 806 is connected to the communication interface 802 and the memory 804 and is configured to: detect the position of the cursor in the displayed character face model; determine, according to the position, the face part to be operated among the plurality of face parts; detect a selection operation on the face part to be operated; edit the face part to be operated in response to an acquired editing operation on it; and display the edited face part to be operated in the character face model.
- the memory 804 may be a non-transitory computer readable storage medium for storing machine readable instructions, including a first detection instruction, a determination instruction, a second detection instruction, an editing instruction, and a first display instruction.
- the machine readable instructions further include a second display instruction and a third display instruction.
- the processor 806 is configured to read machine readable instructions stored in the memory 804 to implement the steps of the method of editing the face model of the person in the above embodiment and the functions of the units in the editing device of the face model.
- the embodiment of the present application provides an application scenario for implementing the editing method of the above-described human face model.
- the application environment of the embodiment is the same as the embodiment of the editing method and apparatus of the human face model.
- the editing method of the character face model described above may be applied to a makeup process for a game character's face or to a face-customization process for a game character's face.
- the face of the game character may be edited by using the editing method of the face model provided in the above embodiment, thereby achieving the purpose of improving the facial fineness of the game character displayed by the client.
- a makeup process applied to a face of a game character is taken as an example.
- the effective face region of the game character in UV space is cropped, and the above editing method is applied to the cropped face region to create different character face models, for example designing eye makeup of various styles for the cropped eye part.
- different makeup replacements can be selected for each partial region on the face base map (shown on the right side of FIG. 9); for example, for the eye part (the eyebrows in FIG. 9), a makeup style is chosen from the provided replacement cuts (shown in the middle of FIG. 9). Assuming the makeup shown in the solid box is selected, the final face image of the game character is obtained, as shown on the left side of FIG. 9.
- the makeup of the game character can be more diversified, and the effect of enriching the character image of the game character can be achieved.
- DiffuseMap, SpecularMap, and NormalMap are cut into sizes that are powers of two (16*16, 32*32, 32*64, 64*128), and the coordinate positions are recorded;
- the registry configuration can be:
- taking the cropped object as an eye part (such as an eyebrow), the corresponding content in DiffuseMap, SpecularMap, and NormalMap is cropped: the original size is 1024*1024, and the content is cropped to a size of 512*128.
- the game character's face may be constructed by, but not limited to, directly clicking to select the desired cut images. As shown in FIG. 11, cut images such as "eye makeup", "lip makeup", and "skin" are selected and replaced to obtain the desired face image.
- the prior art generally uses a tedious and complicated slider method for face editing.
- the editing method for character faces provided in this embodiment can not only meet the needs of different users in a game application, but also ensure that users can perform diversified face editing while the application runs smoothly, which greatly reduces system consumption and enriches the character image.
- Embodiments of the present application also provide a storage medium, which is a non-volatile storage medium.
- the storage medium is arranged to store program code for performing the following steps:
- a selection operation on the face part to be operated is detected;
- the edited face part to be operated is displayed in the character face model.
- the foregoing storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium that can store program code.
- the specific example in this embodiment can refer to the embodiment of the editing method and apparatus of the above-described human face model.
- the integrated unit in the above embodiment if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in the above-described computer readable storage medium.
- the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
- a number of instructions are included to cause one or more computer devices (which may be a personal computer, server or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present application.
- the disclosed client may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division of units is only a logical function division; in actual implementation, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
Abstract
Description
Claims (12)
- A method for editing a character face model, comprising: detecting a position of a cursor in a displayed character face model, wherein the displayed character face model comprises a plurality of face parts; determining, according to the position, a face part to be operated among the plurality of face parts; detecting a selection operation on the face part to be operated; editing the face part to be operated in response to an acquired editing operation on the face part to be operated; and displaying the edited face part to be operated in the character face model.
- The method according to claim 1, wherein determining the face part to be operated among the plurality of face parts according to the position comprises: acquiring a color value of a pixel at the position; and determining, among the plurality of face parts, the face part to be operated that corresponds to the color value.
- The method according to claim 2, wherein acquiring the color value of the pixel at the position comprises: acquiring a color value of a pixel corresponding to the position in a mask map, wherein the mask map is fitted over the character face model and comprises a plurality of mask regions in one-to-one correspondence with the plurality of face parts, each mask region corresponding to one face part; and wherein the color value of the pixel comprises one of: a red color value of the pixel, a green color value of the pixel, and a blue color value of the pixel.
- The method according to claim 3, further comprising, before detecting the position of the cursor in the displayed character face model: displaying the character face model and the generated mask map, wherein the mask map is set to be fitted over the character face model.
- The method according to claim 1, further comprising, when the selection operation on the face part to be operated is detected: highlighting the face part to be operated in the character face model.
- The method according to any one of claims 1 to 5, wherein editing the face part to be operated in response to the acquired editing operation comprises at least one of: moving the face part to be operated; rotating the face part to be operated; enlarging the face part to be operated; and reducing the face part to be operated.
- An apparatus for editing a character face model, comprising: a first detection unit, configured to detect a position of a cursor in a displayed character face model, wherein the displayed character face model comprises a plurality of face parts; a determining unit, configured to determine, according to the position, a face part to be operated among the plurality of face parts; a second detection unit, configured to detect a selection operation on the face part to be operated; an editing unit, configured to edit the face part to be operated in response to an acquired editing operation on the face part to be operated; and a first display unit, configured to display the edited face part to be operated in the character face model.
- The apparatus according to claim 7, wherein the determining unit comprises: an acquiring module, configured to acquire a color value of a pixel at the position; and a determining module, configured to determine, among the plurality of face parts, the face part to be operated that corresponds to the color value.
- The apparatus according to claim 8, wherein the acquiring module comprises: an acquiring sub-module, configured to acquire a color value of a pixel corresponding to the position in a mask map, wherein the mask map is fitted over the character face model and comprises a plurality of mask regions in one-to-one correspondence with the plurality of face parts, each mask region corresponding to one face part; and wherein the color value of the pixel comprises one of: a red color value of the pixel, a green color value of the pixel, and a blue color value of the pixel.
- The apparatus according to claim 9, further comprising: a second display unit, configured to display the character face model and the generated mask map before the position of the cursor in the displayed character face model is detected, wherein the mask map is set to be fitted over the character face model.
- The apparatus according to claim 7, further comprising: a third display unit, configured to highlight the face part to be operated in the character face model when the selection operation on the face part to be operated is detected.
- The apparatus according to any one of claims 7 to 11, wherein the editing unit comprises at least one of: a first editing module, configured to move the face part to be operated; a second editing module, configured to rotate the face part to be operated; a third editing module, configured to enlarge the face part to be operated; and a fourth editing module, configured to reduce the face part to be operated.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018545644A JP6661780B2 (ja) | 2016-03-10 | 2017-03-09 | 顔モデル編集方法及び装置 |
KR1020187025694A KR102089473B1 (ko) | 2016-03-10 | 2017-03-09 | 안면 모델 편집 방법 및 장치 |
US16/111,922 US10628984B2 (en) | 2016-03-10 | 2018-08-24 | Facial model editing method and apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610136300.1A CN107180453B (zh) | 2016-03-10 | 2016-03-10 | 人物面部模型的编辑方法及装置 |
CN201610136300.1 | 2016-03-10 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/111,922 Continuation US10628984B2 (en) | 2016-03-10 | 2018-08-24 | Facial model editing method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017152848A1 true WO2017152848A1 (zh) | 2017-09-14 |
Family
ID=59788998
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/076029 WO2017152848A1 (zh) | 2016-03-10 | 2017-03-09 | 人物面部模型的编辑方法及装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US10628984B2 (zh) |
JP (1) | JP6661780B2 (zh) |
KR (1) | KR102089473B1 (zh) |
CN (1) | CN107180453B (zh) |
WO (1) | WO2017152848A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113101668A (zh) * | 2018-04-27 | 2021-07-13 | 网易(杭州)网络有限公司 | 虚拟场景生成方法、装置、存储介质及电子设备 |
CN109285209B (zh) * | 2018-09-14 | 2023-05-26 | 网易(杭州)网络有限公司 | 游戏角色的面部模型的处理方法、装置、处理器及终端 |
CN110111417B (zh) * | 2019-05-15 | 2021-04-27 | 浙江商汤科技开发有限公司 | 三维局部人体模型的生成方法、装置及设备 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1545068A (zh) * | 2003-11-11 | 2004-11-10 | 易连科技股份有限公司 | 快速建立人脸影像平面模型的方法 |
CN102834843A (zh) * | 2010-04-02 | 2012-12-19 | 诺基亚公司 | 用于面部检测的方法和装置 |
CN103392180A (zh) * | 2011-02-24 | 2013-11-13 | 西门子产品生命周期管理软件公司 | 针对建模对象的整体变形 |
CN104103090A (zh) * | 2013-04-03 | 2014-10-15 | 北京三星通信技术研究有限公司 | 图像处理方法、个性化人体显示方法及其图像处理系统 |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5687259A (en) | 1995-03-17 | 1997-11-11 | Virtual Eyes, Incorporated | Aesthetic imaging system |
US6362829B1 (en) * | 1996-03-07 | 2002-03-26 | Agfa Corporation | Method for modifying a digital image |
KR20010056965A (ko) * | 1999-12-17 | 2001-07-04 | 박희완 | 부분 이미지 합성에 의한 인물 캐릭터 생성 방법 |
US7593603B1 (en) * | 2004-11-30 | 2009-09-22 | Adobe Systems Incorporated | Multi-behavior image correction tool |
GB2451050B (en) * | 2006-05-05 | 2011-08-31 | Parham Aarabi | Method, system and computer program product for automatic and semiautomatic modification of digital images of faces |
CN101021943A (zh) * | 2007-04-06 | 2007-08-22 | 北京中星微电子有限公司 | 一种图像调整的方法和系统 |
US20090231356A1 (en) * | 2008-03-17 | 2009-09-17 | Photometria, Inc. | Graphical user interface for selection of options from option groups and methods relating to same |
JP2012004719A (ja) * | 2010-06-15 | 2012-01-05 | Nikon Corp | 画像処理装置及びプログラム、並びに電子カメラ |
CN103207745B (zh) * | 2012-01-16 | 2016-04-13 | 上海那里信息科技有限公司 | 虚拟化身交互系统和方法 |
CN102999929A (zh) * | 2012-11-08 | 2013-03-27 | 大连理工大学 | 一种基于三角网格化的人物图像瘦脸处理方法 |
US9747716B1 (en) * | 2013-03-15 | 2017-08-29 | Lucasfilm Entertainment Company Ltd. | Facial animation models |
CN104380339B (zh) * | 2013-04-08 | 2018-11-30 | 松下电器(美国)知识产权公司 | 图像处理装置、图像处理方法、以及介质 |
JP6171635B2 (ja) * | 2013-07-04 | 2017-08-02 | ティアック株式会社 | 編集処理装置及び編集処理プログラム |
CN104156912B (zh) * | 2014-08-18 | 2018-11-06 | 厦门美图之家科技有限公司 | 一种人像增高的图像处理的方法 |
EP3186788A1 (en) * | 2014-08-29 | 2017-07-05 | Thomson Licensing | Method and device for editing a facial image |
CN105389835B (zh) * | 2014-09-03 | 2019-07-12 | 腾讯科技(深圳)有限公司 | 一种图像处理方法、装置及终端 |
CN104616330A (zh) * | 2015-02-10 | 2015-05-13 | 广州视源电子科技股份有限公司 | 一种图片的生成方法和装置 |
US10796480B2 (en) * | 2015-08-14 | 2020-10-06 | Metail Limited | Methods of generating personalized 3D head models or 3D body models |
US10417738B2 (en) * | 2017-01-05 | 2019-09-17 | Perfect Corp. | System and method for displaying graphical effects based on determined facial positions |
- 2016-03-10 CN CN201610136300.1A patent/CN107180453B/zh active Active
- 2017-03-09 KR KR1020187025694A patent/KR102089473B1/ko active IP Right Grant
- 2017-03-09 JP JP2018545644A patent/JP6661780B2/ja active Active
- 2017-03-09 WO PCT/CN2017/076029 patent/WO2017152848A1/zh active Application Filing
- 2018-08-24 US US16/111,922 patent/US10628984B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1545068A (zh) * | 2003-11-11 | 2004-11-10 | 易连科技股份有限公司 | 快速建立人脸影像平面模型的方法 |
CN102834843A (zh) * | 2010-04-02 | 2012-12-19 | 诺基亚公司 | 用于面部检测的方法和装置 |
CN103392180A (zh) * | 2011-02-24 | 2013-11-13 | 西门子产品生命周期管理软件公司 | 针对建模对象的整体变形 |
CN104103090A (zh) * | 2013-04-03 | 2014-10-15 | 北京三星通信技术研究有限公司 | 图像处理方法、个性化人体显示方法及其图像处理系统 |
Also Published As
Publication number | Publication date |
---|---|
KR20180108799A (ko) | 2018-10-04 |
JP6661780B2 (ja) | 2020-03-11 |
JP2019512141A (ja) | 2019-05-09 |
US10628984B2 (en) | 2020-04-21 |
CN107180453B (zh) | 2019-08-16 |
CN107180453A (zh) | 2017-09-19 |
KR102089473B1 (ko) | 2020-03-16 |
US20180365878A1 (en) | 2018-12-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2018545644 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20187025694 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020187025694 Country of ref document: KR |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17762534 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17762534 Country of ref document: EP Kind code of ref document: A1 |