CN110796721A - Color rendering method and device of virtual image, terminal and storage medium
- Publication number: CN110796721A
- Application number: CN201911056935.0A
- Authority: CN (China)
- Prior art keywords: target, color, target object, head, avatar
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T15/005—General purpose rendering architectures (G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T15/00—3D [Three Dimensional] image rendering)
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts (G06T19/00—Manipulating 3D models or images for computer graphics)
Abstract
The embodiments of the present disclosure provide a color rendering method and apparatus for an avatar, a terminal, and a storage medium. The method includes: acquiring an avatar model corresponding to a target object based on a captured frame image containing the target object, where each head part of the avatar model matches the corresponding head part of the target object; determining the colors of target parts among the head parts of the avatar model, where the number of target parts is at least two; generating a mask map corresponding to each target part; inputting the mask map corresponding to each target part into a color channel, where the mask maps and the color channels are in one-to-one correspondence; and rendering and presenting an avatar of the target object based on the avatar model, the colors of the target parts, and the color channels. Through the disclosed method and apparatus, resource occupancy can be reduced and the efficiency of color rendering improved.
Description
Technical Field
The present disclosure relates to video processing technologies, and in particular to a color rendering method and apparatus for an avatar, a terminal, and a storage medium.
Background
With the rapid development of the internet industry and artificial intelligence, applications of the virtual world have multiplied, and the creation of an "avatar" is involved in scenarios ranging from animation to live streaming and short video production. In the related art, a universal template is mostly used to provide an "avatar" for users; such template-based avatars look alike and lack personalization. Moreover, the colors of different parts of the avatar are displayed through separate sticker materials, which leads to high resource occupancy and low processing efficiency during rendering.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide a color rendering method and apparatus for an avatar, a terminal, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a color rendering method for an avatar, including:
acquiring an avatar model corresponding to a target object based on a captured frame image containing the target object, where each head part of the avatar model matches the corresponding head part of the target object;
determining the colors of target parts among the head parts of the avatar model, where the number of target parts is at least two;
generating a mask map corresponding to each target part;
inputting the mask map corresponding to each target part into a color channel, where the mask maps and the color channels are in one-to-one correspondence;
and rendering and presenting an avatar of the target object based on the avatar model, the colors of the target parts, and the color channels.
In the above solution, the acquiring an avatar model corresponding to a target object based on a captured frame image containing the target object includes:
acquiring features of the head parts of the target object based on the captured frame image containing the target object;
sending an acquisition request carrying the features of the head parts of the target object;
and receiving the returned avatar model of the target object;
where the features of the head parts are used to predict the category of each head part, so that materials corresponding to the head parts are determined based on the predicted categories and combined to generate the avatar model.
In the above solution, the acquiring features of the head parts of the target object based on the captured frame image containing the target object includes:
identifying different parts of the head of the target object contained in the frame image, so as to determine the image regions corresponding to the head parts of the target object;
performing region segmentation on the frame image based on the image regions corresponding to the head parts of the target object, to obtain images corresponding to the different head parts of the target object;
and performing feature extraction on the images of the different head parts respectively, to obtain the features of the head parts of the target object.
In the above solution, the determining the colors of target parts among the head parts of the avatar model includes:
determining the region corresponding to a target part in the frame image;
and identifying the color of the target part in the frame image based on the determined region, and taking the identified color as the color of the corresponding target part of the avatar model.
In the above solution, the inputting the mask map corresponding to each target part into a color channel includes:
acquiring the number of target parts;
and when the number of target parts is determined to be no more than three, inputting the mask maps corresponding to the target parts into the red R, green G, and blue B color channels respectively according to the correspondence.
In the above solution, the inputting the mask map corresponding to each target part into a color channel includes:
acquiring the number of target parts;
and when the number of target parts is determined to be four, inputting the mask maps corresponding to the target parts into the red R, green G, blue B, and transparency A color channels respectively according to the correspondence.
In the above solution, after the avatar of the target object is presented, the method further includes:
presenting, in the view interface presenting the avatar, a color adjustment function item for adjusting the color of a target part;
adjusting the color of the target part of the target object in response to a color adjustment instruction triggered based on the color adjustment function item;
and presenting the adjusted avatar of the target object.
In the above solution, the target parts among the head parts of the avatar model include at least one of:
mouth, nose, eyes, and cheeks.
In a second aspect, an embodiment of the present disclosure provides a color rendering apparatus for an avatar, including:
an acquisition module, configured to acquire an avatar model corresponding to a target object based on a captured frame image containing the target object, where each head part of the avatar model matches the corresponding head part of the target object;
a determining module, configured to determine the colors of target parts among the head parts of the avatar model, where the number of target parts is at least two;
a generating module, configured to generate a mask map corresponding to each target part;
an input module, configured to input the mask map corresponding to each target part into a color channel, where the mask maps and the color channels are in one-to-one correspondence;
and a rendering module, configured to render and present the avatar of the target object based on the avatar model, the colors of the target parts, and the color channels.
In the above solution, the acquisition module is further configured to acquire features of the head parts of the target object based on the captured frame image containing the target object;
send an acquisition request carrying the features of the head parts of the target object;
and receive the returned avatar model of the target object;
where the features of the head parts are used to predict the category of each head part, so that materials corresponding to the head parts are determined based on the predicted categories and combined to generate the avatar model.
In the above solution, the acquisition module is further configured to identify different parts of the head of the target object contained in the frame image, so as to determine the image regions corresponding to the head parts of the target object;
perform region segmentation on the frame image based on the image regions corresponding to the head parts of the target object, to obtain images corresponding to the different head parts of the target object;
and perform feature extraction on the images of the different head parts respectively, to obtain the features of the head parts of the target object.
In the above solution, the determining module is further configured to determine the region corresponding to a target part in the frame image;
and identify the color of the target part in the frame image based on the determined region, and take the identified color as the color of the corresponding target part of the avatar model.
In the above solution, the input module is further configured to acquire the number of target parts;
and when the number of target parts is determined to be no more than three, input the mask maps corresponding to the target parts into the red R, green G, and blue B color channels respectively according to the correspondence.
In the above solution, the input module is further configured to acquire the number of target parts;
and when the number of target parts is determined to be four, input the mask maps corresponding to the target parts into the red R, green G, blue B, and transparency A color channels respectively according to the correspondence.
In the above solution, the apparatus further includes:
an adjustment module, configured to present, in the view interface presenting the avatar, a color adjustment function item for adjusting the color of a target part;
adjust the color of the target part of the target object in response to a color adjustment instruction triggered based on the color adjustment function item;
and present the adjusted avatar of the target object.
In a third aspect, an embodiment of the present disclosure provides a terminal, including:
a memory, configured to store executable instructions;
and a processor, configured to implement, when executing the executable instructions, the color rendering method for an avatar provided by the embodiments of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure provides a storage medium storing executable instructions which, when executed, implement the color rendering method for an avatar provided by the embodiments of the present disclosure.
Applying the embodiments of the present disclosure has the following beneficial effects:
By applying the embodiments of the present disclosure, an avatar model whose head parts match the corresponding head parts of the target object is acquired, mask maps of at least two target parts among the head parts of the avatar model are generated, the mask maps are input into their corresponding color channels, and the avatar of the target object is then obtained through rendering based on the avatar model, the colors of the target parts, and the color channels. In this way, first, by inputting the mask map corresponding to each target part into a color channel and integrating them, a single combined sticker material corresponding to the target object is obtained, so that the avatar is rendered from the combined material; this reduces resource occupancy and improves color rendering efficiency. Second, since the head parts of the avatar model match the corresponding head parts of the target object, a personalized avatar can be created.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic diagram of an architecture of a color rendering system for an avatar provided in an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure;
fig. 3 is a first flowchart illustrating a color rendering method of an avatar according to an embodiment of the present disclosure;
fig. 4 is a schematic view of an image acquisition interface according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of an image acquisition interface provided in the embodiment of the present disclosure;
fig. 6 is a schematic diagram of frame image acquisition of a target object according to an embodiment of the present disclosure;
fig. 7 is a schematic view of an interface for detecting key points of a human face according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a mask corresponding to a target portion provided by an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of waiting for an avatar to be created according to an embodiment of the present disclosure;
FIG. 10 is a schematic view interface diagram including color adjustment function items provided by an embodiment of the present disclosure;
FIG. 11 is a schematic view of an avatar modification interface provided by embodiments of the present disclosure;
fig. 12 is a second flowchart illustrating a color rendering method for an avatar according to an embodiment of the present disclosure;
FIG. 13 is a schematic diagram of a mask diagram for an input color channel provided by an embodiment of the present disclosure;
FIG. 14 is an avatar diagram of a target object provided by an embodiment of the present disclosure;
fig. 15 is a schematic structural diagram of a color rendering apparatus for an avatar according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand that they should be read as "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Before further detailed description of the embodiments of the present disclosure, terms and expressions referred to in the embodiments of the present disclosure are explained, and the terms and expressions referred to in the embodiments of the present disclosure are applied to the following explanations.
1) Avatar: the expressions, actions, and speech of a user are converted in real time, through intelligent recognition, into the actions of a virtual character, whose facial expressions, movements, and voice tone can fully mirror those of the user.
2) In response to: indicates the condition or state on which a performed operation depends. When the dependent condition or state is satisfied, the one or more performed operations may be executed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
3) Mask map: a selected pattern is used to occlude the image to be processed (in whole or in part), so that certain non-processed regions are shielded and only the target region of the image to be processed is processed.
Based on the above explanations of the terms involved in the embodiments of the present disclosure, refer to fig. 1, which is an architecture diagram of a color rendering system for an avatar provided by an embodiment of the present disclosure. To support an exemplary application, a terminal 400 (including a terminal 400-1 and a terminal 400-2) is connected to a server 200 through a network 300; the network 300 may be a wide area network, a local area network, or a combination of the two, and uses wireless or wired links for data transmission.
A terminal 400 (e.g., terminal 400-1) for acquiring an avatar model corresponding to a target object based on a captured frame image including the target object; determining the color of a target part in each part of the head of the corresponding virtual image model; generating a mask map corresponding to the target portion; inputting a mask map corresponding to the target portion into the color channel; rendering and presenting an avatar of the target object based on the avatar model, the color of the target portion, and the color channel;
the terminal 400 (such as terminal 400-1) is further configured to send an acquisition request of an avatar corresponding to the target object based on the acquired frame image containing the target object;
a server 200 for receiving an acquisition request of an avatar corresponding to a target object; and generating an avatar model corresponding to the target object based on the acquisition request, and sending the avatar model to the terminal.
Here, in practical applications, the terminal 400 may be any of various types of user terminals, such as a smart phone, a tablet computer, or a notebook computer, and may also be a wearable computing device, a Personal Digital Assistant (PDA), a desktop computer, a cellular phone, a media player, a navigation device, a game console, a television, or a combination of any two or more of these or other data processing devices; the server 200 may be a single server configured to support various services, or a server cluster.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device may be various terminals including a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), a vehicle mounted terminal (e.g., a car navigation terminal), etc., and a fixed terminal such as a digital Television (TV), a desktop computer, etc. The electronic device shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 2, the electronic device may include a processing device (e.g., central processing unit, graphics processor, etc.) 210 that may perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 220 or a program loaded from a storage device 280 into a Random Access Memory (RAM) 230. In the RAM 230, various programs and data necessary for the operation of the electronic apparatus are also stored. The processing device 210, the ROM 220, and the RAM 230 are connected to each other through a bus 240. An Input/Output (I/O) interface 250 is also connected to bus 240.
Generally, the following devices may be connected to I/O interface 250: input devices 260 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 270 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 280 including, for example, magnetic tape, hard disk, etc.; and a communication device 290. The communication device 290 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data.
In particular, according to embodiments of the present disclosure, the processes described in the provided flowcharts may be implemented as computer software programs. For example, the disclosed embodiments include a computer program product comprising a computer program carried on a computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such embodiments, the computer program may be downloaded and installed from a network through the communication device 290, or installed from the storage device 280, or installed from the ROM 220. When executed by the processing device 210, the computer program performs the functions of the color rendering method for an avatar of the disclosed embodiments.
It should be noted that the computer readable medium described above in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an erasable programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable compact disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the disclosed embodiments, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the disclosed embodiments, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including over electrical wiring, fiber optics, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may be present alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the method for rendering colors of an avatar provided by the embodiments of the present disclosure.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams provided by the embodiments of the present disclosure illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described in the embodiments of the present disclosure may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of embodiments of the present disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The following describes a color rendering method of an avatar provided by an embodiment of the present disclosure. Referring to fig. 3, fig. 3 is a first flowchart illustrating a color rendering method of an avatar according to an embodiment of the present disclosure, where the color rendering method of the avatar according to the embodiment of the present disclosure includes:
step 301: the terminal acquires an avatar model corresponding to the target object based on the acquired frame image containing the target object, and each part of the head of the avatar model is matched with the corresponding part of the head of the target object.
Here, the terminal may acquire a frame image including the target object through an image acquisition device configured by itself, and further acquire an avatar model corresponding to the target object based on the acquired frame image. Since the personalized avatar is created for the target object, the head portions of the avatar model are matched with the corresponding head portions of the target object, specifically, the similarity between the head portions of the avatar model and the corresponding head portions of the target object satisfies a preset similarity condition, for example, the similarity between the head portions of the avatar model and the corresponding head portions of the target object has reached a preset similarity threshold.
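As an illustration only (not part of the disclosed method), a minimal sketch of checking the preset similarity condition might look as follows; the part names, scores, and the 0.8 threshold are assumptions invented for the example.

```python
# Hypothetical sketch: verify that every head part of a candidate avatar
# model satisfies the preset similarity condition with respect to the
# corresponding head part of the target object.
SIMILARITY_THRESHOLD = 0.8  # assumed preset similarity threshold

def model_matches_target(part_similarities: dict) -> bool:
    """part_similarities maps a head part name (e.g. 'mouth') to a
    similarity score in [0, 1] between the model part and the target part."""
    return all(score >= SIMILARITY_THRESHOLD
               for score in part_similarities.values())

print(model_matches_target({'mouth': 0.91, 'eyes': 0.87, 'nose': 0.83}))  # True
```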
In practical application, a video shooting client, such as an instant messaging client, a microblog client, or a short video client, may be installed on the terminal. The user may perform a click operation on a shooting button in the client's view interface to present an image acquisition interface and trigger an image acquisition instruction. On receiving the image acquisition instruction through the client, the terminal captures, based on the image acquisition interface, a frame image containing the target object through an image capture device such as a camera, so as to acquire the avatar model corresponding to the target object.
Illustratively, referring to fig. 4, fig. 4 is a first schematic diagram of an image acquisition interface provided by an embodiment of the present disclosure: when the terminal presents the image acquisition interface and detects a target object, an image scanning frame is presented, and the text "please place a face in the frame" prompts the user to place the face inside the image scanning frame, as required for avatar creation. If the terminal detects that the contour of the target object is not inside the image scanning frame, the user may be prompted to adjust the shooting posture, angle, or distance through text such as "please shoot the front face" or "please move the face into the frame", so as to ensure that the captured frame image of the target object is accurate. Referring to fig. 5, fig. 5 is a second schematic diagram of an image acquisition interface provided by an embodiment of the present disclosure; the contour of the target object in fig. 5 does not match the image scanning frame.
After capturing the frame image containing the target object, in some embodiments, the terminal may obtain the avatar model corresponding to the target object as follows: acquire features of the head parts of the target object based on the captured frame image containing the target object; send an acquisition request carrying the features of the head parts of the target object; and receive the returned avatar model of the target object. The features of the head parts are used to predict the category of each head part, so that materials corresponding to the head parts are determined based on the predicted categories and combined to generate the avatar model.
After capturing the frame image containing the target object, the terminal may send an acquisition request for the avatar model to the server based on the frame image. The request can carry the features of the head parts of the target object, so that the server generates the avatar model corresponding to the target object based on those features.
In practical application, the user may trigger an avatar generation instruction through a click operation; when the terminal detects the avatar generation instruction, it sends the acquisition request for the avatar model to the server. Illustratively, referring to fig. 6, fig. 6 is a schematic diagram of frame image acquisition of a target object provided by an embodiment of the present disclosure: the terminal presents a preview frame image containing the target object through the view interface and presents a page containing an avatar icon. When the user clicks the avatar icon, the terminal presents the icon in a selected state, i.e., enclosed by a box; at this point the terminal receives the avatar generation instruction triggered by the user, and sends the acquisition request for the avatar model to the server in response to the instruction.
In some embodiments, the terminal may obtain the features of the head parts of the target object contained in the frame image as follows: identify different parts of the head of the target object contained in the frame image, to determine the image regions corresponding to the head parts of the target object; perform region segmentation on the frame image based on those image regions, to obtain images corresponding to the different head parts of the target object; and perform feature extraction on the images of the different head parts respectively, to obtain the features of the head parts of the target object.
Here, the head parts of the target object include at least one of: eyes, hair, ears, mouth, nose, eyebrows, beard, and face. The eye part may include eyes and glasses, and the hair part may include hair and a hat.
In some embodiments, to determine the features of the head parts of the target object, the image regions corresponding to the head parts in the frame image must be obtained first. Specifically, the terminal may determine the image region of each head part of the target object by means of face key point recognition. Here, a face key point is a point that reflects local features of the target object in the image (such as color, shape, and texture features), and is generally a set of several pixels; for example, a face key point may be an eye key point, a mouth key point, or a nose key point.
In practical application, face key point detection is performed on the frame image containing the target object to determine the key points included in each head part of the target object; based on the determined face key points, face alignment is performed using a face alignment algorithm, and the region formed by the key points, i.e., the image region corresponding to each head part of the target object, is then determined. Referring to fig. 7, fig. 7 is a schematic interface diagram of face key point detection provided by an embodiment of the present disclosure, where dashed box 1 is the image region of the nose determined by the key points of the nose, and dashed box 2 is the image region of the mouth determined by the key points of the mouth.
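A minimal sketch of deriving a part's image region from its key points, in the spirit of the dashed boxes in fig. 7, is shown below; the key-point coordinates and the margin are illustrative assumptions, and in practice the coordinates would come from a face key point detector.

```python
import numpy as np

def part_bounding_box(keypoints: np.ndarray, margin: int = 5):
    """Return the (x0, y0, x1, y1) image region spanned by the key points
    of one head part, expanded by a small margin."""
    x0, y0 = keypoints.min(axis=0) - margin
    x1, y1 = keypoints.max(axis=0) + margin
    return int(x0), int(y0), int(x1), int(y1)

nose_keypoints = np.array([[120, 140], [132, 152], [144, 140]])  # assumed values
print(part_bounding_box(nose_keypoints))  # (115, 135, 149, 157)
```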
Based on the determined image regions corresponding to the head parts of the target object, the captured frame image is segmented by region, so that each segmented image corresponds to one of the different head parts of the target object; feature extraction is then performed on the images corresponding to the different head parts respectively, to obtain the features of the head parts of the target object, i.e., the feature vectors characterizing the features of each head part.
The terminal carries the determined features of the head parts of the target object in an acquisition request and sends it to the server, to request the avatar model corresponding to the target object.
The server receives the acquisition request for the avatar model and parses it to obtain the features of the head parts of the target object carried in the request. Feature similarity matching is performed on the features of the head parts to determine the category of each part; alternatively, the server may input the feature vectors representing the head-part features into a pre-trained neural network model, which predicts the facial features to determine the category of each head part. Here, the category of a head part may be any combination of kinds under different attributes. Illustratively, the attributes of hair categories may include length, curliness, and hair color, with corresponding kinds such as shaved, short, or medium-long; curly or straight; black, brown, or yellow. The features of the hair in the frame image are matched for similarity against the features of the preset categories, so as to determine the category of the hair in the frame image, such as black medium-long straight hair.
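A sketch of the feature similarity matching described above, under the assumption that parts and categories are represented by fixed-length feature vectors compared by cosine similarity; the category names and vectors below are invented for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_category(part_feature: np.ndarray, category_features: dict) -> str:
    """Pick the preset category whose reference feature is most similar."""
    return max(category_features,
               key=lambda c: cosine_similarity(part_feature, category_features[c]))

hair_references = {  # assumed reference features per hair category
    'black medium-long straight': np.array([0.9, 0.1, 0.8]),
    'brown short curly': np.array([0.2, 0.9, 0.3]),
}
print(predict_category(np.array([0.85, 0.2, 0.75]), hair_references))
# -> 'black medium-long straight'
```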
Materials corresponding to the head parts are determined based on the categories of the parts; the corresponding materials are then combined according to the relative position information of the head parts to generate the avatar model corresponding to the target object, which is sent to the terminal so that the terminal acquires the avatar model of the target object.
Step 302: the color of the target portion in each portion of the head corresponding to the avatar model is determined, and the number of the target portions is at least two.
And after the terminal acquires the virtual image model corresponding to the target object, the color of the target part in each part of the head of the virtual image model is continuously determined. In some embodiments, the color of the target portion of the head portions of the corresponding avatar model may be determined as follows: determining a region corresponding to a target part in a frame image; and identifying the color of the target part in the frame image based on the determined area, and taking the color of the identified target part as the color of the target part of the corresponding virtual image model.
In some embodiments, the target parts among the head parts of the avatar model include at least one of: mouth, nose, eyes, and cheeks.
When determining the color corresponding to a target part, the terminal may first determine the image region corresponding to the target part in the frame image, identify the color of that image region through an image recognition algorithm, and take the color of the image region as the color of the target part of the avatar model.
Illustratively, when the target part of the avatar model is the "mouth", the image region corresponding to the "mouth" in the captured frame image is determined; the color of that image region is identified, and this color is the color of the "mouth" in the avatar model.
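A minimal sketch of this color identification step, assuming the frame is an (H, W, 3) RGB array and that taking the mean color of the region is an acceptable recognition strategy (the disclosure does not prescribe a particular algorithm):

```python
import numpy as np

def region_color(frame: np.ndarray, box) -> tuple:
    """Average RGB color of the image region (x0, y0, x1, y1) of a target part."""
    x0, y0, x1, y1 = box
    region = frame[y0:y1, x0:x1].reshape(-1, 3)
    return tuple(int(c) for c in region.mean(axis=0))

frame = np.full((240, 320, 3), (200, 80, 90), dtype=np.uint8)  # assumed input frame
print(region_color(frame, (100, 150, 160, 180)))  # (200, 80, 90)
```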
Step 303: generate a mask map corresponding to each target part.
Here, the mask map uses a selected pattern to occlude the image to be processed (in whole or in part), shielding the non-processed regions so that only the target region of the image is processed. In some embodiments, if image processing is required only for the target part, it may be done by way of a mask.
In practical applications, a mask map corresponding to a target part may be generated based on the selected target part. Specifically, when the mask map is generated, the captured frame image containing the target object may be processed: for example, the frame image may be cropped according to the size of the mask layer to obtain an image containing the head parts of the target object, and that image may be further processed so that the region corresponding to the target part is represented in white and the regions corresponding to the remaining head parts in black, thereby generating the mask map corresponding to the target part.
Illustratively, referring to fig. 8, fig. 8 is a schematic diagram of a mask map corresponding to a target part provided by an embodiment of the present disclosure. Here, the target part is the "eyes". The captured frame image containing the target object is cropped to obtain an image containing the head parts of the target object; the region corresponding to the "eyes" is then processed into white, and the regions corresponding to the remaining head parts into black.
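A sketch of this mask-map generation, assuming the segmentation step above yields a boolean (H, W) array marking the pixels of the target part:

```python
import numpy as np

def make_mask(part_region: np.ndarray) -> np.ndarray:
    """White (255) where the target part is, black (0) for the other head parts."""
    mask = np.zeros(part_region.shape, dtype=np.uint8)
    mask[part_region] = 255
    return mask

eye_region = np.zeros((240, 240), dtype=bool)
eye_region[90:110, 60:180] = True  # assumed "eyes" area
eyes_mask = make_mask(eye_region)  # the mask map of fig. 8, as an array
```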
Step 304: input the mask map corresponding to each target part into a color channel, where the mask maps and the color channels are in one-to-one correspondence.
Here, the mask maps corresponding to the target parts are in one-to-one correspondence with the color channels, and the correspondence may be set in advance: for example, the mask map corresponding to the mouth corresponds to color channel 1, the mask map corresponding to the eyes corresponds to color channel 2, and so on. It should be noted that the correspondence itself is not fixed; it only expresses that each color channel corresponds to the mask map of exactly one target part, i.e., a color channel can carry the mask map of only one target part. The specific correspondence may be set as needed and is not limited in the embodiments of the present disclosure.
The mask map corresponding to each target part is input into the corresponding color channel according to the preset one-to-one correspondence between mask maps and color channels.
In some embodiments, the mask maps of the target parts may be input into the color channels as follows: acquire the number of target parts; when the number of target parts is determined to be no more than three, input the mask maps corresponding to the target parts into the red R, green G, and blue B color channels respectively according to the correspondence; when the number of target parts is determined to be four, input the mask maps corresponding to the target parts into the red R, green G, blue B, and transparency A color channels respectively according to the correspondence.
The target parts among the head parts of the avatar model may include the mouth, nose, eyes, and cheeks. Accordingly, in practical applications, the color channels may be three-channel or four-channel, i.e., red R, green G, blue B color channels or red R, green G, blue B, transparency A color channels; which to use is determined by the number of target parts.
When the mask maps are input into the color channels, the number of target parts corresponding to the mask maps is obtained first, and the type of color channels to use is then determined by that number. Specifically, when the number of target parts does not exceed three, the red R, green G, and blue B color channels may be employed; when the number of target parts is four, the red R, green G, blue B, and transparency A color channels may be used.
Furthermore, because the mask maps and the color channels are in one-to-one correspondence, the mask maps corresponding to the target parts can be input directly into the red R, green G, and blue B color channels according to the correspondence, or into the red R, green G, blue B, and transparency A color channels respectively.
Illustratively, when the number of target parts is 3, comprising the mouth, nose, and eyes, it may be determined that the red R, green G, and blue B color channels are used. The correspondence preset as needed includes: the mask map corresponding to the mouth corresponds to the red R color channel, the mask map corresponding to the nose to the green G color channel, and the mask map corresponding to the eyes to the blue B color channel. Then, according to the correspondence, the mask map corresponding to the mouth is input into the red R color channel, the mask map corresponding to the nose into the green G color channel, and the mask map corresponding to the eyes into the blue B color channel.
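The channel packing itself reduces to stacking the single-channel mask maps into one multi-channel image; below is a sketch under the correspondence described above (mouth to R, nose to G, eyes to B, and cheek to A when a fourth part exists).

```python
import numpy as np

def pack_masks(mouth_mask, nose_mask, eyes_mask, cheek_mask=None):
    """Pack per-part (H, W) uint8 mask maps into the channels of one image:
    R=mouth, G=nose, B=eyes, and A=cheek when four target parts are used."""
    channels = [mouth_mask, nose_mask, eyes_mask]
    if cheek_mask is not None:
        channels.append(cheek_mask)
    return np.stack(channels, axis=-1)  # (H, W, 3) or (H, W, 4) uint8
```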
Step 305: render and present the avatar of the target object based on the avatar model, the colors of the target parts, and the color channels.
The avatar model of the target object is rendered using the identified colors of the target parts, through the color channels into which the corresponding mask maps have been input; that is, the colors of the target parts are filled into the corresponding parts of the avatar model to generate and present the avatar of the target object.
In practical application, a rendering program may perform color rendering on the avatar model according to the colors of the target parts and the color channels through a Graphics Processing Unit (GPU), to obtain the avatar of the target object.
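As a rough CPU-side illustration of what the GPU does during this step (the disclosure does not give shader code, so the linear blend below is an assumption): each channel of the packed mask weights a blend between the model's base texture and the identified color of the corresponding target part.

```python
import numpy as np

def render(base: np.ndarray, packed_masks: np.ndarray, part_colors) -> np.ndarray:
    """base: (H, W, 3) float RGB in [0, 1]; packed_masks: (H, W, C) uint8;
    part_colors: C RGB tuples in [0, 1], one per mask channel."""
    out = base.astype(np.float32)
    for c, color in enumerate(part_colors):
        weight = packed_masks[..., c:c + 1].astype(np.float32) / 255.0
        # fill the target part's color where its mask channel is white
        out = out * (1.0 - weight) + np.asarray(color, dtype=np.float32) * weight
    return out
```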
In addition, in the process of creating the avatar, steps such as analyzing frame image features, downloading resources, matching materials for the head parts, and rendering colors all take time; in some embodiments, a text prompt can ease the user's anxiety while waiting. Referring to fig. 9, fig. 9 is a schematic diagram of waiting for an avatar to be created provided by an embodiment of the present disclosure, which indicates to the user that the avatar is being generated by presenting the text "identifying and creating the avatar, please wait …".
In some embodiments, after the avatar has been created and presented, the colors of different target parts of the avatar may be varied to create avatars with diverse colors, as follows: present, in the view interface presenting the avatar, a color adjustment function item for adjusting the color of a target part; adjust the color of the target part of the target object in response to a color adjustment instruction triggered based on the color adjustment function item; and present the adjusted avatar of the target object.
After the terminal generates the avatar for the target object and presents it in the view interface, the view interface may also present color adjustment function items, such as color adjustment axes or color adjustment buttons, for adjusting the colors of the target parts. The color adjustment function items are set per target part and may include color adjustment axes or buttons for several target parts at the same time. The user may adjust the color of a target part of the avatar as needed, for example by clicking the color adjustment button corresponding to the target part whose color is to be adjusted, or by dragging the corresponding color adjustment axis.
Illustratively, referring to fig. 10, fig. 10 is a schematic diagram of a view interface including color adjustment function items provided by an embodiment of the present disclosure, where the color adjustment function items are represented by color adjustment axes and color adjustment buttons for several target parts, such as a "mouth" color adjustment axis and an "eyes" color adjustment axis. A round adjustment knob sits on each color adjustment axis, and "+" and "-" color adjustment buttons are displayed at its two ends. The user can adjust the color of a target part either by dragging the round knob along the color adjustment axis or by clicking the "+" or "-" button.
The terminal receives the user's operation on a color adjustment function item, adjusts the color of the target part of the avatar of the target object in response to the color adjustment instruction triggered through that item, and presents the adjusted avatar through the view interface. In practical application, while the user performs the color adjustment operation, the terminal can present the avatar with the adjusted target-part color in real time based on the triggered color adjustment instruction, making it convenient for the user to pick a suitable color according to personal preference. Illustratively, when the user triggers a color adjustment instruction for the target part "mouth", the terminal adjusts the color of the "mouth" of the avatar in response and presents the avatar with the adjusted "mouth" color.
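A sketch of this adjustment loop, reusing the render() sketch above; the state dictionary and the callback name are assumptions about how a client might wire a color adjustment axis to real-time re-rendering.

```python
def on_color_adjusted(part_index: int, new_color, state: dict):
    """Called when the user drags a color adjustment axis: update only the
    affected target part's color, then re-render the avatar in real time."""
    state['part_colors'][part_index] = new_color
    return render(state['base'], state['packed_masks'], state['part_colors'])
```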
In some embodiments, besides adjusting the color of a target part as needed, if the user is not satisfied with the constructed avatar or wants to improve it, a modification instruction for the avatar may be triggered by clicking the avatar icon, i.e., the thumbnail, of an avatar presented in the view interface. Referring to fig. 11, fig. 11 is a schematic diagram of an avatar modification interface provided by an embodiment of the present disclosure: the terminal shows that the user has created two avatars in total; upon receiving the user's click operation, the avatar icon of the avatar designated for modification is enclosed by a selection frame, and a "modify avatar" button is displayed on the view interface for the user to perform the modification operation.
The terminal then has the server reconstruct the avatar model corresponding to the target object based on steps 301 to 305 above and return it to the terminal. The terminal updates the presented avatar of the target object through color rendering and the like based on the updated avatar model.
By applying the embodiments of the present disclosure, an avatar model whose head parts match the corresponding head parts of the target object is acquired, mask maps of at least two target parts among the head parts of the avatar model are generated, the mask maps are input into their corresponding color channels, and the avatar of the target object is then obtained through rendering based on the avatar model, the colors of the target parts, and the color channels. In this way, first, by inputting the mask map corresponding to each target part into a color channel and integrating them, a single combined sticker material corresponding to the target object is obtained, so that the avatar is rendered from the combined material; this reduces resource occupancy and improves color rendering efficiency. Second, since the head parts of the avatar model match the corresponding head parts of the target object, a personalized avatar can be created.
The following continues the description with a specific embodiment of the color rendering method for an avatar provided by the embodiments of the present disclosure. Referring to fig. 12, fig. 12 is a second flowchart of a color rendering method for an avatar provided by an embodiment of the present disclosure; the method includes:
step 1201: and the terminal responds to the triggered virtual image generation instruction and sends an acquisition request of the virtual image model.
Here, an avatar icon in a view interface presenting a preview frame image containing a target object is used to trigger a generation instruction of the avatar upon receiving a click operation, and a user may trigger an avatar generation instruction by clicking the avatar icon. The terminal transmits an acquisition request of the avatar model to the server in response to the instruction.
Step 1202: the server receives the acquisition request for the avatar model.
Here, the acquisition request carries the features of the head parts of the target object.
Step 1203: the server generates the avatar model corresponding to the target object based on the acquisition request, and sends the avatar model to the terminal.
Step 1204: the terminal receives the avatar model and determines the colors of the target parts among the head parts of the avatar model.
Here, the number of target parts is at least two, such as the "mouth" and the "eyes".
When the color of a target part is determined, the image region corresponding to the target part in the frame image is determined first; the color of that image region is then identified through an image recognition algorithm and taken as the color of the corresponding part of the avatar model.
Step 1205: a mask map corresponding to the target portion is generated.
Step 1206: a mask corresponding to the target portion is input to the color channel.
Here, the mask map has a one-to-one correspondence relationship with color channels, which include two types of RGB color channels and RGBA color channels. The specific type of color channel to be adopted is determined according to the number of the target parts, namely, when the number of the target parts is not more than three, the mask images corresponding to the target parts are respectively input into the RGB color channels according to the corresponding relation; and when the number of the target parts is four, the mask images corresponding to the target parts are respectively input into the RGBA color channels according to the corresponding relation.
Referring to fig. 13, fig. 13 is a schematic diagram of mask maps input into color channels provided by an embodiment of the present disclosure, where the target portions are "eyes" and "mouth". The generated mask maps are: a mask map in which the area corresponding to the target portion "eyes" is white and the areas of the other head portions are black, and a mask map in which the area corresponding to the target portion "mouth" is white and the areas of the other head portions are black. The two mask maps are input into the RGB color channels to generate a combined mask map, MakeupMask, in which the areas corresponding to the target portions "eyes" and "mouth" are white and the other areas are black; based on the MakeupMask, the terminal can fill the "eyes" and "mouth" of the avatar model with the corresponding colors to obtain the rendered avatar.
Here, after the mask maps corresponding to the target portions are input into the color channels, the generated combined mask map may be stored in PNG format to reduce the space occupied by resources. In practical applications, the combined mask map may be stored in any picture format, which is not limited by the embodiment of the present disclosure.
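A minimal sketch of steps 1205-1206 follows: one binary mask map per target portion is packed into the channels of a single combined mask map and saved as a PNG. It assumes numpy and Pillow are available; the function name and the white-on-black encoding mirror the description above but are otherwise illustrative.

```python
import numpy as np
from PIL import Image

def build_makeup_mask(part_masks):
    """part_masks: list of boolean H x W arrays, one per target portion."""
    n = len(part_masks)
    if n <= 3:
        mode, channels = "RGB", 3        # no more than three portions -> RGB
    elif n == 4:
        mode, channels = "RGBA", 4       # four portions -> RGBA
    else:
        raise ValueError("one color channel per portion; at most four portions")
    h, w = part_masks[0].shape
    combined = np.zeros((h, w, channels), dtype=np.uint8)
    for channel, mask in enumerate(part_masks):
        combined[..., channel] = np.where(mask, 255, 0)  # white portion, black rest
    return Image.fromarray(combined, mode=mode)

# build_makeup_mask([eyes_mask, mouth_mask]).save("MakeupMask.png")
```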
Step 1207: render the avatar model based on the colors of the target portions and the color channels carrying the mask maps, to obtain and present the avatar.
Referring to fig. 14, fig. 14 is a schematic diagram of the avatar of a target object provided by an embodiment of the present disclosure, in which the target portions "mouth" and "eyes" are each rendered in the identified color: the RGB values corresponding to the "mouth" are 254, 122, and the RGB values corresponding to the "eyes" are 254, 109, 189.
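The rendering of step 1207 can be sketched as a per-pixel blend in which each channel of the combined mask map weights that portion's identified color into the avatar's base texture. A real implementation would typically perform this blend in a fragment shader; the numpy version below only illustrates the arithmetic, and all names are assumptions.

```python
import numpy as np

def apply_part_colors(base, makeup_mask, part_colors):
    """base: H x W x 3 float texture in [0, 1]; makeup_mask: H x W x C uint8;
    part_colors: one (R, G, B) tuple per channel, in channel order."""
    out = base.astype(np.float32)
    for channel, color in enumerate(part_colors):
        weight = makeup_mask[..., channel:channel + 1].astype(np.float32) / 255.0
        tint = np.asarray(color, dtype=np.float32) / 255.0
        out = out * (1.0 - weight) + tint * weight   # linear blend per pixel
    return out

# e.g. rendered = apply_part_colors(texture, mask_array, [(254, 109, 189), ...])
```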
Step 1208: present, on the view interface, a color adjustment function item for adjusting the color of a target portion.
Here, the color adjustment function item may be, for example, a color adjustment axis or a color adjustment key.
Step 1209: in response to a color adjustment instruction triggered based on the color adjustment function item, adjusting a color of a target portion of the target object.
Step 1210: present the adjusted avatar of the target object.
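Steps 1208-1210 can be sketched as a callback on the color adjustment axis: the axis position is mapped to a hue, the target portion's color is replaced, and the avatar is re-rendered. The hue mapping and the callback wiring are illustrative assumptions, not the interface specified by the disclosure.

```python
import colorsys

def on_color_axis_changed(part_name, axis_value, part_colors):
    """axis_value in [0, 1): position on the color adjustment axis."""
    r, g, b = colorsys.hsv_to_rgb(axis_value, 1.0, 1.0)  # hue -> saturated RGB
    part_colors[part_name] = (int(r * 255), int(g * 255), int(b * 255))
    # re-render, e.g. with apply_part_colors(...), and present the adjusted avatar
```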
The following describes the units and/or modules of the color rendering apparatus of an avatar provided by an embodiment of the present disclosure. It can be understood that the units or modules of the apparatus may be implemented in the electronic device shown in fig. 2 in the form of software (e.g., a computer program stored in the memory described above), or in the form of the hardware logic components described above (e.g., FPGA, ASIC, SOC, and CPLD).
Referring to fig. 15, fig. 15 is a schematic diagram of an optional structure of a color rendering apparatus 1500 of an avatar implementing an embodiment of the present disclosure, showing the following modules: an acquisition module 1510, a determination module 1520, a generation module 1530, an input module 1540, and a rendering module 1550; the functions of each module are described below.
It should be noted that the above classification of modules does not constitute a limitation on the electronic device itself, for example, some modules may be split into two or more sub-modules, or some modules may be combined into a new module.
It should also be noted that the names of the above modules do not in some cases limit the modules themselves, for example, the above "obtaining module 1510" may also be described as a module for obtaining an avatar model corresponding to a target object based on a captured frame image containing the target object.
For the same reason, the absence of a detailed description of certain units and/or modules of the electronic device does not mean that the corresponding units and/or modules are missing; all operations performed by the electronic device can be implemented by the corresponding units and/or modules within it.
With continued reference to fig. 15, fig. 15 is a schematic structural diagram of a color rendering apparatus 1500 of an avatar provided by an embodiment of the present disclosure, the apparatus including:
an obtaining module 1510, configured to obtain an avatar model corresponding to a target object based on a collected frame image including the target object, where each portion of a head of the avatar model matches a corresponding portion of the head of the target object;
a determining module 1520 for determining colors of target portions corresponding to the head portions of the avatar model, the number of the target portions being at least two;
a generating module 1530 for generating a mask map corresponding to the target portion;
an input module 1540, configured to input a mask corresponding to the target portion into a color channel, where the mask and the color channel are in a one-to-one correspondence;
a rendering module 1550 for rendering and presenting an avatar of the target object based on the avatar model, the color of the target portion, and the color channel.
In some embodiments, the obtaining module 1510 is further configured to: obtain features of the parts of the head of the target object based on the acquired frame image containing the target object;
send an acquisition request carrying the features of the parts of the head of the target object;
and receive the returned avatar model of the target object;
wherein the features of the head parts are used to predict the categories of the head parts, determine the materials corresponding to the head parts based on the predicted categories, and combine the materials to generate the avatar model; a minimal sketch of this server-side flow follows.
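The sketch below assumes the classifier and the material library are supplied externally; both are placeholders for illustration, not components specified by the disclosure.

```python
def build_avatar_model(part_features, classify, material_library):
    """part_features: {part name: feature vector}; returns {part name: material}."""
    model = {}
    for part_name, features in part_features.items():
        category = classify(part_name, features)          # e.g. "eyes" -> "round"
        model[part_name] = material_library[part_name][category]
    return model                                          # combined avatar model
```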
In some embodiments, the obtaining module 1510 is further configured to: identify the different parts of the head of the target object contained in the frame image, so as to determine the image area corresponding to each part of the head of the target object;
perform region segmentation on the frame image based on the image areas corresponding to the parts of the head of the target object, to obtain images of the different parts of the head of the target object;
and perform feature extraction on the images of the different parts of the head of the target object respectively, to obtain the features of the parts of the head of the target object, as sketched below.
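A minimal sketch of this three-step feature acquisition, with `detect_part_regions` and `feature_extractor` standing in for whatever detector and feature network an implementation actually uses:

```python
def extract_head_features(frame, detect_part_regions, feature_extractor):
    """Locate each head part, crop its image area, and extract per-part features."""
    features = {}
    for part_name, (top, left, bottom, right) in detect_part_regions(frame).items():
        part_image = frame[top:bottom, left:right]           # region segmentation
        features[part_name] = feature_extractor(part_image)  # feature extraction
    return features
```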
In some embodiments, the determining module 1520 is further configured to determine a region corresponding to the target portion in the frame image;
and identifying the color of the target part in the frame image based on the determined area, and taking the identified color of the target part as the color of the target part corresponding to the virtual image model.
In some embodiments, the input module 1540 is further configured to obtain the number of target portions;
and when the number of the target parts is determined to be not more than three, inputting the mask image corresponding to each target part into red R, green G and blue B color channels according to the corresponding relation respectively.
In some embodiments, the input module 1540 is further configured to obtain the number of target portions;
and when the number of the target parts is determined to be four, the mask images corresponding to the target parts are respectively input into color channels of red R, green G, blue B and transparency A according to the corresponding relation.
In some embodiments, the apparatus further comprises:
an adjusting module 1560 for presenting a color adjusting function item for adjusting a color of the target part in a view interface presenting the avatar;
adjusting a color of a target portion of the target object in response to a color adjustment instruction triggered based on the color adjustment function item;
presenting the adjusted avatar of the target object.
Here, it should be noted that the above description of the color rendering apparatus of an avatar is similar to the description of the method above and shares its beneficial effects, which are therefore not repeated; for technical details not disclosed in this apparatus embodiment, refer to the description of the method embodiment of the present disclosure.
An embodiment of the present disclosure further provides a terminal, where the terminal includes:
a memory for storing an executable program;
and a processor, configured to implement the color rendering method of an avatar provided by the embodiment of the present disclosure when executing the executable program.
An embodiment of the present disclosure further provides a storage medium storing executable instructions which, when executed, implement the color rendering method of an avatar provided by the embodiment of the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a color rendering method of an avatar, including:
acquiring an avatar model corresponding to a target object based on an acquired frame image containing the target object, wherein each part of the head of the avatar model is matched with the corresponding part of the head of the target object;
determining the color of a target part in each part of the head corresponding to the virtual image model, wherein the number of the target parts is at least two;
generating a mask map corresponding to the target portion;
inputting a mask image corresponding to the target part into a color channel, wherein the mask image and the color channel are in a one-to-one correspondence relationship;
rendering and presenting an avatar of the target object based on the avatar model, the color of the target portion, and the color channel.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a color rendering method of an avatar, further including:
the acquiring of the avatar model corresponding to the target object based on the acquired frame image containing the target object includes:
acquiring features of all parts of the head of a target object based on an acquired frame image containing the target object;
sending the acquisition request carrying the characteristics of all parts of the head of the target object;
receiving a returned avatar model of the target object;
the characteristics of the head parts are used for predicting the categories of the head parts, determining materials corresponding to the head parts based on the predicted categories, and combining and generating the virtual image model.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a color rendering method of an avatar, further including:
the acquiring, based on the acquired frame image containing the target object, features of portions of a head of the target object includes:
identifying different parts of the head of a target object contained in the frame image so as to determine image areas corresponding to the parts of the head of the target object;
performing region segmentation on the frame image based on image regions corresponding to the parts of the head of the target object to obtain images corresponding to different parts of the head of the target object;
and respectively carrying out feature extraction on the images of different parts of the head of the target object to obtain the features of all parts of the head of the target object.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a color rendering method of an avatar, further including:
the determining colors corresponding to target portions in the head portions of the avatar model includes:
determining a region corresponding to the target part in the frame image;
and identifying the color of the target part in the frame image based on the determined area, and taking the identified color of the target part as the color of the target part corresponding to the virtual image model.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a color rendering method of an avatar, further including:
the inputting of the mask corresponding to the target portion into a color channel comprises:
acquiring the number of the target parts;
and when the number of the target parts is determined to be not more than three, inputting the mask image corresponding to each target part into red R, green G and blue B color channels according to the corresponding relation respectively.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a color rendering method of an avatar, further including:
the inputting of the mask corresponding to the target portion into a color channel comprises:
acquiring the number of the target parts;
and when the number of the target parts is determined to be four, the mask images corresponding to the target parts are respectively input into color channels of red R, green G, blue B and transparency A according to the corresponding relation.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a color rendering method of an avatar, further including:
after said presenting the avatar of the target object, the method further comprises:
presenting, in a view interface presenting the avatar, a color adjustment function item for adjusting a color of the target portion;
adjusting a color of a target portion of the target object in response to a color adjustment instruction triggered based on the color adjustment function item;
presenting the adjusted avatar of the target object.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a color rendering method of an avatar, further including:
the target portion of the head portions of the avatar model includes at least one of:
mouth, nose, eyes, cheek.
According to one or more embodiments of the present disclosure, there is also provided an avatar color rendering apparatus, including:
the acquisition module is used for acquiring an avatar model corresponding to a target object based on an acquired frame image containing the target object, wherein each part of the head of the avatar model is matched with the corresponding part of the head of the target object;
a determining module, configured to determine colors of target portions in each portion of a head corresponding to the avatar model, where the number of the target portions is at least two;
the generating module is used for generating a mask map corresponding to the target part;
the input module is used for inputting a mask image corresponding to the target part into a color channel, and the mask image and the color channel are in one-to-one correspondence;
a rendering module for rendering and presenting the avatar of the target object based on the avatar model, the color of the target portion, and the color channel.
The above description covers merely embodiments of the present disclosure and illustrates the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, solutions in which the above features are interchanged with features having similar functions disclosed (but not limited to those disclosed) in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims (17)
1. A method of color rendering of an avatar, the method comprising:
acquiring an avatar model corresponding to a target object based on an acquired frame image containing the target object, wherein each part of the head of the avatar model is matched with the corresponding part of the head of the target object;
determining the color of a target part in each part of the head corresponding to the virtual image model, wherein the number of the target parts is at least two;
generating a mask map corresponding to the target portion;
inputting a mask image corresponding to the target part into a color channel, wherein the mask image and the color channel are in a one-to-one correspondence relationship;
rendering and presenting an avatar of the target object based on the avatar model, the color of the target portion, and the color channel.
2. The method of claim 1, wherein said obtaining an avatar model corresponding to a target object based on an acquired frame image containing said target object comprises:
acquiring features of all parts of the head of a target object based on an acquired frame image containing the target object;
sending the acquisition request carrying the characteristics of all parts of the head of the target object;
receiving a returned avatar model of the target object;
the characteristics of the head parts are used for predicting the categories of the head parts, determining materials corresponding to the head parts based on the predicted categories, and combining and generating the virtual image model.
3. The method of claim 2, wherein the obtaining features of portions of a head of a target object based on an acquired frame image containing the target object comprises:
identifying different parts of the head of a target object contained in the frame image so as to determine image areas corresponding to the parts of the head of the target object;
performing region segmentation on the frame image based on image regions corresponding to the parts of the head of the target object to obtain images corresponding to different parts of the head of the target object;
and respectively carrying out feature extraction on the images of different parts of the head of the target object to obtain the features of all parts of the head of the target object.
4. The method of claim 1, wherein said determining a color corresponding to a target portion of the head portions of the avatar model comprises:
determining a region corresponding to the target part in the frame image;
and identifying the color of the target part in the frame image based on the determined area, and taking the identified color of the target part as the color of the target part corresponding to the virtual image model.
5. The method of claim 1, wherein said inputting a mask map corresponding to said target portion into a color channel comprises:
acquiring the number of the target parts;
and when the number of the target parts is determined to be not more than three, inputting the mask image corresponding to each target part into red R, green G and blue B color channels according to the corresponding relation respectively.
6. The method of claim 1, wherein said inputting a mask map corresponding to said target portion into a color channel comprises:
acquiring the number of the target parts;
and when the number of the target parts is determined to be four, the mask images corresponding to the target parts are respectively input into color channels of red R, green G, blue B and transparency A according to the corresponding relation.
7. The method of claim 1, wherein after said rendering the avatar of the target object, the method further comprises:
presenting, in a view interface presenting the avatar, a color adjustment function item for adjusting a color of the target portion;
adjusting a color of a target portion of the target object in response to a color adjustment instruction triggered based on the color adjustment function item;
presenting the adjusted avatar of the target object.
8. The method according to any one of claims 1 to 7, wherein the target portion of the head portions of the avatar model comprises at least one of:
mouth, nose, eyes, cheek.
9. An apparatus for color rendering of an avatar, the apparatus comprising:
the acquisition module is used for acquiring an avatar model corresponding to a target object based on an acquired frame image containing the target object, wherein each part of the head of the avatar model is matched with the corresponding part of the head of the target object;
a determining module, configured to determine colors of target portions in each portion of a head corresponding to the avatar model, where the number of the target portions is at least two;
the generating module is used for generating a mask map corresponding to the target part;
the input module is used for inputting a mask image corresponding to the target part into a color channel, and the mask image and the color channel are in one-to-one correspondence;
a rendering module for rendering and presenting the avatar of the target object based on the avatar model, the color of the target portion, and the color channel.
10. The apparatus of claim 9,
the acquisition module is further used for acquiring the characteristics of each part of the head of the target object based on the acquired frame image containing the target object;
sending the acquisition request carrying the characteristics of all parts of the head of the target object;
receiving a returned avatar model of the target object;
the characteristics of the head parts are used for predicting the categories of the head parts, determining materials corresponding to the head parts based on the predicted categories, and combining and generating the virtual image model.
11. The apparatus of claim 10,
the acquisition module is further configured to identify different parts of the head of the target object included in the frame image to determine an image area corresponding to each part of the head of the target object;
performing region segmentation on the frame image based on image regions corresponding to the parts of the head of the target object to obtain images corresponding to different parts of the head of the target object;
and respectively carrying out feature extraction on the images of different parts of the head of the target object to obtain the features of all parts of the head of the target object.
12. The apparatus of claim 9,
the determining module is further configured to determine an area corresponding to the target portion in the frame image;
and identifying the color of the target part in the frame image based on the determined area, and taking the identified color of the target part as the color of the target part corresponding to the virtual image model.
13. The apparatus of claim 9,
the input module is further used for acquiring the number of the target parts;
and when the number of the target parts is determined to be not more than three, inputting the mask image corresponding to each target part into red R, green G and blue B color channels according to the corresponding relation respectively.
14. The apparatus of claim 9,
the input module is further used for acquiring the number of the target parts;
and when the number of the target parts is determined to be four, the mask images corresponding to the target parts are respectively input into color channels of red R, green G, blue B and transparency A according to the corresponding relation.
15. The apparatus of claim 9, wherein the apparatus further comprises:
an adjustment module for presenting a color adjustment function item for adjusting a color of the target portion in a view interface presenting the avatar;
adjusting a color of a target portion of the target object in response to a color adjustment instruction triggered based on the color adjustment function item;
presenting the adjusted avatar of the target object.
16. A terminal, characterized in that the terminal comprises:
a memory for storing executable instructions;
a processor for implementing the avatar color rendering method of any of claims 1-8 when executing said executable instructions.
17. A storage medium storing executable instructions for implementing a method of color rendering of an avatar according to any of claims 1 to 8 when executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911056935.0A CN110796721A (en) | 2019-10-31 | 2019-10-31 | Color rendering method and device of virtual image, terminal and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110796721A true CN110796721A (en) | 2020-02-14 |
Family
ID=69442425
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911056935.0A Pending CN110796721A (en) | 2019-10-31 | 2019-10-31 | Color rendering method and device of virtual image, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110796721A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120323581A1 (en) * | 2007-11-20 | 2012-12-20 | Image Metrics, Inc. | Systems and Methods for Voice Personalization of Video Content |
CN105574918A (en) * | 2015-12-24 | 2016-05-11 | 网易(杭州)网络有限公司 | Material adding method and apparatus of 3D model, and terminal |
CN106652037A (en) * | 2015-10-30 | 2017-05-10 | 深圳超多维光电子有限公司 | Face mapping processing method and apparatus |
CN108171789A (en) * | 2017-12-21 | 2018-06-15 | 迈吉客科技(北京)有限公司 | A kind of virtual image generation method and system |
CN109800679A (en) * | 2018-12-29 | 2019-05-24 | 上海依图网络科技有限公司 | A kind of method and device of the attribute information of determining object to be identified |
CN109857311A (en) * | 2019-02-14 | 2019-06-07 | 北京达佳互联信息技术有限公司 | Generate method, apparatus, terminal and the storage medium of human face three-dimensional model |
- 2019-10-31: CN application CN201911056935.0A filed; published as CN110796721A (en); legal status: Pending
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111489429A (en) * | 2020-04-16 | 2020-08-04 | 诚迈科技(南京)股份有限公司 | Image rendering control method, terminal device and storage medium |
CN111489429B (en) * | 2020-04-16 | 2024-06-07 | 诚迈科技(南京)股份有限公司 | Image rendering control method, terminal equipment and storage medium |
US11722727B2 (en) | 2020-06-28 | 2023-08-08 | Baidu Online Network Technology (Beijing) Co., Ltd. | Special effect processing method and apparatus for live broadcasting, and server |
CN111935491A (en) * | 2020-06-28 | 2020-11-13 | 百度在线网络技术(北京)有限公司 | Live broadcast special effect processing method and device and server |
CN112051995A (en) * | 2020-10-09 | 2020-12-08 | 腾讯科技(深圳)有限公司 | Image rendering method, related device, equipment and storage medium |
CN113112580A (en) * | 2021-04-20 | 2021-07-13 | 北京字跳网络技术有限公司 | Method, device, equipment and medium for generating virtual image |
US12002160B2 (en) | 2021-04-20 | 2024-06-04 | Beijing Zitiao Network Technology Co., Ltd. | Avatar generation method, apparatus and device, and medium |
CN113538455B (en) * | 2021-06-15 | 2023-12-12 | 聚好看科技股份有限公司 | Three-dimensional hairstyle matching method and electronic equipment |
CN113538455A (en) * | 2021-06-15 | 2021-10-22 | 聚好看科技股份有限公司 | Three-dimensional hairstyle matching method and electronic equipment |
CN114119935A (en) * | 2021-11-29 | 2022-03-01 | 北京百度网讯科技有限公司 | Image processing method and device |
CN114119935B (en) * | 2021-11-29 | 2023-10-03 | 北京百度网讯科技有限公司 | Image processing method and device |
CN114399615A (en) * | 2021-12-24 | 2022-04-26 | 广东时谛智能科技有限公司 | Efficient shoe color matching setting method and device based on three-dimensional model |
CN114612608A (en) * | 2022-02-11 | 2022-06-10 | 广东时谛智能科技有限公司 | Image recognition-based shoe body exclusive customization method and device |
CN115578548A (en) * | 2022-12-07 | 2023-01-06 | 广东时谛智能科技有限公司 | Method and device for processing three-dimensional shoe body model according to input picture |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110796721A (en) | Color rendering method and device of virtual image, terminal and storage medium | |
CN110766777B (en) | Method and device for generating virtual image, electronic equipment and storage medium | |
CN110827378B (en) | Virtual image generation method, device, terminal and storage medium | |
CN110827379A (en) | Virtual image generation method, device, terminal and storage medium | |
CN111242881B (en) | Method, device, storage medium and electronic equipment for displaying special effects | |
CN111368685B (en) | Method and device for identifying key points, readable medium and electronic equipment | |
CN110782515A (en) | Virtual image generation method and device, electronic equipment and storage medium | |
KR102697772B1 (en) | Augmented reality content generators that include 3D data within messaging systems | |
CN113287118A (en) | System and method for face reproduction | |
CN111369427B (en) | Image processing method, image processing device, readable medium and electronic equipment | |
CN112001872B (en) | Information display method, device and storage medium | |
CN112330527A (en) | Image processing method, image processing apparatus, electronic device, and medium | |
CN114331820A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN112581635B (en) | Universal quick face changing method and device, electronic equipment and storage medium | |
CN114092678A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN113806306B (en) | Media file processing method, device, equipment, readable storage medium and product | |
CN114913061A (en) | Image processing method and device, storage medium and electronic equipment | |
CN114049674A (en) | Three-dimensional face reconstruction method, device and storage medium | |
CN115937356A (en) | Image processing method, apparatus, device and medium | |
CN114049417B (en) | Virtual character image generation method and device, readable medium and electronic equipment | |
CN114863482A (en) | Image processing method, image processing apparatus, electronic device, and storage medium | |
WO2023009058A1 (en) | Image attribute classification method and apparatus, electronic device, medium, and program product | |
CN110059739B (en) | Image synthesis method, image synthesis device, electronic equipment and computer-readable storage medium | |
WO2023207381A1 (en) | Image processing method and apparatus, and electronic device and storage medium | |
CN116684394A (en) | Media content processing method, apparatus, device, readable storage medium and product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |