CN110766777A - Virtual image generation method and device, electronic equipment and storage medium

Info

Publication number: CN110766777A
Application number: CN201911053622.XA
Authority: CN (China)
Legal status: Granted; Active
Prior art keywords: target object, virtual image, expression, target, avatar
Other languages: Chinese (zh)
Other versions: CN110766777B (en)
Inventor: 蒋颂晟
Current and original assignee: Beijing ByteDance Network Technology Co Ltd
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201911053622.XA, with publication of CN110766777A, grant of the application, and publication of CN110766777B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/70 - Multimodal biometrics, e.g. combining information from different biometric modalities
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiments of the present disclosure provide an avatar generation method and apparatus, an electronic device, and a storage medium. The method includes: performing feature recognition on a frame image containing a target object to obtain bone features and expression features of the target object; acquiring a base avatar model; adjusting the bone features of the base avatar model based on the bone features of the target object to obtain an avatar model matching the bone features of the target object; and adjusting the expression features of that avatar model based on the expression features of the target object to obtain a target avatar model matching the expression features of the target object, the target avatar model being used for rendering the avatar of the target object. Through the present disclosure, a personalized avatar can be created.

Description

Virtual image generation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to video processing technologies, and in particular, to a method and an apparatus for generating an avatar, an electronic device, and a storage medium.
Background
With the rapid development of the internet industry and of artificial intelligence, applications of the virtual world have multiplied, and the construction of an 'avatar' is involved in scenarios ranging from animation to live streaming to short-video production. In the related art, a universal template is mostly used to provide an 'avatar' for the user; such template-based 'avatars' all look alike, lack personalization, and present poorly.
Disclosure of Invention
In view of this, the present disclosure provides a method and an apparatus for generating an avatar, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a method for generating an avatar, including:
carrying out feature recognition on a frame image comprising a target object to obtain bone features and expression features of the target object;
acquiring a basic virtual image model;
adjusting the bone features of the basic virtual image model based on the bone features of the target object to obtain a virtual image model matched with the bone features of the target object;
adjusting the expression characteristics of the virtual image model based on the expression characteristics of the target object to obtain a target virtual image model matched with the expression characteristics of the target object; the target avatar model is used for rendering to obtain an avatar of the target object.
In the above scheme, the performing feature recognition on the frame image including the target object to obtain the bone feature and the expression feature of the target object includes:
identifying different parts of the head of a target object contained in the frame image so as to determine image areas corresponding to the parts of the head of the target object;
and performing feature extraction on the image corresponding to each part of the head of the target object based on the determined image area to obtain the bone features and the expression features of the target object.
In the above solution, the adjusting the bone features of the basic avatar model based on the bone features of the target object includes:
obtaining the bone characteristics of the basic virtual image model;
determining skeleton transformation information corresponding to the virtual image model relative to the basic virtual image model based on the skeleton characteristics of the target object and the skeleton characteristics of the basic virtual image model;
and adjusting the vertex information of each part in the basic virtual image model based on the skeleton transformation information.
In the above solution, the adjusting vertex information of each portion in the base avatar model based on the bone transformation information includes:
determining, based on the bone transformation information, a corresponding bone scaling factor, and a corresponding bone displacement;
and adjusting the positions of all vertexes in the basic virtual image model based on the skeleton scaling coefficient and the skeleton displacement to obtain the virtual image model.
In the foregoing solution, the adjusting the expression features of the avatar model based on the expression features of the target object includes:
determining expression parameters corresponding to target parts in all parts of the head of the virtual image model based on the expression features of the target object, wherein the expression parameters are used for indicating the expression states of the target parts;
acquiring first expression data of the basic virtual image model and second expression data of the virtual image model;
and adjusting the expression characteristics of the virtual image model based on the expression parameters, the first expression data and the second expression data.
In the foregoing solution, the adjusting the expression characteristics of the avatar model based on the expression parameters, the first expression data and the second expression data includes:
interpolating the first expression data and the second expression data based on the expression parameters to obtain an interpolation result;
and adjusting each vertex in the virtual image model based on the interpolation result to obtain the target virtual image model.
In the foregoing scheme, the interpolating the first expression data and the second expression data based on the expression parameter to obtain an interpolation result includes:
based on the expression parameters, interpolating the first expression data and the second expression data by adopting the following formula to obtain an interpolation result:
Z = X * (1 - a) + Y * a
wherein Z is the expression data of the target avatar model, X is the first expression data, Y is the second expression data, and a is the expression parameter.
In the above scheme, the method further comprises:
performing key point identification on a plurality of continuous frame images including the target object;
acquiring key point change information of the target object in the multiple continuous frame images;
and generating a form updating instruction of the virtual image based on the key point change information so as to dynamically present the virtual image.
In the above scheme, the method further comprises:
receiving a modification request for a target portion of the avatar, the modification request carrying an image of the target object including the target portion;
in response to the modification request, updating a target avatar model of the target object based on an image of the target object including the target portion to update an avatar of the target object based on the updated target avatar model.
In a second aspect, an embodiment of the present disclosure provides an apparatus for generating an avatar, including:
the first identification module is used for carrying out feature identification on a frame image comprising a target object to obtain the bone feature and the expression feature of the target object;
the acquisition module is used for acquiring a basic virtual image model;
the first adjusting module is used for adjusting the bone characteristics of the basic virtual image model based on the bone characteristics of the target object to obtain a virtual image model matched with the bone characteristics of the target object;
the second adjusting module is used for adjusting the expression characteristics of the virtual image model based on the expression characteristics of the target object to obtain a target virtual image model matched with the expression characteristics of the target object; the target avatar model is used for rendering to obtain an avatar of the target object.
In the above scheme, the first identifying module is further configured to identify different portions of the head of the target object included in the frame image, so as to determine an image area corresponding to each portion of the head of the target object;
and performing feature extraction on the image corresponding to each part of the head of the target object based on the determined image area to obtain the bone features and the expression features of the target object.
In the above scheme, the first adjusting module is further configured to obtain bone features of the basic avatar model;
determining skeleton transformation information corresponding to the virtual image model relative to the basic virtual image model based on the skeleton characteristics of the target object and the skeleton characteristics of the basic virtual image model;
and adjusting the vertex information of each part in the basic virtual image model based on the skeleton transformation information.
In the above solution, the first adjusting module is further configured to determine, based on the bone transformation information, a corresponding bone scaling factor and a corresponding bone displacement;
and adjusting the positions of all vertexes in the basic virtual image model based on the skeleton scaling coefficient and the skeleton displacement to obtain the virtual image model.
In the above scheme, the second adjusting module is further configured to determine, based on the expression features of the target object, expression parameters corresponding to a target portion in each portion of the head of the avatar model, where the expression parameters are used to indicate an expression state of the target portion;
acquiring first expression data of the basic virtual image model and second expression data of the virtual image model;
and adjusting the expression characteristics of the virtual image model based on the expression parameters, the first expression data and the second expression data.
In the above scheme, the second adjusting module is further configured to interpolate the first expression data and the second expression data based on the expression parameter to obtain an interpolation result;
and adjusting each vertex in the virtual image model based on the interpolation result to obtain the target virtual image model.
In the foregoing scheme, the second adjusting module is further configured to interpolate the first expression data and the second expression data by using the following formula based on the expression parameter, so as to obtain an interpolation result:
Z = X * (1 - a) + Y * a
wherein Z is the expression data of the target avatar model, X is the first expression data, Y is the second expression data, and a is the expression parameter.
In the above scheme, the apparatus further comprises:
the second identification module is used for carrying out key point identification on a plurality of continuous frame images comprising the target object;
acquiring key point change information of the target object in the multiple continuous frame images;
and generating a form updating instruction of the virtual image based on the key point change information so as to dynamically present the virtual image.
In the above scheme, the apparatus further comprises:
a modification module for receiving a modification request for a target portion of the avatar, the modification request carrying an image of the target object including the target portion;
in response to the modification request, updating a target avatar model of the target object based on an image of the target object including the target portion to update an avatar of the target object based on the updated target avatar model.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the method for generating the virtual image provided by the embodiment of the disclosure when the executable instruction is executed.
In a fourth aspect, the present disclosure provides a storage medium storing executable instructions, where the executable instructions are executed to implement the above-mentioned avatar generation method provided in the present disclosure.
The application of the embodiment of the present disclosure has the following beneficial effects:
by applying the embodiment of the disclosure, the skeleton characteristics and the expression characteristics of the target object are obtained by identifying the frame image of the target object, the skeleton characteristics of the basic virtual image model are adjusted based on the skeleton characteristics, the expression characteristics of the virtual image model matched with the skeleton characteristics of the target object are adjusted based on the expression characteristics, and then the target virtual image model matched with the expression characteristics of the target object is obtained to generate the virtual image of the target object through rendering; the target virtual image model is obtained by adjusting the basic virtual image model based on the bone characteristics and the expression characteristics of the target object, so that the personalized creation of the virtual image can be realized, and the expression effect of the virtual image can be better presented.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic architecture diagram of an avatar generation system provided in an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure;
fig. 3 is a first flowchart illustrating a method for generating an avatar according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of frame image acquisition of a target object according to an embodiment of the present disclosure;
FIG. 5 is a first schematic diagram of an image scanning frame according to an embodiment of the present disclosure;
fig. 6 is a second schematic diagram of an image scanning frame provided in the embodiment of the present disclosure;
fig. 7 is a schematic view of an interface for detecting key points of a human face according to an embodiment of the present disclosure;
FIG. 8 is a schematic illustration of the skeletal feature adjustment of a base avatar model provided by an embodiment of the present disclosure;
fig. 9 is a schematic diagram illustrating a comparison of an avatar obtained by adjusting an avatar model in different ways according to an embodiment of the present disclosure;
FIG. 10 is a schematic illustration of an avatar modification interface provided by an embodiment of the present disclosure;
fig. 11 is a flowchart illustrating a second method for generating an avatar according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of an avatar generation apparatus according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Before the embodiments of the present disclosure are described in further detail, the terms and expressions involved in the embodiments are explained; the following explanations apply to these terms and expressions.
1) Avatar: through intelligent recognition, a user's expressions, movements, demeanor, speech, and the like are converted in real time into the actions of a virtual character, whose facial expressions, movements, and voice tone can fully mirror those of the user.
2) In response to: indicates the condition or state on which a performed operation depends; when the condition or state on which it depends is satisfied, the one or more operations performed may follow in real time or after a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are performed.
Based on the above explanations of the terms involved in the embodiments of the present disclosure, refer to fig. 1, an architectural diagram of an avatar generation system provided by an embodiment of the present disclosure. To support an exemplary application, a terminal 400 (including a terminal 400-1 and a terminal 400-2) is connected to a server 200 through a network 300. The network 300 may be a wide area network, a local area network, or a combination of the two, and implements data transmission using wireless or wired links.
A terminal 400 (e.g., terminal 400-1) for acquiring a frame image containing a target object; rendering and presenting the avatar of the target object based on the target avatar model.
The server 200 is configured to perform feature recognition on a frame image including a target object to obtain a bone feature and an expression feature of the target object; acquiring a basic virtual image model; adjusting the bone features of the basic virtual image model based on the bone features of the target object to obtain a virtual image model matched with the bone features of the target object; and adjusting the expression characteristics of the virtual image model based on the expression characteristics of the target object to obtain the target virtual image model matched with the expression characteristics of the target object.
Here, in practical applications, the terminal 400 may be various types of user terminals such as a smart phone, a tablet computer, a notebook computer, and the like, and may also be a wearable computing device, a Personal Digital Assistant (PDA), a desktop computer, a cellular phone, a media player, a navigation device, a game console, a television, or a combination of any two or more of these data processing devices or other data processing devices; the server 200 may be a server configured separately to support various services, or may be a server cluster.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device may be various terminals including a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), a vehicle mounted terminal (e.g., a car navigation terminal), etc., and a fixed terminal such as a digital Television (TV), a desktop computer, etc. The electronic device shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 2, the electronic device may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 210 that may perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 220 or a program loaded from a storage device 280 into a Random Access Memory (RAM) 230. The RAM 230 also stores various programs and data necessary for the operation of the electronic device. The processing device 210, the ROM 220, and the RAM 230 are connected to each other through a bus 240. An Input/Output (I/O) interface 250 is also connected to the bus 240.
Generally, the following devices may be connected to I/O interface 250: input devices 260 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 270 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 280 including, for example, magnetic tape, hard disk, etc.; and a communication device 290. The communication device 290 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data.
In particular, according to embodiments of the present disclosure, the processes described by the flowcharts provided herein may be implemented as computer software programs. For example, the disclosed embodiments include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such embodiments, the computer program may be downloaded and installed from a network through the communication device 290, or installed from the storage device 280, or installed from the ROM 220. When the computer program is executed by the processing device 210, the functions in the avatar generation method of the disclosed embodiments are performed.
It should be noted that the computer readable medium described above in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an erasable programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable compact disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the disclosed embodiments, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the disclosed embodiments, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including over electrical wiring, fiber optics, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may be present alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is enabled to execute the method for generating the avatar provided by the embodiment of the present disclosure.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) and a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams provided by the embodiments of the present disclosure illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described in the embodiments of the present disclosure may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Parts (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of embodiments of the present disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The following describes a method for generating an avatar provided by an embodiment of the present disclosure. Referring to fig. 3, fig. 3 is a first schematic flow chart of a method for generating an avatar according to an embodiment of the present disclosure, in some embodiments, the method for generating an avatar may be implemented by a server or a terminal, or implemented by the server and the terminal in a cooperative manner, taking the server as an example, the method for generating an avatar according to an embodiment of the present disclosure includes:
step 301: and the server performs characteristic identification on the frame image comprising the target object to obtain the bone characteristic and the expression characteristic of the target object.
Here, a frame image including the target object is captured by the terminal, and an acquisition request of an avatar corresponding to the target object is transmitted based on the frame image. The frame image of the target object is carried in the acquisition request, and the acquisition request is used for requesting a target virtual image model corresponding to the target object.
In some embodiments, a client is provided on the terminal, such as an instant messaging client, a microblog client, a short video client, and the like, and when a user needs to shoot videos related to the avatar, the user can trigger an instruction for generating the avatar by sliding, clicking, and the like on a view interface displayed on the terminal. The terminal responds to the generation instruction of the virtual image, collects a frame image containing the target object, and further sends an acquisition request of the virtual image corresponding to the target object to the server based on the frame image.
In practical applications, the terminal presents to the user, through the view interface, a toolbar containing icons for various shooting props such as stickers, filters, and avatars, and the user can select the desired shooting prop with a click. When the terminal detects that the shooting prop icon selected by the user is the avatar icon, it receives the avatar generation instruction triggered by the user's click on that icon. Illustratively, referring to fig. 4, fig. 4 is a schematic diagram of acquiring a frame image of a target object according to an embodiment of the present disclosure: the terminal presents a preview frame image containing the target object through the view interface, together with a page containing the avatar icon. When the user clicks the avatar icon, the terminal shows the icon in a selected state, for example by enclosing it with a box; at this point the terminal has received the avatar generation instruction triggered by the user, and it collects the frame image of the target object based on that instruction.
In some embodiments, the terminal may further present an image scanning frame through the view interface when acquiring the frame image of the target object. The image scanning frame is set based on the target object, is matched with the outline of the target object, and can present corresponding prompt information to the user so as to prompt the user to adjust the shooting posture, the shooting angle, the shooting distance and the like of the user during shooting.
Illustratively, referring to fig. 5, fig. 5 is a first schematic diagram of an image scanning frame provided in an embodiment of the present disclosure: when the terminal presents the image capture interface and detects a target object, it presents the image scanning frame and, by displaying the text "please place the face in the frame", prompts the user to place the face inside the frame while creating the avatar. If the terminal detects that the contour of the target object is not within the image scanning frame, the user may be prompted to adjust the shooting posture, angle, or distance with text such as "please photograph the front face" or "please move the face into the frame"; see fig. 6, a second schematic diagram of the image scanning frame, in which the contour of the target object does not match the frame.
And the server receives the acquisition request of the frame image containing the target object, and identifies the frame image of the target object to acquire the bone characteristics and the expression characteristics of the target object.
In some embodiments, the bone feature and the expression feature of the target object in the frame image may be obtained as follows: identifying different parts of the head of a target object contained in the frame image to determine image areas corresponding to the parts of the head of the target object; and performing feature extraction on the image corresponding to each part of the head of the target object based on the determined image area to obtain the bone features and the expression features of the target object.
Here, the head portion of the target object includes at least one of: eyes, hair, ears, mouth, nose, eyebrows, beard, and face. Here, the eyes may include eyes and glasses, and the hair part may include hair and a cap.
In some embodiments, if the characteristics of the head portions of the target object are determined, it is first required to obtain image areas in the frame images corresponding to the head portions. Specifically, the terminal may determine the image area of each part of the head object of the target object by means of face key point recognition. Here, the face key point refers to a point that can reflect local features (such as color features, shape features, and texture features) of a target object in an image, and is generally a set of a plurality of pixel points, for example, the face key point may be an eye key point, a mouth key point, or a nose key point.
In practical application, carrying out face key point detection on a frame image containing a target object, and determining key points included by each part of the head of the target object; and based on the determined key points of the face, carrying out face alignment by adopting a face alignment algorithm, and further determining an area formed by the key points and an image area corresponding to each part of the head of the target object. Referring to fig. 7, fig. 7 is a schematic interface diagram of face keypoint detection provided by the embodiment of the present disclosure, where a dashed box 1 is an image area of a nose determined by keypoints included in the nose, and a dashed box 2 is an image area of a mouth determined by keypoints included in the mouth.
Based on the determined image areas corresponding to the parts of the head of the target object, carrying out area segmentation on the acquired frame image, so that each segmented image corresponds to one of different parts of the head of the target object; and respectively extracting the features of the images corresponding to the parts of the head of the target object based on the determined image areas to obtain the bone features and the expression features of the target object.
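As a concrete illustration of this region-determination and segmentation step, the following Python sketch derives a bounding box per head part from detected face keypoints and crops the frame accordingly. It is a minimal sketch, not the patent's implementation: the 68-point landmark indexing and the PART_KEYPOINT_INDICES grouping are assumptions for illustration, and any face landmark detector could supply the keypoints.

```python
from typing import Dict, List, Tuple

import numpy as np

# Hypothetical grouping of 68-point face-landmark indices by head part;
# the patent does not prescribe a particular keypoint scheme.
PART_KEYPOINT_INDICES: Dict[str, List[int]] = {
    "nose": list(range(27, 36)),
    "mouth": list(range(48, 68)),
    "left_eye": list(range(36, 42)),
    "right_eye": list(range(42, 48)),
}

def part_regions(keypoints: np.ndarray) -> Dict[str, Tuple[int, int, int, int]]:
    """Compute an axis-aligned bounding box (x, y, w, h) for each head part
    from the keypoints belonging to that part (cf. dashed boxes 1 and 2 in fig. 7)."""
    regions = {}
    for part, indices in PART_KEYPOINT_INDICES.items():
        pts = keypoints[indices]              # (N, 2) pixel coordinates
        x0, y0 = pts.min(axis=0)
        x1, y1 = pts.max(axis=0)
        regions[part] = (int(x0), int(y0), int(x1 - x0), int(y1 - y0))
    return regions

def crop_regions(frame: np.ndarray,
                 regions: Dict[str, Tuple[int, int, int, int]]) -> Dict[str, np.ndarray]:
    """Segment the frame into one sub-image per head part, ready for
    extraction of bone features and expression features."""
    return {part: frame[y:y + h, x:x + w] for part, (x, y, w, h) in regions.items()}
```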
Step 302: and acquiring a basic virtual image model.
Here, the basic avatar model is a general template for creating an avatar, which is created in advance by a designer, specifically, based on a standard form corresponding to each part of the head such as a standard face, a standard lip shape, and a standard eye shape of a person.
In practical applications, the creation of the personalized avatar is based on the basic avatar model, and therefore, when generating the target avatar model corresponding to the target object, a pre-made basic avatar model needs to be obtained.
Step 303: and adjusting the bone features of the basic virtual image model based on the bone features of the target object to obtain the virtual image model matched with the bone features of the target object.
After the bone features of the target object are obtained, the bone features of the basic virtual image model can be adjusted based on the bone features, namely, the face pinching process is carried out, and then the virtual image model matched with the bone features of the target object is obtained.
In some embodiments, the skeletal features of the base avatar model may be adjusted by: obtaining the skeleton characteristics of the basic virtual image model; determining skeleton transformation information of the corresponding virtual image model relative to the basic virtual image model based on the skeleton characteristics of the target object and the skeleton characteristics of the basic virtual image model; and adjusting the vertex information of each part in the basic virtual image model based on the skeleton transformation information.
First, the skeleton characteristics of the basic avatar model, that is, the position information of key points of each skeleton constituting the basic avatar model, are obtained. According to the bone characteristics of the target object and the bone characteristics of the basic virtual image model, for example, according to the change situation of the key point position of the bone, the bone transformation information of the virtual image model relative to the basic virtual image model is determined, and further, the vertex information of each part in the virtual image model can be adjusted according to the bone transformation information.
In some embodiments, the vertex information for each portion of the base avatar model may be adjusted by: determining, based on the bone transformation information, a corresponding bone scaling factor, and a corresponding bone displacement; and adjusting the positions of all vertexes in the basic virtual image model based on the skeleton scaling coefficient and the skeleton displacement to obtain the virtual image model.
When vertex information of each part in the basic virtual model is adjusted, a bone scaling coefficient and a bone displacement corresponding to a bone of each part are determined according to the obtained bone transformation information, and the position of each vertex in the basic virtual model is adjusted based on the bone scaling coefficient and the bone displacement.
In practical applications, the bone transformation information is a transformation matrix, and each vertex of the base avatar model is likewise expressed as a matrix. Adjusting the position of each vertex in the base model is therefore a spatial transformation of each vertex: specifically, the matrix corresponding to a model vertex can be multiplied by the transformation matrix (i.e., the bone transformation information) to change the vertex's spatial position, so that the bones in the base model are scaled and displaced to change the appearance features of the head, achieving the face-pinching effect.
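As an illustration of this vertex transformation, the sketch below builds a 4x4 homogeneous transform from a bone's scaling coefficient and displacement and applies it to mesh vertices. The homogeneous encoding and the example values are assumptions for illustration; the patent specifies only that vertex matrices are multiplied by the transformation matrix.

```python
import numpy as np

def bone_transform_matrix(scale: np.ndarray, displacement: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a bone's scaling coefficient
    (sx, sy, sz) and its displacement (dx, dy, dz)."""
    m = np.diag([scale[0], scale[1], scale[2], 1.0])
    m[:3, 3] = displacement
    return m

def transform_vertices(vertices: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Multiply each model vertex (as a homogeneous row vector) by the
    transformation matrix to change its spatial position."""
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homogeneous @ m.T)[:, :3]

# Example (illustrative values): scale the brow-bone region up 10 percent
# and shift it 0.2 units upward, as in the fig. 8 eyebrow-bone adjustment.
brow_vertices = np.array([[0.0, 1.0, 0.1], [0.2, 1.1, 0.1]])
m = bone_transform_matrix(np.array([1.1, 1.1, 1.1]), np.array([0.0, 0.2, 0.0]))
adjusted = transform_vertices(brow_vertices, m)
```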
Exemplarily, referring to fig. 8, fig. 8 is a schematic diagram illustrating the adjustment of the bone features of the basic avatar model according to the embodiment of the present disclosure. Here, the points shown in the figure are the vertices, the connecting lines are the constructed bones, and the bones in fig. 8 are the portions corresponding to the eyebrow bones. Based on the change of the vertex information, the scaling and the displacement of the skeleton are realized, so that the appearance of the basic virtual image model is changed, and the face pinching effect is achieved.
Step 304: adjusting the expression characteristics of the virtual image model based on the expression characteristics of the target object to obtain a target virtual image model matched with the expression characteristics of the target object; the target avatar model is used for rendering to obtain an avatar of the target object.
After the expression features of the target object are obtained, the expression features of the obtained virtual image model can be adjusted based on the expression features so that the expression features of the virtual image model are matched with the expression features of the target object, and then the virtual image of the target object can be obtained based on the adjusted target virtual image model through rendering.
In some embodiments, the expressive features of the avatar model may be adjusted by: determining expression parameters corresponding to target parts in all parts of the head of the virtual image model based on the expression characteristics of the target object, wherein the expression parameters are used for indicating the expression states of the target parts; acquiring first expression data of a basic virtual image model and second expression data of the virtual image model; and adjusting the expression characteristics of the virtual image model based on the expression parameters, the first expression data and the second expression data.
Based on the obtained expression features of the target part in each part of the head of the target object, determining an expression parameter, namely a Blendshape parameter, of the corresponding part of each part of the head of the virtual image model, where the expression parameter is used to indicate an expression state in which the target part is currently located, for example, the target part is an eye, and [0,1] may be used to indicate the expression state in which the eye is located, for example, 0 represents an open-eye state and 1 represents a closed-eye state.
After the expression parameters corresponding to the target part among the head parts of the avatar are determined, the expression features of the avatar model are adjusted based on those parameters. If the bone-adjusted avatar model were adjusted directly according to the expression parameters alone, the avatar's expression could present poorly. Therefore, in the embodiment of the present disclosure, the base avatar model and the bone-adjusted avatar model are combined to adjust the expression features of the avatar model. In practical applications, the first expression data of the base avatar model and the second expression data of the avatar model are obtained, where expression data is the position information of each vertex of the respective model; the expression features of the avatar model are then adjusted based on the first expression data, the second expression data, and the expression parameters.
Exemplarily, referring to fig. 9, fig. 9 compares the avatars obtained by adjusting the avatar model in different ways according to an embodiment of the present disclosure. In the left diagram, the bone-adjusted avatar model is adjusted directly according to the expression parameters and then rendered; the eye region of the target object is visibly wrong, with a low degree of eyelid fit. The right diagram uses the method provided by the embodiment of the present disclosure: the avatar model is adjusted according to the expression parameters combined with the expression data of both the base avatar model and the avatar model, i.e., the expression features of the face-pinched model are adjusted using the expression parameters together with the pre-pinching and post-pinching data. The result is the avatar shown on the right, whose eyelids fit closely and whose expression is far more natural.
In some embodiments, the expressive features of the avatar model may be adjusted by: interpolating the first expression data and the second expression data based on the expression parameters to obtain an interpolation result; and adjusting each vertex in the virtual image model based on the interpolation result to obtain the target virtual image model.
In practical application, the first expression data and the second expression data may be interpolated based on the expression parameters, and specifically, the following formula may be adopted to interpolate the first expression data and the second expression data to obtain an interpolation result:
Z = X * (1 - a) + Y * a
wherein Z is the expression data of the target virtual image model, X is the first expression data, Y is the second expression data, and a is the expression parameter.
And adjusting the position of each vertex in the virtual image model based on the interpolated result to obtain a target virtual image model matched with the target object.
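A minimal sketch of this interpolation step follows, assuming the first and second expression data are arrays of vertex positions with identical shape (the array representation and the example values are assumptions; the formula is the one given above).

```python
import numpy as np

def interpolate_expression(x: np.ndarray, y: np.ndarray, a: float) -> np.ndarray:
    """Z = X * (1 - a) + Y * a

    x: first expression data (vertex positions of the base avatar model)
    y: second expression data (vertex positions of the bone-adjusted avatar model)
    a: expression parameter in [0, 1], e.g. 0 for eyes open, 1 for eyes closed
    """
    return x * (1.0 - a) + y * a

# Example: a = 0.5 (eyes half closed) blends the two models' eyelid vertices evenly.
base_vertices = np.array([[0.0, 0.50, 0.0]])      # X, before face pinching
pinched_vertices = np.array([[0.0, 0.44, 0.0]])   # Y, after face pinching
target_vertices = interpolate_expression(base_vertices, pinched_vertices, 0.5)
```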
Based on the target avatar model matched with the target object and on the preset materials for each head part of the model, the avatar of the target object is obtained and presented through rendering by a graphics processing unit (GPU) or the like.
In some embodiments, after obtaining the avatar of the target object, the avatar may also be dynamically rendered: performing key point identification on a plurality of continuous frame images including a target object; acquiring key point change information of a target object in multiple continuous frame images; and generating a form updating instruction of the virtual image based on the key point change information so as to dynamically present the virtual image.
After the corresponding avatar is generated for the target object, the avatar may be dynamically presented to improve the user's video capture experience, i.e., so that the avatar may be changed according to changes in the movements or expressions of the target object's head.
In actual implementation, the terminal can collect multiple consecutive frame images of the target object and send them to the server. For each collected frame image it receives, the server performs the following operations: acquiring the position change information of the keypoints of each head part of the target object in the frame image relative to the keypoints of the corresponding parts in the previous frame image; adjusting the materials of the corresponding head parts based on that position change information, updating the target avatar model based on the adjusted materials, and generating a form update instruction for the avatar; and sending the form update instruction, which may carry the updated target avatar model, to the terminal, so that the terminal updates the presented avatar of the target object according to the updated model.
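The per-frame keypoint comparison can be sketched as follows. The movement threshold is an illustrative assumption; the patent states only that the form update instruction is generated from the keypoint change information, not under what condition.

```python
import numpy as np

def keypoint_change(prev_keypoints: np.ndarray, curr_keypoints: np.ndarray) -> np.ndarray:
    """Position change of each head-part keypoint relative to the previous frame."""
    return curr_keypoints - prev_keypoints

def needs_form_update(change: np.ndarray, threshold_px: float = 1.0) -> bool:
    """Generate a form update instruction only when some keypoint has moved
    more than threshold_px pixels (threshold_px is an illustrative assumption)."""
    return bool(np.linalg.norm(change, axis=1).max() > threshold_px)
```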
In some embodiments, after generating the avatar for the target object, the avatar may also be modified: receiving a modification request aiming at a target part of the virtual image, wherein the modification request carries an image of a target object comprising the target part; in response to the modification request, the target avatar model of the target object is updated based on the image of the target object including the target portion to update the avatar of the target object based on the updated target avatar model.
When a user is not satisfied with the constructed virtual image or wants to further improve the virtual image, the user can click an icon of the virtual image corresponding to a certain virtual image presented on the view interface through the terminal, namely, a thumbnail of the virtual image, and trigger a modification instruction of the virtual image. Referring to fig. 10, fig. 10 is a schematic view of an avatar modification interface provided by an embodiment of the present disclosure, where a terminal displays that a user creates two avatars in total, and when the terminal receives a click operation of the user, the terminal encloses an avatar icon corresponding to an avatar that the user specifies to modify by a selection frame, and displays a button of "modify avatar" on a view interface for the user to perform a modification operation.
After receiving a modification instruction of a user for the virtual image, the terminal acquires the frame image of the target object again, carries the frame image including the target part in a modification request and sends the modification request to the server so as to request for modifying the target part of the virtual image.
After receiving a modification request for modifying a target part of the virtual image, the server analyzes the modification request, acquires a frame image carried in the modification request, and identifies the frame image of the target object comprising the target part so as to determine an image area corresponding to the target part; based on the image area corresponding to the target part, segmenting the frame of image to obtain an image corresponding to the target part; performing class prediction on the image of the target part through a pre-trained neural network model to determine the class of the target part; determining materials corresponding to the new target part according to the category of the target part; and replacing the material of the target part in the original target virtual image model with the material of the new target part so as to update the target virtual image model of the target object.
And sending the updated target avatar model to the terminal so that the terminal updates the avatar of the presented target object based on the updated target avatar model.
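The server-side modification flow described above can be sketched as below. The segmentation, classification, and material-lookup components are hypothetical stand-ins passed in as callables, since the patent names these steps (region segmentation, a pre-trained classification network, material selection by category) but does not define their implementations.

```python
from typing import Callable, Dict

import numpy as np

def update_target_part(
    model_materials: Dict[str, str],
    frame: np.ndarray,
    part_name: str,
    segment_part: Callable[[np.ndarray, str], np.ndarray],  # crops the target part
    classify_part: Callable[[np.ndarray], str],             # pre-trained class prediction
    material_bank: Dict[str, str],                          # category -> material asset
) -> Dict[str, str]:
    """Replace the material of the avatar's target part based on a new frame image."""
    part_image = segment_part(frame, part_name)    # image area of the target part
    category = classify_part(part_image)           # e.g. a hairstyle or eye-shape class
    model_materials[part_name] = material_bank[category]
    return model_materials
```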
The application of the embodiment of the present disclosure has the following beneficial effects:
by applying the embodiment of the disclosure, the skeleton characteristics and the expression characteristics of the target object are obtained by identifying the frame image of the target object, the skeleton characteristics of the basic virtual image model are adjusted based on the skeleton characteristics, the expression characteristics of the virtual image model matched with the skeleton characteristics of the target object are adjusted based on the expression characteristics, and then the target virtual image model matched with the expression characteristics of the target object is obtained to generate the virtual image of the target object through rendering; the target virtual image model is obtained by adjusting the basic virtual image model based on the bone characteristics and the expression characteristics of the target object, so that the personalized creation of the virtual image can be realized, and the expression effect of the virtual image can be better presented.
The following description is continued with reference to a specific embodiment of the method for generating an avatar provided by the embodiment of the present disclosure. Referring to fig. 11, fig. 11 is a schematic flowchart of a second method for generating an avatar according to an embodiment of the present disclosure, where the method for generating an avatar according to an embodiment of the present disclosure includes:
step 1101: the terminal collects frame images of the target object and sends an avatar acquisition request corresponding to the target object.
Step 1102: and the server performs feature recognition on the frame image of the target object to obtain the bone features and the expression features of the target object.
When performing feature recognition, the server first recognizes different parts of the head of the target object included in the frame image, determines image areas corresponding to the respective parts, and performs feature extraction on the image of the image area to obtain the bone features and the expression features of the target object.
Step 1103: Obtain the bone features of the basic avatar model.
Step 1104: Determine bone transformation information based on the bone features of the target object and the bone features of the basic avatar model.
Step 1105: Determine a bone scaling factor and a bone displacement based on the bone transformation information.
Step 1106: Adjust the positions of the vertices in the basic avatar model based on the bone scaling factor and the bone displacement.
Here, adjusting the basic avatar model at the skeletal level to obtain the avatar model corresponds to a face-pinching (face customization) process.
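A minimal sketch of this face-pinching step follows, under the simplifying assumption that each vertex is driven by a single bone rather than a weighted skin binding:

```python
import numpy as np

def pinch_face(vertices, bone_ids, bone_origins, scales, displacements):
    """Apply per-bone scaling and displacement to basic-model vertex positions."""
    s = scales[bone_ids][:, None]      # (N, 1) scaling factor of each vertex's bone
    o = bone_origins[bone_ids]         # (N, 3) origin of the driving bone
    d = displacements[bone_ids]        # (N, 3) displacement of the driving bone
    return o + (vertices - o) * s + d  # scale about the bone origin, then translate

vertices = np.array([[0.0, 1.0, 0.0], [0.2, 1.1, 0.0]])  # two basic-model vertices
bone_ids = np.array([0, 0])                              # both driven by bone 0
bone_origins = np.array([[0.0, 1.0, 0.0]])
scales = np.array([1.2])                                 # widen this bone by 20%
displacements = np.array([[0.0, 0.05, 0.0]])             # and lift it slightly
print(pinch_face(vertices, bone_ids, bone_origins, scales, displacements))
```

With a scaling factor of 1 and zero displacement the vertices are returned unchanged, so the basic model is the identity case of this adjustment.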
Step 1107: Determine, based on the expression features of the target object, expression parameters corresponding to a target part among the parts of the avatar model's head.
Step 1108: Obtain first expression data of the basic avatar model and second expression data of the avatar model.
Step 1109: Interpolate the first expression data and the second expression data based on the expression parameters to obtain an interpolation result.
Step 1110: Adjust the positions of the vertices in the avatar model based on the interpolation result to obtain the target avatar model.
Step 1111: Send the target avatar model to the terminal.
Step 1112: The terminal renders and presents the avatar of the target object based on the target avatar model.
Step 1113: The terminal collects a second frame image of the target object and sends it to the server.
Here, the second frame image and the first frame image are consecutive frames.
Step 1114: The server obtains the position information of the face key points of the target object in the second frame image.
Step 1115: Determine the position change information of the face key points of the target object in the second frame image relative to the face key points of the target object in the first frame image.
Step 1116: Update the target avatar model of the target object based on the position change information of the face key points, and send a form update instruction for the avatar to the terminal.
Step 1117: The terminal updates the presented avatar of the target object according to the updated target avatar model carried in the update instruction.
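A minimal sketch of the per-frame change signal, assuming the key points are given as (x, y) arrays aligned across the two frames; the jitter threshold is an assumption added for illustration, not part of the disclosure:

```python
import numpy as np

def keypoint_change(prev_kps: np.ndarray, curr_kps: np.ndarray, jitter: float = 0.5):
    """Displacement of each face key point between two consecutive frames."""
    delta = curr_kps - prev_kps
    delta[np.linalg.norm(delta, axis=1) < jitter] = 0.0  # suppress sub-pixel jitter
    return delta

prev_kps = np.array([[100.0, 120.0], [140.0, 121.0]])  # key points in frame 1
curr_kps = np.array([[100.2, 120.1], [140.0, 127.0]])  # key points in frame 2
print(keypoint_change(prev_kps, curr_kps))  # only the second point has moved
```

The non-zero rows would then drive the form update instruction sent to the terminal.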
The following describes the units and/or modules of the avatar generation apparatus provided by the embodiments of the present disclosure. It is understood that the units or modules of the avatar generation apparatus may be implemented in the electronic device shown in fig. 2 by means of software (e.g., a computer program stored in the above-mentioned storage medium), or by means of the above-mentioned hardware logic components (e.g., FPGA, ASIC, SOC, and CPLD).
Referring to fig. 12, fig. 12 is a schematic diagram of an alternative structure of an avatar generation apparatus 1200 implementing an embodiment of the present disclosure, showing the following modules: a first identifying module 1210, an obtaining module 1220, a first adjusting module 1230, and a second adjusting module 1240; the functions of each module are described below.
It should be noted that the above division into modules does not constitute a limitation on the electronic device itself; for example, some modules may be split into two or more sub-modules, and some modules may be combined into a new module.
It should be further noted that the names of the above modules do not, in some cases, limit the modules themselves; for example, the above "first identifying module 1210" may also be described as a module for performing feature recognition on a frame image including a target object to obtain the bone features and expression features of the target object.
For the same reason, the omission of a detailed description of certain units and/or modules of the electronic device does not imply that those units and/or modules are absent, and all operations performed by the electronic device may be implemented by the corresponding units and/or modules therein.
With continuing reference to fig. 12, fig. 12 is a schematic structural diagram of an avatar generation apparatus 1200 provided in an embodiment of the present disclosure; the apparatus includes:
a first identification module 1210, configured to perform feature identification on a frame image including a target object to obtain a bone feature and an expression feature of the target object;
an obtaining module 1220, configured to obtain a basic avatar model;
a first adjusting module 1230, configured to adjust the bone features of the base avatar model based on the bone features of the target object, so as to obtain an avatar model matching the bone features of the target object;
a second adjusting module 1240, configured to adjust the expression features of the avatar model based on the expression features of the target object, to obtain a target avatar model matching the expression features of the target object; the target avatar model is used for rendering to obtain an avatar of the target object.
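By way of illustration, the following is a minimal sketch of how these four modules could chain together; the callables are placeholders standing in for the behaviors described above, not an implementation of them:

```python
class AvatarGenerator:
    """Mirrors the module structure of fig. 12 as a simple pipeline."""

    def __init__(self, recognize, load_basic_model, adjust_bones, adjust_expression):
        self.recognize = recognize                  # first identifying module 1210
        self.load_basic_model = load_basic_model    # obtaining module 1220
        self.adjust_bones = adjust_bones            # first adjusting module 1230
        self.adjust_expression = adjust_expression  # second adjusting module 1240

    def generate(self, frame):
        bone_feats, expr_feats = self.recognize(frame)
        basic_model = self.load_basic_model()
        avatar_model = self.adjust_bones(basic_model, bone_feats)
        return self.adjust_expression(avatar_model, expr_feats)  # target avatar model

# Toy usage with stub callables, just to show the data flow.
gen = AvatarGenerator(lambda f: ("bones", "expr"), lambda: "basic",
                      lambda m, b: m + "+bones", lambda m, e: m + "+expr")
print(gen.generate(None))  # basic+bones+expr
```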
In some embodiments, the first identifying module 1210 is further configured to identify different portions of the head of the target object included in the frame image, so as to determine an image area corresponding to each portion of the head of the target object;
and performing feature extraction on the image corresponding to each part of the head of the target object based on the determined image area to obtain the bone features and the expression features of the target object.
In some embodiments, the first adjusting module 1230 is further configured to obtain bone features of the base avatar model;
determining skeleton transformation information corresponding to the virtual image model relative to the basic virtual image model based on the skeleton characteristics of the target object and the skeleton characteristics of the basic virtual image model;
and adjusting the vertex information of each part in the basic virtual image model based on the skeleton transformation information.
In some embodiments, the first adjusting module 1230 is further configured to determine a corresponding bone scaling factor and a corresponding bone displacement based on the bone transformation information;
and adjusting the positions of all vertexes in the basic virtual image model based on the skeleton scaling coefficient and the skeleton displacement to obtain the virtual image model.
In some embodiments, the second adjusting module 1240 is further configured to determine, based on the expression features of the target object, expression parameters corresponding to a target portion in each portion of the avatar model head, where the expression parameters are used to indicate an expression state of the target portion;
acquiring first expression data of the basic virtual image model and second expression data of the virtual image model;
and adjusting the expression characteristics of the virtual image model based on the expression parameters, the first expression data and the second expression data.
In some embodiments, the second adjusting module 1240 is further configured to interpolate the first expression data and the second expression data based on the expression parameter to obtain an interpolation result;
and adjusting each vertex in the virtual image model based on the interpolation result to obtain the target virtual image model.
In some embodiments, the second adjusting module 1240 is further configured to interpolate the first expression data and the second expression data by using the following formula based on the expression parameter to obtain an interpolation result:
Z=X*(1-a)+Y*a
wherein Z is the expression data of the target avatar model, X is the first expression data, Y is the second expression data, and a is the expression parameter.
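As a concrete reading of this formula: with a = 0 the result equals the first expression data X, and with a = 1 it equals the second expression data Y. A minimal sketch follows, assuming the expression data are stored as per-vertex (or per-channel) arrays:

```python
import numpy as np

def blend_expression(X: np.ndarray, Y: np.ndarray, a: float) -> np.ndarray:
    """Z = X * (1 - a) + Y * a, applied element-wise to the expression data."""
    return X * (1.0 - a) + Y * a

X = np.array([0.0, 0.0, 0.0])  # first expression data (basic avatar model)
Y = np.array([1.0, 0.4, 0.0])  # second expression data (avatar model)
print(blend_expression(X, Y, 0.5))  # a = 0.5 -> [0.5 0.2 0. ]
```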
In some embodiments, the apparatus further comprises:
a second identifying module 1250, configured to perform key point identification on a plurality of continuous frame images including the target object;
acquiring key point change information of the target object in the multiple continuous frame images;
and generating a form updating instruction of the virtual image based on the key point change information so as to dynamically present the virtual image.
In some embodiments, the apparatus further comprises:
a modification module 1260 for receiving a modification request for a target portion of the avatar, the modification request carrying an image of the target object including the target portion;
in response to the modification request, updating a target avatar model of the target object based on an image of the target object including the target portion to update an avatar of the target object based on the updated target avatar model.
Here, it should be noted that the above description of the avatar generation apparatus is similar to the description of the method above; for technical details not disclosed in the apparatus embodiments, please refer to the description of the method embodiments of the present disclosure.
An embodiment of the present disclosure further provides an electronic device, which includes:
a memory for storing an executable program;
and a processor for implementing the avatar generation method provided by the embodiments of the present disclosure when executing the executable program.
The embodiments of the present disclosure also provide a storage medium storing executable instructions which, when executed, implement the avatar generation method provided by the embodiments of the present disclosure.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a method for generating an avatar, including:
carrying out feature recognition on a frame image comprising a target object to obtain bone features and expression features of the target object;
acquiring a basic virtual image model;
adjusting the bone features of the basic virtual image model based on the bone features of the target object to obtain a virtual image model matched with the bone features of the target object;
adjusting the expression characteristics of the virtual image model based on the expression characteristics of the target object to obtain a target virtual image model matched with the expression characteristics of the target object; the target avatar model is used for rendering to obtain an avatar of the target object.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a method for generating an avatar, further including:
the method for performing feature recognition on the frame image including the target object to obtain the bone features and the expression features of the target object includes:
identifying different parts of the head of a target object contained in the frame image so as to determine image areas corresponding to the parts of the head of the target object;
and performing feature extraction on the image corresponding to each part of the head of the target object based on the determined image area to obtain the bone features and the expression features of the target object.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a method for generating an avatar, further including:
the adjusting the bone features of the base avatar model based on the bone features of the target object includes:
obtaining the bone characteristics of the basic virtual image model;
determining skeleton transformation information corresponding to the virtual image model relative to the basic virtual image model based on the skeleton characteristics of the target object and the skeleton characteristics of the basic virtual image model;
and adjusting the vertex information of each part in the basic virtual image model based on the skeleton transformation information.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a method for generating an avatar, further including:
adjusting vertex information for each portion in the base avatar model based on the skeletal transformation information, comprising:
determining, based on the bone transformation information, a corresponding bone scaling factor, and a corresponding bone displacement;
and adjusting the positions of all vertexes in the basic virtual image model based on the skeleton scaling coefficient and the skeleton displacement to obtain the virtual image model.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a method for generating an avatar, further including:
the adjusting the expression characteristics of the virtual image model based on the expression characteristics of the target object comprises:
determining expression parameters corresponding to target parts in all parts of the head of the virtual image model based on the expression features of the target object, wherein the expression parameters are used for indicating the expression states of the target parts;
acquiring first expression data of the basic virtual image model and second expression data of the virtual image model;
and adjusting the expression characteristics of the virtual image model based on the expression parameters, the first expression data and the second expression data.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a method for generating an avatar, further including:
adjusting the expression characteristics of the virtual image model based on the expression parameters, the first expression data and the second expression data, including:
interpolating the first expression data and the second expression data based on the expression parameters to obtain an interpolation result;
and adjusting each vertex in the virtual image model based on the interpolation result to obtain the target virtual image model.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a method for generating an avatar, further including:
the interpolating the first expression data and the second expression data based on the expression parameters to obtain an interpolation result includes:
based on the expression parameters, interpolating the first expression data and the second expression data by adopting the following formula to obtain an interpolation result:
Z=X*(1-a)+Y*a
wherein Z is the expression data of the target avatar model, X is the first expression data, Y is the second expression data, and a is the expression parameter.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a method for generating an avatar, further including:
performing key point identification on a plurality of continuous frame images including the target object;
acquiring key point change information of the target object in the multiple continuous frame images;
and generating a form updating instruction of the virtual image based on the key point change information so as to dynamically present the virtual image.
According to one or more embodiments of the present disclosure, an embodiment of the present disclosure provides a method for generating an avatar, further including:
receiving a modification request for a target portion of the avatar, the modification request carrying an image of the target object including the target portion;
in response to the modification request, updating a target avatar model of the target object based on an image of the target object including the target portion to update an avatar of the target object based on the updated target avatar model.
According to one or more embodiments of the present disclosure, there is also provided an avatar generation apparatus including:
the first identification module is used for carrying out feature identification on a frame image comprising a target object to obtain the bone feature and the expression feature of the target object;
the acquisition module is used for acquiring a basic virtual image model;
the first adjusting module is used for adjusting the bone characteristics of the basic virtual image model based on the bone characteristics of the target object to obtain a virtual image model matched with the bone characteristics of the target object;
the second adjusting module is used for adjusting the expression characteristics of the virtual image model based on the expression characteristics of the target object to obtain a target virtual image model matched with the expression characteristics of the target object; the target avatar model is used for rendering to obtain an avatar of the target object.
The above description is merely of embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, a technical solution formed by replacing the above features with (but not limited to) features with similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (20)

1. A method for generating an avatar, the method comprising:
carrying out feature recognition on a frame image comprising a target object to obtain bone features and expression features of the target object;
acquiring a basic virtual image model;
adjusting the bone features of the basic virtual image model based on the bone features of the target object to obtain a virtual image model matched with the bone features of the target object;
adjusting the expression characteristics of the virtual image model based on the expression characteristics of the target object to obtain a target virtual image model matched with the expression characteristics of the target object; the target avatar model is used for rendering to obtain an avatar of the target object.
2. The method of claim 1, wherein the performing feature recognition on the frame image including the target object to obtain the bone feature and the expression feature of the target object comprises:
identifying different parts of the head of a target object contained in the frame image so as to determine image areas corresponding to the parts of the head of the target object;
and performing feature extraction on the image corresponding to each part of the head of the target object based on the determined image area to obtain the bone features and the expression features of the target object.
3. The method of claim 1, wherein said adjusting the skeletal features of said base avatar model based on the skeletal features of said target object comprises:
obtaining the bone characteristics of the basic virtual image model;
determining skeleton transformation information corresponding to the virtual image model relative to the basic virtual image model based on the skeleton characteristics of the target object and the skeleton characteristics of the basic virtual image model;
and adjusting the vertex information of each part in the basic virtual image model based on the skeleton transformation information.
4. The method of claim 3, wherein said adjusting vertex information for portions of said base avatar model based on said skeletal transformation information comprises:
determining, based on the bone transformation information, a corresponding bone scaling factor, and a corresponding bone displacement;
and adjusting the positions of all vertexes in the basic virtual image model based on the skeleton scaling coefficient and the skeleton displacement to obtain the virtual image model.
5. The method of claim 1, wherein said adjusting the expressive features of the avatar model based on the expressive features of the target object comprises:
determining expression parameters corresponding to target parts in all parts of the head of the virtual image model based on the expression features of the target object, wherein the expression parameters are used for indicating the expression states of the target parts;
acquiring first expression data of the basic virtual image model and second expression data of the virtual image model;
and adjusting the expression characteristics of the virtual image model based on the expression parameters, the first expression data and the second expression data.
6. The method of claim 5, wherein the adjusting the expressive features of the avatar model based on the expression parameters, the first expression data, and the second expression data comprises:
interpolating the first expression data and the second expression data based on the expression parameters to obtain an interpolation result;
and adjusting each vertex in the virtual image model based on the interpolation result to obtain the target virtual image model.
7. The method of claim 6, wherein the interpolating the first expression data and the second expression data based on the expression parameters to obtain an interpolation result comprises:
based on the expression parameters, interpolating the first expression data and the second expression data by adopting the following formula to obtain an interpolation result:
Z=X*(1-a)+Y*a
wherein Z is the expression data of the target avatar model, X is the first expression data, Y is the second expression data, and a is the expression parameter.
8. The method of claim 1, wherein the method further comprises:
performing key point identification on a plurality of continuous frame images including the target object;
acquiring key point change information of the target object in the multiple continuous frame images;
and generating a form updating instruction of the virtual image based on the key point change information so as to dynamically present the virtual image.
9. The method of claim 1, wherein the method further comprises:
receiving a modification request for a target portion of the avatar, the modification request carrying an image of the target object including the target portion;
in response to the modification request, updating a target avatar model of the target object based on an image of the target object including the target portion to update an avatar of the target object based on the updated target avatar model.
10. An avatar generation apparatus, comprising:
the first identification module is used for carrying out feature identification on a frame image comprising a target object to obtain the bone feature and the expression feature of the target object;
the acquisition module is used for acquiring a basic virtual image model;
the first adjusting module is used for adjusting the bone characteristics of the basic virtual image model based on the bone characteristics of the target object to obtain a virtual image model matched with the bone characteristics of the target object;
the second adjusting module is used for adjusting the expression characteristics of the virtual image model based on the expression characteristics of the target object to obtain a target virtual image model matched with the expression characteristics of the target object; the target avatar model is used for rendering to obtain an avatar of the target object.
11. The apparatus of claim 10,
the first identification module is further configured to identify different parts of the head of the target object included in the frame image, so as to determine an image area corresponding to each part of the head of the target object;
and performing feature extraction on the image corresponding to each part of the head of the target object based on the determined image area to obtain the bone features and the expression features of the target object.
12. The apparatus of claim 10,
the first adjusting module is further used for obtaining the bone characteristics of the basic virtual image model;
determining skeleton transformation information corresponding to the virtual image model relative to the basic virtual image model based on the skeleton characteristics of the target object and the skeleton characteristics of the basic virtual image model;
and adjusting the vertex information of each part in the basic virtual image model based on the skeleton transformation information.
13. The apparatus of claim 12,
the first adjustment module is further configured to determine a corresponding bone scaling factor and a corresponding bone displacement based on the bone transformation information;
and adjusting the positions of all vertexes in the basic virtual image model based on the skeleton scaling coefficient and the skeleton displacement to obtain the virtual image model.
14. The apparatus of claim 10,
the second adjusting module is further configured to determine expression parameters corresponding to a target part in each part of the head of the avatar model based on the expression features of the target object, where the expression parameters are used to indicate an expression state of the target part;
acquiring first expression data of the basic virtual image model and second expression data of the virtual image model;
and adjusting the expression characteristics of the virtual image model based on the expression parameters, the first expression data and the second expression data.
15. The apparatus of claim 14,
the second adjusting module is further configured to interpolate the first expression data and the second expression data based on the expression parameters to obtain an interpolation result;
and adjusting each vertex in the virtual image model based on the interpolation result to obtain the target virtual image model.
16. The apparatus of claim 15,
the second adjusting module is further configured to interpolate the first expression data and the second expression data by using the following formula based on the expression parameters to obtain an interpolation result:
Z=X*(1-a)+Y*a
wherein Z is the expression data of the target avatar model, X is the first expression data, Y is the second expression data, and a is the expression parameter.
17. The apparatus of claim 10, wherein the apparatus further comprises:
the second identification module is used for carrying out key point identification on a plurality of continuous frame images comprising the target object;
acquiring key point change information of the target object in the multiple continuous frame images;
and generating a form updating instruction of the virtual image based on the key point change information so as to dynamically present the virtual image.
18. The apparatus of claim 10, wherein the apparatus further comprises:
a modification module for receiving a modification request for a target portion of the avatar, the modification request carrying an image of the target object including the target portion;
in response to the modification request, updating a target avatar model of the target object based on an image of the target object including the target portion to update an avatar of the target object based on the updated target avatar model.
19. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor for implementing the avatar generation method of any of claims 1 to 9 when executing said executable instructions.
20. A storage medium, characterized in that the storage medium stores executable instructions which, when executed, implement the avatar generation method according to any one of claims 1 to 9.
CN201911053622.XA 2019-10-31 2019-10-31 Method and device for generating virtual image, electronic equipment and storage medium Active CN110766777B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911053622.XA CN110766777B (en) 2019-10-31 2019-10-31 Method and device for generating virtual image, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN110766777A 2020-02-07
CN110766777B 2023-09-29

Family

ID=69335070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911053622.XA Active CN110766777B (en) 2019-10-31 2019-10-31 Method and device for generating virtual image, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110766777B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110292181A1 (en) * 2008-04-16 2011-12-01 Canesta, Inc. Methods and systems using three-dimensional sensing for user interaction with applications
CN103116902A (en) * 2011-11-16 2013-05-22 华为软件技术有限公司 Three-dimensional virtual human head image generation method, and method and device of human head image motion tracking
CN108171789A (en) * 2017-12-21 2018-06-15 迈吉客科技(北京)有限公司 A kind of virtual image generation method and system
CN110135226A (en) * 2018-02-09 2019-08-16 腾讯科技(深圳)有限公司 Expression animation data processing method, device, computer equipment and storage medium
CN108564641A (en) * 2018-03-16 2018-09-21 中国科学院自动化研究所 Expression method for catching and device based on UE engines
CN109857311A (en) * 2019-02-14 2019-06-07 北京达佳互联信息技术有限公司 Generate method, apparatus, terminal and the storage medium of human face three-dimensional model
CN110111247A (en) * 2019-05-15 2019-08-09 浙江商汤科技开发有限公司 Facial metamorphosis processing method, device and equipment

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462204A (en) * 2020-02-13 2020-07-28 腾讯科技(深圳)有限公司 Virtual model generation method, virtual model generation device, storage medium, and electronic device
CN111462204B (en) * 2020-02-13 2023-03-03 腾讯科技(深圳)有限公司 Virtual model generation method, virtual model generation device, storage medium, and electronic device
CN111612876A (en) * 2020-04-27 2020-09-01 北京小米移动软件有限公司 Expression generation method and device and storage medium
CN111695471A (en) * 2020-06-02 2020-09-22 北京百度网讯科技有限公司 Virtual image generation method, device, equipment and storage medium
US11715259B2 (en) 2020-06-02 2023-08-01 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for generating virtual avatar, device and storage medium
CN111652983A (en) * 2020-06-10 2020-09-11 上海商汤智能科技有限公司 Augmented reality AR special effect generation method, device and equipment
CN111935491A (en) * 2020-06-28 2020-11-13 百度在线网络技术(北京)有限公司 Live broadcast special effect processing method and device and server
US11722727B2 (en) 2020-06-28 2023-08-08 Baidu Online Network Technology (Beijing) Co., Ltd. Special effect processing method and apparatus for live broadcasting, and server
US20210321157A1 (en) * 2020-06-28 2021-10-14 Baidu Online Network Technology (Beijing) Co., Ltd. Special effect processing method and apparatus for live broadcasting, and server
CN111880709A (en) * 2020-07-31 2020-11-03 北京市商汤科技开发有限公司 Display method and device, computer equipment and storage medium
CN111970535A (en) * 2020-09-25 2020-11-20 魔珐(上海)信息科技有限公司 Virtual live broadcast method, device, system and storage medium
CN111970535B (en) * 2020-09-25 2021-08-31 魔珐(上海)信息科技有限公司 Virtual live broadcast method, device, system and storage medium
US11785267B1 (en) 2020-09-25 2023-10-10 Mofa (Shanghai) Information Technology Co., Ltd. Virtual livestreaming method, apparatus, system, and storage medium
CN112182194A (en) * 2020-10-21 2021-01-05 南京创维信息技术研究院有限公司 Method, system and readable storage medium for expressing emotional actions of television avatar
CN112286610A (en) * 2020-10-28 2021-01-29 北京有竹居网络技术有限公司 Interactive processing method and device, electronic equipment and storage medium
CN112184921A (en) * 2020-10-30 2021-01-05 北京百度网讯科技有限公司 Avatar driving method, apparatus, device, and medium
CN112184921B (en) * 2020-10-30 2024-02-06 北京百度网讯科技有限公司 Avatar driving method, apparatus, device and medium
WO2022105862A1 (en) * 2020-11-20 2022-05-27 北京字节跳动网络技术有限公司 Method and apparatus for video generation and displaying, device, and medium
CN112423022A (en) * 2020-11-20 2021-02-26 北京字节跳动网络技术有限公司 Video generation and display method, device, equipment and medium
CN112330805A (en) * 2020-11-25 2021-02-05 北京百度网讯科技有限公司 Face 3D model generation method, device and equipment and readable storage medium
CN112330805B (en) * 2020-11-25 2023-08-08 北京百度网讯科技有限公司 Face 3D model generation method, device, equipment and readable storage medium
CN112581571B (en) * 2020-12-02 2024-03-12 北京达佳互联信息技术有限公司 Control method and device for virtual image model, electronic equipment and storage medium
CN112581571A (en) * 2020-12-02 2021-03-30 北京达佳互联信息技术有限公司 Control method and device of virtual image model, electronic equipment and storage medium
CN112598785A (en) * 2020-12-25 2021-04-02 游艺星际(北京)科技有限公司 Method, device and equipment for generating three-dimensional model of virtual image and storage medium
CN112819971A (en) * 2021-01-26 2021-05-18 北京百度网讯科技有限公司 Method, device, equipment and medium for generating virtual image
CN112819971B (en) * 2021-01-26 2022-02-25 北京百度网讯科技有限公司 Method, device, equipment and medium for generating virtual image
CN112967212A (en) * 2021-02-01 2021-06-15 北京字节跳动网络技术有限公司 Virtual character synthesis method, device, equipment and storage medium
CN112785669A (en) * 2021-02-01 2021-05-11 北京字节跳动网络技术有限公司 Virtual image synthesis method, device, equipment and storage medium
CN112785669B (en) * 2021-02-01 2024-04-23 北京字节跳动网络技术有限公司 Virtual image synthesis method, device, equipment and storage medium
CN112807688A (en) * 2021-02-08 2021-05-18 网易(杭州)网络有限公司 Method and device for setting expression in game, processor and electronic device
CN113050794A (en) * 2021-03-24 2021-06-29 北京百度网讯科技有限公司 Slider processing method and device for virtual image
US11842457B2 (en) 2021-03-24 2023-12-12 Beijing Baidu Netcom Science Technology Co., Ltd. Method for processing slider for virtual character, electronic device, and storage medium
CN113099298B (en) * 2021-04-08 2022-07-12 广州华多网络科技有限公司 Method and device for changing virtual image and terminal equipment
CN113099298A (en) * 2021-04-08 2021-07-09 广州华多网络科技有限公司 Method and device for changing virtual image and terminal equipment
CN113350801A (en) * 2021-07-20 2021-09-07 网易(杭州)网络有限公司 Model processing method and device, storage medium and computer equipment
CN114501065A (en) * 2022-02-11 2022-05-13 广州方硅信息技术有限公司 Virtual gift interaction method and system based on face jigsaw and computer equipment
CN114612643A (en) * 2022-03-07 2022-06-10 北京字跳网络技术有限公司 Image adjusting method and device for virtual object, electronic equipment and storage medium
CN114612643B (en) * 2022-03-07 2024-04-12 北京字跳网络技术有限公司 Image adjustment method and device for virtual object, electronic equipment and storage medium
WO2024067320A1 (en) * 2022-09-29 2024-04-04 北京字跳网络技术有限公司 Virtual object rendering method and apparatus, and device and storage medium
CN115578493B (en) * 2022-10-20 2023-05-30 武汉两点十分文化传播有限公司 Maya expression coding method and system thereof
CN115578493A (en) * 2022-10-20 2023-01-06 武汉两点十分文化传播有限公司 Maya expression coding method and system
CN116129091B (en) * 2023-04-17 2023-06-13 海马云(天津)信息技术有限公司 Method and device for generating virtual image video, electronic equipment and storage medium
CN116129091A (en) * 2023-04-17 2023-05-16 海马云(天津)信息技术有限公司 Method and device for generating virtual image video, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110766777B (en) 2023-09-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant