CN106599811B - Facial expression tracking method for a VR head-mounted display - Google Patents

Facial expression tracking method for a VR head-mounted display

Info

Publication number
CN106599811B
CN106599811B
Authority
CN
China
Prior art keywords
user
expression
module
facial expression
head-mounted display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611101653.4A
Other languages
Chinese (zh)
Other versions
CN106599811A (en)
Inventor
叶飞
殷作伟
张岑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Racing Information Technology (Langfang) Co.,Ltd.
Original Assignee
Suzhou Virtual Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Virtual Digital Technology Co Ltd
Priority to CN201611101653.4A
Publication of CN106599811A
Application granted
Publication of CN106599811B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

Abstract

The present invention relates to a facial expression tracking method for a VR head-mounted display (HMD). An expression database and a fingerprint database are established in advance. When a user puts on the device, the fingerprint identification module first collects the user's fingerprint and matches it against all fingerprint records in the fingerprint database. If the fingerprint identification module finds a fingerprint record identical to the user's, the user's facial expression pack is immediately retrieved from the expression database while the infrared camera module captures the user's facial expressions in real time. The image processing module continuously compares the expressions in the user's facial expression pack with the facial expressions captured by the infrared camera module; when the image processing module finds that the user's captured facial expression matches an expression in the pack, that expression is immediately transmitted and reflected on the virtual character. The tracking method of the invention can track the user's facial expressions quickly and stably, and synthesize lifelike, accurate virtual character expressions.

Description

Facial expression tracking method for a VR head-mounted display
Technical field
The invention belongs to the technical field of virtual reality (Virtual Reality, abbreviated VR), and in particular relates to a facial expression tracking method for a VR head-mounted display (HMD).
Background art
With the rapid development of VR technology, more and more VR devices have appeared on the market, such as VR head-mounted displays and VR glasses. These products bring together simulation technology, computer graphics, human-machine interface technology, multimedia technology, sensing technology, network technology and other techniques, and constitute a brand-new means of human-computer interaction created by computers and the latest sensor technologies. A VR device uses computational simulation to generate a virtual world in three-dimensional space and supplies the user with simulated visual, auditory, tactile and other sensory input, so that the user feels personally on the scene, bringing a completely new viewing experience. Virtual character animation is widely used in many important fields such as animation, film and television, and games, especially entertainment games that require human-computer interaction. The animation of a virtual character consists of two parts, body animation and facial expression animation; to make a virtual character truly vivid, body animation alone can no longer meet users' demands, and lifelike facial expression animation is an important factor in improving the user experience.
In view of this, the present invention proposes and studies a facial expression tracking method for a VR head-mounted display.
Summary of the invention
In view of the above problems, the purpose of the present invention is to provide a facial expression tracking method for a VR head-mounted display, intended to solve the problem in the prior art that body animation alone can no longer meet user demands and lifelike facial expression animation cannot be achieved.
To achieve the above goal, the invention adopts the following technical scheme: a facial expression tracking method for a VR head-mounted display. An expression tracking platform is built in advance; the platform includes the VR HMD, an infrared camera module, a fingerprint identification module and an image processing module, the image processing module being connected to the VR HMD, the fingerprint identification module and the infrared camera module respectively. Facial expression tracking is then carried out according to the following steps:
In the first step, an expression database is established in advance; this database stores each user's profile and the user's various facial expression packs. A fingerprint database is likewise established in advance; this database stores each user's profile and fingerprint records.
In the second step, the user is registered in the expression database and the fingerprint database respectively: the user's fingerprint record is stored in the fingerprint database, and the user's various facial expressions are stored in the expression database.
In the third step, when the user puts on the VR HMD, the fingerprint identification module first collects the user's fingerprint and matches it against all fingerprint records in the fingerprint database.
In the fourth step, if the fingerprint identification module finds a fingerprint record in the fingerprint database identical to the user's, the user is an existing user: the user's facial expression pack is immediately retrieved from the expression database and the method proceeds to the fifth step. If the fingerprint identification module finds no matching fingerprint record in the fingerprint database, the user is a new user: the user is prompted to create a profile, and the method returns to the second step.
In the fifth step, the infrared camera module captures the user's facial expressions in real time.
In the sixth step, the image processing module compares, in real time, the user's facial expression pack with the facial expressions captured by the infrared camera module. When the image processing module finds that the user's captured facial expression matches an expression in the pack, that expression is immediately transmitted into the VR HMD and reflected on the virtual character.
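For illustration only, the following Python sketch outlines how these six steps could fit together. It is a minimal sketch under assumed interfaces: the objects passed in (`fingerprint_module`, `fingerprint_db`, `expression_db`, `ir_camera`, `hmd`), the landmark extraction, and the nearest-template matching rule with a fixed threshold are all hypothetical, since the patent specifies the modules but not their APIs or matching algorithm.

```python
import numpy as np

MATCH_THRESHOLD = 0.1  # assumed tolerance on landmark distance


def extract_landmarks(frame):
    """Placeholder feature extractor: reduce an IR frame to a flat
    landmark vector (a real system would locate eyes, mouth, cheeks)."""
    return np.asarray(frame, dtype=float).ravel()


def best_match(landmarks, pack):
    """Return the label of the pack expression closest to the captured
    landmarks, or None if nothing is within the tolerance."""
    best_label, best_dist = None, float("inf")
    for label, template in pack.items():
        dist = np.linalg.norm(landmarks - template)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist < MATCH_THRESHOLD else None


def track_expressions(fingerprint_module, fingerprint_db, expression_db,
                      ir_camera, hmd):
    """Sketch of the six-step flow: identify the user by fingerprint,
    load the expression pack, then match captured frames against it."""
    fp = fingerprint_module.scan()                     # step 3
    user_id = fingerprint_db.lookup(fp)
    if user_id is None:                                # step 4: new user
        user_id = hmd.prompt_new_user()                # back to step 2
        fingerprint_db.store(user_id, fp)
        expression_db.store(user_id, hmd.record_expression_pack())
    pack = expression_db.load_pack(user_id)            # {label: landmarks}

    while hmd.is_worn():
        frame = ir_camera.capture()                    # step 5
        label = best_match(extract_landmarks(frame), pack)
        if label is not None:
            hmd.apply_to_avatar(label)                 # step 6
```

The loop structure mirrors the described flow: identification happens once, while capture and matching run continuously for as long as the HMD is worn.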
As a further improvement of the invention, the expression tracking platform also includes a wireless charging module. The wireless charging module comprises a wireless charging transmitter module and a wireless charging receiver module; the wireless charging receiver module is integrated on a sweeping robot, and the wireless charging transmitter module is installed somewhere in the room. The VR glasses store the position information of the wireless charging transmitter module and can be charged through it.
As a further improvement of the invention, the expression tracking platform includes at least two infrared camera modules: one faces the region between the user's eyes and captures the user's eye and pupil information, and one is located at the bottom of the VR HMD and captures the user's mouth and cheek information.
As a further improvement of the invention, the expression tracking platform further includes a light intensity adjustment module, which automatically adjusts the light intensity inside the VR glasses according to the light intensity of the external environment. The light intensity adjustment module comprises a light intensity acquisition unit, a light intensity computing unit and a light intensity control unit; the light intensity acquisition unit and the light intensity computing unit are each connected to the light intensity control unit.
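As a rough illustration of how the three units might cooperate, the sketch below maps an ambient reading from the acquisition unit to a display level for the control unit to apply. The lux-to-level mapping and its constants are assumptions; the patent names the units but does not specify their logic.

```python
def compute_brightness(ambient_lux, min_level=0.2, max_level=1.0,
                       full_scale_lux=500.0):
    """Hypothetical light intensity computing unit: map the acquisition
    unit's ambient reading (in lux) to an in-HMD brightness level in
    [min_level, max_level] for the control unit to apply."""
    ratio = min(max(ambient_lux / full_scale_lux, 0.0), 1.0)
    return min_level + ratio * (max_level - min_level)
```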
As a further improvement of the invention, the expression tracking platform further includes a user expression recording module, dedicated to recording the various expressions of different users.
As a further improvement of the invention, the expression tracking platform further includes an expression virtualization module, which assigns different expressions to the corresponding expressions of the virtual character model.
As a further improvement of the invention, when the image processing module finds that the user's facial expression matches an expression in the facial expression pack, that expression is immediately transmitted into the VR HMD and reflected on the virtual character by the expression virtualization module.
The working principle and effects of the invention are as follows:
The invention relates to a facial expression tracking method for a VR head-mounted display. An expression database and a fingerprint database are established in advance. When a user puts on the device, the fingerprint identification module first collects the user's fingerprint and matches it against all fingerprint records in the fingerprint database. If the fingerprint identification module finds a fingerprint record identical to the user's, the user's facial expression pack is immediately retrieved from the expression database while the infrared camera module captures the user's facial expressions in real time. The image processing module continuously compares the expression pack with the captured expressions; when a captured expression matches one in the pack, it is immediately transmitted and reflected on the virtual character. The tracking method of the invention can track the user's facial expressions quickly and stably, and synthesize lifelike, accurate virtual character expressions.
Brief description of the drawings
The accompanying drawings described here are used only for explanation and are not intended to limit the scope disclosed in the application in any way. In addition, the shapes and relative sizes of the components in the figures are only schematic, intended to aid understanding of the application rather than to limit the specific shapes and proportions of its components. Guided by the teachings of the application, those skilled in the art may select various possible shapes and relative sizes to implement the application as the case requires. In the drawings:
Figure 1 is the flow chart of expression tracking in the embodiment of the present invention.
Detailed description of the embodiments
The present invention is further illustrated by the following embodiment. The embodiment serves only to illustrate the invention and does not limit it in any way.
Embodiment: a facial expression tracking method for a VR head-mounted display
An expression tracking platform is built in advance. The platform includes the VR HMD, infrared camera modules, a fingerprint identification module and an image processing module; the image processing module is connected to the VR HMD, the fingerprint identification module and the infrared camera modules respectively. Facial expression tracking is carried out according to the following steps.
Referring to Figure 1, in the first step an expression database is established in advance; this database stores each user's profile and the user's various facial expression packs. A fingerprint database is likewise established in advance; this database stores each user's profile and fingerprint records.
In the second step, the user is registered in the expression database and the fingerprint database respectively: the user's fingerprint record is stored in the fingerprint database, and the user's various facial expressions are stored in the expression database.
In the third step, when the user puts on the VR HMD, the fingerprint identification module first collects the user's fingerprint and matches it against all fingerprint records in the fingerprint database.
In the fourth step, if the fingerprint identification module finds a fingerprint record in the fingerprint database identical to the user's, the user is an existing user: the user's facial expression pack is immediately retrieved from the expression database and the method proceeds to the fifth step. If the fingerprint identification module finds no matching fingerprint record in the fingerprint database, the user is a new user: the user is prompted to create a profile, and the method returns to the second step. A sketch of this lookup is given after the sixth step below.
In the fifth step, the infrared camera module captures the user's facial expressions in real time.
In the sixth step, the image processing module compares, in real time, the user's facial expression pack with the facial expressions captured by the infrared camera module. When the image processing module finds that the user's captured facial expression matches an expression in the pack, that expression is immediately transmitted into the VR HMD and reflected on the virtual character.
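Below is a minimal sketch of the fingerprint lookup in the fourth step, assuming enrolled fingerprints are stored as fixed-length feature vectors and compared by cosine similarity against a threshold; neither the feature representation nor the matching rule is specified in the patent.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.95  # assumed cosine-similarity cutoff


def lookup_user(probe, fingerprint_db):
    """Return the id of the user whose enrolled fingerprint vector best
    matches the probe, or None if no record clears the threshold
    (meaning a new user who must be registered first)."""
    best_id, best_sim = None, SIMILARITY_THRESHOLD
    for user_id, template in fingerprint_db.items():
        sim = float(np.dot(probe, template) /
                    (np.linalg.norm(probe) * np.linalg.norm(template)))
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id
```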
Further, the expression tracking platform also includes a wireless charging module. The wireless charging module comprises a wireless charging transmitter module and a wireless charging receiver module; the wireless charging receiver module is integrated on a sweeping robot, and the wireless charging transmitter module is installed somewhere in the room. The VR glasses store the position information of the wireless charging transmitter module and can be charged through it.
Further, the expression tracking platform includes at least two infrared camera modules: one faces the region between the user's eyes and captures the user's eye and pupil information, and one is located at the bottom of the VR HMD and captures the user's mouth and cheek information. A sketch of how the two streams might be combined follows.
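This is a toy sketch of how the two capture streams might be merged into one feature vector per time step before matching; the flattening and concatenation scheme is an assumption, since the patent only states which facial regions each camera covers.

```python
import numpy as np


def fuse_captures(eye_frame, mouth_frame):
    """Concatenate features from the eye-region camera (eyes and pupils)
    and the bottom camera (mouth and cheeks) into a single vector."""
    eye_feat = np.asarray(eye_frame, dtype=float).ravel()
    mouth_feat = np.asarray(mouth_frame, dtype=float).ravel()
    return np.concatenate([eye_feat, mouth_feat])
```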
Further, the expression tracking platform further includes a light intensity adjustment module, which automatically adjusts the light intensity inside the VR glasses according to the light intensity of the external environment. The light intensity adjustment module comprises a light intensity acquisition unit, a light intensity computing unit and a light intensity control unit; the light intensity acquisition unit and the light intensity computing unit are each connected to the light intensity control unit.
Further, the expression tracking platform further includes a user expression recording module, dedicated to recording the various expressions of different users.
Further, the expression tracking platform further includes an expression virtualization module, which assigns different expressions to the corresponding expressions of the virtual character model.
Further, when the image processing module finds that the user's facial expression matches an expression in the facial expression pack, that expression is immediately transmitted into the VR HMD and reflected on the virtual character by the expression virtualization module, for example as sketched below.
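For illustration, the expression virtualization module could be imagined as a lookup from a recognized expression label to blendshape weights on the character model. The label set, blendshape names and weights below are invented for the example, as is the `set_blendshape` interface.

```python
# Hypothetical mapping from recognized expression labels to blendshape
# weights on the virtual character model (all names and values invented).
EXPRESSION_TO_BLENDSHAPES = {
    "smile":    {"mouth_corner_up": 0.8, "cheek_raise": 0.5},
    "surprise": {"brow_raise": 0.9, "jaw_open": 0.7},
    "neutral":  {},
}


def apply_expression(model, label):
    """Reflect a recognized expression on the avatar by setting the
    corresponding blendshape weights on the character model."""
    for shape, weight in EXPRESSION_TO_BLENDSHAPES.get(label, {}).items():
        model.set_blendshape(shape, weight)
```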
It should be noted that in the description of the application, unless otherwise indicated, "plurality" means two or more.
Describing a combination of elements, ingredients, components or steps here with the term "comprising" or "including" also contemplates embodiments consisting essentially of those elements, ingredients, components or steps. Using the term "may" here is intended to indicate that any described attribute which "may" be included is optional.
Multiple elements, ingredients, components or steps can be provided by a single integrated element, ingredient, component or step. Alternatively, a single integrated element, ingredient, component or step can be divided into multiple separate elements, ingredients, components or steps. The use of "a" or "an" to describe an element, ingredient, component or step is not intended to exclude other elements, ingredients, components or steps.
It should be understood that the above description is intended to be illustrative rather than restrictive. Upon reading the above description, many embodiments and many applications beyond the examples provided will be apparent to those skilled in the art. Therefore, the scope of this teaching should not be determined with reference to the above description, but should instead be determined with reference to the appended claims together with the full scope of equivalents to which those claims are entitled. The disclosures of all articles and references, including patent applications and publications, are incorporated herein by reference for all purposes. The omission from the preceding claims of any aspect of the subject matter disclosed herein is not a disclaimer of that subject matter, nor should it be regarded as meaning that the applicant does not consider the subject matter to be part of the disclosed application subject matter.
The detailed descriptions set out above are merely specific illustrations of feasible embodiments of the application and are not intended to limit its scope of protection; any equivalent embodiments or changes made without departing from the technical spirit of the application shall fall within the scope of protection of the application.

Claims (6)

1. A facial expression tracking method for a VR head-mounted display, characterized in that: an expression tracking platform is built in advance, the platform comprising the VR HMD, an infrared camera module, a fingerprint identification module and an image processing module, the image processing module being connected to the VR HMD, the fingerprint identification module and the infrared camera module respectively, and facial expression tracking being carried out according to the following steps; the expression tracking platform further includes a light intensity adjustment module, which automatically adjusts the light intensity inside the VR glasses according to the light intensity of the external environment, the light intensity adjustment module comprising a light intensity acquisition unit, a light intensity computing unit and a light intensity control unit, the light intensity acquisition unit and the light intensity computing unit each being connected to the light intensity control unit;
in the first step, an expression database is established in advance, which stores each user's profile and the user's various facial expression packs; a fingerprint database is established in advance, which stores each user's profile and fingerprint records;
in the second step, the user is registered in the expression database and the fingerprint database respectively: the user's fingerprint record is stored in the fingerprint database, and the user's various facial expressions are stored in the expression database;
in the third step, when the user puts on the VR HMD, the fingerprint identification module first collects the user's fingerprint and matches it against all fingerprint records in the fingerprint database;
in the fourth step, if the fingerprint identification module finds a fingerprint record in the fingerprint database identical to the user's, the user is an existing user: the user's facial expression pack is immediately retrieved from the expression database and the method proceeds to the fifth step; if the fingerprint identification module finds no matching fingerprint record in the fingerprint database, the user is a new user: the user is prompted to create a profile, and the method returns to the second step;
in the fifth step, the infrared camera module captures the user's facial expressions in real time;
in the sixth step, the image processing module compares, in real time, the user's facial expression pack with the facial expressions captured by the infrared camera module; when the image processing module finds that the user's captured facial expression matches an expression in the pack, that expression is immediately transmitted into the VR HMD and reflected on the virtual character.
2. The facial expression tracking method for a VR head-mounted display according to claim 1, characterized in that: the expression tracking platform also includes a wireless charging module; the wireless charging module comprises a wireless charging transmitter module and a wireless charging receiver module; the wireless charging receiver module is integrated on a sweeping robot; the wireless charging transmitter module is installed somewhere in the room; the VR glasses store the position information of the wireless charging transmitter module; and the VR glasses can be charged through the wireless charging transmitter module.
3. The facial expression tracking method for a VR head-mounted display according to claim 1, characterized in that: the expression tracking platform includes at least two infrared camera modules, of which one faces the region between the user's eyes and captures the user's eye and pupil information, and one is located at the bottom of the VR HMD and captures the user's mouth and cheek information.
4. The facial expression tracking method for a VR head-mounted display according to claim 1, characterized in that: the expression tracking platform further includes a user expression recording module for recording the various expressions of different users.
5. The facial expression tracking method for a VR head-mounted display according to any one of claims 1 to 4, characterized in that: the expression tracking platform further includes an expression virtualization module for assigning different expressions to the corresponding expressions of the virtual character model.
6. The facial expression tracking method for a VR head-mounted display according to claim 5, characterized in that: when the image processing module finds that the user's facial expression matches an expression in the facial expression pack, that expression is immediately transmitted into the VR HMD and reflected on the virtual character by the expression virtualization module.
CN201611101653.4A 2016-11-29 2016-11-29 Facial expression tracking method for a VR head-mounted display Active CN106599811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611101653.4A CN106599811B (en) 2016-11-29 2016-11-29 Facial expression tracking method for a VR head-mounted display

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611101653.4A CN106599811B (en) 2016-11-29 2016-11-29 Facial expression tracking method for a VR head-mounted display

Publications (2)

Publication Number Publication Date
CN106599811A CN106599811A (en) 2017-04-26
CN106599811B true CN106599811B (en) 2019-11-05

Family

ID=58595718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611101653.4A Active CN106599811B (en) 2016-11-29 2016-11-29 Facial expression tracking method for a VR head-mounted display

Country Status (1)

Country Link
CN (1) CN106599811B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107369354B (en) * 2017-08-16 2019-12-24 南昌市龙诚电器设备有限公司 Automobile driving simulation device based on virtual reality technology
CN108491075A * 2017-08-26 2018-09-04 温力航 Virtual reality head-mounted display device and facial motion capture method therefor
CN108549153A * 2018-06-01 2018-09-18 烟台市安特洛普网络科技有限公司 Intelligent VR glasses based on facial expression recognition
CN112416125A (en) * 2020-11-17 2021-02-26 青岛小鸟看看科技有限公司 VR head-mounted all-in-one machine

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103677226A * 2012-09-04 2014-03-26 北方工业大学 Expression recognition input method
CN105654537A (en) * 2015-12-30 2016-06-08 中国科学院自动化研究所 Expression cloning method and device capable of realizing real-time interaction with virtual character
CN105656873A (en) * 2015-07-30 2016-06-08 宇龙计算机通信科技(深圳)有限公司 Access control method and device
CN105978114A (en) * 2016-05-03 2016-09-28 青岛众海汇智能源科技有限责任公司 Wireless charging system, method and sweeping robot
CN106023288A (en) * 2016-05-18 2016-10-12 浙江大学 Image-based dynamic substitute construction method
EP3096208A1 (en) * 2015-05-18 2016-11-23 Samsung Electronics Co., Ltd. Image processing for head mounted display devices

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002951473A0 (en) * 2002-09-18 2002-10-03 Canon Kabushiki Kaisha Method for tracking facial features in video sequence
US8311973B1 (en) * 2011-09-24 2012-11-13 Zadeh Lotfi A Methods and systems for applications for Z-numbers


Also Published As

Publication number Publication date
CN106599811A (en) 2017-04-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20171101

Address after: Unit 210, Building 21, Tengfei Science Park, No. 388 Xinping Street, Suzhou Industrial Park, Jiangsu Province, 215000

Applicant after: SUZHOU XUXIAN DIGITAL TECHNOLOGY CO.,LTD.

Address before: No. 43, Spring Lane, Suzhou, Jiangsu Province, 215000

Applicant before: Ye Fei

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221123

Address after: 065000 Zijincheng Commercial and Residential Building 1-1-1413, Guangyang District, Langfang City, Hebei Province

Patentee after: Racing Information Technology (Langfang) Co.,Ltd.

Address before: Unit 210, Building 21, Tengfei Science Park, No. 388, Xinping Street, Suzhou Industrial Park, Jiangsu Province, 215000

Patentee before: SUZHOU XUXIAN DIGITAL TECHNOLOGY CO.,LTD.

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Facial expression tracking method for a VR head-mounted display

Effective date of registration: 20230828

Granted publication date: 20191105

Pledgee: China Construction Bank Co.,Ltd. Langfang Airport Economic Zone Sub branch

Pledgor: Racing Information Technology (Langfang) Co.,Ltd.

Registration number: Y2023980054002
