CN115729356B - Habit analysis-based 3D remote interaction action optimization system - Google Patents

Habit analysis-based 3D remote interaction action optimization system

Info

Publication number
CN115729356B
CN115729356B
Authority
CN
China
Prior art keywords
priority
database
low
action
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310035578.XA
Other languages
Chinese (zh)
Other versions
CN115729356A (en)
Inventor
王亚刚
李元元
程思锦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Feidie Virtual Reality Technology Co ltd
Original Assignee
Xi'an Feidie Virtual Reality Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Feidie Virtual Reality Technology Co ltd
Priority to CN202310035578.XA
Publication of CN115729356A
Application granted
Publication of CN115729356B
Legal status: Active (current)
Anticipated expiration


Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of 3D remote interaction, in particular to a habit analysis-based 3D remote interaction action optimization system. The system comprises a 3D vision module for remotely tracking and capturing human limb movements. The 3D vision module inputs and stores the acquired information in a database storage unit, which a deep learning module divides into a high-priority database and a low-priority database; the output end of the database storage unit is connected with an interaction output unit, which matches the movement information acquired by the 3D vision module against the database storage unit. By collecting and analyzing the action habits of target objects, a behavior storage database is established, interactive actions are classified by priority with a depth classification algorithm, and high-frequency actions receive higher-priority response optimization, so that the interaction output unit responds quickly during remote interaction and the 3D interaction experience is improved.

Description

Habit analysis-based 3D remote interaction action optimization system
Technical Field
The invention relates to the technical field of 3D remote interaction, in particular to a habit analysis-based 3D remote interaction action optimization system.
Background
In 3D remote interaction, a vision module extracts the texture features of an image by Fourier transform, detects the image boundary by boundary moments to obtain the shape features, matches images with a similarity measurement function, and introduces a relevance feedback algorithm so that the user can interact with an interaction terminal; finally, the interaction terminal (for example, an interactive robot) outputs interaction actions to realize 3D remote interaction.
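For concreteness, the following minimal Python sketch illustrates this kind of retrieval pipeline rather than the patented system itself: a crude Fourier-transform texture descriptor plus a cosine similarity measurement for matching a captured frame against a stored entry. All names, the binning scheme and the random stand-in images are illustrative assumptions.

```python
import numpy as np

def texture_descriptor(image, bins=16):
    """Radially binned Fourier magnitude spectrum as a crude texture feature."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h / 2, xx - w / 2)
    # Assign each pixel to one of `bins` annuli and average the spectrum per annulus.
    bin_ids = np.minimum((radius / (radius.max() + 1e-9) * bins).astype(int), bins - 1)
    sums = np.bincount(bin_ids.ravel(), weights=spectrum.ravel(), minlength=bins)
    counts = np.bincount(bin_ids.ravel(), minlength=bins)
    feat = sums / counts
    return feat / np.linalg.norm(feat)

def similarity(a, b):
    """Cosine similarity on unit-normalized descriptors."""
    return float(np.dot(a, b))

query, stored = np.random.rand(64, 64), np.random.rand(64, 64)  # stand-in frames
print(similarity(texture_descriptor(query), texture_descriptor(stored)))
```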
When a user performs remote limb interaction with the interaction terminal, the extracted image information must be retrieved and matched against the database many times, and each retrieval compares the data in the entire database one by one, which greatly slows the interaction response of the output terminal.
Disclosure of Invention
The invention aims to provide a habit analysis-based 3D remote interaction action optimization system so as to solve the problems identified in the background art.
In order to achieve the above purpose, the invention provides a habit analysis-based 3D remote interaction action optimization system, which comprises a 3D vision module for remotely tracking and capturing human limb actions. The 3D vision module inputs and stores the acquired information in a database storage unit, in which a deep learning module is integrated; the deep learning module divides the database storage unit into a high-priority database and a low-priority database. The output end of the database storage unit is connected with an interaction output unit, which matches the action information acquired by the 3D vision module against the database storage unit and realizes remote interaction by outputting response actions through an interaction terminal.
As a further improvement of the technical scheme, the 3D vision module comprises a 3D vision sensor, an action acquisition unit and an action recognition module. The 3D vision sensor performs image recognition, transmission and processing of action information; the recognized action data are recorded and stored in the database storage unit through the action acquisition unit. At the same time, the action recognition module retrieves the action information and compares it with the information in the database storage unit, and the matched action is extracted and output as a response action by the interaction terminal through the interaction output unit to realize remote interaction.
As a further improvement of the technical scheme, the deep learning module performs algorithm training analysis on the limb interaction information acquired by the action acquisition unit to learn and classify the information in the database storage unit. The deep learning module divides limb actions into high-frequency actions and low-frequency actions, extracts the high-frequency actions in the database and temporarily stores them in the high-priority database, temporarily stores the low-frequency actions in the low-priority database, and realizes data transfer and exchange between the high-priority database and the low-priority database.
As a further improvement of the technical scheme, the deep learning module adopts the depth classification algorithm to establish an algorithm model and two classifiers, a high-priority classifier and a low-priority classifier, which optimize and classify the action data in the database storage unit. High-frequency data are classified into the high-priority classifier, and when the action recognition module extracts data from the database storage unit it extracts preferentially from the high-priority classifier, which improves the interactive response speed of the output terminal.
As a further improvement of the technical scheme, the depth classification algorithm adopted by the deep learning module comprises the following method steps:
S1, determine the data to be classified: low-frequency data and high-frequency data;
S2, establish the classifiers that describe the predefined data categories: a high-priority classifier and a low-priority classifier;
S3, train the high-priority classifier with the high-frequency data and the low-priority classifier with the low-frequency data;
S4, adopt incremental training according to the depth classification algorithm formula: successively select some unlabeled samples of the high-frequency data and add them to the high-priority classifier incrementally, and likewise add the low-frequency data to the low-priority classifier incrementally.
As a further improvement of the technical scheme, the formula of the depth classification algorithm is as follows:

$$w_{ij} = tf_{ij} \times idf_i = tf_{ij} \times \log\frac{N}{df_i}$$

wherein each piece of action information is regarded as a feature item $t_i$; $w_{ij}$ represents the weight of feature item $t_i$, i.e. the priority of $t_i$ in the database;

$tf_{ij}$ is the feature item frequency, i.e. the number of occurrences of feature item $t_i$ in the classification database $d_j$, where "classification database" is the collective term for the high-priority database and the low-priority database;

the classification database frequency $df_i$ is the number of classification databases in the whole database set that contain the feature item $t_i$;

$idf_i$ is the inverse classification database frequency, inversely proportional to $df_i$; $N$ is the total number of all classification databases.
As a further improvement of the technical scheme, the high-frequency actions are classified into the high-priority classifier; within the high-priority classifier, the depth classification algorithm computes the weight $w_{ij}$ of each feature item, and low-frequency data with lower weights are downgraded into the low-priority classifier.
As a further improvement of the technical scheme, the low-frequency actions are classified into the low-priority classifier; within the low-priority classifier, the depth classification algorithm computes the weight $w_{ij}$ of each feature item, and high-frequency data with higher weights are upgraded into the high-priority classifier.
Compared with the prior art, the invention has the beneficial effects that:
in the habit analysis-based 3D remote interaction action optimization system, the action habits of target objects are collected and analyzed, a behavior storage database is established, and the database storage unit is classified by priority; the depth classification algorithm classifies the interactive actions according to priority and gives high-frequency actions higher-priority response optimization, so that the interaction output unit can respond quickly during remote interaction, improving the 3D interaction experience.
Drawings
FIG. 1 is an overall flow diagram of the present invention;
FIG. 2 is a block diagram of a depth algorithm model of the present invention.
The meaning of each reference sign in the figure is:
1. a 3D vision module;
2. an action acquisition unit; 3. an action recognition module; 4. a database storage unit; 5. a deep learning module; 6. a high-priority database; 7. a low-priority database; 8. an interaction output unit;
51. a high-frequency action; 52. a low-frequency action; 53. a high-priority classifier; 54. a low-priority classifier; 55. low-frequency data; 56. high-frequency data.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings; the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative effort fall within the protection scope of the invention.
Examples
Referring to fig. 1-2, a habit analysis-based 3D remote interaction action optimization system comprises a 3D vision module 1 for remotely tracking and capturing human limb actions. The 3D vision module 1 tracks and captures facial and hand interactive actions, performs image recognition and processing on the captured limb information, and inputs the limb information to an action acquisition unit 2; the output end of the 3D vision module 1 is connected with the input end of the action acquisition unit 2. The 3D vision module 1 inputs and stores the acquired information in a database storage unit 4, in which a deep learning module 5 is integrated. The deep learning module 5 divides the database storage unit 4 into a high-priority database 6 and a low-priority database 7 and realizes data exchange between them. The output end of the database storage unit 4 is connected with an interaction output unit 8, which matches the action information acquired by the 3D vision module 1 against the database storage unit 4; finally, remote interaction is realized by outputting response actions through the interaction terminal, and the high-frequency and low-frequency action information is recorded. The output end of the action acquisition unit 2 is connected with an action recognition module 3, which retrieves and extracts the matched action from the database storage unit 4, and the interaction output unit 8 outputs the response action through the interaction terminal to realize remote interaction. The interaction terminal is, for example, a robot: interaction information is collected remotely through the 3D vision module 1, realizing 3D remote interaction between the user and the robot.
The 3D vision module 1 comprises a 3D vision sensor, the action acquisition unit 2 and the action recognition module 3. The 3D vision sensor performs image recognition, transmission and processing of action information. A 3D vision sensor is an instrument that acquires images of the external environment with optical elements and an imaging device; a depth camera is generally adopted for remotely tracking and capturing human limb actions. It is a small machine vision system with image acquisition, image processing and information transmission functions, that is, an embedded computer vision system that integrates the image sensor, a digital processor, a communication module and other peripherals into a single camera. The recognized action data are recorded and stored in the database storage unit 4 through the action acquisition unit 2; at the same time, the action recognition module 3 retrieves the action information, compares it with the information in the database storage unit 4 and extracts the matched action, which the interaction terminal outputs as a response action to realize remote interaction;
the action recognition module 3 preprocesses the action information. Preprocessing applies a series of operations to the original image set to generate an image description feature library, mainly including unified scaling, format conversion and gray-level processing; its aim is to facilitate image feature extraction and similarity measurement calculation and thereby improve image retrieval efficiency;
further, the action recognition module 3 includes a feature extraction unit, the core of the action recognition module 3, which is responsible for extracting the visual features of an image from the database storage unit 4. The visual features include color, shape, texture and spatial position relations, and the extracted features can effectively represent an image or distinguish between images;
the action recognition module 3 feeds the extracted image results back into the retrieval and automatically adjusts the feature representation weights according to the user's selection, so the retrieval runs many times and the information in the database storage unit 4 is compared and matched one by one. By adding the high-priority database 6, the action recognition module 3 retrieves and matches from the high-priority database 6 first, which reduces the retrieval computation and greatly improves retrieval efficiency; a hedged sketch of this priority-first lookup is given below.
The deep learning module 5 performs algorithm training analysis on the limb interaction information acquired by the action acquisition unit 2 to learn and classify the information in the database storage unit 4. The deep learning module 5 divides limb actions into high-frequency actions 51 and low-frequency actions 52; it extracts the high-frequency actions 51 in the database and temporarily stores them in the high-priority database 6 while degrading the low-frequency data 55 in the high-priority database 6 into the low-priority database 7, and temporarily stores the low-frequency actions 52 in the low-priority database 7 while upgrading the high-frequency data 56 in the low-priority database 7 into the high-priority database 6. Data transfer and exchange between the high-priority database 6 and the low-priority database 7 are realized through the deep learning module 5.
The deep learning module 5 adopts the depth classification algorithm, establishes an algorithm model and establishes two classifiers, the high-priority classifier 53 and the low-priority classifier 54, which optimize and classify the action data in the database storage unit 4. The high-frequency data 56 are classified into the high-priority classifier 53, and when the action recognition module 3 extracts data from the database storage unit 4, it extracts preferentially from the high-priority classifier 53, improving the interactive response speed of the output terminal.
To illustrate how the deep learning module 5 divides the database storage unit 4 into the high-priority database 6 and the low-priority database 7, the depth classification algorithm employed by the deep learning module 5 comprises the following method steps:
S1, determine the data to be classified: low-frequency data 55 and high-frequency data 56. The action data recorded in the database storage unit 4 are divided into high-frequency data 56 with relatively high weight and low-frequency data 55 with relatively low weight;
S2, establish the classifiers that describe the predefined data categories: the high-priority classifier 53 and the low-priority classifier 54;
S3, train the high-priority classifier 53 with the high-frequency data 56 and the low-priority classifier 54 with the low-frequency data 55;
S4, adopt incremental training according to the depth classification algorithm formula: successively select some unlabeled samples of the high-frequency data 56 and add them to the high-priority classifier 53 incrementally, and add the low-frequency data 55 to the low-priority classifier 54 incrementally. A minimal sketch of this incremental scheme is given below; the weight formula it relies on follows the sketch.
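As noted, the following sketch illustrates steps S1-S4 under stated assumptions: "training" is reduced to maintaining per-class sample pools, new unlabeled observations are routed by a running frequency count, and the promote_at threshold is invented for illustration; the patent's actual criterion is the weight formula given next.

```python
from collections import Counter

class PriorityClassifiers:
    """Two-pool stand-in for the high-priority (53) and low-priority (54) classifiers."""

    def __init__(self, high_seed, low_seed, promote_at=10):
        self.high = set(high_seed)    # S3: seeded with high-frequency data (56)
        self.low = set(low_seed)      # S3: seeded with low-frequency data (55)
        self.counts = Counter()
        self.promote_at = promote_at  # assumed frequency threshold

    def observe(self, action_id):
        """S4: incrementally fold in one unlabeled observation."""
        self.counts[action_id] += 1
        if self.counts[action_id] >= self.promote_at:
            self.low.discard(action_id)
            self.high.add(action_id)  # incremental addition to the high-priority pool
        elif action_id not in self.high:
            self.low.add(action_id)   # incremental addition to the low-priority pool

clf = PriorityClassifiers(high_seed={"wave"}, low_seed={"nod"})
for _ in range(10):
    clf.observe("point")              # "point" becomes frequent and is promoted
print(clf.high)                       # contains both 'wave' and 'point'
```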
The depth classification algorithm formula is as follows:

$$w_{ij} = tf_{ij} \times idf_i = tf_{ij} \times \log\frac{N}{df_i}$$

wherein each piece of action information is regarded as a feature item $t_i$; $w_{ij}$ represents the weight of feature item $t_i$, i.e. the priority of $t_i$ in the database; $tf_{ij}$ is the feature item frequency, i.e. the number of occurrences of feature item $t_i$ in the classification database $d_j$, where "classification database" is the collective term for the high-priority database 6 and the low-priority database 7; the classification database frequency $df_i$ is the number of classification databases in the whole database set that contain the feature item $t_i$; $idf_i$ is the inverse classification database frequency, inversely proportional to $df_i$; $N$ is the total number of all classification databases.
The action information in the database storage unit 4 is classified through the established algorithm model. The high-frequency actions 51 are classified into the high-priority classifier 53; within the high-priority classifier 53, the depth classification algorithm computes the weight $w_{ij}$ of each feature item, and low-frequency data 55 with lower weights are downgraded into the low-priority classifier 54. Similarly, the low-frequency actions 52 are classified into the low-priority classifier 54; within the low-priority classifier 54, the depth classification algorithm computes the weight $w_{ij}$ of each feature item, and high-frequency data 56 with higher weights are upgraded into the high-priority classifier 53. When retrieving a matching action, the action recognition module 3 searches the high-priority database 6 first, which effectively reduces the time for data retrieval and matching and improves the response speed of the interaction output unit 8. A sketch of this promote/demote cycle is given below.
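The following sketch, offered under the same assumptions, shows one way such a promote/demote cycle could be written; the (tf, df) bookkeeping and the threshold value are illustrative, not specified by the patent.

```python
import math

def rebalance(high_db, low_db, n_databases=2, threshold=1.0):
    """high_db/low_db map action id -> (tf, df); items cross the priority boundary in place."""
    def weight(tf, df):
        return tf * math.log(n_databases / df)  # the w_ij formula above

    for action, (tf, df) in list(high_db.items()):
        if weight(tf, df) < threshold:           # low weight: demote to low priority
            low_db[action] = high_db.pop(action)
    for action, (tf, df) in list(low_db.items()):
        if weight(tf, df) >= threshold:          # high weight: promote to high priority
            high_db[action] = low_db.pop(action)

high_db = {"wave": (40, 1), "shrug": (1, 2)}     # 'shrug' weight 0: demoted
low_db = {"point": (12, 1)}                      # 'point' weight ~8.3: promoted
rebalance(high_db, low_db)
print(sorted(high_db), sorted(low_db))           # ['point', 'wave'] ['shrug']
```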
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the above-described embodiments, and that the above-described embodiments and descriptions are only preferred embodiments of the present invention, and are not intended to limit the invention, and that various changes and modifications may be made therein without departing from the spirit and scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (2)

1. A habit analysis-based 3D remote interaction action optimization system, characterized in that: the system comprises a 3D vision module (1) for remotely tracking and capturing human limb actions; the 3D vision module (1) inputs and stores the collected information in a database storage unit (4); a deep learning module (5) is integrated in the database storage unit (4); the deep learning module (5) divides the database storage unit (4) into a high-priority database (6) and a low-priority database (7); an interaction output unit (8) is connected to the output end of the database storage unit (4); the interaction output unit (8) matches the action information collected by the 3D vision module (1) against the database storage unit (4) and realizes remote interaction by outputting response actions through an interaction terminal;
the deep learning module (5) performs algorithm training analysis on the limb interaction information acquired by the action acquisition unit (2) to learn and classify the information in the database storage unit (4); the deep learning module (5) divides limb actions into high-frequency actions (51) and low-frequency actions (52), extracts the high-frequency actions (51) in the database and temporarily stores them in the high-priority database (6), temporarily stores the low-frequency actions (52) in the low-priority database (7), and realizes data transfer and exchange between the high-priority database (6) and the low-priority database (7);
the deep learning module (5) adopts the depth classification algorithm to establish an algorithm model and two classifiers, the high-priority classifier (53) and the low-priority classifier (54), which optimize and classify the action data in the database storage unit (4); the high-frequency data (56) are classified into the high-priority classifier (53), and when the action recognition module (3) extracts data from the database storage unit (4) it extracts preferentially from the high-priority classifier (53), which improves the interactive response speed of the output terminal;
the depth classification algorithm adopted by the deep learning module (5) comprises the following method steps:
S1, determine the data to be classified: low-frequency data (55) and high-frequency data (56);
S2, establish the classifiers that describe the predefined data categories: a high-priority classifier (53) and a low-priority classifier (54);
S3, train the high-priority classifier (53) with the high-frequency data (56) and the low-priority classifier (54) with the low-frequency data (55);
S4, obtain the weight information of the high-frequency data (56) and the low-frequency data (55) according to the depth classification algorithm formula and adopt incremental training: successively select high-frequency data (56) found in the low-priority classifier (54) and add them to the high-priority classifier (53) incrementally, and add low-frequency data (55) found in the high-priority classifier (53) to the low-priority classifier (54) incrementally;
the depth classification algorithm formula is as follows:

$$w_{ij} = tf_{ij} \times idf_i = tf_{ij} \times \log\frac{N}{df_i}$$

wherein each piece of action information is regarded as a feature item $t_i$; $w_{ij}$ represents the weight of feature item $t_i$, i.e. the priority of $t_i$ in the database; $tf_{ij}$ is the feature item frequency, i.e. the number of occurrences of feature item $t_i$ in the classification database $d_j$, "classification database" being the collective term for the high-priority database (6) and the low-priority database (7); the classification database frequency $df_i$ is the number of classification databases in the whole database set that contain the feature item $t_i$; $idf_i$ is the inverse classification database frequency, inversely proportional to $df_i$; $N$ is the total number of all classification databases;
the high-frequency actions (51) are classified into the high-priority classifier (53); within the high-priority classifier (53), the depth classification algorithm computes the weight $w_{ij}$ of each feature item, and low-frequency data (55) with lower weights are downgraded into the low-priority classifier (54);
the low-frequency actions (52) are classified into the low-priority classifier (54); within the low-priority classifier (54), the depth classification algorithm computes the weight $w_{ij}$ of each feature item, and high-frequency data (56) with higher weights are upgraded into the high-priority classifier (53).
2. The habit analysis-based 3D remote interaction action optimization system of claim 1, characterized in that: the 3D vision module (1) comprises a 3D vision sensor, the action acquisition unit (2) and the action recognition module (3); the 3D vision sensor performs image recognition, transmission and processing of action information; the recognized action data are recorded and stored in the database storage unit (4) through the action acquisition unit (2); at the same time, the action recognition module (3) retrieves the action information and compares it with the information in the database storage unit (4), and the matched action is extracted and output as a response action by the interaction terminal through the interaction output unit (8) to realize remote interaction.
CN202310035578.XA (priority date 2023-01-10, filed 2023-01-10): Habit analysis-based 3D remote interaction action optimization system. Status: Active. Granted as CN115729356B.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310035578.XA CN115729356B (en) 2023-01-10 2023-01-10 Habit analysis-based 3D remote interaction action optimization system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310035578.XA CN115729356B (en) 2023-01-10 2023-01-10 Habit analysis-based 3D remote interaction action optimization system

Publications (2)

Publication Number Publication Date
CN115729356A CN115729356A (en) 2023-03-03
CN115729356B 2023-07-21

Family

ID=85301994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310035578.XA Active CN115729356B (en) 2023-01-10 2023-01-10 Habit analysis-based 3D remote interaction action optimization system

Country Status (1)

Country Link
CN (1) CN115729356B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662471A (en) * 2012-04-09 2012-09-12 沈阳航空航天大学 Computer vision mouse
CN113692551A (en) * 2019-04-30 2021-11-23 依视路国际公司 Method for determining an oriented 3D representation of a person's head in a natural vision pose

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11087184B2 (en) * 2018-09-25 2021-08-10 Nec Corporation Network reparameterization for new class categorization
CN115268651A (en) * 2022-08-09 2022-11-01 重庆理工大学 Implicit gesture interaction method and system for steering wheel
CN115497133A (en) * 2022-09-14 2022-12-20 深圳市欧瑞博科技股份有限公司 User intelligent identification method, electronic equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662471A (en) * 2012-04-09 2012-09-12 沈阳航空航天大学 Computer vision mouse
CN113692551A (en) * 2019-04-30 2021-11-23 依视路国际公司 Method for determining an oriented 3D representation of a person's head in a natural vision pose

Also Published As

Publication number Publication date
CN115729356A (en) 2023-03-03

Similar Documents

Publication Publication Date Title
EP3968179A1 (en) Place recognition method and apparatus, model training method and apparatus for place recognition, and electronic device
WO2014132349A1 (en) Image analysis device, image analysis system, and image analysis method
CN110235138A (en) System and method for appearance search
Mumtaz et al. Clustering dynamic textures with the hierarchical em algorithm for modeling video
WO2008105962A2 (en) Real-time computerized annotation of pictures
US20120287304A1 (en) Image recognition system
CN108536780B (en) Cross-modal object material retrieval method based on tactile texture features
KR20120086728A (en) Automatically mining person models of celebrities for visual search applications
US20180046721A1 (en) Systems and Methods for Automatic Customization of Content Filtering
KR101917369B1 (en) Method and apparatus for retrieving image using convolution neural network
KR101976081B1 (en) Method, system and computer program for semantic image retrieval based on topic modeling
CN112365423A (en) Image data enhancement method, device, medium and equipment
Kumar Jain et al. (Retracted) Modeling of human action recognition using hyperparameter tuned deep learning model
CN112597324A (en) Image hash index construction method, system and equipment based on correlation filtering
CN104977038B (en) Identifying movement using motion sensing device coupled with associative memory
Alhersh et al. Learning human activity from visual data using deep learning
Abdu-Aguye et al. VersaTL: Versatile Transfer Learning for IMU-based Activity Recognition using Convolutional Neural Networks.
Muhamada et al. Review on recent computer vision methods for human action recognition
CN116561649B (en) Diver motion state identification method and system based on multi-source sensor data
Shamsipour et al. Improve the efficiency of handcrafted features in image retrieval by adding selected feature generating layers of deep convolutional neural networks
CN115729356B (en) Habit analysis-based 3D remote interaction action optimization system
Kokilambal Intelligent content based image retrieval model using adadelta optimized residual network
Chandrakala et al. Application of artificial bee colony optimization algorithm for image classification using color and texture feature similarity fusion
CN115937910A (en) Palm print image identification method based on small sample measurement network
Daschiel et al. Design and evaluation of human-machine communication for image information mining

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 502-503, Floor 5, Building 5, Hongtai Smart Valley, No. 19, Sicheng Road, Tianhe District, Guangzhou, Guangdong 518000

Applicant after: Guangdong Feidie Virtual Reality Technology Co.,Ltd.

Applicant after: XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY CO.,LTD.

Address before: 518000 3311, Floor 3, Building 1, Aerospace Building, No. 51, Gaoxin South 9th Road, High tech Community, Yuehai Street, Nanshan District, Shenzhen, Guangdong

Applicant before: Shenzhen FEIDIE Virtual Reality Technology Co.,Ltd.

Applicant before: XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY CO.,LTD.

TA01 Transfer of patent application right

Effective date of registration: 20230628

Address after: 710000 Building D, National Digital Publishing Base, No. 996 Tiangu 7th Road, Yuhua Street Office, High tech Zone, Xi'an City, Shaanxi Province

Applicant after: XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY CO.,LTD.

Address before: Room 502-503, Floor 5, Building 5, Hongtai Smart Valley, No. 19, Sicheng Road, Tianhe District, Guangzhou, Guangdong 518000

Applicant before: Guangdong Feidie Virtual Reality Technology Co.,Ltd.

Applicant before: XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY CO.,LTD.

GR01 Patent grant