CN115729356A - 3D remote interaction action optimization system based on habit analysis - Google Patents

3D remote interaction action optimization system based on habit analysis

Info

Publication number
CN115729356A
CN115729356A
Authority
CN
China
Prior art keywords
priority
database
action
low
classifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310035578.XA
Other languages
Chinese (zh)
Other versions
CN115729356B (en)
Inventor
Wang Yagang (王亚刚)
Li Yuanyuan (李元元)
Cheng Sijin (程思锦)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Feidie Virtual Reality Technology Co ltd
Original Assignee
Xi'an Feidie Virtual Reality Technology Co ltd
Shenzhen Feidie Virtual Reality Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Feidie Virtual Reality Technology Co ltd, Shenzhen Feidie Virtual Reality Technology Co ltd filed Critical Xi'an Feidie Virtual Reality Technology Co ltd
Priority to CN202310035578.XA
Publication of CN115729356A
Application granted
Publication of CN115729356B
Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of 3D remote interaction, in particular to a habit-analysis-based 3D remote interaction action optimization system. The system comprises a 3D vision module for remotely tracking and capturing human limb actions. The 3D vision module inputs and stores the collected information in a database storage unit, which a deep learning module divides into a high-priority database and a low-priority database. The output end of the database storage unit is connected with an interactive output unit, which matches the action information collected by the 3D vision module against the database storage unit. The system collects and analyzes the action behavior habits of the target object, establishes a behavior storage database, classifies interactive actions by priority through a deep classification algorithm, and applies high-priority response optimization to high-frequency actions, so that the interactive output unit responds faster during remote interaction and the 3D interactive experience is improved.

Description

3D remote interaction action optimization system based on habit analysis
Technical Field
The invention relates to the technical field of 3D remote interaction, in particular to a habit analysis-based 3D remote interaction action optimization system.
Background
3D remote interaction comprises the following steps: a vision module extracts the texture features of an image through Fourier transformation and obtains its shape features by detecting the image boundary with boundary moments; a similarity measurement function matches images by similarity; a relevance feedback algorithm is introduced so that the user can interact with an interaction terminal; and the interaction terminal (for example, an interactive robot) finally outputs an interaction action to realize 3D remote interaction.
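For concreteness, the similarity-matching step might look like the following sketch; cosine similarity is an assumed choice here, since the patent does not name its similarity measurement function, and the feature_db layout is illustrative:

    import numpy as np

    def cosine_similarity(a, b):
        # cosine of the angle between two feature vectors
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def best_match(query_features, feature_db):
        """Return the stored action id whose feature vector is most similar."""
        return max(feature_db, key=lambda k: cosine_similarity(query_features, feature_db[k]))

For example, best_match(query, {"wave": v1, "point": v2}) returns whichever key's vector is closest to the query.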
When a user performs remote limb interaction with an interaction terminal, the extracted image information must be retrieved and matched against the database many times. In this process, the data in the whole database are extracted and compared one by one, which greatly degrades the interactive response speed of the output terminal.
Disclosure of Invention
The invention aims to provide a 3D remote interaction action optimization system based on habit analysis, so as to solve the problems raised in the background art above.
In order to achieve the above purpose, the invention provides a habit-analysis-based 3D remote interaction action optimization system comprising a 3D vision module for remotely tracking and capturing human limb actions. The 3D vision module inputs and stores the collected information in a database storage unit, in which a deep learning module is integrated; the deep learning module divides the database storage unit into a high-priority database and a low-priority database. The output end of the database storage unit is connected with an interactive output unit, which matches the action information collected by the 3D vision module against the database storage unit and outputs a response action through an interaction terminal to realize remote interaction.
As a further improvement of the technical scheme, the 3D vision module comprises a 3D vision sensor, an action acquisition unit and an action recognition module. The 3D vision sensor performs image recognition, transmission and processing of action information; the recognized action data are recorded and stored in the database storage unit through the action acquisition unit; at the same time, the action recognition module retrieves and compares the action information against the information in the database storage unit, extracts the matching action, and outputs a response action through the interactive output unit and the interaction terminal to realize remote interaction.
As a further improvement of the technical scheme, the deep learning module performs algorithm training analysis on the limb interaction information acquired by the action acquisition unit to learn and classify the information in the database storage unit. It divides limb actions into high-frequency actions and low-frequency actions, extracts the high-frequency actions in the database and stores them temporarily in the high-priority database, stores the low-frequency actions temporarily in the low-priority database, and transfers and exchanges data between the high-priority database and the low-priority database.
As a further improvement of the technical scheme, the deep learning module establishes an algorithm model using a deep classification algorithm and builds two classifiers, a high-priority classifier and a low-priority classifier, which classify the action data in the database storage unit. High-frequency data are classified into the high-priority classifier, so that when the action recognition module extracts data from the database storage unit, the data are extracted from the high-priority classifier first, improving the interactive response speed of the output terminal.
As a further improvement of the technical solution, the deep classification algorithm adopted by the deep learning module comprises the following steps:
S1, determine the data to be classified: low-frequency data and high-frequency data;
S2, establish the classifiers that describe the predefined data categories: a high-priority classifier and a low-priority classifier;
S3, train the high-priority classifier with high-frequency data and the low-priority classifier with low-frequency data;
S4, according to the deep classification algorithm formula, perform incremental training: successively select unlabeled samples of the high-frequency data and add them incrementally to the high-priority classifier, and likewise add unlabeled samples of the low-frequency data incrementally to the low-priority classifier.
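A minimal sketch of steps S1-S4 under stated assumptions: a plain frequency threshold stands in for the formula-based weighting, and every name here (PriorityClassifier, add_incremental, train, threshold) is illustrative rather than taken from the patent:

    from collections import Counter

    class PriorityClassifier:
        """One priority tier holding feature-item counts (hypothetical design)."""
        def __init__(self):
            self.counts = Counter()

        def add_incremental(self, action_id, n=1):
            # S4: incremental training -- new samples update the tier in place
            self.counts[action_id] += n

    high_priority = PriorityClassifier()   # S2: tier for high-frequency data
    low_priority = PriorityClassifier()    # S2: tier for low-frequency data

    def train(observed_actions, threshold=10):
        """S1 + S3: split observed actions into high-/low-frequency data and train."""
        for action_id, n in Counter(observed_actions).items():
            tier = high_priority if n >= threshold else low_priority
            tier.add_incremental(action_id, n)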
As a further improvement of the technical solution, the deep classification algorithm formula is as follows:

w_i = tf_i × idf_i
idf_i = log(N / n_i)

wherein each piece of action information is regarded as a feature item t_i; w_i represents the weight of the action information t_i, i.e. the priority of t_i in the database; the action-information frequency tf_i is the number of occurrences of the feature item t_i in a classification database d, the classification database d being the general term for the high-priority database and the low-priority database; the classification-database frequency n_i refers to the number of databases in the whole database collection that contain the feature item t_i; idf_i is the inverse classification-database frequency, inversely proportional to n_i; and N is the total number of all classification databases.
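A minimal sketch of the weight computation above, assuming a natural logarithm (the patent's equation images do not specify the base):

    import math

    def priority_weight(tf_i, n_i, N):
        """Compute w_i = tf_i * log(N / n_i) for one feature item t_i."""
        return tf_i * math.log(N / n_i)

For example, an action seen 40 times in only one of N = 2 classification databases gets w = 40 × log 2 ≈ 27.7, while an action present in both databases gets w = 0 and becomes a candidate for downgrading.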
As a further improvement of the technical scheme, the high-frequency actions are classified into the high-priority classifier, the weight w_i of each piece of action information in the high-priority classifier is calculated there by the deep classification algorithm, and low-frequency data with a low weight are downgraded to the low-priority classifier.
As a further improvement of the technical scheme, the low-frequency actions are classified into the low-priority classifier, the weight w_i of each piece of action information in the low-priority classifier is calculated there by the deep classification algorithm, and high-frequency data with a high weight are upgraded to the high-priority classifier.
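An illustrative promotion/demotion pass over the two tiers, reusing the priority_weight sketch above; the tier dictionaries, the n_of callback and the threshold are all assumed names, not the patent's API:

    def rebalance(high_db, low_db, n_of, N, threshold):
        """Demote/promote items between tiers by their weight w_i (illustrative).
        n_of(action_id) must return the classification-database frequency n_i."""
        for action_id, tf in list(high_db.items()):
            if priority_weight(tf, n_of(action_id), N) < threshold:
                low_db[action_id] = high_db.pop(action_id)    # downgrade low-weight data
        for action_id, tf in list(low_db.items()):
            if priority_weight(tf, n_of(action_id), N) >= threshold:
                high_db[action_id] = low_db.pop(action_id)    # upgrade high-weight data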
Compared with the prior art, the invention has the following beneficial effects:
in this habit-analysis-based 3D remote interaction action optimization system, the action behavior habits of a target object are collected and analyzed, a behavior storage database is established, the database storage unit is classified by priority, interactive actions are prioritized through the deep classification algorithm, and high-frequency actions receive high-priority response optimization, so that the interactive output unit responds faster during remote interaction and the 3D interactive experience is further improved.
Drawings
FIG. 1 is an overall flow block diagram of the present invention;
FIG. 2 is a block flow diagram of the deep classification algorithm model according to the present invention.
The various reference numbers in the figures mean:
1. a 3D vision module;
2. an action acquisition unit; 3. an action recognition module; 4. a database storage unit; 5. a deep learning module; 6. a high-priority database; 7. a low-priority database; 8. an interactive output unit;
51. a high-frequency action; 52. a low-frequency action; 53. a high-priority classifier; 54. a low-priority classifier; 55. low-frequency data; 56. high-frequency data.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
Please refer to fig. 1-2. This embodiment provides a 3D remote interactive action optimization system based on habit analysis, comprising a 3D vision module 1 for remotely tracking and capturing human limb actions, in particular interactive actions of the face and hands. The 3D vision module 1 performs image recognition and processing on the captured limb information and inputs it to an action acquisition unit 2; the output end of the 3D vision module 1 is connected with the input end of the action acquisition unit 2. The 3D vision module 1 inputs and stores the acquired information in a database storage unit 4, in which a deep learning module 5 is integrated. The deep learning module 5 divides the database storage unit 4 into a high-priority database 6 and a low-priority database 7 and exchanges data between them. The output end of the database storage unit 4 is connected with an interactive output unit 8, which matches the action information collected by the 3D vision module 1 against the database storage unit 4, recording high-frequency and low-frequency action information. The output end of the action acquisition unit 2 is connected with an action recognition module 3, which retrieves and extracts the matching action from the database storage unit 4; the interactive output unit 8 then outputs a response action through the interaction terminal, for example a robot whose interaction information is remotely collected through the 3D vision module 1, realizing 3D remote interaction between the user and the robot.
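The data flow just described can be summarized in a skeletal wiring of the numbered units; everything here (class and method names, component interfaces) is hypothetical and intended only to mirror FIG. 1:

    class HabitAnalysis3DSystem:
        """Skeletal wiring of units 1-8 in FIG. 1 (all interfaces hypothetical)."""

        def __init__(self, vision, acquisition, recognition, storage, output):
            self.vision = vision            # 3D vision module 1
            self.acquisition = acquisition  # action acquisition unit 2
            self.recognition = recognition  # action recognition module 3
            self.storage = storage          # database storage unit 4 (tiers 6 and 7)
            self.output = output            # interactive output unit 8

        def step(self, frame):
            info = self.vision.capture(frame)                    # track and capture the limb action
            self.acquisition.record(info, self.storage)          # store; module 5 re-tiers later
            match = self.recognition.match(info, self.storage)   # high-priority database 6 first
            if match is not None:
                self.output.respond(match)                       # terminal outputs the response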
The 3D vision module 1 comprises a 3D vision sensor, the action acquisition unit 2 and the action recognition module 3. The 3D vision sensor performs image recognition, transmission and processing of action information. It is an instrument that acquires image information of the external environment using optical elements and an imaging device, usually a depth camera, which remotely tracks and captures human limb actions; it is a small machine vision system with image acquisition, image processing and information transmission functions, i.e. an embedded computer vision system that integrates an image sensor, a digital processor, a communication module and other peripherals into a single camera. The recognized action information is recorded and stored in the database storage unit 4 through the action acquisition unit 2; at the same time, the action recognition module 3 retrieves and compares the action information against the information in the database storage unit 4, extracts the matching action, and outputs a response action through the interactive output unit 8 and the interaction terminal to realize remote interaction;
the action recognition module 3 preprocesses the action information. Preprocessing is the series of operations applied to the original image set to generate an image description feature library, and mainly includes scale unification, format conversion and grayscale processing. Its purpose is to facilitate the extraction of image features and the calculation of similarity measures, thereby improving image retrieval efficiency;
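A minimal sketch of this preprocessing stage, assuming OpenCV and an arbitrary fixed target size (the patent names neither a library nor a resolution):

    import cv2

    def preprocess(frame, size=(224, 224)):
        """Scale unification and grayscale processing before feature extraction."""
        resized = cv2.resize(frame, size)                  # unify the scale
        return cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)   # grayscale processing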
further, the action recognition module 3 comprises a feature extraction unit, the core of the action recognition module 3, which is responsible for extracting the visual features of the image from the database storage unit 4. The visual features include color, shape, texture and spatial position relationships, and the extracted features must represent the image effectively or be able to discriminate between images;
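As one concrete (assumed) instance of such a feature, a normalized intensity histogram can serve as a simple texture descriptor; the bin count is arbitrary:

    import cv2

    def intensity_histogram(gray_image, bins=32):
        """Toy feature descriptor: a normalized grayscale histogram (illustrative)."""
        hist = cv2.calcHist([gray_image], [0], None, [bins], [0, 256])
        return (hist / hist.sum()).flatten()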
the action recognition module 3 feeds the retrieval result back into the image retrieval: the weights of the retrieved feature representations can be adjusted automatically according to the user's selection, so that retrieval is performed multiple times and the information in the database storage unit 4 is compared and matched one by one. By adding the high-priority database 6, the action recognition module 3 retrieves and matches from the high-priority database 6 first, which reduces the retrieval computation and greatly improves retrieval efficiency.
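A sketch of the two-tier lookup this implies: the high-priority database is scanned first and the low-priority one only on a miss; the match_score function and the 0.8 acceptance threshold are assumptions:

    def retrieve(features, high_db, low_db, match_score, min_score=0.8):
        """Search the high-priority database 6 first, then the low-priority database 7."""
        for db in (high_db, low_db):               # high-priority tier is scanned first
            if not db:
                continue
            best = max(db, key=lambda action: match_score(features, db[action]))
            if match_score(features, db[best]) >= min_score:
                return best                        # hit: the other tier is never scanned
        return None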
The deep learning module 5 performs algorithm training analysis on the limb interaction information acquired by the action acquisition unit 2 to learn and classify the information in the database storage unit 4. It divides limb actions into high-frequency actions 51 and low-frequency actions 52, extracts the high-frequency actions 51 in the database and stores them temporarily in the high-priority database 6, demotes low-frequency data 55 in the high-priority database 6 to the low-priority database 7, stores the low-frequency actions 52 temporarily in the low-priority database 7, and promotes high-frequency data 56 in the low-priority database 7 to the high-priority database 6; data transfer and exchange between the high-priority database 6 and the low-priority database 7 are thus realized through the deep learning module 5.
The deep learning module 5 adopts the deep classification algorithm to establish an algorithm model and builds two classifiers, a high-priority classifier 53 and a low-priority classifier 54, which classify the action data in the database storage unit 4. High-frequency data 56 are classified into the high-priority classifier 53, and when the action recognition module 3 extracts data from the database storage unit 4, the data are preferentially extracted from the high-priority classifier 53, improving the interactive response speed of the output terminal.
To illustrate how the deep learning module 5 divides the database storage unit 4 into the high-priority database 6 and the low-priority database 7, the deep classification algorithm it adopts comprises the following steps:
S1, determine the data to be classified: low-frequency data 55 and high-frequency data 56. The action data recorded in the database storage unit 4 are classified, high-frequency data 56 having a high weight and low-frequency data 55 a low weight;
S2, establish the classifiers that describe the predefined data categories: a high-priority classifier 53 and a low-priority classifier 54;
S3, train the high-priority classifier 53 with high-frequency data 56 and the low-priority classifier 54 with low-frequency data 55;
S4, according to the deep classification algorithm formula, perform incremental training: successively select unlabeled samples of the high-frequency data 56 and add them incrementally to the high-priority classifier 53, and add the low-frequency data 55 incrementally to the low-priority classifier 54.
The deep classification algorithm formula is as follows:

w_i = tf_i × idf_i
idf_i = log(N / n_i)

wherein each piece of action information is regarded as a feature item t_i; w_i represents the weight of the action information t_i, i.e. the priority of t_i in the database; the action-information frequency tf_i is the number of occurrences of the feature item t_i in a classification database d, the classification database d being the general term for the high-priority database 6 and the low-priority database 7; the classification-database frequency n_i refers to the number of databases in the whole database collection that contain the feature item t_i; idf_i is the inverse classification-database frequency, inversely proportional to n_i; and N is the total number of all classification databases.
The action information in the database storage unit 4 is classified by the established algorithm model: the high-frequency actions 51 are classified into the high-priority classifier 53, the weight w_i of each piece of action information in the high-priority classifier 53 is calculated there by the deep classification algorithm, and the low-frequency data 55 with a low weight are downgraded to the low-priority classifier 54; similarly, the low-frequency actions 52 are classified into the low-priority classifier 54, the weight w_i of each piece of action information in the low-priority classifier 54 is calculated there by the deep classification algorithm, and the high-frequency data 56 with a high weight are upgraded to the high-priority classifier 53. When the action recognition module 3 retrieves and matches actions, it can search the high-priority database 6 first, which effectively shortens the time for retrieving and matching data information and increases the response speed of the interactive output unit 8.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the above embodiments and description merely illustrate the principles of the present invention and are not intended to limit it. The scope of the invention is defined by the appended claims and their equivalents.

Claims (8)

1. A 3D remote interaction action optimization system based on habit analysis, characterized in that it comprises a 3D vision module (1) for remotely tracking and capturing human limb actions; the 3D vision module (1) inputs and stores the collected information in a database storage unit (4), in which a deep learning module (5) is integrated; the deep learning module (5) divides the database storage unit (4) into a high-priority database (6) and a low-priority database (7); the output end of the database storage unit (4) is connected with an interactive output unit (8); and the interactive output unit (8) matches the action information collected by the 3D vision module (1) against the database storage unit (4) and outputs a response action through an interaction terminal to realize remote interaction.
2. The habit-analysis-based 3D remote interaction action optimization system according to claim 1, wherein: the 3D vision module (1) comprises a 3D vision sensor, an action acquisition unit (2) and an action recognition module (3); the 3D vision sensor performs image recognition, transmission and processing of the action information; the recognized action data are recorded and stored in the database storage unit (4) through the action acquisition unit (2); and at the same time the action recognition module (3) retrieves and compares the action information against the information in the database storage unit (4), extracts the matching action, and outputs a response action through the interactive output unit (8) and the interaction terminal to realize remote interaction.
3. The habit-analysis-based 3D remote interaction action optimization system according to claim 1, wherein: the deep learning module (5) performs algorithm training analysis on the limb interaction information collected by the action acquisition unit (2) to learn and classify the information in the database storage unit (4); the deep learning module (5) divides limb actions into high-frequency actions (51) and low-frequency actions (52), extracts the high-frequency actions (51) in the database and stores them temporarily in the high-priority database (6), stores the low-frequency actions (52) temporarily in the low-priority database (7), and transfers and exchanges data between the high-priority database (6) and the low-priority database (7).
4. The habit-analysis-based 3D remote interaction action optimization system according to claim 3, wherein: the deep learning module (5) establishes an algorithm model using a deep classification algorithm and builds two classifiers, a high-priority classifier (53) and a low-priority classifier (54), which classify the action data in the database storage unit (4); high-frequency data (56) are classified into the high-priority classifier (53), so that when the action recognition module (3) extracts data from the database storage unit (4), the data are extracted from the high-priority classifier (53), improving the interactive response speed of the output terminal.
5. The habit-analysis-based 3D remote interaction action optimization system according to claim 4, wherein the deep classification algorithm adopted by the deep learning module (5) comprises the following steps:
S1, determine the data to be classified: low-frequency data (55) and high-frequency data (56);
S2, establish the classifiers that describe the predefined data categories: a high-priority classifier (53) and a low-priority classifier (54);
S3, train the high-priority classifier (53) with high-frequency data (56) and the low-priority classifier (54) with low-frequency data (55);
S4, obtain the weights of the high-frequency data (56) and the low-frequency data (55) according to the deep classification algorithm formula and perform incremental training: successively select high-frequency data (56) in the low-priority classifier (54) and add them incrementally to the high-priority classifier (53), and add low-frequency data (55) in the high-priority classifier (53) incrementally to the low-priority classifier (54).
6. The habit-analysis-based 3D remote interaction action optimization system according to claim 5, wherein the deep classification algorithm formula is as follows:

w_i = tf_i × idf_i
idf_i = log(N / n_i)

wherein each piece of action information is regarded as a feature item t_i; w_i represents the weight of the action information t_i, i.e. the priority of t_i in the database; the action-information frequency tf_i is the number of occurrences of the feature item t_i in a classification database d, the classification database being the general term for the high-priority database (6) and the low-priority database (7); the classification-database frequency n_i refers to the number of databases in the whole database collection that contain the feature item t_i; idf_i is the inverse classification-database frequency, inversely proportional to n_i; and N is the total number of all classification databases.
7. The habit-analysis-based 3D remote interaction action optimization system according to claim 5, wherein: the high-frequency actions (51) are classified into a high-priority classifier (53), the weight w_i of each piece of action information in the high-priority classifier (53) is calculated there by the deep classification algorithm, and low-frequency data (55) with a low weight are downgraded to the low-priority classifier (54).
8. The habit-analysis-based 3D remote interaction action optimization system according to claim 5, wherein: the low-frequency actions (52) are classified into a low-priority classifier (54), the weight w_i of each piece of action information in the low-priority classifier (54) is calculated there by the deep classification algorithm, and high-frequency data (56) with a high weight are upgraded to the high-priority classifier (53).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310035578.XA CN115729356B (en) 2023-01-10 2023-01-10 Habit analysis-based 3D remote interaction action optimization system


Publications (2)

Publication Number Publication Date
CN115729356A 2023-03-03
CN115729356B 2023-07-21

Family

ID=85301994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310035578.XA Active CN115729356B (en) 2023-01-10 2023-01-10 Habit analysis-based 3D remote interaction action optimization system

Country Status (1)

Country Link
CN (1) CN115729356B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662471A (en) * 2012-04-09 2012-09-12 沈阳航空航天大学 Computer vision mouse
US20200097757A1 (en) * 2018-09-25 2020-03-26 Nec Laboratories America, Inc. Network reparameterization for new class categorization
CN113692551A (en) * 2019-04-30 2021-11-23 依视路国际公司 Method for determining an oriented 3D representation of a person's head in a natural vision pose
CN115268651A (en) * 2022-08-09 2022-11-01 重庆理工大学 Implicit gesture interaction method and system for steering wheel
CN115497133A (en) * 2022-09-14 2022-12-20 深圳市欧瑞博科技股份有限公司 User intelligent identification method, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN115729356B (en) 2023-07-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 502-503, Floor 5, Building 5, Hongtai Smart Valley, No. 19, Sicheng Road, Tianhe District, Guangzhou, Guangdong 518000

Applicant after: Guangdong Feidie Virtual Reality Technology Co.,Ltd.

Applicant after: XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY CO.,LTD.

Address before: 518000 3311, Floor 3, Building 1, Aerospace Building, No. 51, Gaoxin South 9th Road, High tech Community, Yuehai Street, Nanshan District, Shenzhen, Guangdong

Applicant before: Shenzhen FEIDIE Virtual Reality Technology Co.,Ltd.

Applicant before: XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY CO.,LTD.

TA01 Transfer of patent application right

Effective date of registration: 20230628

Address after: 710000 Building D, National Digital Publishing Base, No. 996 Tiangu 7th Road, Yuhua Street Office, High tech Zone, Xi'an City, Shaanxi Province

Applicant after: XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY CO.,LTD.

Address before: Room 502-503, Floor 5, Building 5, Hongtai Smart Valley, No. 19, Sicheng Road, Tianhe District, Guangzhou, Guangdong 518000

Applicant before: Guangdong Feidie Virtual Reality Technology Co.,Ltd.

Applicant before: XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY CO.,LTD.

GR01 Patent grant