Disclosure of Invention
The invention aims to provide a 3D remote interaction action optimization system based on habit analysis, so as to solve the problems in the background technology.
In order to achieve the above purpose, the invention provides a habit-analysis-based 3D remote interaction action optimization system comprising a 3D vision module. The 3D vision module is used for remotely tracking and capturing the actions of the limbs of a human body, and inputs and stores the collected information into a database storage unit. A deepening learning module is integrated in the database storage unit and divides the database storage unit into a high-priority database and a low-priority database. The output end of the database storage unit is connected with an interaction output unit, which matches the action information collected by the 3D vision module against the database storage unit, and response actions are output through an interaction terminal to realize remote interaction.
As a further improvement of the technical scheme, the 3D vision module comprises a 3D vision sensor, an action acquisition unit and an action recognition module. The 3D vision sensor performs image recognition, transmission and processing on the action information. The action information after recognition processing is recorded and stored in the database storage unit through the action acquisition unit; meanwhile, the action information is retrieved and compared with the information in the database storage unit through the action recognition module, and the matching action is extracted and output through the interaction output unit, the response action being output through the interaction terminal to realize remote interaction.
As a further improvement of the technical scheme, the deepening learning module performs algorithm training analysis on the limb interaction information acquired by the action acquisition unit to complete the learning and classification of the information in the database storage unit. The deepening learning module divides the limb actions into high-frequency actions and low-frequency actions, extracts the high-frequency actions in the database and temporarily stores them in the high-priority database, and temporarily stores the low-frequency actions in the low-priority database; the deepening learning module also transfers and exchanges data between the high-priority database and the low-priority database.
As a further improvement of the technical scheme, the deepening learning module adopts a depth classification algorithm to establish an algorithm model and establishes two classifiers: a high-priority classifier and a low-priority classifier, which optimally classify the action data in the database storage unit. High-frequency data are classified into the high-priority classifier; when the action recognition module extracts data from the database storage unit, the data are extracted preferentially from the high-priority classifier, which improves the interactive response speed of the output terminal.
As a further improvement of the technical solution, the depth classification algorithm adopted by the deepening learning module comprises the following steps:
S1, determining the data to be classified: low-frequency data and high-frequency data;
S2, establishing classifiers for describing the predefined data categories: a high-priority classifier and a low-priority classifier;
S3, training the high-priority classifier with the high-frequency data, and training the low-priority classifier with the low-frequency data;
S4, according to the depth classification algorithm formula, adopting incremental training: part of the unlabeled samples of the high-frequency data are selected successively and added incrementally to the high-priority classifier, and part of the unlabeled samples of the low-frequency data are added incrementally to the low-priority classifier.
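The steps S1 to S4 above can be sketched as follows. This is a minimal illustration only; the class name, action labels and counting scheme are assumptions, since the specification does not fix an implementation:

```python
from collections import Counter

class PriorityClassifier:
    """Toy classifier: keeps per-action occurrence counts (hypothetical sketch)."""
    def __init__(self, name):
        self.name = name
        self.counts = Counter()

    def train(self, actions):
        # S3: batch training on an initial set of labeled action samples
        self.counts.update(actions)

    def add_incremental(self, actions):
        # S4: incremental training -- new samples are folded in successively
        self.counts.update(actions)

# S1: data to be classified (hypothetical action labels)
high_freq = ["wave", "nod", "wave", "point", "wave"]
low_freq = ["kneel", "spin"]

# S2: two predefined classifiers
high_clf = PriorityClassifier("high-priority")
low_clf = PriorityClassifier("low-priority")

# S3: train each classifier with its own data
high_clf.train(high_freq)
low_clf.train(low_freq)

# S4: incremental additions
high_clf.add_incremental(["wave"])
low_clf.add_incremental(["spin"])

print(high_clf.counts["wave"])  # 4
```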
As a further improvement of the technical solution, the depth classification algorithm formula is as follows:
w_i = tf_(i,d) × idf_i = tf_(i,d) × log(N / df_i)
wherein each piece of action information is regarded as a characteristic item t_i; w_i represents the weight of the action information t_i, i.e. its priority in the database; the action information frequency tf_(i,d) is the number of occurrences of the characteristic item t_i in a classification database d, "classification database" being the general name for the high-priority database and the low-priority database; the classification database frequency df_i is the number of classification databases in the whole database collection that contain the characteristic item t_i; idf_i is the inverse classification database frequency, which is inversely proportional to df_i; and N is the total number of all classification databases.
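Assuming the weight is the TF-IDF-style quantity described above (an occurrence count scaled by a factor inversely related to the classification database frequency), it can be computed as in this hypothetical sketch; the database contents are invented for illustration:

```python
import math

def action_weight(action, database, all_databases):
    """TF-IDF-style priority weight of one action in one classification database.

    tf  = occurrence count of the action in this database
    df  = number of classification databases containing the action
    idf = log(N / df), inversely proportional to df
    """
    tf = database.count(action)
    df = sum(1 for db in all_databases if action in db)
    n = len(all_databases)
    return tf * math.log(n / df) if df else 0.0

high_db = ["wave", "wave", "nod", "wave"]
low_db = ["kneel", "nod"]
dbs = [high_db, low_db]

w_wave = action_weight("wave", high_db, dbs)  # 3 * log(2/1), a high weight
w_nod = action_weight("nod", high_db, dbs)    # appears in every database -> 0.0
```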
As a further improvement of the technical scheme, the high-frequency actions are classified into the high-priority classifier, the weight of each piece of action information in the high-priority classifier is calculated by the depth classification algorithm, and low-frequency data with a low weight score are downgraded to the low-priority classifier.
As a further improvement of the technical scheme, the low-frequency actions are classified into the low-priority classifier, the weight of each piece of action information in the low-priority classifier is calculated by the depth classification algorithm, and high-frequency data with a high weight score are upgraded to the high-priority classifier.
Compared with the prior art, the invention has the following beneficial effects:
in the 3D remote interaction action optimization system based on habit analysis, the action behavior habits of a target object are collected and analyzed, an action storage database is established, and the database storage unit is classified by priority. The interaction actions are prioritized through the depth classification algorithm, and the high-frequency actions receive high-priority response optimization, so that the interaction output unit can respond faster in the remote interaction process, further improving the 3D interaction experience.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
Please refer to fig. 1-2. The embodiment provides a 3D remote interaction action optimization system based on habit analysis, comprising a 3D vision module 1 for remotely tracking and capturing the actions of the limbs of the human body; in particular, the 3D vision module 1 tracks and captures the interactive actions of the face and hands. The 3D vision module 1 performs image recognition and processing on the captured limb information and inputs it to an action acquisition unit 2, the output end of the 3D vision module 1 being connected with the input end of the action acquisition unit 2. The 3D vision module 1 inputs and stores the acquired information into a database storage unit 4, in which a deepening learning module 5 is integrated. The deepening learning module 5 divides the database storage unit 4 into a high-priority database 6 and a low-priority database 7, recording high-frequency action information and low-frequency action information, and realizes data exchange between the high-priority database 6 and the low-priority database 7. The output end of the database storage unit 4 is connected with an interaction output unit 8, which matches the action information collected by the 3D vision module 1 against the database storage unit 4. The output end of the action acquisition unit 2 is connected with an action recognition module 3, which retrieves and extracts the matching action from the database storage unit 4; the interaction output unit 8 then outputs a response action through an interaction terminal to realize remote interaction. As an example of an interaction terminal, a robot remotely collects the interactive information through the 3D vision module 1, realizing 3D remote interaction between the user and the robot.
The 3D vision module 1 comprises a 3D vision sensor, the action acquisition unit 2 and the action recognition module 3. The 3D vision sensor performs image recognition, transmission and processing on the action information. It is an instrument that acquires image information of the external environment by using an optical element and an imaging device, usually a depth camera, so as to remotely track and capture the actions of human limbs; it is a small machine vision system with image acquisition, image processing and information transmission functions, namely an embedded computer vision system that integrates an image sensor, a digital processor, a communication module and other peripherals into a single camera. The action information after recognition processing is recorded and stored in the database storage unit 4 through the action acquisition unit 2; meanwhile, the action information is retrieved and compared with the information in the database storage unit 4 through the action recognition module 3, and the matching action is extracted and output through the interaction output unit 8, the response action being output through the interaction terminal to realize remote interaction.
The action recognition module 3 preprocesses the action information. Preprocessing is the process of applying a series of operations to the original image set to generate an image description feature library, and mainly includes scale unification, format conversion, grayscale processing and the like; its purpose is to facilitate the extraction of image features and the calculation of similarity measures, so as to improve the retrieval efficiency of the images.
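The preprocessing steps named above (scale unification and grayscale processing) can be sketched as follows. This is a minimal standard-library illustration; the nested-list pixel layout and luma weights are assumptions, and a real system would use an image-processing library:

```python
def to_grayscale(rgb_image):
    """Convert a nested-list RGB image to grayscale (ITU-R BT.601 luma weights)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def unify_scale(gray_image, out_h, out_w):
    """Nearest-neighbour resize so all images share one scale before feature extraction."""
    in_h, in_w = len(gray_image), len(gray_image[0])
    return [[gray_image[i * in_h // out_h][j * in_w // out_w]
             for j in range(out_w)]
            for i in range(out_h)]

img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
gray = to_grayscale(img)         # 2x2 grayscale image
small = unify_scale(gray, 1, 1)  # unified to a 1x1 scale
```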
Further, the action recognition module 3 comprises a feature extraction unit, which is the core of the action recognition module 3 and is responsible for extracting visual features of the image from the database storage unit 4. The visual features include color, shape, texture, spatial position relationship and the like; the extracted features should effectively represent the image or be able to distinguish between images.
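As a hypothetical illustration of one such visual feature, a coarse intensity histogram can serve as a simple color descriptor; the specification does not fix a particular descriptor, so the binning below is an assumption:

```python
def color_histogram(gray_image, bins=4):
    """Coarse intensity histogram over a grayscale image, usable as a color feature."""
    hist = [0] * bins
    for row in gray_image:
        for px in row:
            # map pixel value 0..255 into one of `bins` equal-width buckets
            hist[min(px * bins // 256, bins - 1)] += 1
    return hist

feature = color_histogram([[0, 64], [128, 255]])  # one pixel lands in each bin
```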
The action recognition module 3 feeds back the result extracted from the image retrieval; the weight of the retrieved feature representation can be automatically adjusted according to the user's selection, so that multiple rounds of retrieval are carried out and the information in the database storage unit 4 is compared and matched one by one. By adding the high-priority database 6, the action recognition module 3 first retrieves and matches from the high-priority database 6, which reduces the retrieval computation and greatly improves the retrieval efficiency.
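The two-tier lookup described here, probing the high-priority database 6 before falling back to the low-priority database 7, can be sketched as follows; the mapping layout and the action names are assumptions:

```python
def match_action(feature, high_priority_db, low_priority_db):
    """Search high-priority entries first; fall back to the low-priority set.

    Each database maps a feature key to a response action (hypothetical layout).
    """
    if feature in high_priority_db:
        return high_priority_db[feature], "high"
    if feature in low_priority_db:
        return low_priority_db[feature], "low"
    return None, "miss"

high_db = {"wave": "wave_back"}   # frequent actions: checked first
low_db = {"kneel": "bow"}         # rare actions: checked only on a miss

resp, tier = match_action("wave", high_db, low_db)  # resolved in the fast tier
```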
The deepening learning module 5 performs algorithm training analysis on the limb interaction information acquired by the action acquisition unit 2 to complete the learning and classification of the information in the database storage unit 4. The deepening learning module 5 divides the limb actions into high-frequency actions 51 and low-frequency actions 52: it extracts the high-frequency actions 51 in the database and temporarily stores them in the high-priority database 6, downgrades the low-frequency data 55 in the high-priority database 6 to the low-priority database 7, temporarily stores the low-frequency actions 52 in the low-priority database 7, and upgrades the high-frequency data 56 in the low-priority database 7 to the high-priority database 6, thereby realizing data transfer and exchange between the high-priority database 6 and the low-priority database 7.
The deepening learning module 5 adopts a depth classification algorithm to establish an algorithm model and establishes two classifiers: a high-priority classifier 53 and a low-priority classifier 54, which optimally classify the action data in the database storage unit 4. The high-frequency data 56 are classified into the high-priority classifier 53; when the action recognition module 3 extracts data from the database storage unit 4, the data are extracted preferentially from the high-priority classifier 53, which improves the interactive response speed of the output terminal.
To illustrate how the deepening learning module 5 divides the database storage unit 4 into the high-priority database 6 and the low-priority database 7, the depth classification algorithm adopted by the deepening learning module 5 comprises the following steps:
S1, determining the data to be classified: low-frequency data 55 and high-frequency data 56; the action data recorded in the database storage unit 4 are classified, the high-frequency data 56 having a high weight score and the low-frequency data 55 a low weight score;
S2, establishing classifiers for describing the predefined data categories: a high-priority classifier 53 and a low-priority classifier 54;
S3, training the high-priority classifier 53 with the high-frequency data 56, and training the low-priority classifier 54 with the low-frequency data 55;
S4, according to the depth classification algorithm formula, adopting incremental training: part of the unlabeled samples of the high-frequency data 56 are selected successively and added incrementally to the high-priority classifier 53, and part of the unlabeled samples of the low-frequency data 55 are added incrementally to the low-priority classifier 54.
The depth classification algorithm formula is as follows:
w_i = tf_(i,d) × idf_i = tf_(i,d) × log(N / df_i)
wherein each piece of action information is regarded as a characteristic item t_i; w_i represents the weight of the action information t_i, i.e. its priority in the database; the action information frequency tf_(i,d) is the number of occurrences of the characteristic item t_i in a classification database d, "classification database" being the general term for the high-priority database 6 and the low-priority database 7; the classification database frequency df_i is the number of classification databases in the whole database collection that contain the characteristic item t_i; idf_i is the inverse classification database frequency, which is inversely proportional to df_i; and N is the total number of all classification databases.
Classifying the action information in the database storage unit 4 through the established algorithm model, the high-frequency actions 51 are classified into the high-priority classifier 53, the weight of each piece of action information in the high-priority classifier 53 is calculated by the depth classification algorithm, and low-frequency data 55 with a low weight score are downgraded to the low-priority classifier 54. Similarly, the low-frequency actions 52 are classified into the low-priority classifier 54, the weight of each piece of action information in the low-priority classifier 54 is calculated by the depth classification algorithm, and high-frequency data 56 with a high weight score are upgraded to the high-priority classifier 53. When the action recognition module 3 searches for and matches actions, the high-priority database 6 is searched preferentially, which effectively shortens the time for retrieving and matching data information and increases the response speed of the interaction output unit 8.
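The promote/demote exchange between the two classifiers can be sketched as a single threshold pass over the computed weights; the threshold value, weight table and action names below are assumptions for illustration:

```python
def rebalance(high_db, low_db, weight_of, threshold):
    """Demote low-weight actions from high_db; promote high-weight ones from low_db.

    weight_of(action) stands in for the depth-classification weight
    (e.g. the TF-IDF-style score described in the specification).
    """
    demote = {a for a in high_db if weight_of(a) < threshold}
    promote = {a for a in low_db if weight_of(a) >= threshold}
    high_db = (high_db - demote) | promote
    low_db = (low_db - promote) | demote
    return high_db, low_db

weights = {"wave": 5.0, "nod": 0.5, "spin": 3.0, "kneel": 0.2}
high, low = rebalance({"wave", "nod"}, {"spin", "kneel"}, weights.get, 1.0)
# high -> {"wave", "spin"}, low -> {"nod", "kneel"}
```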
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and the preferred embodiments of the present invention are described in the above embodiments and the description, and are not intended to limit the present invention. The scope of the invention is defined by the appended claims and equivalents thereof.