Disclosure of Invention
The invention aims to provide a habit analysis-based 3D remote interactive action optimization system so as to solve the problems in the background art.
In order to achieve the above purpose, the invention provides a habit analysis-based 3D remote interactive action optimization system, which comprises a 3D vision module used for remotely tracking and capturing human limb actions. The 3D vision module inputs and stores the acquired information into a database storage unit, in which a deep learning module is integrated; the deep learning module divides the database storage unit into a high-priority database and a low-priority database. The output end of the database storage unit is connected with an interaction output unit, which matches the action information acquired by the 3D vision module against the database storage unit and realizes remote interaction by outputting response actions through an interaction terminal.
As a further improvement of the technical scheme, the 3D vision module comprises a 3D vision sensor, an action acquisition unit and an action recognition module, wherein the 3D vision sensor is used for carrying out image recognition, transmission and processing on action information. The recognized and processed action information is recorded and stored as action data in the database storage unit through the action acquisition unit; meanwhile, the action information is retrieved and compared with the information in the database storage unit through the action recognition module, and the matched action is extracted and output as a response action by the interaction terminal through the interaction output unit, so as to realize remote interaction.
As a further improvement of the technical scheme, the deep learning module carries out algorithm training and analysis on the limb interaction information acquired by the action acquisition unit to complete the learning and classification of the information in the database storage unit. The deep learning module divides limb actions into high-frequency actions and low-frequency actions, extracts the high-frequency actions in the database and temporarily stores them in the high-priority database, temporarily stores the low-frequency actions in the low-priority database, and realizes the transfer and exchange of data between the high-priority database and the low-priority database.
As a further improvement of the technical scheme, the deep learning module adopts a deep classification algorithm to establish an algorithm model and establishes two classifiers, a high-priority classifier and a low-priority classifier, which optimize and classify the action data in the database storage unit. The high-frequency data are classified into the high-priority classifier, and when the action recognition module extracts data from the database storage unit, it preferentially extracts from the high-priority classifier, thereby improving the interactive response speed of the output terminal.
As a further improvement of the technical scheme, the deep learning module adopts a deep classification algorithm, which comprises the following method steps:
S1, determining the data to be classified: low-frequency data and high-frequency data;
S2, establishing classifiers for describing the predefined data categories: a high-priority classifier and a low-priority classifier;
S3, training the high-priority classifier with the high-frequency data, and training the low-priority classifier with the low-frequency data;
S4, adopting incremental training according to the deep classification algorithm formula: partial unlabeled samples of the high-frequency data are selected successively and added into the high-priority classifier in an incremental manner, and the low-frequency data are likewise added into the low-priority classifier in an incremental manner.
As a further improvement of the technical scheme, the formula of the deep classification algorithm is as follows:
w_ij = tf_ij × idf_i = tf_ij × log(N / n_i)
wherein each piece of action information is regarded as a characteristic item t_i; w_ij represents the weight of the characteristic item t_i, i.e. the priority of t_i in the database;
tf_ij is the characteristic item frequency, namely the number of occurrences of the characteristic item t_i in the classification database d_j, where "classification database" refers collectively to the high-priority database and the low-priority database;
the classification database frequency n_i refers to the number of classification databases in the whole database set that contain the characteristic item t_i;
idf_i = log(N / n_i) is the inverse classification database frequency of t_i, where N is the total number of classification databases.
As a further improvement of the technical scheme, the high-frequency actions are classified into the high-priority classifier, the weight w_ij of each characteristic item in the high-priority classifier is calculated by the deep classification algorithm, and the low-frequency data with lower weights are downgraded into the low-priority classifier.
As a further improvement of the technical scheme, the low-frequency actions are classified into the low-priority classifier, the weight w_ij of each characteristic item in the low-priority classifier is calculated by the deep classification algorithm, and the high-frequency data with higher weights are upgraded into the high-priority classifier.
Compared with the prior art, the invention has the beneficial effects that:
in the habit analysis-based 3D remote interactive action optimization system, the action behavior habits of target objects are collected and analyzed, and a behavior storage database is established. The database storage unit is classified by priority: the deep classification algorithm classifies the interactive actions according to priority and applies higher-priority response optimization to the high-frequency actions, so that the interaction output unit can respond quickly during remote interaction, thereby improving the 3D interaction experience.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Examples
Referring to fig. 1-2, a habit analysis-based 3D remote interactive action optimization system is provided, which comprises a 3D vision module 1 used for remotely tracking and capturing human limb actions, including facial and hand interactive actions. The 3D vision module 1 performs image recognition and processing on the captured limb information and inputs the limb information to the action acquisition unit 2; the output end of the 3D vision module 1 is connected with the input end of the action acquisition unit 2. The 3D vision module 1 inputs and stores the acquired information into the database storage unit 4, in which a deep learning module 5 is integrated. The deep learning module 5 divides the database storage unit 4 into a high-priority database 6 and a low-priority database 7 and realizes data exchange between the high-priority database 6 and the low-priority database 7. The output end of the database storage unit 4 is connected with the interaction output unit 8; the interaction output unit 8 matches the action information acquired by the 3D vision module 1 against the database storage unit 4, and finally realizes remote interaction through the response action output by the interaction terminal, with both high-frequency and low-frequency action information being recorded. The output end of the action acquisition unit 2 is connected with the action recognition module 3, which retrieves and extracts the matched action from the database storage unit 4, and the interaction output unit 8 outputs the response action through the interaction terminal to realize remote interaction. As an example of an interaction terminal, a robot may remotely collect interaction information through the 3D vision module 1, thereby realizing 3D remote interaction between a user and the robot.
The 3D vision module 1 comprises a 3D vision sensor, the action acquisition unit 2 and the action recognition module 3. The 3D vision sensor is used for carrying out image recognition, transmission and processing on action information; it is an instrument that acquires external environment image information by utilizing an optical element and an imaging device, and a depth camera is generally adopted for remotely tracking and capturing human limb actions. The 3D vision sensor is a small machine vision system with image acquisition, image processing and information transmission functions; it is an embedded computer vision system that integrates the image sensor, a digital processor, a communication module and other peripherals into a single camera. The recognized and processed action information is recorded and stored as action data in the database storage unit 4 through the action acquisition unit 2; meanwhile, the action recognition module 3 retrieves and compares the action information with the information in the database storage unit 4 and extracts matching actions, which are output by the interaction terminal as response actions to realize remote interaction;
the action recognition module 3 preprocesses the action information; preprocessing is the process of performing a series of operations on the original image set to generate an image description feature library, and mainly includes scale unification, format conversion, gray-level processing and the like. The aim of preprocessing is to facilitate the extraction of image features and the calculation of similarity measures, so as to improve the retrieval efficiency of images;
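As an illustration only, the two preprocessing steps named above (gray-level processing and scale unification) can be sketched in pure Python; the luminance weights, nested-list image layout and nearest-neighbour resampling are illustrative choices, not part of the invention:

```python
def preprocess(image_rgb, target_size=(4, 4)):
    """Sketch of the preprocessing stage: gray-level conversion followed by
    scale unification via nearest-neighbour resampling (illustrative only)."""
    # Gray-level processing: weighted luminance average per (r, g, b) pixel.
    gray = [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in image_rgb]
    # Unified scale: nearest-neighbour resample to target_size.
    src_h, src_w = len(gray), len(gray[0])
    dst_h, dst_w = target_size
    return [[gray[i * src_h // dst_h][j * src_w // dst_w]
             for j in range(dst_w)]
            for i in range(dst_h)]
```

A real implementation would typically use an image library for these operations; the sketch only shows why every stored action image ends up with the same scale and gray-level format before feature extraction.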
further, the action recognition module 3 includes a feature extraction unit, which is the core of the action recognition module 3 and is responsible for extracting visual features of an image from the database storage unit 4. The visual features include color, shape, texture, spatial position relations and the like; the extracted features should effectively represent the image or be capable of distinguishing between images;
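One of the simplest visual features of the kind listed above (a color/gray-level feature) is a normalised histogram; the following toy extractor is a hypothetical sketch, not the feature extraction unit's actual method:

```python
def color_histogram(gray_image, bins=4):
    """Toy visual-feature extractor: a normalised gray-level histogram.
    Pixels are assumed to be integers in 0..255 (illustrative only)."""
    counts = [0] * bins
    total = 0
    for row in gray_image:
        for pixel in row:
            counts[pixel * bins // 256] += 1  # map 0..255 into `bins` buckets
            total += 1
    return [c / total for c in counts]  # normalise so features are comparable
```

Because the histogram is normalised, images of different sizes yield comparable feature vectors, which is the property the similarity measurement in retrieval relies on.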
the action recognition module 3 takes feedback on the retrieved image results and automatically adjusts the feature representation weights of the retrieval according to the user's selections, so that over repeated retrievals the information in the database storage unit 4 is compared and matched one by one. By adding the high-priority database 6, the action recognition module 3 preferentially retrieves and matches from the high-priority database 6, which reduces the retrieval workload and greatly improves retrieval efficiency.
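The retrieval-priority behaviour described above can be sketched as follows; the dictionary data layout, the similarity measure and the match threshold are hypothetical illustrations, not the invention's actual structures:

```python
def similarity(a, b):
    """Toy similarity measure: fraction of matching feature values."""
    shared = sum(1 for x, y in zip(a, b) if x == y)
    return shared / max(len(a), len(b))

def match_action(captured_features, high_priority_db, low_priority_db,
                 threshold=0.8):
    """Search the high-priority database first; fall back to the
    low-priority database only when no match is found there."""
    for db in (high_priority_db, low_priority_db):
        best_key, best_score = None, 0.0
        for action_key, stored_features in db.items():
            score = similarity(stored_features, captured_features)
            if score > best_score:
                best_key, best_score = action_key, score
        if best_score >= threshold:
            return best_key  # matched without scanning the slower database
    return None  # no match in either database
```

Because high-frequency actions sit in the first database scanned, the common case returns before the low-priority database is touched, which is the source of the claimed response-speed improvement.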
The deep learning module 5 performs algorithm training analysis on the limb interaction information acquired by the action acquisition unit 2 to complete learning and classification of the information in the database storage unit 4, the deep learning module 5 divides the limb actions into high-frequency actions 51 and low-frequency actions 52, extracts and temporarily stores the high-frequency actions 51 in the database into the high-priority database 6, and degrades the low-frequency data 55 in the high-priority database 6 into the low-priority database 7, temporarily stores the low-frequency actions 52 into the low-priority database 7, upgrades the high-frequency data 56 in the low-priority database 7 into the high-priority database 6, and realizes data transfer and exchange between the high-priority database 6 and the low-priority database 7 through the deep learning module 5.
The deep learning module 5 adopts a deep classification algorithm to establish an algorithm model and establishes two classifiers, the high-priority classifier 53 and the low-priority classifier 54, which optimize and classify the action data in the database storage unit 4. The high-frequency data 56 are classified into the high-priority classifier 53, and when the action recognition module 3 extracts data from the database storage unit 4, it preferentially extracts from the high-priority classifier 53, thereby improving the interactive response speed of the output terminal.
To illustrate how the deep learning module 5 divides the database storage unit 4 into the high-priority database 6 and the low-priority database 7, the deep classification algorithm adopted by the deep learning module 5 comprises the following method steps:
S1, determining the data to be classified: low-frequency data 55 and high-frequency data 56; the action data recorded in the database storage unit 4 are classified into high-frequency data 56 with relatively high weight and low-frequency data 55 with relatively low weight;
S2, establishing classifiers for describing the predefined data categories: a high-priority classifier 53 and a low-priority classifier 54;
S3, training the high-priority classifier 53 with the high-frequency data 56, and training the low-priority classifier 54 with the low-frequency data 55;
S4, adopting incremental training according to the deep classification algorithm formula: partial unlabeled samples of the high-frequency data 56 are selected successively and added into the high-priority classifier 53 in an incremental manner, and the low-frequency data 55 are likewise added into the low-priority classifier 54 in an incremental manner.
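Steps S1 to S4 can be sketched as a pair of incrementally trained classifiers; the count-based scoring, class names and sample format below are simplifying assumptions, not the patent's actual model:

```python
from collections import Counter

class PriorityClassifier:
    """Minimal incremental classifier: accumulates characteristic-item
    counts so new sample batches can be added at any time (step S4)."""
    def __init__(self, name):
        self.name = name
        self.counts = Counter()

    def train_increment(self, samples):
        # Each sample is a list of characteristic items (action descriptors);
        # incremental training just folds the new batch into the counts.
        for sample in samples:
            self.counts.update(sample)

    def score(self, sample):
        # Higher score means the sample resembles this classifier's data.
        return sum(self.counts[item] for item in sample)

def classify(sample, high_clf, low_clf):
    """Assign a sample to whichever classifier scores it higher."""
    return high_clf if high_clf.score(sample) >= low_clf.score(sample) else low_clf
```

The incremental form matters here: as usage habits shift, newly captured actions update the classifiers without retraining from scratch.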
The deep classification algorithm formula is as follows:
w_ij = tf_ij × idf_i = tf_ij × log(N / n_i)
wherein each piece of action information is regarded as a characteristic item t_i; w_ij represents the weight of the characteristic item t_i, i.e. the priority of t_i in the database;
tf_ij is the characteristic item frequency, namely the number of occurrences of the characteristic item t_i in the classification database d_j, where "classification database" refers collectively to the high-priority database 6 and the low-priority database 7;
the classification database frequency n_i refers to the number of classification databases in the whole database set that contain the characteristic item t_i;
idf_i = log(N / n_i) is the inverse classification database frequency of t_i, where N is the total number of classification databases.
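The weighting scheme defined in this section follows the classical TF-IDF pattern, w_ij = tf_ij × log(N / n_i); a minimal sketch (function and variable names are illustrative):

```python
import math

def feature_weight(tf_ij, n_i, N):
    """w_ij = tf_ij * log(N / n_i): characteristic-item frequency times
    the inverse classification-database frequency."""
    return tf_ij * math.log(N / n_i)
```

With two classification databases (N = 2), a characteristic item occurring 10 times in only one database (n_i = 1) gets weight 10·log 2 ≈ 6.93, while an item present in both databases (n_i = 2) gets weight 0: items concentrated in one database are the ones that signal priority.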
The action information in the database storage unit 4 is classified through the established algorithm model: the high-frequency actions 51 are classified into the high-priority classifier 53, the weight w_ij of each characteristic item in the high-priority classifier 53 is calculated by the deep classification algorithm, and the low-frequency data 55 with lower weights are downgraded into the low-priority classifier 54; similarly, the low-frequency actions 52 are classified into the low-priority classifier 54, the weight w_ij of each characteristic item in the low-priority classifier 54 is calculated by the deep classification algorithm, and the high-frequency data 56 with higher weights are upgraded into the high-priority classifier 53. When searching for a matching action, the action recognition module 3 preferentially searches the high-priority database 6, which effectively reduces the data retrieval and matching time and improves the response speed of the interaction output unit 8.
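The downgrade/upgrade exchange between the two priority databases can be sketched as a single rebalancing pass; the dictionary layout, the `weights` mapping (characteristic item to its computed w_ij) and the fixed threshold are hypothetical simplifications:

```python
def rebalance(high_db, low_db, weights, threshold):
    """Demote low-weight items out of the high-priority database and
    promote high-weight items out of the low-priority database."""
    demote = [item for item in high_db if weights.get(item, 0.0) < threshold]
    promote = [item for item in low_db if weights.get(item, 0.0) >= threshold]
    for item in demote:
        low_db[item] = high_db.pop(item)   # downgrade: high -> low
    for item in promote:
        high_db[item] = low_db.pop(item)   # upgrade: low -> high
    return high_db, low_db
```

Run periodically after the weights are recomputed, this keeps the high-priority database 6 populated with the currently habitual actions, so the preferential search stays fast as habits change.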
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the above-described embodiments, and that the above-described embodiments and descriptions are only preferred embodiments of the present invention, and are not intended to limit the invention, and that various changes and modifications may be made therein without departing from the spirit and scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.