CN113807280A - Kinect-based virtual ship cabin system and method - Google Patents

Kinect-based virtual ship cabin system and method

Info

Publication number
CN113807280A
CN113807280A
Authority
CN
China
Prior art keywords
skeleton
bone
data
class
kinect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111115447.XA
Other languages
Chinese (zh)
Inventor
甘辉兵
李治显
王鑫鑫
李佳伟
朱嘉涵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202111115447.XA priority Critical patent/CN113807280A/en
Publication of CN113807280A publication Critical patent/CN113807280A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/243 - Classification techniques relating to the number of classes
    • G06F 18/24323 - Tree-organised classifiers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a Kinect-based virtual ship cabin system and method. The system includes an action gesture library for storing skeletons and skeleton chains of known classes; a data acquisition module for acquiring three-dimensional skeleton joint points of human body postures through a 3D motion sensing camera and constructing a skeleton node data matrix; a data analysis module for constructing a sample covariance matrix of the skeleton node data from the obtained skeleton joint points, solving its eigenvectors and eigenvalues, projecting the skeleton samples onto the eigenvectors to obtain principal component feature sets, calculating the Euclidean distances of the principal component features of the skeleton data, and calculating the similarity between skeletons of unknown classes and skeletons of known classes; and an output module for identifying skeletons of unknown classes as skeletons of known classes, mapping the skeleton information to drive the virtual human, calculating the interaction semantic category, and triggering the feedback of the corresponding posture action of the virtual human in the action gesture library. The invention imposes a low interaction burden on operators and achieves high human-computer interaction efficiency.

Description

Kinect-based virtual ship cabin system and method
Technical Field
The invention relates to the technical field of marine engine (turbine) automation and intelligence, and in particular to a Kinect-based virtual ship engine room system and method.
Background
The International Maritime Organization (IMO) convention on Standards of Training, Certification and Watchkeeping for Seafarers clearly stipulates that senior marine engineers must obtain certificates of competency issued by the competent administration before officially taking up their duties. One approach to obtaining such certification is to train on an engine room (turbine) simulator approved by the administration. At present, the turbine simulators used by training institutions all combine semi-mechanical simulation with a two-dimensional or three-dimensional computer software interface, and cannot give engine department trainees a truly immersive experience. The semi-mechanical simulation is itself a human-machine interaction (HMI) mode, but in teaching and training on a turbine simulator it can only achieve local simulation; the instruments of the whole ship cannot all be reproduced as mechanical models. The two-dimensional simulation interface or three-dimensional virtual reality interface complements the semi-mechanical form: the internal cabin structure of the ship can be fully presented on a computer display, and the operator can operate the equipment on the screen with a keyboard and mouse. However, operating the equipment in this way is still complex and unnatural, the interaction burden on the operator is high, and the human-computer interaction efficiency is low.
Disclosure of Invention
In light of the above-mentioned technical problems, a virtual ship engine room system and method based on Kinect are provided. The technical means adopted by the invention are as follows:
a Kinect-based virtual marine engine room system comprising:
the motion posture library is used for storing bones and bone chains of known classes, wherein different motion forms map different mode instructions;
the data acquisition module is used for acquiring three-dimensional bone joint points of human body postures through the 3D motion sensing camera and constructing a bone joint data matrix;
the data analysis module is used for constructing a sample covariance matrix of the bone node data based on the obtained bone joint points and solving its eigenvectors and eigenvalues, and for projecting the skeleton samples onto the eigenvectors to obtain a principal component feature set; the module is also used for calculating the Euclidean distances of the principal component features of the skeleton data and calculating the similarity between skeletons of an unknown class and skeletons of a known class;
and the output module is used for identifying the bones of unknown classes as the bones of known classes, mapping the bone information to drive the virtual human, calculating the interactive semantic classes, and triggering the feedback of the corresponding posture actions of the virtual human in the action posture library.
Further, in the data acquisition process, the data acquisition module performs noise reduction on distorted bones by Gaussian filtering, and discretization yields the Gaussian noise elimination function:

H(i, j) = (1 / (2πσ²)) · exp(−((i − k − 1)² + (j − k − 1)²) / (2σ²)),  1 ≤ i, j ≤ 2k + 1

where σ is the bone data variance, k determines the dimension of the kernel matrix (the kernel is (2k + 1) × (2k + 1)), i is the horizontal coordinate of the bone node, and j is the vertical coordinate of the bone node.
Further, the human body bone joint coordinate data [X, Y, Z] acquired by the data acquisition module are distance coordinates in the camera coordinate system. A maximum-minimum normalization method is adopted to normalize the 25 joint points of the human skeleton in each frame, and the skeleton data matrix is substituted into the maximum-minimum normalization formula to obtain a new skeleton node data matrix:

x_i' = (x_i − x_min) / (x_max − x_min)

where x_max and x_min are the maximum and minimum of the corresponding coordinate over the frame, x_i' is the newly computed value of x_i, and the y and z coordinates are treated in the same way.
Further, the data analysis module is also configured to remove feature dimensions that are not relevant to identifying the bone posture.
Further, a sample covariance matrix R of the bone node data is constructed by using a PCA algorithm, and the eigenvectors and eigenvalues are solved from

|R − λI_p| = 0

where λ is the eigenvalue to be solved and I_p is the identity matrix.

The skeleton sample is projected onto the eigenvectors to obtain the principal components:

Y = Z · X

where X is the skeleton sample and Z is the matrix obtained by arranging the eigenvectors as rows from top to bottom in decreasing order of the corresponding eigenvalues and taking the first k rows.

The basic formula for the Euclidean distance is:

d(a, b) = √( Σ_k (a_k − b_k)² )

where a_k and b_k are the skeleton features of the class to be identified and of the known class, respectively.

The similarity between the principal component feature vectors of the skeletons is computed from this distance, where A and B denote the unknown-class skeleton frame and the known-class skeleton frame, and Y_A and Y_B denote the principal component feature sets corresponding to skeleton frames A and B, respectively.
Furthermore, the output module adopts a median filtering method, a nonlinear smoothing technique: the real-time skeleton frame data at the current moment are solved from the 12 frames before and the 12 frames after the skeleton frame at each moment, nonlinear signal noise from unstable bones is suppressed through order statistics, and disordered skeleton data are eliminated.
A Kinect-based virtual ship cabin using method comprises the following steps (a compact end-to-end sketch is given after the list):
step 1, acquiring three-dimensional bone joint points of human body postures through a 3D motion sensing camera, and acquiring bone data of continuous actions of an interactor;
step 2, adopting Gaussian filtering to perform noise reduction treatment on the collected distorted bones;
step 3, in the skeleton input of the posture mode recognition, adopting a maximum-minimum normalization method to perform normalization processing on 25 joint points of the human skeleton of each frame;
step 4, eliminating features irrelevant to the class labels and features weakly relevant to the class labels, and then calculating the similarity between the bones of the unknown class and the bones of the known class;
and 5, recognizing the skeleton of the unknown class as the skeleton of the known class, mapping the skeleton information to drive the virtual human, calculating the interactive semantic class, and triggering feedback of the corresponding posture action of the virtual human in the action posture library.
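By way of illustration only, the following Python/NumPy sketch strings the five steps together. The helper names (normalize_frame, classify_pose), the array shapes, and the random stand-ins used in place of the Kinect capture and the trained action gesture library are assumptions of this example, not part of the invention; Gaussian denoising of the raw capture (step 2) is sketched separately in the detailed description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (placeholder for the Kinect capture): one frame of 25 joints with x, y, z in metres,
# plus a stand-in set of known-class skeleton feature vectors and their class labels.
frame = rng.random((25, 3))
known_features = rng.random((60, 75))
known_labels = np.array(["front", "back", "left", "right", "squat", "stand"])[rng.integers(0, 6, 60)]

def normalize_frame(f):
    """Step 3: max-min normalization of the 25 joint points, flattened to a 75-dim feature set."""
    lo, hi = f.min(axis=0), f.max(axis=0)
    return ((f - lo) / (hi - lo)).ravel()

# Step 4: project onto principal components of the known skeletons and take the nearest class.
mean = known_features.mean(axis=0)
components = np.linalg.svd(known_features - mean, full_matrices=False)[2][:10]  # top-10 directions

def classify_pose(feature):
    y = components @ (feature - mean)               # principal component feature set of the query
    ys = (known_features - mean) @ components.T     # projections of the known-class skeletons
    dists = np.linalg.norm(ys - y, axis=1)          # Euclidean distances
    return known_labels[int(np.argmin(dists))]

# Step 5: the recognised class is the mode instruction used to drive the virtual human.
print("instruction:", classify_pose(normalize_frame(frame)))
```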
The invention has the following advantages:
the method reduces the application threshold of an operator and improves the natural friendliness of interaction, human body action interaction of a human body is taken as a new man-machine interaction mode, and the method has the characteristics of rich interaction semantic expression, more friendly interaction and the like. In the virtual simulation oriented operation interaction process, the human body action interaction process is integrated to be more natural and efficient. For the training field, the real immersive experience is brought to turbine trainers, the operation equipment is simple and clear, the interaction is natural, the interaction burden of operators is low, and the human-computer interaction efficiency is low.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of the data acquisition process of the present invention.
FIG. 2 is a schematic diagram of the data analysis process of the present invention.
FIG. 3 is an overall flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The virtual simulation of the ship engine room is an important component of a crew training system, helping trainees understand the relations between engine room equipment and its systems. To achieve a natural human-computer interaction experience in virtual reality, special input equipment needs to be designed. Traditional interaction modes mainly comprise the keyboard and mouse, three-dimensional space locators and data gloves, which are complex to operate and give an unnatural interaction experience. To address these problems, a Kinect-based human body action recognition interaction system is provided, which improves the human-computer interaction efficiency. As shown in FIG. 1 to FIG. 3, the system comprises:
the data acquisition module is used for acquiring three-dimensional bone joint points of human body postures through the 3D motion sensing camera and constructing a bone joint data matrix. Specifically, skeletal data of continuous actions of the interactor is obtained through application of Kinect underlying hardware and the SDK.
The data analysis module is used for constructing a sample covariance matrix of the bone node data based on the obtained bone joint points and solving its eigenvectors and eigenvalues, and for projecting the skeleton samples onto the eigenvectors to obtain a principal component feature set; the module is also used for calculating the Euclidean distances of the principal component features of the skeleton data and calculating the similarity between skeletons of an unknown class and skeletons of a known class;
the output module is used for identifying the bones of unknown classes as the bones of known classes, mapping the bone information to drive the virtual human, calculating the interactive semantic classes, and triggering the feedback of the corresponding posture actions of the virtual human in the action posture library;
and the motion posture library is used for storing the bones of the known classes and the bone chains, wherein different motion forms map different mode instructions.
In the data acquisition process of the data acquisition module, the three-dimensional bone joint points of the human body posture are estimated by Microsoft's decision-tree classification of the three-dimensional form of the human body depth image. The algorithm has certain limitations: for some complex actions, or when posture joints are occluded, the joint coordinate data estimated by calling the Kinect SDK may be distorted or wrong, which increases the difficulty of recognizing the interactor's action posture pattern. In order to obtain stable input for posture pattern recognition, Gaussian filtering is applied to the distorted bones for noise reduction, and discretization yields the Gaussian noise elimination function:

H(i, j) = (1 / (2πσ²)) · exp(−((i − k − 1)² + (j − k − 1)²) / (2σ²)),  1 ≤ i, j ≤ 2k + 1

where σ is the bone data variance, k determines the dimension of the kernel matrix (the kernel is (2k + 1) × (2k + 1)), i is the horizontal coordinate of the bone node, and j is the vertical coordinate of the bone node.
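By way of illustration, a discretized Gaussian kernel of this form can be built as follows; the function name gaussian_kernel and the (2k + 1) × (2k + 1) kernel size are assumptions of this sketch rather than details fixed by the text.

```python
import numpy as np

def gaussian_kernel(k, sigma):
    """(2k+1) x (2k+1) discretized Gaussian noise-elimination kernel H(i, j),
    with i and j running from 1 to 2k+1 as in the formula above."""
    idx = np.arange(1, 2 * k + 2)          # i, j = 1, ..., 2k+1
    d2 = (idx - k - 1) ** 2
    H = np.exp(-(d2[:, None] + d2[None, :]) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return H / H.sum()                     # normalise so the weights sum to 1

# Example: a 5 x 5 kernel (k = 2) that could be convolved over the neighbourhood of a
# bone node's image coordinates to suppress distorted measurements.
print(gaussian_kernel(2, 1.2).round(3))
```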
The coordinate data [X, Y, Z] of the human skeleton joints acquired by the data acquisition module are distance coordinates based on the camera coordinate system, in units of metres, i.e. dimensional physical values representing Cartesian coordinates in real space. After the filtering pre-treatment, the dimensional character of the bone posture is still retained. For the skeleton input of gesture pattern recognition, dimensionless data benefits the calculation, recognition and classification of the algorithm model as well as parameter setting and adjustment. The 25 joint points of the human skeleton in each frame are therefore normalized with the maximum-minimum method; substituting the skeleton data matrix into the maximum-minimum normalization formula gives a new skeleton node data matrix:

x_i' = (x_i − x_min) / (x_max − x_min)

where x_max and x_min are the maximum and minimum of the corresponding coordinate over the frame, and the left-hand side x_i' is the newly computed x_i (the y and z coordinates are handled in the same way).
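A minimal sketch of this normalization step, assuming one frame is stored as a 25 × 3 NumPy array of joint coordinates; the values below are illustrative only.

```python
import numpy as np

def max_min_normalize(frame):
    """Map each coordinate axis of a 25 x 3 joint matrix to [0, 1] using the max-min formula."""
    x_min = frame.min(axis=0)
    x_max = frame.max(axis=0)
    return (frame - x_min) / (x_max - x_min)

frame = np.array([[0.1, 0.5, 2.3]] * 24 + [[0.4, 1.2, 2.9]])  # 25 joints, coordinates in metres
print(max_min_normalize(frame)[-1])                            # -> [1. 1. 1.]
```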
Different human body interaction actions correspond to different skeleton postures and express different interaction semantics. The action gestures of the interactor therefore have to be recognized by their semantics. According to the definition and classification of the semantic labels of the action gestures, each frame of skeleton data is a feature set of the gesture action; such feature sets conveniently describe and depict the gesture actions, and gesture actions of the same class carry the same interactive meaning. For the recognition and classification of multi-class labels, the K-nearest neighbor algorithm is easy to implement and requires no training. Since a skeleton posture has 25 joint points and each joint point has three-dimensional coordinates x, y and z, the feature set of one skeleton frame has length 75, as follows.
f_n = [x_0, y_0, z_0, x_1, y_1, z_1, …, x_23, y_23, z_23, x_24, y_24, z_24], where f_n is the original skeleton frame feature set.
Since the skeleton posture feature set f_n is built from joint points that each carry x, y and z coordinate data, a skeleton point structure type is constructed to store the coordinate data of each joint point. A human body skeleton data processing class is also created to collect and process the skeleton node data; its artisConnection array stores the connection relations of the skeleton joint chains, and each set of connected skeleton nodes forms one section of a bone chain.
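A minimal sketch of such data structures; the names (SkeletonPoint, SkeletonProcessor, joints_connection) are illustrative stand-ins for the structure type and the artisConnection array described above, and the joint indices shown are a subset of the Kinect v2 joint hierarchy used only as an example.

```python
from dataclasses import dataclass

@dataclass
class SkeletonPoint:
    """Structure type storing the three-dimensional coordinates of one joint."""
    x: float
    y: float
    z: float

class SkeletonProcessor:
    """Collects and processes skeleton node data; each pair in joints_connection is one
    segment of a connected bone chain (spine and left arm shown as an example)."""
    joints_connection = [(0, 1), (1, 20), (20, 2), (2, 3),   # spine base -> head
                         (20, 4), (4, 5), (5, 6), (6, 7)]    # left shoulder -> left hand

    def __init__(self):
        self.frames = []                                     # list of 25-joint frames

    def add_frame(self, joints):
        """joints: iterable of 25 SkeletonPoint objects for one captured frame."""
        self.frames.append(list(joints))

    def bone_chain(self, frame_index):
        """Return the coordinate pairs of each connected bone segment in one frame."""
        frame = self.frames[frame_index]
        return [(frame[a], frame[b]) for a, b in self.joints_connection]
```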
Different action modes map to different mode instructions, for example a forward walking mode, a backward walking mode, a leftward walking mode, a rightward walking mode, a squatting mode and a standing mode; each mode generates a corresponding instruction, namely front, back, left, right, squat and stand, in that order.
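For example, such a mapping could be kept as a simple dictionary; the mode keys below are illustrative names, while the instruction strings follow the list above.

```python
# Hypothetical mapping from recognised action posture mode to the triggered mode instruction.
MODE_INSTRUCTIONS = {
    "walk_forward": "front",
    "walk_backward": "back",
    "walk_left": "left",
    "walk_right": "right",
    "squat": "squat",
    "stand": "stand",
}

def instruction_for(mode: str) -> str:
    """Return the command sent to the virtual cabin for a recognised posture mode."""
    return MODE_INSTRUCTIONS[mode]
```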
The different action posture modes of the interactor need to be recognized and classified so that the posture forms of the interactor can be expressed. The features to be removed mainly comprise features irrelevant to the class labels and features only weakly relevant to the class labels; the similarity between skeletons of unknown classes and skeletons of known classes is then calculated. To this end, a fused PCA-KNN algorithm is proposed to obtain the classification labels of the skeletons. Its flow chart is shown in FIG. 3.
Many feature dimensions, such as fingertip and thumb joint data, are not very relevant to identifying the skeleton posture, so these feature dimensions need to be removed.
A sample covariance matrix R of the skeleton node data is constructed with the PCA algorithm, where x̄ denotes the average of the 25 joint point coordinates x_1, x_2, …, x_25, and the means of the other coordinates are defined in the same way. The eigenvectors and eigenvalues are solved from

|R − λI_p| = 0

where λ is the eigenvalue to be solved and I_p is the identity matrix.
The skeleton sample is projected onto the eigenvectors to obtain the principal components:

Y = Z · f_n

where Z is the matrix obtained by arranging the eigenvectors as rows from top to bottom in decreasing order of the corresponding eigenvalues and taking the first k rows.
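A NumPy sketch of these PCA steps (sample covariance matrix, eigen-decomposition of |R − λI_p| = 0, and projection onto the first k eigenvectors); the function name pca_project and the random stand-in data are assumptions of the example.

```python
import numpy as np

def pca_project(samples, query, k):
    """samples: (n, 75) known skeleton feature sets; query: (75,) skeleton to be identified.
    Returns the principal component feature sets of the samples and of the query."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    R = np.cov(centered, rowvar=False)            # sample covariance matrix of the 75 features
    eigvals, eigvecs = np.linalg.eigh(R)          # solves |R - lambda I_p| = 0
    order = np.argsort(eigvals)[::-1]             # sort eigenvectors by decreasing eigenvalue
    Z = eigvecs[:, order[:k]].T                   # first k eigenvectors as rows -> matrix Z
    return centered @ Z.T, Z @ (query - mean)

# Illustrative use with random stand-ins for the known and unknown skeleton frames.
rng = np.random.default_rng(1)
known = rng.random((40, 75))
unknown = rng.random(75)
known_pc, unknown_pc = pca_project(known, unknown, k=10)
print(known_pc.shape, unknown_pc.shape)           # (40, 10) (10,)
```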
The Euclidean distances of the principal component features of the skeleton data are then calculated. A parameter k is set for ranking these distances, and a queue of size k, sorted from the largest distance to the smallest, is created to store the principal component features of the nearest trained skeletons. First, k skeletons are selected at random from the known classes as the initial nearest skeletons, the distances from the skeleton to be identified to these k skeletons are calculated, and the known-class skeleton labels and distances are stored in the priority queue. The set of known-class skeletons is then traversed: the distance between the current skeleton and the skeleton to be identified is calculated and compared with the largest distance in the priority queue, which is replaced when the new distance is smaller. After the traversal, the majority class among the k skeletons in the priority queue is taken as the class of the skeleton to be identified.
The basic formula for the Euclidean distance is:

d(a, b) = √( Σ_k (a_k − b_k)² )

where a_k and b_k are the skeleton features of the class to be identified and of the known class, respectively. The similarity between the principal component feature vectors of the skeletons is computed from this distance, where A and B denote the unknown-class skeleton frame and the known-class skeleton frame, and Y_A and Y_B denote the principal component feature sets corresponding to skeleton frames A and B, respectively.
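A sketch of the KNN step described above, using a max-heap as the size-k priority queue. The patent's exact similarity formula is not reproduced here; the sketch ranks candidates by the Euclidean distance alone (a smaller distance meaning a higher similarity), and the function names are illustrative.

```python
import heapq
from collections import Counter
import numpy as np

def euclidean(a, b):
    """d(a, b) = sqrt(sum_k (a_k - b_k)^2) between principal component feature sets."""
    return float(np.sqrt(np.sum((np.asarray(a) - np.asarray(b)) ** 2)))

def knn_classify(query_pc, known_pcs, known_labels, k=5):
    """Keep the k nearest known skeletons in a max-heap keyed on distance,
    then return the majority class among them."""
    heap = []                                            # (-distance, label): farthest on top
    for pc, label in zip(known_pcs, known_labels):
        d = euclidean(query_pc, pc)
        if len(heap) < k:
            heapq.heappush(heap, (-d, label))
        elif d < -heap[0][0]:                            # closer than the current farthest
            heapq.heapreplace(heap, (-d, label))
    return Counter(label for _, label in heap).most_common(1)[0][0]

# Illustrative use with the projections from the PCA sketch above:
# label = knn_classify(unknown_pc, known_pc, ["stand"] * 20 + ["squat"] * 20, k=7)
```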
Different action postures correspond to different mode instructions, so the action postures in the whole virtual equipment operation flow are divided into two categories: movement action postures and operation action postures. Movement actions refer to the operator moving and walking forwards, backwards, left and right; the skeleton information is mapped to drive the virtual human, so movement action postures are essentially not specially distinguished and form an action posture mode in which the virtual human follows the human action in real time. Operation action postures are the class of actions with which the interactor operates equipment during process operation. An interaction gesture library for device operation in the virtual environment is established. The constructed action gesture library makes it convenient to train and recognize with the human body skeleton posture recognition algorithm, to calculate the interaction semantic category and to trigger the feedback of the corresponding posture action of the virtual human.
The motion of the virtual human follows the operator's skeleton information captured by the Kinect. The real-time skeleton coordinates are obtained from Microsoft's depth-map-based skeleton joint estimation algorithm, and for a skeleton frame data set C = [c_1, c_2, …, c_{n−1}, c_n] it cannot be guaranteed that every real-time frame is stable. When the following motion of the virtual human is computed, unstable skeletons would cause joint disorder in the virtual human. Therefore a median filtering method, a nonlinear smoothing technique, is adopted: the real-time skeleton frame data at the current moment are solved from the 12 frames before and the 12 frames after the skeleton frame at each moment, i.e. the skeleton frame data at each moment are set to the median of all skeleton data coordinates in the 25-frame neighbourhood window of that frame. Nonlinear signal noise from unstable skeletons is effectively suppressed through order statistics, the surrounding skeleton frame data stay close to the real skeleton frames, and disordered skeleton data are eliminated.
X(i)=Med[x(i-N),...,x(i),...,x(i+N)]
Y(i)=Med[y(i-N),...,y(i),...,y(i+N)]
Z(i)=Med[z(i-N),...,z(i),...,z(i+N)]
In the formula, N is 12 frames, that is, the length of the filtered time window is 25 frames.
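A sketch of this median smoothing over a 25-frame window (N = 12), applied to a buffered sequence of frames; clipping the window at the sequence boundaries is an assumption of the example rather than something specified in the text.

```python
import numpy as np

def median_smooth(frames, N=12):
    """frames: (T, 25, 3) skeleton coordinates over time. Each output frame is the
    per-coordinate median over the N preceding and N following frames (window 2N + 1 = 25),
    matching X(i) = Med[x(i-N), ..., x(i), ..., x(i+N)] above."""
    frames = np.asarray(frames, dtype=float)
    out = np.empty_like(frames)
    for i in range(len(frames)):
        lo, hi = max(0, i - N), min(len(frames), i + N + 1)   # clip at the sequence ends
        out[i] = np.median(frames[lo:hi], axis=0)
    return out

# Illustrative use: smooth 100 captured frames before driving the virtual human.
smoothed = median_smooth(np.random.rand(100, 25, 3))
```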
A Kinect-based virtual ship cabin using method comprises the following steps:
step 1, acquiring three-dimensional bone joint points of human body postures through a 3D motion sensing camera, and acquiring bone data of continuous actions of an interactor;
step 2, adopting Gaussian filtering to perform noise reduction treatment on the collected distorted bones;
step 3, in the skeleton input of the posture mode recognition, adopting a maximum-minimum normalization method to perform normalization processing on 25 joint points of the human skeleton of each frame;
step 4, eliminating features irrelevant to the class labels and features weakly relevant to the class labels, and then calculating the similarity between the bones of the unknown class and the bones of the known class;
and 5, recognizing the skeleton of the unknown class as the skeleton of the known class, mapping the skeleton information to drive the virtual human, calculating the interactive semantic class, and triggering feedback of the corresponding posture action of the virtual human in the action posture library.
In the field of maritime education, turbine simulators based on virtual simulation technology have become an essential facility for training and examining crew members. For the operation processes of engine room equipment, a human body skeleton posture virtual interaction system is developed as a human-computer interaction mode different from the traditional keyboard and mouse, so that operators can become familiar with the operation flow of the relevant equipment and with the steps for eliminating faults and restoring the equipment to its normal operating state.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A Kinect-based virtual marine engine room system, comprising:
the motion posture library is used for storing bones and bone chains of known classes, wherein different motion forms map different mode instructions;
the data acquisition module is used for acquiring three-dimensional bone joint points of human body postures through the 3D motion sensing camera and constructing a bone joint data matrix;
the data analysis module is used for constructing a sample covariance matrix of the bone node data based on the obtained bone joint points and solving its eigenvectors and eigenvalues, and for projecting the skeleton samples onto the eigenvectors to obtain a principal component feature set; the module is also used for calculating the Euclidean distances of the principal component features of the skeleton data and calculating the similarity between skeletons of an unknown class and skeletons of a known class;
and the output module is used for identifying the bones of unknown classes as the bones of known classes, mapping the bone information to drive the virtual human, calculating the interactive semantic classes, and triggering the feedback of the corresponding posture actions of the virtual human in the action posture library.
2. The Kinect-based virtual marine engine room system of claim 1, wherein the data acquisition module performs noise reduction on distorted bones by Gaussian filtering in the data acquisition process, and discretization yields the Gaussian noise elimination function:

H(i, j) = (1 / (2πσ²)) · exp(−((i − k − 1)² + (j − k − 1)²) / (2σ²)),  1 ≤ i, j ≤ 2k + 1

where σ is the bone data variance, k determines the dimension of the kernel matrix (the kernel is (2k + 1) × (2k + 1)), i is the horizontal coordinate of the bone node, and j is the vertical coordinate of the bone node.
3. The Kinect-based virtual marine engine room system of claim 2, wherein the human body bone joint coordinate data [X, Y, Z] acquired by the data acquisition module are distance coordinates in the camera coordinate system; a maximum-minimum normalization method is adopted to normalize the 25 joint points of the human skeleton in each frame, and the skeleton data matrix is substituted into the maximum-minimum normalization formula to obtain a new skeleton node data matrix:

x_i' = (x_i − x_min) / (x_max − x_min)

where x_max and x_min are the maximum and minimum of the corresponding coordinate over the frame, x_i' is the newly computed value of x_i, and the y and z coordinates are treated in the same way.
4. The Kinect-based virtual marine engine room system of claim 3, wherein the data analysis module is further configured to remove feature dimensions that are not relevant to identifying the bone posture.
5. The Kinect-based virtual marine engine room system of claim 4, wherein a sample covariance matrix R of the bone node data is constructed by using a PCA algorithm, and the eigenvectors and eigenvalues are solved from

|R − λI_p| = 0

where λ is the eigenvalue to be solved and I_p is the identity matrix;

the skeleton sample is projected onto the eigenvectors to obtain the principal components:

Y = Z · X

where X is the skeleton sample and Z is the matrix obtained by arranging the eigenvectors as rows in decreasing order of the corresponding eigenvalues and taking the first k rows;

the basic formula for the Euclidean distance is:

d(a, b) = √( Σ_k (a_k − b_k)² )

where a_k and b_k are the skeleton features of the class to be identified and of the known class, respectively;

and the similarity between the principal component feature vectors of the skeletons is computed from this distance.
6. The Kinect-based virtual marine engine room system of claim 5, wherein A and B represent the unknown-class skeleton frame and the known-class skeleton frame, respectively, and Y_A and Y_B represent the principal component feature sets corresponding to skeleton frames A and B, respectively.
7. The Kinect-based virtual marine engine room system of claim 6, wherein the output module further adopts a median filtering method, a nonlinear smoothing technique, in which the real-time skeleton frame data at the current moment are solved from the 12 frames before and the 12 frames after the skeleton frame at each moment, and nonlinear signal noise from unstable bones is suppressed through order statistics to eliminate disordered skeleton data.
8. A Kinect-based virtual ship cabin using method is characterized by comprising the following steps:
step 1, acquiring three-dimensional bone joint points of human body postures through a 3D motion sensing camera, and acquiring bone data of continuous actions of an interactor;
step 2, adopting Gaussian filtering to perform noise reduction treatment on the collected distorted bones;
step 3, in the skeleton input of the posture mode recognition, adopting a maximum-minimum normalization method to perform normalization processing on 25 joint points of the human skeleton of each frame;
step 4, eliminating features irrelevant to the class labels and features weakly relevant to the class labels, and then calculating the similarity between the bones of the unknown class and the bones of the known class;
and 5, recognizing the skeleton of the unknown class as the skeleton of the known class, mapping the skeleton information to drive the virtual human, calculating the interactive semantic class, and triggering feedback of the corresponding posture action of the virtual human in the action posture library.
CN202111115447.XA 2021-09-23 2021-09-23 Kinect-based virtual ship cabin system and method Pending CN113807280A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111115447.XA CN113807280A (en) 2021-09-23 2021-09-23 Kinect-based virtual ship cabin system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111115447.XA CN113807280A (en) 2021-09-23 2021-09-23 Kinect-based virtual ship cabin system and method

Publications (1)

Publication Number Publication Date
CN113807280A true CN113807280A (en) 2021-12-17

Family

ID=78940342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111115447.XA Pending CN113807280A (en) 2021-09-23 2021-09-23 Kinect-based virtual ship cabin system and method

Country Status (1)

Country Link
CN (1) CN113807280A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107080940A (en) * 2017-03-07 2017-08-22 中国农业大学 Body feeling interaction conversion method and device based on depth camera Kinect
CN107272882A (en) * 2017-05-03 2017-10-20 江苏大学 The holographic long-range presentation implementation method of one species
CN108876881A (en) * 2018-06-04 2018-11-23 浙江大学 Figure self-adaptation three-dimensional virtual human model construction method and animation system based on Kinect

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
朱幸尔: "Research on Human Action and Gesture Recognition Interaction Technology for Virtual Simulation of Equipment Operation", China Master's Theses Full-text Database, Information Science and Technology, pages 16-25 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117671738A (en) * 2024-02-01 2024-03-08 山东大学 Human body posture recognition system based on artificial intelligence
CN117671738B (en) * 2024-02-01 2024-04-23 山东大学 Human body posture recognition system based on artificial intelligence

Similar Documents

Publication Publication Date Title
US20170046568A1 (en) Systems and methods of identifying a gesture using gesture data compressed by principal joint variable analysis
US20130335318A1 (en) Method and apparatus for doing hand and face gesture recognition using 3d sensors and hardware non-linear classifiers
Wen et al. A robust method of detecting hand gestures using depth sensors
Balasuriya et al. Learning platform for visually impaired children through artificial intelligence and computer vision
Karthick et al. Transforming Indian sign language into text using leap motion
Nooruddin et al. HGR: Hand-gesture-recognition based text input method for AR/VR wearable devices
Krishnaraj et al. A Glove based approach to recognize Indian Sign Languages
Devi et al. Dance gesture recognition: a survey
CN114967937A (en) Virtual human motion generation method and system
Elakkiya et al. Intelligent system for human computer interface using hand gesture recognition
Tiwari et al. Sign language recognition through kinect based depth images and neural network
CN113807280A (en) Kinect-based virtual ship cabin system and method
Chavan et al. Indian sign language to forecast text using leap motion sensor and RF classifier
Zhang Computer-assisted human-computer interaction in visual communication
JP6623366B1 (en) Route recognition method, route recognition device, route recognition program, and route recognition program recording medium
CN108108648A (en) A kind of new gesture recognition system device and method
KR101525011B1 (en) tangible virtual reality display control device based on NUI, and method thereof
CN111860086A (en) Gesture recognition method, device and system based on deep neural network
Dhamanskar et al. Human computer interaction using hand gestures and voice
Manresa-Yee et al. Towards hands-free interfaces based on real-time robust facial gesture recognition
Liu et al. Gesture recognition based on Kinect
Nguyen et al. A fully automatic hand gesture recognition system for human-robot interaction
Prasad et al. Fuzzy classifier for continuous sign language recognition from tracking and shape features
WO2007066953A1 (en) Apparatus for recognizing three-dimensional motion using linear discriminant analysis
Pierard et al. A technique for building databases of annotated and realistic human silhouettes based on an avatar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination