CN109032361A - Intelligent 3D projection technique - Google Patents
Intelligent 3D projection technique
- Publication number
- CN109032361A CN109032361A CN201811002477.8A CN201811002477A CN109032361A CN 109032361 A CN109032361 A CN 109032361A CN 201811002477 A CN201811002477 A CN 201811002477A CN 109032361 A CN109032361 A CN 109032361A
- Authority
- CN
- China
- Prior art keywords
- projection
- video
- intelligent
- motion
- person
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B29/00—Combinations of cameras, projectors or photographic printing apparatus with non-photographic non-optical apparatus, e.g. clocks or weapons; Cameras having the shape of other objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/014—Hand-worn input/output arrangements, e.g. data gloves
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
- H04R1/40—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
- H04R1/403—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers loud-speakers
Abstract
The present invention relates to an intelligent 3D projection technique in which a projection-type sound box projects images and produces sound, while a wearable motion-judgment sensor carried by the person acquires data signals from the sampled person to build a model; the projection sound box's image feedback control and audio output are then driven by the modeling analysis data. The beneficial effect of the present invention is a method that, under fully natural no-wear conditions, without wearing VR glasses or earphones, learns through the adaptive cooperation of a multi-directional projection-type sound box to adjust a fully natural VR field of view and VR sound to each person's usage habits and use environment.
Description
Technical Field
The invention relates to intelligent 3D projection technology.
Background
Human exploration toward a fully lifelike audiovisual experience has never stopped: displays have moved from flat images toward virtual reality, and audio has moved from one-way sound toward three-dimensional surround sound. Present virtual-reality technology can simulate video from different directions, and VR audio has been proposed that simulates sound at different distances through earphones. However, these VR pictures and VR sounds require the user to wear VR glasses and earphones, which still causes a certain discomfort.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides an intelligent 3D projection technique that addresses the above technical problems.
The invention realizes the purpose through the following technical scheme:
The intelligent 3D projection technique includes projecting and producing sound for images through a projection-type sound box, and building a model from data signals acquired from the sampled person by a wearable motion-judgment sensor carried on the person; the projection sound box's image feedback control and audio output are then controlled according to the modeling analysis data. The data-acquisition modeling proceeds in sequence through the following steps:
Step one: video and audio are played through the projection-type sound box while the sampled person, wearing the wearable motion-judgment sensor, watches and listens. During this process an audio/video acquisition unit captures the person's motion and performs high-accuracy voice recognition, gesture recognition and facial-feature recognition; the acquired data are sent to a network switching device as video data streams, processed by a video capture card, and transmitted to the network as IP data streams;
Step two: the gait of the sampled person is identified through the acceleration sensor and the wristwatch, and the direction of the current acceleration is judged from changes in the length of the acceleration vector.
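Step two's "vector length" is the composite amplitude of the three accelerometer axes. A minimal sketch of that computation (the `dominant_axis` helper is a hypothetical illustration, not named in the patent):

```python
import math

def magnitude(ax, ay, az):
    """Composite amplitude of a 3-axis accelerometer sample."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def dominant_axis(ax, ay, az):
    """Rough direction estimate: the axis contributing most to the vector."""
    axes = {"x": abs(ax), "y": abs(ay), "z": abs(az)}
    return max(axes, key=axes.get)

sample = (0.3, 0.1, 9.8)        # at rest, gravity dominates the vertical axis
print(magnitude(*sample))        # ~9.805
print(dominant_axis(*sample))    # "z"
```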
Step three: electromagnetic interference (i.e., high-frequency noise) introduced by the circuitry during the acquisition of steps one and two is removed with a wavelet-transform threshold method;
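The patent does not specify the wavelet or threshold rule; as an illustrative sketch (assuming a one-level Haar decomposition and soft thresholding), the idea is to shrink the detail coefficients, where high-frequency interference concentrates:

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet decomposition, soft-threshold the detail
    coefficients (high-frequency content), then reconstruct."""
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-frequency content
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-frequency content
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 64))
noisy = clean + 0.05 * rng.standard_normal(64)
denoised = haar_denoise(noisy, threshold=0.1)
```

With `threshold=0.0` the transform is perfectly invertible, which makes the decomposition/reconstruction pair easy to verify.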
Step four: frequency-domain and time-domain features are extracted along three directions (left-right, front-back and vertical) from the four regional pressure signals using wavelet packet decomposition and a difference algorithm, and recognition is performed with an SVM;
Step five: a minimum optimal wavelet-packet set is selected, by the fuzzy C-means method, from the wavelet packets of the gait frequency-domain features extracted in step four; from that set, a minimum optimal set of wavelet-packet decomposition coefficients is selected by fuzzy C-means based on fuzzy-membership ranking, yielding a minimum optimal gait frequency-domain feature subset. This subset is combined with the gait time-domain features to obtain a fused gait feature set, and gait recognition is then performed with an SVM (support vector machine) whose nonlinear radial-basis kernel function maps the linearly inseparable low-dimensional space to a linearly separable high-dimensional space for recognition and modeling;
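The kernel trick of step five can be illustrated on XOR, the classic linearly inseparable pattern. A kernel perceptron stands in here for the patent's SVM (a simplification, not the patent's classifier), using the same radial-basis kernel:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Radial basis kernel: implicitly maps points into a high-dimensional
    space where the XOR pattern becomes linearly separable."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return np.exp(-gamma * np.dot(d, d))

# XOR labels: not separable by any line in the original 2-D space.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, 1, 1, -1]

# Kernel perceptron training (stand-in for the SVM of the text).
alpha = [0.0] * len(X)
for _ in range(20):
    for i, xi in enumerate(X):
        pred = sum(alpha[j] * y[j] * rbf(X[j], xi) for j in range(len(X)))
        if y[i] * pred <= 0:
            alpha[i] += 1.0

preds = [1 if sum(alpha[j] * y[j] * rbf(X[j], x) for j in range(len(X))) > 0
         else -1 for x in X]
print(preds)  # [-1, 1, 1, -1], matching y
```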
Step six: processing is performed in a layered, graded manner. First, jitter noise in the projection process is balanced and the signal is filtered and denoised; then layered, graded dimensionality-reduction modeling is carried out. The motion type of the human body is judged from the output data of the acceleration sensor using median filtering, deciding hierarchically whether the body is static or moving, which parts move, and the motion type; main features are judged by graded sampling, the influence of key features is comprehensively verified, and sleep-related actions such as turning over, pushing and getting up are further distinguished. When modeling, the accelerometer first outputs a composite amplitude

|a| = sqrt(a_x^2 + a_y^2 + a_z^2);

if |a| lies between the given lower and upper thresholds, the person is judged static; otherwise the person is judged to be moving. The lower threshold is th_a,min = 8 m/s^2; the first condition is th_a,min <= |a| <= th_a,max.

If the first condition judges the person static, the second and third conditions are not evaluated. Otherwise, if the local variance of the accelerometer output is lower than the given threshold th_sigma_a, the body part is judged static; if not, the body part is judged to be moving, the second condition being

sigma_a^2 = (1/N) * sum_{i=1..N} (a_i - a_mean)^2 < th_sigma_a.

If the second condition judges the body part still, the third condition (with upper threshold th_a,max) is not evaluated; otherwise the third condition is computed, the motion state is sampled and calculated, and characteristic parameters are extracted;
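A minimal sketch of the step-six decision cascade. The source gives only th_a,min = 8; the upper threshold and variance threshold below are guesses for illustration (th_a,max chosen near gravity), not values from the patent:

```python
import statistics

TH_A_MIN = 8.0    # lower composite-amplitude threshold from the text
TH_A_MAX = 11.0   # upper threshold: garbled in the source, 11 is a guess
TH_VAR = 0.5      # local-variance threshold (assumed)

def body_state(amplitudes):
    """Condition one: mean composite amplitude inside [TH_A_MIN, TH_A_MAX]
    and low local variance -> static; in-range but high variance -> a body
    part is moving; out of range -> whole-body motion."""
    mean_amp = statistics.fmean(amplitudes)
    if not (TH_A_MIN <= mean_amp <= TH_A_MAX):
        return "moving"
    if statistics.pvariance(amplitudes) < TH_VAR:
        return "static"
    return "part moving"

print(body_state([9.8, 9.81, 9.79]))  # static
print(body_state([9.8, 12.0, 7.5]))   # part moving
```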
Step seven: feature-fusion modeling. The light-level grade obtained from the camera's light sensor, the motion type, and the importance categories set by the user are evaluated to form a 5-point user evaluation of light-sensation quality; using a supervised classification algorithm, with the historical optimal data compared as the supervision factor, an adaptive projection-regulation model linking subjective feeling and environment parameters is established;
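The patent does not define how the 5-point light-sensation score is composed; the weights and adjustments below are purely illustrative assumptions showing how such a score could combine the three inputs:

```python
def light_quality_score(light_level, motion_type, user_priority):
    """Hypothetical 5-point light-sensation score: start from the camera's
    light-level grade (1-5), penalize fast motion, and boost a user-set
    brightness priority. All rules here are illustrative assumptions."""
    score = light_level
    if motion_type == "fast":
        score -= 1
    if user_priority == "brightness":
        score = min(5, score + 1)
    return max(1, min(5, score))  # clamp to the 5-point scale

print(light_quality_score(4, "fast", "brightness"))  # 4
```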
Step eight: deep-interaction pattern recognition of the projection portrait model is established for the whole population. Samples are fed into a classifier for training against the N sample classes registered in the database, and each input value is assigned to a class in (1, N); if it falls outside the range (1, N), a new class N + 1 is registered and the classifier is updated again;
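A sketch of the open-set registration idea of step eight, using a nearest-centroid classifier as a stand-in (the patent does not name the classifier; the distance threshold is an assumption):

```python
import math

class RegisteringClassifier:
    """Classify into one of the N registered classes, but register a new
    class N+1 when the sample is too far from every known centroid."""
    def __init__(self, threshold=1.0):
        self.centroids = {}          # class id -> feature vector
        self.threshold = threshold

    def classify(self, x):
        best, best_d = None, float("inf")
        for cid, c in self.centroids.items():
            d = math.dist(x, c)
            if d < best_d:
                best, best_d = cid, d
        if best is None or best_d > self.threshold:
            new_id = len(self.centroids) + 1   # register class N+1
            self.centroids[new_id] = list(x)
            return new_id
        return best

clf = RegisteringClassifier(threshold=1.0)
print(clf.classify([0.0, 0.0]))   # 1 (first registration)
print(clf.classify([0.1, 0.0]))   # 1 (within threshold)
print(clf.classify([5.0, 5.0]))   # 2 (out of range -> new class)
```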
Step nine: the partial calculation results are combined: the main process reads the output file of each process in the current time step, splices the acquired multi-channel video streams into a panoramic video stream carrying timestamps, merges and restores the results according to a region-decomposition algorithm, and temporarily stores them in ASCII format. When a user wears a virtual-reality terminal, the terminal detects whether it is in a motion state and, if so, adjusts the video frames to be played according to the acceleration so that synchronized video information is displayed in the terminal's visual area.
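The timestamp-ordered splicing of step nine reduces, at its simplest, to a k-way merge of per-camera frame lists. A minimal sketch (frame representation and channel names are illustrative assumptions):

```python
import heapq

def splice(streams):
    """Merge per-camera frame lists (each already sorted by timestamp) into
    one timestamp-ordered 'panoramic' stream; a frame is (timestamp, channel)."""
    return list(heapq.merge(*streams, key=lambda frame: frame[0]))

cam0 = [(0, "cam0"), (2, "cam0")]
cam1 = [(1, "cam1"), (3, "cam1")]
print(splice([cam0, cam1]))
# [(0, 'cam0'), (1, 'cam1'), (2, 'cam0'), (3, 'cam1')]
```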
Step ten: as the sampled person continuously repeats steps one through nine (refer to FIG. 1), the SVM classifier adaptively and continuously optimizes itself on each newly input sample as the sample volume grows. The recognition rate of the SVM classifier is calculated on the principle of cross-validation and used for fitness evaluation; no termination value is set for the genetic algorithm, the termination condition instead being "higher wins": if the training recognition rate is higher than the previous one, the training parameters are set as the optimal parameters; otherwise selection, crossover and mutation operations further optimize the training parameters, realizing adaptive refinement of the model. Finally, an individual model is formed for each person from that person's viewing habits and action habits, and the projection-type sound box then performs intelligent holographic projection tailored to each person.
In this embodiment, the projection sound box consists externally of two parts, upper and lower. One part carries three evenly distributed projectors (each projecting 120 degrees); the other is a panoramic sound box with three evenly distributed speakers (providing feedback recognition of audio and volume in the three directions). The sound box contains an edge-computing module and a communication module and is connected to a cloud central server. The wearable motion-judgment sensor includes, but is not limited to, motion-judgment sensors carried in bracelets, wristwatches, belts and shoes; each contains an edge-computing module and a communication module and is connected to the cloud central server through the communication module. The central server coordinates the data from the sound box and from the wearable motion-judgment sensor carried by the individual, and makes a comprehensive judgment to perform audio feedback and output.
In this embodiment, the jitter-noise balancing of step six superimposes, according to weight coefficients, the geometric means acquired by the sensor for the three-dimensional acceleration, three-dimensional magnetic field and three-dimensional angular velocity, giving the weighted geometric mean Y = k1*Y1 + k2*Y2 + k3*Y3, where Y1 is the geometric mean of acceleration, Y2 is the geometric mean of the magnetic field, Y3 is the geometric mean of angular velocity, and k1, k2, k3 are constant-modulus weighting coefficients.
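The weighted-geometric-mean formula above can be sketched directly; the equal weights below are an assumption, since the patent only calls the coefficients constant:

```python
import math

def geometric_mean(values):
    """Geometric mean of a list of positive magnitudes."""
    return math.prod(values) ** (1.0 / len(values))

def weighted_balance(acc, mag, gyro, k=(1 / 3, 1 / 3, 1 / 3)):
    """Y = k1*Y1 + k2*Y2 + k3*Y3, with Y1..Y3 the geometric means of the
    acceleration, magnetic-field and angular-velocity magnitudes."""
    y1, y2, y3 = map(geometric_mean, (acc, mag, gyro))
    return k[0] * y1 + k[1] * y2 + k[2] * y3

print(weighted_balance([9.8, 9.8, 9.8], [50.0, 50.0, 50.0], [0.1, 0.1, 0.1]))
```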
In this embodiment, the original motion-vector set (F1, F2, ..., Fm) of the feature parameters extracted in step six has m smaller than 9, and an extraction matrix is applied. The component F1 contains the most information and has the largest variance and is called the first principal component; F2, ..., Fm have successively decreasing variance and are called the second through m-th principal components. The principal component analysis can therefore be regarded as a process of determining the weighting coefficients a_ik (i = 1, ..., m; k = 1, ..., 9).
The invention has the beneficial effects that:
the invention provides a method, which is used for learning the full-natural VR vision and VR sound adapting to the use habits and use environments of everyone through the self-adaptive matching of a multi-direction projection type sound box under the full-natural no-wear condition without wearing VR glasses and earphones.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings in which:
As shown in FIG. 1, the intelligent 3D projection technique projects and produces sound for images through a projection-type sound box, and builds a model by acquiring data signals from the sampled person through the wearable motion-judgment sensor the person carries, so that the projection sound box's image feedback control and audio output are controlled according to the modeling analysis data; the data-acquisition modeling proceeds in sequence through steps one to ten as described above.
Finally, it should be noted that although the present invention has been described in detail with reference to the above embodiments, those skilled in the art will understand that modifications and equivalents may be made without departing from the spirit and scope of the invention, which is defined by the appended claims.
Claims (7)
1. An intelligent 3D projection technique, characterized by: projecting and producing sound for images through a projection-type sound box, and building a model by acquiring data signals from the sampled person through a wearable motion-judgment sensor carried on the person, so that the projection sound box's image feedback control and audio output are controlled according to the modeling analysis data, the data-acquisition modeling proceeding in sequence through the following steps:
Step one: video and audio are played through the projection-type sound box while the sampled person, wearing the wearable motion-judgment sensor, watches and listens. During this process an audio/video acquisition unit captures the person's motion and performs high-accuracy voice recognition, gesture recognition and facial-feature recognition; the acquired data are sent to a network switching device as video data streams, processed by a video capture card, and transmitted to the network as IP data streams;
Step two: the gait of the sampled person is identified through the acceleration sensor and the wristwatch, and the direction of the current acceleration is judged from changes in the length of the acceleration vector.
Step three: electromagnetic interference (i.e., high-frequency noise) introduced by the circuitry during the acquisition of steps one and two is removed with a wavelet-transform threshold method;
Step four: frequency-domain and time-domain features are extracted along three directions (left-right, front-back and vertical) from the four regional pressure signals using wavelet packet decomposition and a difference algorithm, and recognition is performed with an SVM;
Step five: a minimum optimal wavelet-packet set is selected, by the fuzzy C-means method, from the wavelet packets of the gait frequency-domain features extracted in step four; from that set, a minimum optimal set of wavelet-packet decomposition coefficients is selected by fuzzy C-means based on fuzzy-membership ranking, yielding a minimum optimal gait frequency-domain feature subset. This subset is combined with the gait time-domain features to obtain a fused gait feature set, and gait recognition is then performed with an SVM (support vector machine) whose nonlinear radial-basis kernel function maps the linearly inseparable low-dimensional space to a linearly separable high-dimensional space for recognition and modeling;
step six: performing operation processing by adopting a layered and graded mode, firstly, balancing dithering noise in the projection process, filtering and denoising, then performing layered and graded dimensionality reduction modeling, judging the motion type of a human body by utilizing the output data of the acceleration sensor and utilizing median filtering, judging whether the human body moves statically, moves parts and types hierarchically, judging main characteristics by graded sampling, comprehensively verifying the influence of key characteristics, further judging the characteristics of sleep and the like such as turning over, pushing, getting up and the like, and when modeling, firstly outputting a synthesized amplitude through an accelerometer, and judging that the human body is static if the synthesized amplitude is positioned between a given upper threshold and a given lower threshold; otherwise, judging the motion of the person, wherein the output composite amplitude of the accelerometer is as follows:
a = √(ax² + ay² + az²)
where ax, ay, az are the three-axis accelerometer outputs. The upper and lower thresholds are respectively tha_min = 8 m/s² and tha_max, and the first condition is:
tha_min ≤ a ≤ tha_max
If the first condition judges the person static, the second and third conditions are not evaluated. Next, if the local variance of the accelerometer output is lower than a given threshold, the body part is judged static; otherwise the body part is judged to be moving. The second-condition calculation formula is:
σa² = (1/N) Σi (ai − ā)² < thσa
where ā is the mean of the composite amplitude a over a window of N samples.
where thσa is the local-variance threshold. If the second condition judges the body part still, the third condition is not evaluated; otherwise the third-condition calculation formula, involving the upper threshold tha_max, is applied; the motion state is then sampled and calculated and the characteristic parameters are extracted;
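A minimal sketch of the tiered decision in step six. The lower threshold 8 m/s² comes from the text; the upper threshold and the variance threshold are assumed values, and a simple window variance stands in for the unspecified local-variance computation.

```python
import statistics

TH_A_MIN = 8.0    # lower amplitude threshold from the patent (m/s^2)
TH_A_MAX = 11.0   # upper threshold; value assumed, not given in the text
TH_SIGMA = 0.5    # local-variance threshold; illustrative value

def composite_amplitude(ax, ay, az):
    # Composite amplitude a = sqrt(ax^2 + ay^2 + az^2).
    return (ax * ax + ay * ay + az * az) ** 0.5

def classify(window):
    # window: list of (ax, ay, az) accelerometer samples.
    mags = [composite_amplitude(*s) for s in window]
    mean_mag = sum(mags) / len(mags)
    # Condition 1: amplitude outside the threshold band -> whole-body motion.
    if not (TH_A_MIN <= mean_mag <= TH_A_MAX):
        return "moving"
    # Condition 2: local variance below threshold -> body part static.
    if statistics.pvariance(mags) < TH_SIGMA:
        return "static"
    # Otherwise a body part is moving (condition 3 would refine this).
    return "part-moving"
```

At rest the magnitude hovers around gravity (≈9.8 m/s²), which falls inside the assumed band, so a quiet window is classified as static.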
step seven: modeling the characteristic fusion, evaluating the light value grade and the motion type obtained by a light sensor of the camera and the important type set by a user to obtain a light sensation quality user 5-point evaluation system, and establishing a subjective feeling and environment parameter self-adaptive projection regulation model by using a supervised classification algorithm and comparing historical optimal data as a supervision factor;
step eight: establishing deep interaction pattern recognition for the projection portrait model covering the whole user population. According to the N classes of samples registered in the database, input samples are fed to a classifier for training, and each input is assigned to one of the classes (1, N); if it falls outside the range (1, N), a new class N+1 is registered and the classifier is then retrained;
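The register-then-retrain loop of step eight resembles open-set recognition. Below is a sketch using a nearest-centroid classifier: a sample farther than a distance threshold from every registered class is enrolled as class N+1. The threshold and the centroid update rule are assumptions, not from the patent.

```python
class OpenSetClassifier:
    # Nearest-centroid stand-in for the patent's classifier.
    def __init__(self, threshold):
        self.centroids = []   # one centroid per registered class
        self.counts = []      # samples folded into each centroid
        self.threshold = threshold

    def classify_or_register(self, x):
        if self.centroids:
            dists = [sum((a - b) ** 2 for a, b in zip(c, x)) ** 0.5
                     for c in self.centroids]
            k = min(range(len(dists)), key=dists.__getitem__)
            if dists[k] <= self.threshold:
                # Known class: fold the sample into its centroid
                # (the "update the classifier" step).
                n = self.counts[k]
                self.centroids[k] = tuple(
                    (c * n + v) / (n + 1)
                    for c, v in zip(self.centroids[k], x))
                self.counts[k] += 1
                return k
        # Outside the range (1, N): register a new class N+1.
        self.centroids.append(tuple(x))
        self.counts.append(1)
        return len(self.centroids) - 1
```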
step nine: combining the partial calculation results. The main process reads the output file of each process for the current time step, stitches the acquired multi-channel video streams into a panoramic video stream carrying timestamps, merges and restores the results according to the domain-decomposition algorithm, and temporarily stores them in ASCII format. When a user wears the virtual reality terminal, whether the terminal is in motion is detected; if so, the video frames to be played are adjusted according to the acceleration so as to provide synchronized video information, which is displayed in the visual area of the virtual reality terminal.
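Timestamp-aligned stitching of the multi-channel streams in step nine might be organized as below; representing each channel as a dict from timestamp to frame data is an assumption for illustration, not the patent's storage format.

```python
def merge_streams(streams):
    # streams: one dict per channel, mapping timestamp -> frame data.
    # Frames sharing a timestamp across ALL channels are grouped into
    # one panoramic record; timestamps missing from any channel are
    # dropped so every panorama is complete.
    common = set(streams[0])
    for s in streams[1:]:
        common &= set(s)
    return [(t, [s[t] for s in streams]) for t in sorted(common)]
```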
step ten: for the sampled person, steps one to nine are repeated continuously. As the sample volume grows, the SVM classifier adaptively optimizes and refines itself with each new input sample. The recognition rate of the SVM classifier is calculated by cross-validation to evaluate fitness; no termination value is set for the genetic algorithm, and the termination condition is a better-than-current comparison: if the recognition rate of a training run is higher than that achieved by the existing training parameters, those parameters are adopted as the optimal parameters; otherwise selection, crossover, and mutation operations are executed to further optimize the training parameters, achieving adaptive refinement of the model. Finally, an individual personalized model is formed for that person according to his or her viewing habits and action habits, and the projection speaker box of step one then performs intelligent holographic projection for each person.
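The better-than-current termination rule of step ten can be sketched as a tiny genetic-style search: the incumbent parameters are replaced only when a child's recognition rate (fitness) is strictly higher. The selection, crossover, and mutation operators here are simplified assumptions standing in for the patent's unspecified ones.

```python
import random

def evolve_parameters(fitness, population, generations=30, seed=0):
    # Genetic-style search with no fixed termination value: a candidate
    # replaces the incumbent only when its recognition rate is strictly
    # higher, mirroring the "keep if better than current" rule.
    rng = random.Random(seed)
    best = max(population, key=fitness)
    for _ in range(generations):
        # Selection: the two fittest candidates become parents.
        parents = sorted(population, key=fitness)[-2:]
        # Crossover: average the parents; mutation: small Gaussian step.
        child = [(a + b) / 2 + rng.gauss(0, 0.1)
                 for a, b in zip(*parents)]
        population = parents + [child]
        if fitness(child) > fitness(best):
            best = child
    return best
```

In practice the fitness function would be the cross-validated SVM recognition rate; the quadratic toy fitness below is only for demonstration.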
2. The intelligent 3D projection technology of claim 1, wherein: the projection speaker box in step one consists of two parts: one part carries projectors evenly distributed in 3 directions (each covering a 120° projection sector), and the other part is a panoramic speaker fitted with earpieces evenly distributed in 3 directions.
3. The intelligent 3D projection technology of claim 1, wherein: the jitter-noise balance in step six superposes, by weighting coefficients, the geometric means obtainable from the motion sensor, covering three-axis acceleration, three-axis magnetic field, and three-axis angular velocity, to obtain the weighted geometric mean Y = k1Y1 + k2Y2 + k3Y3, where Y1 is the geometric mean of the acceleration, Y2 the geometric mean of the magnetic field, Y3 the geometric mean of the angular velocity, and k1, k2, k3 are constant-modulus weighting coefficients.
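A direct reading of claim 3, under the assumption that Y1–Y3 are geometric means of the absolute three-axis readings and that k1–k3 are example constants (the claim does not fix their values):

```python
import math

def geometric_mean(values):
    return math.prod(values) ** (1.0 / len(values))

def weighted_balance(acc, mag, gyro, k=(0.5, 0.3, 0.2)):
    # Jitter-noise balance of claim 3: geometric means of the three-axis
    # acceleration, magnetic field, and angular velocity, superposed
    # with constant weighting coefficients k1..k3.
    y1 = geometric_mean([abs(v) for v in acc])    # Y1
    y2 = geometric_mean([abs(v) for v in mag])    # Y2
    y3 = geometric_mean([abs(v) for v in gyro])   # Y3
    return k[0] * y1 + k[1] * y2 + k[2] * y3
```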
4. The intelligent 3D projection technology of claim 1, wherein: in step six, the original motion vector group (F1, F2, …, Fm) of characteristic parameters is extracted, with m smaller than 9, and the extraction matrix is:
Fi = ai1X1 + ai2X2 + … + ai9X9, i = 1, …, m
the original vector F1 contains the most information and has the largest variance, and is called as a first principal component, and F2, … and Fm are sequentially decreased and called as a second principal component, "" "", and an mth principal component. The principal component analysis process can therefore be regarded as a process for determining the weighting factors aik (i ═ 1, "" ", m;" k ═ 1, "" 9).
5. The intelligent 3D projection technology of claim 2, wherein: the interior of the speaker box comprises an edge computing module and a communication module connected to a cloud central server.
6. The intelligent 3D projection technology of claim 1, wherein: the wearable motion-judgment sensor includes, but is not limited to, sensors that drive the motion judgment and are worn as a bracelet, watch, belt, or shoes.
7. The intelligent 3D projection technology of claim 5, wherein: the central server coordinates, as a whole, the data of the speaker box and of the wearable motion-judgment sensor carried by the individual, and makes a comprehensive judgment for audio feedback and output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811002477.8A CN109032361A (en) | 2018-08-29 | 2018-08-29 | Intelligent 3D shadow casting technique |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109032361A true CN109032361A (en) | 2018-12-18 |
Family
ID=64625595
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811002477.8A Pending CN109032361A (en) | 2018-08-29 | 2018-08-29 | Intelligent 3D shadow casting technique |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109032361A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI724858B (en) * | 2020-04-08 | 2021-04-11 | 國軍花蓮總醫院 | Mixed Reality Evaluation System Based on Gesture Action |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN202677083U (en) * | 2012-06-01 | 2013-01-16 | 中国人民解放军第四军医大学 | Sleep and fatigue monitoring type watch apparatus |
CN102945079A (en) * | 2012-11-16 | 2013-02-27 | 武汉大学 | Intelligent recognition and control-based stereographic projection system and method |
CN103584840A (en) * | 2013-11-25 | 2014-02-19 | 天津大学 | Automatic sleep stage method based on electroencephalogram, heart rate variability and coherence between electroencephalogram and heart rate variability |
CN106971059A (en) * | 2017-03-01 | 2017-07-21 | 福州云开智能科技有限公司 | A kind of wearable device based on the adaptive health monitoring of neutral net |
CN107015646A (en) * | 2017-03-28 | 2017-08-04 | 北京犀牛数字互动科技有限公司 | The recognition methods of motion state and device |
CN107102728A (en) * | 2017-03-28 | 2017-08-29 | 北京犀牛数字互动科技有限公司 | Display methods and system based on virtual reality technology |
CN107205140A (en) * | 2017-07-12 | 2017-09-26 | 赵政宇 | A kind of panoramic video segmentation projecting method and apply its system |
CN107465850A (en) * | 2016-06-03 | 2017-12-12 | 王建文 | Virtual reality system |
CN107753026A (en) * | 2017-09-28 | 2018-03-06 | 古琳达姬(厦门)股份有限公司 | For the intelligent shoe self-adaptive monitoring method of backbone leg health |
CN108107578A (en) * | 2017-12-14 | 2018-06-01 | 腾讯科技(深圳)有限公司 | View angle regulating method, device, computing device and the storage medium of virtual reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20181218 |