CN109032361A - Intelligent 3D shadow casting technique


Info

Publication number
CN109032361A
CN109032361A
Authority
CN
China
Prior art keywords
projection
video
intelligent
motion
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811002477.8A
Other languages
Chinese (zh)
Inventor
薛爱凤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Win Win Time Technology Co Ltd
Original Assignee
Shenzhen Win Win Time Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Win Win Time Technology Co Ltd filed Critical Shenzhen Win Win Time Technology Co Ltd
Priority to CN201811002477.8A
Publication of CN109032361A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B29/00 Combinations of cameras, projectors or photographic printing apparatus with non-photographic non-optical apparatus, e.g. clocks or weapons; Cameras having the shape of other objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014 Hand-worn input/output arrangements, e.g. data gloves
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/20 Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/403 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers loud-speakers

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Otolaryngology (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to an intelligent 3D projection technique that projects images and produces sound through a projection speaker, and builds a model by acquiring data signals from the sampled person through a wearable motion-judgment sensor carried by that person, so that feedback control of the image and audio output of the projection speaker are governed by the modeling analysis data. The beneficial effects of the present invention are: it proposes a method that, under fully natural wear-free conditions, without VR glasses or earphones, learns through the adaptive cooperation of a multi-directional projection speaker to adjust a fully natural VR visual field and VR sound to each person's usage habits and usage environment.

Description

Intelligent 3D projection technology
Technical Field
The invention relates to intelligent 3D projection technology.
Background
Human exploration of simulations ever closer to the real audiovisual experience has never stopped, moving from the flat display toward virtual reality and from monophonic sound toward three-dimensional surround sound. Present virtual-reality technology can simulate video impressions from different directions, and VR sound has been proposed that simulates the effect of sounds at different distances through earphones; but these VR pictures and VR sounds require wearing VR glasses and earphones, which still carries a certain discomfort.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides an intelligent 3D projection technique that solves the above technical problems.
The invention achieves this purpose through the following technical scheme:
the intelligent 3D projection technique includes projecting images and producing sound through a projection-type speaker, and collecting data signals from the sampled person through a wearable motion-judgment sensor carried by the person to build a model, so that feedback control of the image and audio output of the projection speaker are governed by the modeling analysis data; the data acquisition and modeling proceed in sequence according to the following steps:
Step one: video and audio are delivered through the projection-type speaker, and the sampled person, wearing the wearable motion-judgment sensor, watches and listens to them; during this process the audio/video acquisition unit captures motion and performs highly accurate voice recognition, gesture recognition, and facial-feature recognition, and the acquired data are sent to the network switching device as video data streams, processed by the video capture card, and transmitted to the network as IP data streams;
Step two: the gait of the sampled person is identified through the acceleration sensor and the wristwatch, and the direction of the current acceleration is judged from the change in the length of the acceleration vector.
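As an illustration of the step-two vector-length test, a minimal sketch follows; the array layout and the sign convention for the returned trend are assumptions, since the patent does not specify them:

```python
import numpy as np

def acceleration_trend(samples):
    """Judge the tendency of the current acceleration from the change in
    vector length (step two). samples: (N, 3) array of accelerometer
    readings (ax, ay, az) from the watch-worn sensor, in m/s^2."""
    magnitude = np.linalg.norm(samples, axis=1)  # vector length per sample
    change = np.diff(magnitude)                  # change between samples
    return np.sign(change)                       # +1 growing, -1 shrinking
```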
Step three: electromagnetic interference (i.e., high-frequency noise) introduced into the circuit during the acquisition of steps one and two is denoised with a wavelet-transform threshold method;
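Step three's wavelet-threshold denoising could be sketched as below with PyWavelets; the wavelet family, decomposition level, and universal-threshold rule are assumptions rather than choices stated in the patent:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Suppress high-frequency electromagnetic noise by soft thresholding."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```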
Step four: frequency-domain and time-domain features are extracted in three directions (left-right, front-back, and vertical) from the four regional pressures using wavelet-packet decomposition and a difference algorithm, and are identified with an SVM;
Step five: a minimal optimal wavelet-packet set is selected from the wavelet packets of the gait frequency-domain features extracted in step four using the fuzzy C-means method; from that set, minimal optimal wavelet-packet decomposition coefficients are selected by fuzzy C-means based on fuzzy-membership ranking to obtain a minimal optimal gait frequency-domain feature subset, which is combined with the gait time-domain features to obtain a fused gait feature set; an SVM (support vector machine) then performs gait recognition, using a nonlinear radial-basis kernel function to map the linearly inseparable low-dimensional space to a linearly separable high-dimensional space for recognition and modeling;
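The recognition stage of step five, an SVM with a radial-basis kernel over the fused gait feature set, might look like the following sketch; scikit-learn is an assumed implementation choice, and the fuzzy C-means coefficient selection is treated as an upstream step that has already produced X:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_gait_svm(X, y):
    """X: fused gait features (selected frequency-domain subset plus
    time-domain features); y: identity labels of the sampled persons.
    The RBF kernel maps the linearly inseparable low-dimensional space
    to a higher-dimensional space where the classes become separable."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
    return model.fit(X, y)
```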
Step six: processing is performed in a layered, hierarchical manner. First, jitter noise in the projection process is balanced and filtered out; then layered, hierarchical dimensionality-reduction modeling is performed. Using the output data of the acceleration sensor with median filtering, the motion of the human body is judged hierarchically: whether the body is static or in motion, which parts move, and the motion type. Main features are judged by hierarchical sampling, the influence of key features is comprehensively verified, and sleep features such as turning over, pushing, and getting up are further judged. When modeling, the composite amplitude output by the accelerometer is computed first; if the composite amplitude lies between given upper and lower thresholds the body is judged static, otherwise it is judged to be in motion. The composite amplitude output by the accelerometer is:

a = √(ax² + ay² + az²)
The upper and lower thresholds are respectively th_a,min = 8 m/s² and th_a,max, and the first condition is:

th_a,min < a < th_a,max

If the first condition judges the body static, the second and third conditions are not evaluated; otherwise, if the local variance of the accelerometer output is lower than a given threshold, the body part is judged static, and if not the body part is judged to be moving, the second condition being calculated as:

σ_a² < th_σ,a

where th_σ,a is the given variance threshold. If the second condition judges the body part still, the third condition is not evaluated; otherwise the third condition compares the composite amplitude against the upper threshold th_a,max, after which the motion state is sampled and calculated and the characteristic parameters are extracted;
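Read literally, the three-condition cascade of step six reduces to a check like the sketch below; every threshold except th_a,min = 8 m/s² is an illustrative stand-in, since the patent does not reproduce the remaining values or the exact third-condition formula:

```python
import numpy as np

TH_A_MIN = 8.0    # lower composite-amplitude threshold (from the text)
TH_A_MAX = 12.0   # upper threshold: illustrative value
TH_SIGMA = 0.5    # local-variance threshold: illustrative value

def judge_motion(window):
    """window: (N, 3) accelerometer samples for one analysis window."""
    lengths = np.linalg.norm(window, axis=1)
    amplitude = lengths.mean()                 # composite amplitude
    if TH_A_MIN < amplitude < TH_A_MAX:        # first condition: body static
        return "static"
    if np.var(lengths) < TH_SIGMA:             # second condition: part static
        return "body part static"
    if amplitude > TH_A_MAX:                   # third condition (assumed form)
        return "vigorous motion"
    return "motion"
```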
Step seven: the feature fusion is modeled. The light-level grade obtained by the camera's light sensor, the motion type, and the importance categories set by the user are evaluated to obtain a five-point user rating system for light-sensation quality; a supervised classification algorithm, comparing against the historical optimum data as the supervision factor, is used to establish an adaptive projection-regulation model of subjective feeling and environmental parameters;
Step eight: deep-interaction pattern recognition is established for the projection portrait model of the whole population. Samples are input into a classifier trained on the N sample classes registered in the database, and the input value is judged to belong to one of the classes in (1, N); if it falls outside the range (1, N), a new class N+1 is registered and the classifier is updated again;
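One plausible reading of step eight's classify-or-register loop is sketched below; the distance-based rejection rule is an assumption, as the patent only states that inputs outside the range (1, N) trigger registration of class N+1 and retraining:

```python
import numpy as np
from sklearn.svm import SVC

class OpenSetRecognizer:
    def __init__(self, reject_distance=2.0):   # illustrative threshold
        self.reject_distance = reject_distance
        self.clf = SVC(kernel="rbf")

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        self.clf.fit(self.X, self.y)
        return self

    def classify_or_register(self, x):
        x = np.asarray(x, float)
        # Reject when the sample is far from every registered sample.
        if np.linalg.norm(self.X - x, axis=1).min() > self.reject_distance:
            new_label = int(self.y.max()) + 1   # register class N+1
            self.X = np.vstack([self.X, x])
            self.y = np.append(self.y, new_label)
            self.clf.fit(self.X, self.y)        # update the classifier again
            return new_label
        return self.clf.predict([x])[0]         # one of the classes 1..N
```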
Step nine: the computed sub-results are merged. The main process reads the output file of each process in the current time step, stitches the acquired multi-channel video streams into a panoramic video stream carrying a timestamp, merges and restores the results according to a region-decomposition algorithm, and temporarily stores them in ASCII format. When a user wears the virtual-reality terminal, whether the terminal is in a motion state is detected; if so, the video frames to be played are adjusted according to the acceleration so as to provide synchronized video information to the user, displayed in the visual area of the virtual-reality terminal.
Step ten: the sampled person continuously repeats the process from step one to step nine (refer to fig. 1). As the number of samples grows, the SVM classifier adaptively and continuously optimizes itself with each newly input sample. The recognition rate of the SVM classifier is calculated on the cross-validation principle and a fitness evaluation is performed; no termination value is set for the genetic algorithm, the termination condition instead taking the higher result: if the recognition rate of the current training is higher than the existing one, the training parameters are set as the optimal parameters; otherwise selection, crossover, and mutation operations are executed to further optimize the training parameters, realizing the adaptive refinement of the model. Finally an individual, personalized model is formed for the person according to his or her viewing and action habits, and the projection-type speaker of step one then performs intelligent holographic projection for each person.
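Step ten's termination-free "take the higher" rule, with the recognition rate computed by cross-validation, might reduce to a loop like this sketch; the genetic operators (selection, crossover, mutation) are abbreviated here to a random perturbation of the SVM parameters C and gamma, which is an assumption:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def adapt_parameters(X, y, generations=20, seed=0):
    rng = np.random.default_rng(seed)
    best = {"C": 1.0, "gamma": 0.1}
    best_rate = cross_val_score(SVC(kernel="rbf", **best), X, y, cv=5).mean()
    for _ in range(generations):
        # Stand-in for GA mutation: log-normal perturbation of the optimum.
        trial = {k: v * float(rng.lognormal(sigma=0.3)) for k, v in best.items()}
        rate = cross_val_score(SVC(kernel="rbf", **trial), X, y, cv=5).mean()
        if rate > best_rate:   # keep the new parameters only if training improves
            best, best_rate = trial, rate
    return best, best_rate
```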
In this embodiment, the exterior of the projection speaker in step one consists of an upper part and a lower part: one part is a set of three evenly distributed projectors (each projecting 120 degrees), and the other is a panoramic speaker containing three evenly distributed earphones (performing feedback recognition of audio and volume in the three directions). The interior of the speaker comprises an edge-computing module and a communication module and is connected to a cloud central server. The wearable motion-judgment sensors include, but are not limited to, judgment-making sensors driven by a bracelet, watch, belt, shoes, and the like; each contains an edge-computing module and a communication module and is connected to the cloud central server through the communication module. The central server coordinates the data of the speaker and of the wearable motion-judgment sensor carried by the individual as a whole and makes a comprehensive judgment to perform audio feedback and output.
In this embodiment, the jitter-noise balancing in step six superimposes, according to weight coefficients, the geometric means acquired by the sensor, comprising the three-dimensional acceleration, three-dimensional magnetic field, and three-dimensional angular velocity, to obtain a weighted geometric mean: K = k1·Y1 + k2·Y2 + k3·Y3, where Y1 is the geometric mean of the acceleration, Y2 is the geometric mean of the magnetic field, Y3 is the geometric mean of the angular velocity, and the ki are constant-modulus weighting coefficients.
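A small sketch of that weighted geometric mean, assuming each modality arrives as a window of 3-axis samples and that the geometric mean is taken over the per-sample magnitudes (the weights shown are illustrative constants):

```python
import numpy as np
from scipy.stats import gmean

def weighted_geometric_mean(accel, magnet, gyro, k=(1/3, 1/3, 1/3)):
    """K = k1*Y1 + k2*Y2 + k3*Y3, with each Y_i the geometric mean of one
    modality's sample magnitudes; accel/magnet/gyro are (N, 3) arrays."""
    y1 = gmean(np.linalg.norm(accel, axis=1))   # acceleration
    y2 = gmean(np.linalg.norm(magnet, axis=1))  # magnetic field
    y3 = gmean(np.linalg.norm(gyro, axis=1))    # angular velocity
    return k[0] * y1 + k[1] * y2 + k[2] * y3
```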
In this embodiment, the original motion vector group (F1, F2, …, Fm) of the feature parameters extracted in step six has m smaller than 9, each component being a weighted combination of the nine original measurements. The original vector F1 contains the most information and has the largest variance and is called the first principal component; F2, …, Fm decrease in turn and are called the second, …, mth principal components. The principal-component analysis process can therefore be regarded as the process of determining the weighting coefficients a_ik (i = 1, …, m; k = 1, …, 9).
The invention has the beneficial effects that:
it provides a method that, under fully natural wear-free conditions, without VR glasses or earphones, learns through the adaptive cooperation of a multi-directional projection-type speaker to adjust a fully natural VR visual field and VR sound to each person's usage habits and usage environment.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings in which:
As shown in fig. 1, the intelligent 3D projection technology projects and produces sound for images through a projection speaker and builds a model by acquiring data signals from the sampled person through the wearable motion-judgment sensor carried by that person, so that feedback control of the image and audio output of the projection speaker are governed by the modeling analysis data; the acquisition and modeling of the data proceed in sequence according to steps one through ten, with the projection speaker, jitter-noise balancing, and principal-component extraction configured as set forth above.
Finally, it should be noted that although the present invention has been described in detail with reference to the above embodiments, those skilled in the art will understand that modifications and equivalent substitutions may be made without departing from the spirit and scope of the invention, which is defined by the appended claims.

Claims (7)

1. An intelligent 3D projection technique, characterized in that: it includes projecting images and producing sound through a projection-type speaker, and collecting data signals from the sampled person through a wearable motion-judgment sensor carried by the person to build a model, so that feedback control of the image and audio output of the projection speaker are governed by the modeling analysis data; the data acquisition and modeling proceed in sequence according to the following steps:
Step one: video and audio are delivered through the projection-type speaker, and the sampled person, wearing the wearable motion-judgment sensor, watches and listens to them; during this process the audio/video acquisition unit captures motion and performs highly accurate voice recognition, gesture recognition, and facial-feature recognition, and the acquired data are sent to the network switching device as video data streams, processed by the video capture card, and transmitted to the network as IP data streams;
Step two: the gait of the sampled person is identified through the acceleration sensor and the wristwatch, and the direction of the current acceleration is judged from the change in the length of the acceleration vector.
Step three: denoising electromagnetic interference (namely high-frequency noise) in the circuit in the acquisition process in the first step and the second step by using a wavelet transform threshold method;
Step four: frequency-domain and time-domain features are extracted in three directions (left-right, front-back, and vertical) from the four regional pressures using wavelet-packet decomposition and a difference algorithm, and are identified with an SVM;
Step five: a minimal optimal wavelet-packet set is selected from the wavelet packets of the gait frequency-domain features extracted in step four using the fuzzy C-means method; from that set, minimal optimal wavelet-packet decomposition coefficients are selected by fuzzy C-means based on fuzzy-membership ranking to obtain a minimal optimal gait frequency-domain feature subset, which is combined with the gait time-domain features to obtain a fused gait feature set; an SVM (support vector machine) then performs gait recognition, using a nonlinear radial-basis kernel function to map the linearly inseparable low-dimensional space to a linearly separable high-dimensional space for recognition and modeling;
Step six: processing is performed in a layered, hierarchical manner. First, jitter noise in the projection process is balanced and filtered out; then layered, hierarchical dimensionality-reduction modeling is performed. Using the output data of the acceleration sensor with median filtering, the motion of the human body is judged hierarchically: whether the body is static or in motion, which parts move, and the motion type. Main features are judged by hierarchical sampling, the influence of key features is comprehensively verified, and sleep features such as turning over, pushing, and getting up are further judged. When modeling, the composite amplitude output by the accelerometer is computed first; if the composite amplitude lies between given upper and lower thresholds the body is judged static, otherwise it is judged to be in motion. The composite amplitude output by the accelerometer is:

a = √(ax² + ay² + az²)
The upper and lower thresholds are respectively th_a,min = 8 m/s² and th_a,max, and the first condition is:

th_a,min < a < th_a,max

If the first condition judges the body static, the second and third conditions are not evaluated; otherwise, if the local variance of the accelerometer output is lower than a given threshold, the body part is judged static, and if not the body part is judged to be moving, the second condition being calculated as:

σ_a² < th_σ,a

where th_σ,a is the given variance threshold. If the second condition judges the body part still, the third condition is not evaluated; otherwise the third condition compares the composite amplitude against the upper threshold th_a,max, after which the motion state is sampled and calculated and the characteristic parameters are extracted;
Step seven: the feature fusion is modeled. The light-level grade obtained by the camera's light sensor, the motion type, and the importance categories set by the user are evaluated to obtain a five-point user rating system for light-sensation quality; a supervised classification algorithm, comparing against the historical optimum data as the supervision factor, is used to establish an adaptive projection-regulation model of subjective feeling and environmental parameters;
Step eight: deep-interaction pattern recognition is established for the projection portrait model of the whole population. Samples are input into a classifier trained on the N sample classes registered in the database, and the input value is judged to belong to one of the classes in (1, N); if it falls outside the range (1, N), a new class N+1 is registered and the classifier is updated again;
Step nine: the computed sub-results are merged. The main process reads the output file of each process in the current time step, stitches the acquired multi-channel video streams into a panoramic video stream carrying a timestamp, merges and restores the results according to a region-decomposition algorithm, and temporarily stores them in ASCII format. When a user wears the virtual-reality terminal, whether the terminal is in a motion state is detected; if so, the video frames to be played are adjusted according to the acceleration so as to provide synchronized video information to the user, displayed in the visual area of the virtual-reality terminal.
Step ten: the sampled person continuously repeats the process from step one to step nine. As the number of samples grows, the SVM classifier adaptively and continuously optimizes itself with each newly input sample. The recognition rate of the SVM classifier is calculated on the cross-validation principle and a fitness evaluation is performed; no termination value is set for the genetic algorithm, the termination condition instead taking the higher result: if the recognition rate of the current training is higher than the existing one, the training parameters are set as the optimal parameters; otherwise selection, crossover, and mutation operations are executed to further optimize the training parameters, realizing the adaptive refinement of the model. Finally an individual, personalized model is formed for the person according to his or her viewing and action habits, and the projection-type speaker of step one then performs intelligent holographic projection for each person.
2. The intelligent 3D projection technique of claim 1, characterized in that: the projection-type speaker in step one consists of an upper part and a lower part, one part being three evenly distributed projectors (each projecting 120 degrees) and the other a panoramic speaker, the speaker being provided with three evenly distributed earphones.
3. The intelligent 3D projection technique of claim 1, characterized in that: the jitter-noise balancing in step six superimposes, according to weight coefficients, the geometric means acquired by the sensor, comprising the three-dimensional acceleration, three-dimensional magnetic field, and three-dimensional angular velocity, to obtain a weighted geometric mean: K = k1·Y1 + k2·Y2 + k3·Y3, where Y1 is the geometric mean of the acceleration, Y2 is the geometric mean of the magnetic field, Y3 is the geometric mean of the angular velocity, and the ki are constant-modulus weighting coefficients.
4. The intelligent 3D projection technique of claim 1, characterized in that: in step six the original motion vector group (F1, F2, …, Fm) of the extracted feature parameters has m smaller than 9, each component being a weighted combination of the nine original measurements. The original vector F1 contains the most information and has the largest variance and is called the first principal component; F2, …, Fm decrease in turn and are called the second, …, mth principal components. The principal-component analysis process can therefore be regarded as the process of determining the weighting coefficients a_ik (i = 1, …, m; k = 1, …, 9).
5. The intelligent 3D projection technique of claim 2, characterized in that: the interior of the speaker comprises an edge-computing module and a communication module and is connected to a cloud central server.
6. The intelligent 3D projection technique of claim 1, characterized in that: the wearable motion-judgment sensors include, but are not limited to, judgment-making sensors driven by a bracelet, watch, belt, shoes, and the like.
7. The intelligent 3D projection technique of claim 5, characterized in that: the central server coordinates the data of the speaker and of the wearable motion-judgment sensor carried by the individual as a whole and makes a comprehensive judgment to perform audio feedback and output.
CN201811002477.8A 2018-08-29 2018-08-29 Intelligent 3D shadow casting technique Pending CN109032361A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811002477.8A CN109032361A (en) 2018-08-29 2018-08-29 Intelligent 3D shadow casting technique

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811002477.8A CN109032361A (en) 2018-08-29 2018-08-29 Intelligent 3D shadow casting technique

Publications (1)

Publication Number Publication Date
CN109032361A (en) 2018-12-18

Family

ID=64625595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811002477.8A Pending CN109032361A (en) 2018-08-29 2018-08-29 Intelligent 3D shadow casting technique

Country Status (1)

Country Link
CN (1) CN109032361A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI724858B (en) * 2020-04-08 2021-04-11 國軍花蓮總醫院 Mixed Reality Evaluation System Based on Gesture Action

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202677083U (en) * 2012-06-01 2013-01-16 中国人民解放军第四军医大学 Sleep and fatigue monitoring type watch apparatus
CN102945079A (en) * 2012-11-16 2013-02-27 武汉大学 Intelligent recognition and control-based stereographic projection system and method
CN103584840A (en) * 2013-11-25 2014-02-19 天津大学 Automatic sleep stage method based on electroencephalogram, heart rate variability and coherence between electroencephalogram and heart rate variability
CN106971059A (en) * 2017-03-01 2017-07-21 福州云开智能科技有限公司 A kind of wearable device based on the adaptive health monitoring of neutral net
CN107015646A (en) * 2017-03-28 2017-08-04 北京犀牛数字互动科技有限公司 The recognition methods of motion state and device
CN107102728A (en) * 2017-03-28 2017-08-29 北京犀牛数字互动科技有限公司 Display methods and system based on virtual reality technology
CN107205140A (en) * 2017-07-12 2017-09-26 赵政宇 A kind of panoramic video segmentation projecting method and apply its system
CN107465850A (en) * 2016-06-03 2017-12-12 王建文 Virtual reality system
CN107753026A (en) * 2017-09-28 2018-03-06 古琳达姬(厦门)股份有限公司 For the intelligent shoe self-adaptive monitoring method of backbone leg health
CN108107578A (en) * 2017-12-14 2018-06-01 腾讯科技(深圳)有限公司 View angle regulating method, device, computing device and the storage medium of virtual reality



Similar Documents

Publication Publication Date Title
US10701506B2 (en) Personalized head related transfer function (HRTF) based on video capture
JP7231676B2 (en) Eyelid Shape Estimation Using Eye Pose Measurement
US11238568B2 (en) Method and system for reconstructing obstructed face portions for virtual reality environment
JP7181928B2 (en) A Gradient Normalization System and Method for Adaptive Loss Balancing in Deep Multitasking Networks
US11747898B2 (en) Method and apparatus with gaze estimation
US20180032135A1 (en) Method for gaze tracking
CN108596106B (en) Visual fatigue recognition method and device based on VR equipment and VR equipment
WO2023071964A1 (en) Data processing method and apparatus, and electronic device and computer-readable storage medium
CN107102728A (en) Display methods and system based on virtual reality technology
CN108885799A (en) Information processing equipment, information processing system and information processing method
CN114258687A (en) Determining spatialized virtual acoustic scenes from traditional audiovisual media
CN107015646A (en) The recognition methods of motion state and device
CN114631127A (en) Synthesis of small samples of speaking heads
CN108932060A (en) Gesture three-dimensional interaction shadow casting technique
CN111643098A (en) Gait recognition and emotion perception method and system based on intelligent acoustic equipment
CN113822136A (en) Video material image selection method, device, equipment and storage medium
CN114365510A (en) Selecting spatial positioning for audio personalization
CN114223215A (en) Dynamic customization of head-related transfer functions for rendering audio content
US11281293B1 (en) Systems and methods for improving handstate representation model estimates
JP2022546176A (en) Personalized Equalization of Audio Output Using Identified Features of User's Ear
WO2019094114A1 (en) Personalized head related transfer function (hrtf) based on video capture
CN113705302A (en) Training method and device for image generation model, computer equipment and storage medium
CN109086690A (en) Image characteristic extracting method, target identification method and corresponding intrument
CN116560512A (en) Virtual digital human interaction method, electronic equipment, system and storage medium
CN108769640A (en) Automatically adjust visual angle shadow casting technique

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20181218)