CN109285598A - Mobile phone projection technology with color mood regulation - Google Patents
- Publication number
- CN109285598A (application number CN201810998874.9A)
- Authority
- CN
- China
- Prior art keywords
- projection
- force
- acting force
- curve
- mobile phone
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/70—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72409—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
- H04M1/72412—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- H04N9/3182—Colour adjustment, e.g. white balance, shading or gamut
Abstract
The present invention relates to a mobile phone projection technology with color mood regulation, in which a smartphone projects light into the user's surroundings to adjust the lighting of the living environment around the user. The beneficial effects of the invention are: the user's motion and heart rate are collected by a smart bracelet, and colored light is projected by an intelligent projection phone to adaptively improve the study and work environment. Combining light of different wavelengths, the season and the user's personal sign data, the color mood regulation model most suitable for each person is predicted from the corresponding portrait model within the whole population. An individual mood-effect evaluation model, established from the induced changes in heart rate and motion, is used for supervised learning, thereby establishing the color mood phone projection model best suited to the individual, which is continuously corrected by a genetic algorithm.
Description
Technical Field
The invention relates to a mobile phone projection technology with color emotion regulation.
Background
In daily life, people find that light color correlates with emotion: in haze, thinking is sluggish, reactions are slow, and depression comes easily, while a sunny day easily excites. Warm colored light (e.g., pink and light purple) gives the whole space a warm, relaxing atmosphere. In summer, blue and green light makes people feel cool; in winter, red makes people feel warm. A Munich psychologist spent three years exploring the influence of environmental colors on children's learning and intelligence, and found that children became agile and creative as soon as they entered bright-tone environments of light blue, yellow-green, orange and similar colors, with an average intelligence quotient 12 points above the normal level, but became dull after entering dark-tone environments such as black and brown, with intelligence test results below the normal level. This is in fact related to melatonin secretion by the pineal gland in the human body. Melatonin secretion is controlled by light: with more light in the daytime, melatonin secretion is inhibited and people appear excited; at night, in darkness, the pineal gland secretes more melatonin, which brings on low spirits. In autumn and winter the days are short and the nights long, sunshine decreases and the weather turns cool; melatonin secreted by the pineal gland increases markedly, secretion of mood-stimulating hormones such as thyroxine and adrenaline decreases, the activity of human cells drops and metabolism slows, so people's mood becomes depressed and low, inducing autumn-winter sadness. How to adjust the light environment in which people live has therefore always been a direction of exploration.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a mobile phone projection technology with color emotion adjustment that solves the above technical problems.
The invention achieves this purpose through the following technical scheme.
The mobile phone projection technology with color emotion adjustment is carried out in the following steps:
Step one: the user wears a smart bracelet; the projection device, controlled by the smartphone, projects a colored light beam onto the area where the sampled person works or lives; the color of the projected beam is then adjusted via the color-spectrum control module on the projection device, while the smart bracelet collects the user's motion-state data and heart-rate data;
Step two: a primary static feature tag library is established from the operator's age, gender, educational background and occupation, representing the habitual attribute features of the corresponding portrait group within the whole population, and a dynamic feature tag library is established from the acceleration data, heart-rate data and heart-rate variability collected in step one as dynamic features;
Step three: a wavelet transform threshold method is selected to denoise electromagnetic interference (namely high-frequency noise) generated by the circuit due to the twitching of a user in the acquisition process of the step one;
Step four: time-domain and frequency-domain features are extracted from the three directions of force (left-right, front-back and vertical) by wavelet packet decomposition and difference algorithms, and recognized by an SVM. Time-domain feature extraction means detecting the peak and valley points of the front-back and vertical curves by a first-order difference method on the vertical force curve of the denoised acting force, as the key points of the acting-force curve; the valley point of the vertical curve serves as the reference point of the acting-force curve. The features used are the force values at the key points of the vertical curve and their time phases, the rate of change of force and the impulse between adjacent key points, the force values at the corresponding key points on the front-back curve, the driving impulse (the integral over time of the force above the zero line on the force-time curve) and the braking impulse (the integral over time of the force below the zero line). Frequency-domain feature extraction aligns the acting force automatically by the reference point on the vertical force curve, to improve the contrast and classification capability of the frequency-domain features: the dimension of the acting force is normalized to the same value by a linear interpolation algorithm; a first-order difference algorithm searches for the valley points on the vertical force curve of the normalized acting force, which are taken as reference points; the left-right, front-back and vertical curve waveforms of the acting force are aligned by linear interpolation. The valley point of the vertical force curve in the denoised acting force is detected by the first-order difference algorithm and used as the reference point of the acting-force curve; with this reference point, the acting force is waveform-aligned by linear interpolation to obtain the aligned acting force, from which features are then extracted by an L-layer wavelet packet decomposition algorithm;
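The first-order difference detection of peak and valley key points described in step four can be sketched as below. The patent gives no smoothing or tie-break rules, so a plain sign-change test on the differences is assumed:

```python
import numpy as np

def key_points(curve):
    """Locate peak and valley indices of a force curve by first-order
    differences, as in the key-point detection of step four (sketch)."""
    d = np.diff(curve)
    peaks, valleys = [], []
    for i in range(1, len(d)):
        if d[i - 1] > 0 and d[i] <= 0:    # rising then falling -> peak
            peaks.append(i)
        elif d[i - 1] < 0 and d[i] >= 0:  # falling then rising -> valley
            valleys.append(i)
    return peaks, valleys

# Example: one period of a sine has one peak (near 1/4 of the span)
# and one valley (near 3/4 of the span).
t = np.linspace(0.0, 2.0 * np.pi, 100)
peaks, valleys = key_points(np.sin(t))
```

The detected valley index then serves as the reference point for the waveform alignment described in the text.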
Step five: a minimal optimal wavelet packet set is selected from the wavelet packets of the action frequency-domain features extracted in step four by the fuzzy C-means method; from the selected set, the minimal optimal wavelet packet decomposition coefficients are chosen by fuzzy-membership ranking with the fuzzy C-means method, yielding a minimal optimal action frequency-domain feature subset, which is combined with the action time-domain features into a fused action feature set. An SVM (support vector machine) then performs action recognition, using a nonlinear radial basis function to map the linearly inseparable low-dimensional space to a linearly separable high-dimensional space. A classifier is trained and then used to recognize action samples. Supposing n types of personal action samples are registered in the action database, samples are input into the classifier for training and judged to belong to one of types 1 to n according to the input value; if the input value exceeds the range 1 to n, a new type n+1 is registered and the classifier is updated again;
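The register-new-type logic of step five can be sketched as follows. A nearest-centroid classifier stands in for the RBF-kernel SVM described in the patent, and the distance threshold is an assumed parameter, not a value from the source:

```python
import numpy as np

class ActionRegistry:
    """Sketch of step five's class registration: a sample within range of
    one of the n registered action types is assigned to it; a sample out
    of range registers a new type n+1 and the classifier is updated."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.centroids = []          # one centroid per registered action type

    def classify_or_register(self, sample):
        sample = np.asarray(sample, dtype=float)
        if self.centroids:
            dists = [np.linalg.norm(sample - c) for c in self.centroids]
            best = int(np.argmin(dists))
            if dists[best] <= self.threshold:
                return best + 1      # types are numbered 1..n
        # Out of range of the n known types: register type n+1.
        self.centroids.append(sample)
        return len(self.centroids)

reg = ActionRegistry(threshold=1.0)
a = reg.classify_or_register([0.0, 0.0])   # first sample -> new type 1
b = reg.classify_or_register([0.1, 0.1])   # near type 1 -> type 1
c = reg.classify_or_register([5.0, 5.0])   # far away -> new type 2
```

In the patent the "updating" step would retrain the SVM on the enlarged label set; here it reduces to appending a centroid, which keeps the sketch short while preserving the control flow.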
Step six: noise generated by the smartphone during projection due to jitter and electromagnetic interference is removed, and layered, staged dimensionality-reduction modeling is used. The motion type of the human body is judged from the output data of the acceleration sensor with median filtering, judging hierarchically whether the body is static or moving, which parts move and in what manner; main features are judged by staged sampling, the influence of key features is comprehensively verified, and sleep features such as turning over, pushing and getting up are further judged. In modeling, the composite amplitude output by the accelerometer is computed first as
a = sqrt(a_x^2 + a_y^2 + a_z^2),
and the body is judged static when this composite amplitude lies between given upper and lower thresholds; otherwise the person is judged to be moving. The lower and upper thresholds are th_a,min = 8 m/s^2 and th_a,max respectively, and the first condition is
th_a,min <= a <= th_a,max.
If the first condition judges the body static, the second and third conditions are not evaluated. The second condition: if the local variance of the accelerometer output is below a given threshold th_sigma,a, i.e.
var(a) < th_sigma,a,
the body part is judged static; otherwise the body part is judged to move. If the second condition judges the body part still, the third condition is not evaluated; otherwise the third condition, defined analogously with respect to th_a,max, is computed, the motion state is sampled and calculated, and the characteristic parameters and key features are extracted;
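The first two conditions of step six can be sketched as below. Only the lower bound th_a,min = 8 m/s^2 appears legibly in the source; the upper bound and the variance threshold used here are assumed values for illustration:

```python
import numpy as np

TH_A_MIN = 8.0    # m/s^2, from the source
TH_A_MAX = 12.0   # m/s^2, assumed (value garbled in the source)
TH_VAR = 0.05     # assumed local-variance threshold

def composite_amplitude(ax, ay, az):
    """Composite amplitude of the three accelerometer axes (step six)."""
    return np.sqrt(ax ** 2 + ay ** 2 + az ** 2)

def judge_state(ax, ay, az):
    """First condition: whole-body static when the composite amplitude stays
    inside the threshold band. Second condition: a body part is static when
    the local variance of the output is below the variance threshold."""
    a = composite_amplitude(np.asarray(ax), np.asarray(ay), np.asarray(az))
    body_static = bool(np.all((a >= TH_A_MIN) & (a <= TH_A_MAX)))
    part_static = bool(np.var(a) < TH_VAR)
    return body_static, part_static

# A wrist at rest reads roughly gravity on one axis with negligible jitter,
# while a moving wrist sweeps a wide amplitude range.
rest = judge_state(np.full(50, 0.1), np.full(50, 0.1), np.full(50, 9.8))
moving = judge_state(np.full(50, 0.1), np.full(50, 0.1),
                     np.linspace(2.0, 20.0, 50))
```

The hierarchical scheme in the text would only evaluate the variance (second condition) when the first condition reports motion; both are computed here for compactness.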
Step seven: the feature fusion is modeled; the light-value grade obtained from the camera's light sensor, the motion type and the importance type set by the user are evaluated to obtain a 5-point user evaluation system of light-sensation quality, and an adaptive projection regulation model linking subjective feeling and environmental parameters is established using a supervised classification algorithm, with comparison against the historical optimum as the supervision factor;
Step eight: deep interactive pattern recognition of the projection portrait model corresponding to the whole population is established. The dimension-reduced acceleration and heart-rate-variability values are classified; according to the N types of samples registered in the database, samples are input into the classifier for training and judged to belong to a type in (1, N); if a sample exceeds the range (1, N), a new type N+1 is registered and the classifier is updated again. A self-adaptively refined recognition model is built from the acceleration signal vector model SVMA and the angular-velocity signal vector model SVMW, subdividing the motion state and the working and living states to obtain motion-subdivision links. Environmental parameters and sign-recognition life-quality scores are input to a supervised classification algorithm, with the environmental parameters as the input layer and the life-quality scores as the output layer. By comparison with the model formed by the previous environmental-parameter input (the environmental parameters of the historically optimal living state), the quality of the individual's living state serves as the training supervision factor (better = 1, worse = 0), and the working signal is propagated forward;
Step nine: sampling is repeated continuously. As the number of sampled samples grows, the SVM classifier adaptively optimizes and refines itself with each newly input sample. The recognition rate of the SVM classifier is calculated by cross-validation and a fitness evaluation is performed; no termination value is set for the genetic algorithm, and the termination condition is "better replaces current": if the training recognition rate is higher than that of the existing SVM classifier, the new parameters are set as the optimal parameters; otherwise the parameters are further optimized by selection, crossover, mutation and similar operations. The interactive projection process among the operator's individual characteristic information, the environment and the projection equipment is thus continuously improved, the smartphone projection adaptively refines the user's personalized model, and the smartphone provides a more comfortable projection.
In this embodiment, the smart bracelet of step one is provided with a heart-rate detection device, an acceleration sensing unit and a communication module.
In this embodiment, the sampling calculation formula of the motion state in step six is v = sqrt(a^2 + b^2 + c^2), where a, b and c are the user's acceleration/angular-velocity values in the three directions.
In this embodiment, for the characteristic parameters extracted in step six, the number of components m of the original motion vector set (F1, F2, ..., Fm) is smaller than 9, and the extraction is F_i = sum over k of a_ik * X_k (k = 1, ..., 9). The component F1 contains the most information and has the largest variance, and is called the first principal component; F2, ..., Fm decrease in turn and are called the second through m-th principal components. The principal component analysis process can therefore be regarded as the process of determining the weighting coefficients a_ik (i = 1, ..., m; k = 1, ..., 9).
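The principal-component extraction above can be sketched with an eigendecomposition of the feature covariance matrix; the 9-feature layout follows the text, while the sample data below is an illustrative assumption:

```python
import numpy as np

def principal_components(X, m):
    """Project 9 feature parameters onto m components F1..Fm of decreasing
    variance, F = A X; the rows of A are the weighting coefficients a_ik."""
    Xc = X - X.mean(axis=0)                    # center the features
    cov = np.cov(Xc, rowvar=False)             # 9 x 9 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:m]      # keep the m largest
    A = eigvecs[:, order].T                    # extraction matrix (m x 9)
    F = Xc @ A.T                               # component scores
    return F, A, eigvals[order]

# Synthetic data: one of the nine features is given dominant variance,
# so it drives the first principal component.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))
X[:, 0] *= 5.0
F, A, variances = principal_components(X, m=3)
```

F1 then carries the largest variance and F2, F3 decrease in turn, matching the ordering described in the embodiment.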
In this embodiment, the smartphone of step one is connected to the projection device through Bluetooth.
In this embodiment, the color-spectrum control module of step one adjusts the frequency of the generated light beam to produce different colors.
In this embodiment, the smart bracelet of step one is provided with an acceleration acquisition unit, which adopts a MEMS sensor whose key part is a middle capacitor plate on a cantilever structure. When the speed change or acceleration reaches a sufficient value, the inertial force on the middle plate exceeds the force fixing or supporting it and the plate moves; the distance between the middle plate and the upper plate changes, and the capacitances to the upper and lower plates change accordingly.
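The capacitance change described above follows the parallel-plate model: an inertial force F = m*a deflects the middle plate by d = F/k against the cantilever stiffness k, so the capacitance C = eps*A/(gap -/+ d) to the upper and lower plates shifts in opposite directions. All numeric values below are illustrative assumptions, not values from the patent:

```python
EPS0 = 8.854e-12       # vacuum permittivity, F/m
AREA = 1e-6            # plate area, m^2 (assumed)
GAP = 2e-6             # rest gap, m (assumed)
MASS = 1e-9            # proof mass, kg (assumed)
STIFF = 1.0            # cantilever stiffness, N/m (assumed)

def capacitance_pair(accel):
    """Return (upper, lower) plate capacitances for a given acceleration."""
    d = MASS * accel / STIFF             # plate displacement under inertia
    upper = EPS0 * AREA / (GAP - d)      # gap to upper plate shrinks
    lower = EPS0 * AREA / (GAP + d)      # gap to lower plate grows
    return upper, lower

c_rest = capacitance_pair(0.0)
c_move = capacitance_pair(9.8)           # roughly 1 g of acceleration
```

Reading the two capacitances differentially doubles the sensitivity and cancels common-mode drift, which is why MEMS accelerometers use the three-plate arrangement the embodiment describes.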
The invention has the beneficial effects that:
the intelligent mobile phone intelligent learning system collects the action and the heart rate of a user through the intelligent bracelet, performs color light projection through the intelligent projection mobile phone, adaptively improves learning and working environments, combines the conditions of light with different wavelengths, seasons and personal physical signs, predicts the most suitable color emotion adjusting model for each person through the corresponding portrait model in the whole crowd, performs supervised learning according to the individual emotion effect evaluation model established by the induced heart rate and action change, thereby establishing the most suitable color emotion mobile phone projection model for the individual, and continuously corrects the model according to a genetic algorithm.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings.
the mobile phone projection technology with the function of adjusting the color emotion is sequentially carried out according to the following steps:
the method comprises the following steps: a user wears the smart bracelet, then the projection device is controlled by the smart phone to project the colored light beam to the area where the sampled person works or lives, then the color of the projected light beam is adjusted by adjusting the chromatographic control module on the projection device, and the action state data and the heart rate data of the user are collected by the smart bracelet;
step two: and (3) establishing a primary static feature tag library according to the age, the gender, the academic calendar and the occupation of an operator, representing the habit attribute features of the corresponding portrait population in the whole population, and establishing an earphone dynamic feature tag library according to the acceleration data, the heart rate data and the heart rate variability rate which are acquired in the step one and serve as dynamic features.
Step three: a wavelet transform threshold method is selected to denoise electromagnetic interference (namely high-frequency noise) generated by the circuit due to the twitching of a user in the acquisition process of the step one;
step four: extracting frequency domain time domain characteristics from three directions (left and right, front and back and vertical) of pressure respectively by wavelet packet decomposition and difference algorithm, and identifying by SVM, wherein the time domain characteristic extraction refers to detecting the peak point and valley point of the front and back and vertical curves by a first order difference method for the vertical force curve in the denoised acting force as the key point of the acting force curve, using the valley point of the vertical direction curve as the reference point of the acting force curve, and using the force value of the key point of the vertical direction curve and the time phase of the force value, the acting force change rate and impulse of the adjacent key points and the force value at the corresponding key point on the front and back direction curve, driving impulse (integral of force above 0 point and time on the force-time curve) and braking impulse (integral of force below 0 point and time on the force-time curve), and the frequency domain characteristic extraction is to align the acting force automatically according to the reference point on the vertical force curve, to improve frequency domain feature contrast and classification capability: normalizing the dimension of the acting force to the same value by using a linear interpolation algorithm, searching valley points on a force curve in the vertical direction of the normalized acting force by using a first-order difference algorithm, taking the valley points as reference points for reference, and aligning the left and right, front and back and vertical direction curve waveforms in the acting force by using a linear interpolation method; detecting a valley point in the vertical direction of the vertical force curve in the denoised acting force by using a first-order difference algorithm, and using the valley point as a reference point of the acting force curve; using the 
reference point as a reference, and carrying out waveform alignment on the acting force by using a linear interpolation method to obtain the aligned acting force; extracting the acting force by using an L-layer wavelet packet decomposition algorithm;
step five: and selecting a minimum optimal wavelet packet set from a plurality of wavelet packets of the action frequency domain characteristics extracted in the fourth step according to a fuzzy C mean method, selecting a minimum optimal wavelet packet decomposition coefficient from the selected set based on fuzzy membership ranking by using the fuzzy C mean method to obtain a minimum optimal action frequency domain characteristic subset, combining the minimum optimal action frequency domain characteristic subset with the action time domain characteristics to obtain a fused action characteristic set, then adopting an SVM (support vector machine) to carry out action recognition, and adopting a nonlinear mapping radial basis function to map a linear inseparable low-dimensional space to a linear separable high-dimensional space. Training a classifier, and then identifying the motion sample by the classifier. Supposing that n types of personal action samples are registered in the action database, inputting the samples into a classifier for training, judging which type is 1-n according to an input value, if the input value exceeds the range of 1-n, newly registering a type n +1, and then updating the classifier again;
step six: removing noise generated by the smart phone due to factors of jitter and electromagnetic interference in the projection process, using layered and graded dimensionality reduction modeling, judging the motion type of a human body by using output data of the acceleration sensor and using median filtering, judging whether the human body moves statically or not, moving parts and types hierarchically, judging main characteristics by sampling in a graded mode, comprehensively verifying the influence of key characteristics, further judging the characteristics of sleep such as turning over, pushing, getting up and the like, and when modeling, firstly outputting a synthesized amplitude value through an accelerometer, and judging that the human body is static when the synthesized amplitude value is between given upper and lower thresholds; otherwise, judging the motion of the person, wherein the output composite amplitude of the accelerometer is as follows:
a = √(ax² + ay² + az²)
the upper and lower thresholds are respectively th_a,min = 8 m/s and th_a,max (value given in the original formula image). The first condition is
th_a,min < a < th_a,max  (body judged static)
If the first condition judges the body static, the second and third conditions are not evaluated. If the local variance of the accelerometer output is lower than a given threshold, the body part is judged static; otherwise the body part is judged to be moving. The second condition is
σa < th_σa  (body part judged still)
where th_σa is the local-variance threshold. If the second condition judges the body part still, the third condition is not evaluated; otherwise the third condition (its formula, involving th_a,max, is given in the original formula image) is evaluated. The motion state is then sampled and calculated, and characteristic parameters and key features are extracted;
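The hierarchical static/motion decision of step six can be sketched as follows. Only th_a,min = 8 comes from the text; the upper bound and variance threshold here are assumed placeholder values.

```python
# Minimal sketch of the step-six hierarchy: first the composite-amplitude
# bounds decide whole-body static vs. moving, then the local variance decides
# whether a body part is still. TH_A_MAX and TH_SIGMA are assumptions.
import statistics

TH_A_MIN, TH_A_MAX = 8.0, 12.0   # composite-amplitude bounds (TH_A_MAX assumed)
TH_SIGMA = 0.5                   # local-variance threshold (assumed)

def composite_amplitude(ax, ay, az):
    return (ax**2 + ay**2 + az**2) ** 0.5

def classify(window):
    """window: list of (ax, ay, az) samples. Returns 'static' or 'moving'."""
    amps = [composite_amplitude(*s) for s in window]
    # First condition: mean amplitude between the thresholds -> static,
    # and the remaining conditions are not evaluated.
    if TH_A_MIN < statistics.mean(amps) < TH_A_MAX:
        return "static"
    # Second condition: low local variance -> body part still.
    if statistics.variance(amps) < TH_SIGMA:
        return "static"
    return "moving"

still = [(5.6, 5.6, 5.6)] * 10           # amplitude ≈ 9.7, inside the bounds
print(classify(still))                    # -> static
```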
step seven: modeling the feature fusion: the light value grade obtained by the camera's light sensor, the motion type, and the importance type set by the user are evaluated to form a 5-point user rating system for light-sensation quality, and an adaptive projection regulation model linking subjective feeling with environment parameters is established using a supervised classification algorithm, with comparison against historical optimal data as the supervision factor;
step eight: establishing deep interaction pattern recognition for a projection portrait model corresponding to the whole population: the dimensionality-reduced acceleration and heart-rate-variability values are classified; with N types of samples registered in the database, samples are input into the classifier for training and each is assigned to one of the types 1 to N; if a sample falls outside the range (1, N), a new type N+1 is registered and the classifier is updated again. An adaptively refined recognition model is established through the acceleration signal vector model SVMA and the angular velocity signal vector model SVMW, and the motion state and the working/life state are subdivided to obtain motion subdivision links. Environment parameters and sign-recognition-type life-quality scores are fed to a supervised classification algorithm, with the environment parameters as the input layer and the life-quality scores as the output layer. By comparison with the model formed by the previous environment-parameter input (the environment parameters of the historical optimal living state), the quality of the individual living state is used as the training supervision factor (better = 1, worse = 0), and the working signal is propagated in the forward direction.
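The supervision scheme of step eight, environment parameters in and a binary living-state quality label out, can be sketched with a simple supervised classifier. The feature names, data and labelling rule below are illustrative assumptions, not taken from the patent.

```python
# Sketch of step eight's supervision: environment parameters form the input
# layer and a binary quality label (better = 1, worse = 0, obtained by
# comparison with a historical-optimum proxy) is the training target.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# columns: [light level, noise level, temperature] (assumed environment params)
env = rng.uniform(0, 1, size=(80, 3))
# label: 1 if this state beats the historical-optimum proxy (assumed rule)
label = (env[:, 0] * 0.7 + env[:, 2] * 0.3 > 0.5).astype(int)

model = LogisticRegression().fit(env, label)
print(model.predict(env[:5]))
```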
Step nine: the sampling is repeated continuously. As the number of sampled examples grows, the SVM classifier is adaptively optimized and refined with each newly input sample. The recognition rate of the SVM classifier is calculated by cross-validation and used for fitness evaluation; no fixed termination value is set for the genetic algorithm, and the termination condition is simply "keep the better": if the recognition rate after training is higher than that of the existing SVM classifier, the new parameters are set as the optimal parameters; otherwise the parameters are further optimized through selection, crossover and mutation operations. The interactive projection process among the operator's individual feature information, the environment and the projection equipment is thereby continuously improved, the smartphone projection adaptively refines the user's personalized model, and the smartphone provides a more comfortable projection.
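The step-nine loop, a genetic-style search over SVM parameters scored by cross-validation and accepting a candidate only when it beats the incumbent, can be sketched as follows. Population size, mutation scale and the toy data are assumptions; selection and crossover are collapsed into a single mutation step for brevity.

```python
# Sketch of step nine: evolve SVM hyperparameters, using cross-validated
# recognition rate as the fitness, and keep a candidate only if it beats
# the existing classifier (the "keep the better" termination rule).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X, y = make_classification(n_samples=120, n_features=6, random_state=0)

def fitness(C, gamma):
    # recognition rate by cross-validation (the GA fitness evaluation)
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

best = {"C": 1.0, "gamma": 0.1}
best_fit = fitness(**best)
for _ in range(5):                       # a few generations
    # mutation: perturb the incumbent parameters (selection/crossover omitted)
    cand = {"C": best["C"] * rng.lognormal(0, 0.5),
            "gamma": best["gamma"] * rng.lognormal(0, 0.5)}
    f = fitness(**cand)
    if f > best_fit:                     # keep only if it beats the incumbent
        best, best_fit = cand, f
print(best_fit)
```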
In this embodiment, a heart rate detection device, an acceleration sensing unit and a communication module are provided in the smart bracelet of step one.
In this embodiment, the sampling calculation formula of the motion state in step six is v = √(a² + b² + c²), where a, b and c are respectively the acceleration/angular-velocity values in the user's three directions.
In this embodiment, the original motion vector group (F1, F2, …, Fm) of the feature parameters extracted in step six has m smaller than 9 and is obtained from the nine raw signals through an extraction matrix of weighting coefficients a_ik (the matrix itself is given in the original formula image). The component F1 contains the most information and has the largest variance and is called the first principal component; F2, …, Fm decrease in turn and are called the second, …, m-th principal components. The principal component analysis process can therefore be regarded as the process of determining the weighting coefficients a_ik (i = 1, …, m; k = 1, …, 9).
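The principal-component extraction described above can be sketched as follows: nine raw features are reduced to m < 9 components, and the PCA loading matrix plays the role of the weighting coefficients a_ik. The random data and m = 4 are illustrative assumptions.

```python
# Sketch of the embodiment's PCA step: reduce 9 raw acceleration/angular-
# velocity features to m < 9 principal components; F1 has the largest
# variance and F2..Fm decrease in turn.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
raw = rng.normal(size=(50, 9))           # 50 samples of 9 raw features

pca = PCA(n_components=4)                # m = 4 < 9 (assumed)
F = pca.fit_transform(raw)               # each row gives (F1, F2, F3, F4)

# F1 carries the largest variance, F2..Fm decrease in turn:
var = pca.explained_variance_
print(all(var[i] >= var[i + 1] for i in range(len(var) - 1)))   # -> True
```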
In this embodiment, the smart phone in step one is connected to the projection device through Bluetooth.
In this embodiment, the color spectrum control module in step one generates different colors by adjusting the frequency of the generated light beam.
In this embodiment, an acceleration acquisition unit is arranged in the smart bracelet of step one. The acceleration acquisition unit adopts a MEMS device whose key part is a middle capacitor plate on a cantilever structure: when the speed change or acceleration becomes large enough, the inertial force applied to the middle capacitor plate exceeds the force fixing or supporting it, the middle plate moves, the distance between the middle plate and the upper capacitor plate changes, and the upper and lower capacitances change accordingly.
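The differential-capacitance principle described above can be made concrete with a small worked example using the parallel-plate formula C = ε·A / gap: a displacement x of the middle plate turns the two gaps into d − x and d + x, so the two capacitances change in opposite directions. The plate area and gap values are illustrative assumptions.

```python
# Worked example of the MEMS accelerometer's sensing principle: the middle
# plate's displacement changes the two gaps oppositely, so the upper and
# lower capacitances diverge. Geometry values are assumptions.
EPS0 = 8.854e-12          # vacuum permittivity, F/m
A = 1e-6                  # plate area, m^2 (assumed)
d = 2e-6                  # rest gap, m (assumed)

def caps(x):
    """Capacitances to the upper and lower plates for displacement x."""
    return EPS0 * A / (d - x), EPS0 * A / (d + x)

c_up, c_down = caps(0.1e-6)              # inertial force deflects the plate
print(c_up > c_down)                     # -> True: capacitances change oppositely
```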
Finally, it should be noted that although the present invention has been described in detail with reference to the above embodiments, those skilled in the art should understand that modifications and equivalent substitutions may be made without departing from the spirit and scope of the invention, which is defined by the appended claims.
Claims (7)
1. A mobile phone projection technology with color emotion adjustment, characterized in that the method comprises the following steps:
step one: a user wears a smart bracelet; the projection device is then controlled by the smartphone to project a colored light beam onto the area where the sampled person works or lives; the color of the projected beam is adjusted through the color spectrum control module on the projection device; and the user's action-state data and heart-rate data are collected by the smart bracelet;
step two: establishing a primary static feature tag library from the operator's age, gender, education background and occupation, representing the habitual attribute features of the corresponding portrait population within the whole population, and establishing a dynamic feature tag library using the acceleration data, heart-rate data and heart-rate variability acquired in step one as dynamic features;
Step three: a wavelet-transform threshold method is selected to denoise the electromagnetic interference (i.e. high-frequency noise) produced in the circuit by user movement during the acquisition in step one;
step four: extracting frequency-domain and time-domain features from the three directions of the acting force (left-right, front-back and vertical) by wavelet packet decomposition and a difference algorithm, and recognizing them with an SVM.
Time-domain feature extraction: for the vertical force curve in the denoised acting force, the peak and valley points of the front-back and vertical curves are detected by a first-order difference method as the key points of the acting force curve, and the valley point of the vertical curve is used as the reference point of the acting force curve. The features are the force values at the key points of the vertical curve and their time phases, the force change rate and impulse between adjacent key points, the force values at the corresponding key points on the front-back curve, the driving impulse (the integral of force over time above the zero line of the force-time curve) and the braking impulse (the integral of force over time below the zero line).
Frequency-domain feature extraction: the acting forces are automatically aligned according to the reference point on the vertical force curve to improve frequency-domain feature contrast and classification capability. The length of each acting force is normalized to the same value by a linear interpolation algorithm; the valley point on the vertical force curve of the normalized acting force is found by a first-order difference algorithm and used as the reference point; and the left-right, front-back and vertical curve waveforms of the acting force are aligned by linear interpolation. The valley point in the vertical direction of the denoised vertical force curve is detected with the first-order difference algorithm and used as the reference point of the acting force curve;
using the reference point as reference, the acting force waveforms are aligned by linear interpolation to obtain the aligned acting force, which is then decomposed with an L-layer wavelet packet decomposition algorithm;
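The key-point detection and normalization in step four can be sketched as follows: valley points are found where the first-order difference changes sign from negative to non-negative, and the curve is resampled to a fixed length by linear interpolation. The toy force samples and the target length are assumptions.

```python
# Sketch of step four: first-order-difference valley detection and
# linear-interpolation length normalization of a force curve.
import numpy as np

force = np.array([5.0, 3.0, 2.0, 4.0, 6.0, 5.0, 3.5, 4.5])   # toy vertical force

diff = np.diff(force)                     # first-order difference
# valley: difference goes from negative to non-negative
valleys = [i + 1 for i in range(len(diff) - 1) if diff[i] < 0 <= diff[i + 1]]
print(valleys)                            # -> [2, 6]

# normalize the curve length to a common value by linear interpolation
target_len = 16
x_old = np.linspace(0.0, 1.0, len(force))
x_new = np.linspace(0.0, 1.0, target_len)
normalized = np.interp(x_new, x_old, force)
print(len(normalized))                    # -> 16
```

The detected valley (here the lowest point of the toy curve) would serve as the reference point for waveform alignment across the three force directions.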
step five: selecting a minimum optimal wavelet packet set from the wavelet packets of action frequency-domain features extracted in step four using the fuzzy C-means method, then selecting minimum optimal wavelet packet decomposition coefficients from that set by fuzzy-membership ranking with the fuzzy C-means method to obtain a minimum optimal action frequency-domain feature subset; combining this subset with the action time-domain features to obtain a fused action feature set; then performing action recognition with a support vector machine (SVM), in which a nonlinear radial basis function kernel maps the linearly inseparable low-dimensional space to a linearly separable high-dimensional space. A classifier is trained and then used to identify motion samples. Assuming n types of personal action samples are registered in the action database, the samples are input into the classifier for training and each input is assigned to one of types 1 to n; if an input falls outside the range 1 to n, a new type n+1 is registered and the classifier is updated again;
step six: removing the noise generated by the smartphone during projection due to jitter and electromagnetic interference, and using layered, staged dimensionality-reduction modeling: the motion type of the human body is judged from the output data of the acceleration sensor with median filtering. Whether the body is static or moving, and the moving parts and motion types, are judged layer by layer; the main features are judged by staged sampling, the influence of the key features is comprehensively verified, and sleep-related actions such as turning over, pushing and getting up are further distinguished. When modeling, the composite amplitude output by the accelerometer is computed first: the body is judged static when the composite amplitude lies between given upper and lower thresholds, and otherwise judged to be in motion. The composite amplitude output by the accelerometer is:
a = √(ax² + ay² + az²)
the upper and lower thresholds are respectively th_a,min = 8 m/s and th_a,max (value given in the original formula image). The first condition is
th_a,min < a < th_a,max  (body judged static)
If the first condition judges the body static, the second and third conditions are not evaluated. If the local variance of the accelerometer output is lower than a given threshold, the body part is judged static; otherwise the body part is judged to be moving. The second condition is
σa < th_σa  (body part judged still)
where th_σa is the local-variance threshold. If the second condition judges the body part still, the third condition is not evaluated; otherwise the third condition (its formula, involving th_a,max, is given in the original formula image) is evaluated. The motion state is then sampled and calculated, and characteristic parameters and key features are extracted;
step seven: modeling the feature fusion: the light value grade obtained by the camera's light sensor, the motion type, and the importance type set by the user are evaluated to form a 5-point user rating system for light-sensation quality, and an adaptive projection regulation model linking subjective feeling with environment parameters is established using a supervised classification algorithm, with comparison against historical optimal data as the supervision factor;
step eight: establishing deep interaction pattern recognition for a projection portrait model corresponding to the whole population: the dimensionality-reduced acceleration and heart-rate-variability values are classified; with N types of samples registered in the database, samples are input into the classifier for training and each is assigned to one of the types 1 to N; if a sample falls outside the range (1, N), a new type N+1 is registered and the classifier is updated again. An adaptively refined recognition model is established through the acceleration signal vector model SVMA and the angular velocity signal vector model SVMW, and the motion state and the working/life state are subdivided to obtain motion subdivision links. Environment parameters and sign-recognition-type life-quality scores are fed to a supervised classification algorithm, with the environment parameters as the input layer and the life-quality scores as the output layer. By comparison with the model formed by the previous environment-parameter input (the environment parameters of the historical optimal living state), the quality of the individual living state is used as the training supervision factor (better = 1, worse = 0), and the working signal is propagated in the forward direction.
Step nine: the sampling is repeated continuously. As the number of sampled examples grows, the SVM classifier is adaptively optimized and refined with each newly input sample. The recognition rate of the SVM classifier is calculated by cross-validation and used for fitness evaluation; no fixed termination value is set for the genetic algorithm, and the termination condition is simply "keep the better": if the recognition rate after training is higher than that of the existing SVM classifier, the new parameters are set as the optimal parameters; otherwise the parameters are further optimized through selection, crossover and mutation operations. The interactive projection process among the operator's individual feature information, the environment and the projection equipment is thereby continuously improved, the smartphone projection adaptively refines the user's personalized model, and the smartphone provides a more comfortable projection.
2. The mobile phone projection technology with color emotion adjustment as claimed in claim 1, wherein: a heart rate detection device, an acceleration sensing unit and a communication module are arranged in the smart bracelet of step one.
3. The mobile phone projection technology with color emotion adjustment as claimed in claim 1, wherein: the sampling calculation formula of the motion state in step six is v = √(a² + b² + c²), where a, b and c are respectively the acceleration/angular-velocity values in the user's three directions.
4. The mobile phone projection technology with color emotion adjustment as claimed in claim 2, wherein: the original motion vector group (F1, F2, …, Fm) of the feature parameters extracted in step six has m smaller than 9 and is obtained from the nine raw signals through an extraction matrix of weighting coefficients a_ik (the matrix itself is given in the original formula image). The component F1 contains the most information and has the largest variance and is called the first principal component; F2, …, Fm decrease in turn and are called the second, …, m-th principal components. The principal component analysis process can therefore be regarded as the process of determining the weighting coefficients a_ik (i = 1, …, m; k = 1, …, 9).
5. The mobile phone projection technology with color emotion adjustment as claimed in claim 1, wherein: the smart phone in step one is connected with the projection device through Bluetooth.
6. The mobile phone projection technology with color emotion adjustment as claimed in claim 1, wherein: the color spectrum control module in step one produces different colors by adjusting the frequency of the generated light beam.
7. The mobile phone projection technology with color emotion adjustment as claimed in claim 1, wherein: an acceleration acquisition unit is arranged in the smart bracelet of step one; the acceleration acquisition unit adopts a MEMS device whose key part is a middle capacitor plate on a cantilever structure; when the speed change or acceleration becomes large enough, the inertial force on the middle plate exceeds the force fixing or supporting it, the middle plate moves, the distance between the middle plate and the upper capacitor plate changes, and the upper and lower capacitances change accordingly.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810998874.9A CN109285598A (en) | 2018-08-29 | 2018-08-29 | The mobile phone projection technology for having color mood regulation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109285598A true CN109285598A (en) | 2019-01-29 |
Family
ID=65184253
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810998874.9A Pending CN109285598A (en) | 2018-08-29 | 2018-08-29 | The mobile phone projection technology for having color mood regulation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109285598A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111830834A (en) * | 2019-04-15 | 2020-10-27 | 泰州市康平医疗科技有限公司 | Equipment control method based on environment analysis |
CN112860170A (en) * | 2021-02-23 | 2021-05-28 | 深圳市沃特沃德信息有限公司 | Smart watch control method, smart watch, computer device, and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1787454A1 (en) * | 2004-09-08 | 2007-05-23 | Sony Ericsson Mobile Communications AB | Changeable soft cover for mobile devices |
CN103584840A (en) * | 2013-11-25 | 2014-02-19 | 天津大学 | Automatic sleep stage method based on electroencephalogram, heart rate variability and coherence between electroencephalogram and heart rate variability |
CN105652571A (en) * | 2014-11-14 | 2016-06-08 | 中强光电股份有限公司 | Projection device and projection system thereof |
CN106971059A (en) * | 2017-03-01 | 2017-07-21 | 福州云开智能科技有限公司 | A kind of wearable device based on the adaptive health monitoring of neutral net |
CN107102728A (en) * | 2017-03-28 | 2017-08-29 | 北京犀牛数字互动科技有限公司 | Display methods and system based on virtual reality technology |
CN107753026A (en) * | 2017-09-28 | 2018-03-06 | 古琳达姬(厦门)股份有限公司 | For the intelligent shoe self-adaptive monitoring method of backbone leg health |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190129 |