CN109285598A - Mobile phone projection technology with color emotion regulation - Google Patents


Info

Publication number
CN109285598A
Authority
CN
China
Prior art keywords
projection
force
acting force
curve
mobile phone
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN201810998874.9A
Other languages
Chinese (zh)
Inventor
薛爱凤
Current Assignee (the listed assignee may be inaccurate)
Shenzhen Win Win Time Technology Co Ltd
Original Assignee
Shenzhen Win Win Time Technology Co Ltd
Priority date (assumed)
Filing date
Publication date
Application filed by Shenzhen Win Win Time Technology Co Ltd filed Critical Shenzhen Win Win Time Technology Co Ltd
Priority to CN201810998874.9A
Publication of CN109285598A


Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70: ICT specially adapted for therapies or health-improving plans relating to mental therapies, e.g. psychological therapy or autogenous training
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403: User interfaces with means for local support of applications that increase the functionality
    • H04M1/72409: User interfaces with means for local support of applications by interfacing with external accessories
    • H04M1/72412: User interfaces interfacing with external accessories using two-way short-range wireless interfaces
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448: User interfaces with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454: User interfaces adapting the functionality according to context-related or environment-related conditions
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/12: Picture reproducers
    • H04N9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179: Video signal processing therefor
    • H04N9/3182: Colour adjustment, e.g. white balance, shading or gamut

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • Epidemiology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Environmental & Geological Engineering (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Telephone Function (AREA)

Abstract

The present invention relates to a mobile phone projection technology with color emotion regulation: a smartphone projects onto the user's surrounding environment to adjust the light of the user's living environment. The beneficial effects of the present invention are: the invention collects the user's actions and heart rate through a smart bracelet and performs colored light projection through an intelligent projection mobile phone, adaptively improving the study and work environment. Combining light of different wavelengths, the season and the personal sign situation, it predicts the most suitable color emotion regulation model for each person through the corresponding portrait model within the whole population, performs supervised learning with the individual emotion effect evaluation model established from the induced heart rate and action changes, thereby establishes the color emotion mobile phone projection model best suited to the individual, and continuously corrects it according to a genetic algorithm.

Description

Mobile phone projection technology with color emotion adjustment function
Technical Field
The invention relates to a mobile phone projection technology with color emotion regulation.
Background
In daily life, people find that light color has a certain correlation with emotion. In haze, thinking slows, reactions dull, and depression sets in easily; on a sunny day people are easily excited. Warm colored lights (e.g., pink and light purple) give the whole space a warm, relaxing atmosphere. In summer, blue and green light makes people feel cool, while in winter red makes people feel warm. A Munich psychologist spent three years exploring the influence of environmental colors on children's learning and intelligence, and found that children become agile and creative as soon as they enter light blue, yellow-green, orange and other bright-toned environments, with an average intelligence quotient 12 points above the normal level, but become dull after entering black, brown and other dark-toned environments, with intelligence test results below the normal level. This is in fact related to the melatonin secretion of the pineal body in the human body: melatonin secretion is controlled by light. In daytime, with more light, melatonin secretion is inhibited and people appear excited; at night, when the eyes are in darkness, the pineal body secretes more melatonin and low spirits follow. In autumn and winter the days are short and the nights long, sunshine decreases and the weather turns cool; melatonin secreted by the pineal body increases markedly, the secretion of mood-stimulating hormones such as thyroxine and adrenaline decreases, the activity of human cells declines and metabolism slows, so people's mood becomes depressed and subdued, inducing autumn and winter sadness. How to adjust people's living light environment has therefore always been a direction of exploration.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a mobile phone projection technology with color emotion adjustment that solves the above technical problems.
The invention realizes the purpose through the following technical scheme:
The mobile phone projection technology with color emotion adjustment is carried out in the following steps in sequence:
Step one: a user wears the smart bracelet; the projection device is then controlled by the smartphone to project a colored light beam onto the area where the sampled person works or lives, the color of the projected beam is adjusted through the color spectrum control module on the projection device, and the action state data and heart rate data of the user are collected by the smart bracelet;
step two: and (3) establishing a primary static feature tag library according to the age, the gender, the academic calendar and the occupation of an operator, representing the habit attribute features of the corresponding portrait population in the whole population, and establishing an earphone dynamic feature tag library according to the acceleration data, the heart rate data and the heart rate variability rate which are acquired in the step one and serve as dynamic features.
Step three: a wavelet transform threshold method is selected to denoise electromagnetic interference (namely high-frequency noise) generated by the circuit due to the twitching of a user in the acquisition process of the step one;
Step four: extract time-domain and frequency-domain features from the three directions of the force (left-right, fore-aft and vertical) by wavelet packet decomposition and difference algorithms, and identify them with an SVM.
Time-domain feature extraction: for the denoised acting force, the peak and valley points of the fore-aft and vertical curves are detected by a first-order difference method and taken as the key points of the force curves, with the valley point of the vertical curve serving as the reference point. The time-domain features are the force values at the key points of the vertical curve and their time phases, the force change rate and impulse between adjacent key points, the force values at the corresponding key points on the fore-aft curve, the driving impulse (the integral over time of the part of the force-time curve above zero) and the braking impulse (the integral of the part below zero).
Frequency-domain feature extraction: the acting forces are automatically aligned by the reference point on the vertical force curve, to improve the contrast and classification capability of the frequency-domain features. The length of each acting force is normalized to the same value by linear interpolation; valley points on the vertical curve of the normalized force are found with a first-order difference algorithm and used as reference points; and the left-right, fore-aft and vertical waveforms are aligned by linear interpolation to obtain the aligned acting force, from which features are extracted with an L-layer wavelet packet decomposition algorithm;
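The two time-domain operations named in step four, first-order-difference valley detection and driving/braking impulses, can be sketched directly. This assumes uniformly sampled force curves; the function names are illustrative, not from the patent.

```python
# Sketch of step-four time-domain feature extraction on a sampled force curve.

def valley_points(curve):
    """Indices where the first-order difference turns from negative to positive."""
    valleys = []
    for i in range(1, len(curve) - 1):
        if curve[i] - curve[i - 1] < 0 and curve[i + 1] - curve[i] > 0:
            valleys.append(i)
    return valleys

def driving_braking_impulse(force, dt):
    """Driving impulse (positive area) and braking impulse (negative area)
    of a force-time curve, accumulated per-interval by trapezoidal areas
    signed by their net value."""
    driving = braking = 0.0
    for f0, f1 in zip(force, force[1:]):
        area = 0.5 * (f0 + f1) * dt
        if area > 0:
            driving += area
        else:
            braking += area
    return driving, braking
```
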
Step five: select a minimum optimal wavelet packet set from the wavelet packets of the action frequency-domain features extracted in step four using the fuzzy C-means method, then select minimum optimal wavelet packet decomposition coefficients from this set by fuzzy-membership ranking, again with the fuzzy C-means method, to obtain a minimum optimal subset of action frequency-domain features. Combine this subset with the action time-domain features into a fused action feature set, and perform action recognition with an SVM (support vector machine), using a nonlinear radial basis function to map the linearly inseparable low-dimensional space to a linearly separable high-dimensional space. A classifier is trained and then used to identify action samples: assuming n classes of personal action samples are registered in the action database, the samples are input to the classifier for training and each input is judged to belong to one of classes 1 to n; if an input falls outside the range 1 to n, a new class n+1 is registered and the classifier is updated again;
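The patent classifies the fused action features with an RBF-kernel SVM and registers a new class n+1 when a sample falls outside the known classes. As a minimal runnable stand-in for that open-set registration logic (not for the SVM itself), the sketch below uses nearest-centroid matching with a distance threshold; all names are illustrative.

```python
# Sketch of the step-five "register class n+1" behaviour using a
# nearest-centroid classifier as a simplified stand-in for the SVM.
import math

class OpenSetClassifier:
    def __init__(self, threshold):
        self.threshold = threshold   # beyond this distance, a sample is "new"
        self.centroids = {}          # class id -> feature centroid

    def register(self, label, sample):
        self.centroids[label] = sample

    def classify(self, sample):
        """Return the nearest existing class id, or register class n+1."""
        best, best_dist = None, float("inf")
        for label, c in self.centroids.items():
            d = math.dist(c, sample)
            if d < best_dist:
                best, best_dist = label, d
        if best is None or best_dist > self.threshold:
            new_label = len(self.centroids) + 1   # class n+1
            self.register(new_label, sample)
            return new_label
        return best
```
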
Step six: remove the noise produced during projection by the smartphone due to jitter and electromagnetic interference, and model with layered, hierarchical dimensionality reduction. The output data of the acceleration sensor, smoothed by median filtering, is used to judge the motion type of the human body: whether the body is static or moving, the moving parts and the movement type are judged hierarchically, the main features are judged by hierarchical sampling, the influence of the key features is verified comprehensively, and sleep actions such as turning over, pushing and getting up are further distinguished. When modeling, the accelerometer first outputs a composite amplitude, and the body is judged to be static when this amplitude lies between given upper and lower thresholds; otherwise the person is judged to be moving. The composite amplitude of the accelerometer output is
|a| = sqrt(ax² + ay² + az²),
where ax, ay and az are the acceleration values in the three directions. The lower and upper thresholds are th_a min = 8 m/s and th_a max respectively, and the first condition is th_a min ≤ |a| ≤ th_a max.
If the first condition judges the body to be static, the second and third conditions are not evaluated. The second condition judges a body part to be static when the local variance of the accelerometer output is lower than a given threshold th_σa, and to be moving otherwise. If the second condition judges the body part to be still, the third condition is not evaluated; otherwise a third condition involving th_a max is applied. The motion state is then sampled and computed, and the characteristic parameters and key features are extracted;
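The first two step-six conditions can be sketched as follows. The patent gives th_a min = 8 m/s but does not state th_a max, so the value 12.0 below is an assumption for illustration only.

```python
# Sketch of the step-six static/motion tests on accelerometer output.
# TH_A_MAX is assumed (the patent does not give its value).
import math
import statistics

TH_A_MIN = 8.0
TH_A_MAX = 12.0   # illustrative assumption

def composite_amplitude(ax, ay, az):
    """Composite amplitude |a| = sqrt(ax^2 + ay^2 + az^2)."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def is_static(ax, ay, az):
    """First condition: static when |a| lies between the two thresholds."""
    return TH_A_MIN <= composite_amplitude(ax, ay, az) <= TH_A_MAX

def part_is_static(window, th_sigma):
    """Second condition: local variance of the output below threshold th_sigma."""
    return statistics.pvariance(window) < th_sigma
```

A body at rest reads roughly gravity (about 9.8 m/s²), which falls between the thresholds, while vigorous motion pushes the amplitude and the local variance outside them.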
Step seven: model the feature fusion. The light value grade obtained from the light sensor of the camera, the motion type, and the importance categories set by the user are evaluated to obtain a five-point user evaluation system of light sensation quality, and a self-adaptive projection regulation model linking subjective feeling and environment parameters is established with a supervised classification algorithm, using comparison against the historical optimal data as the supervision factor;
Step eight: establish deep interaction pattern recognition for the projection portrait models corresponding to the whole population. The dimensionality-reduced acceleration and heart rate variability values are classified: with N classes of samples registered in the database, the samples are input to the classifier for training and each input is judged to belong to one of the classes (1, N); if an input exceeds the range (1, N), a new class N+1 is registered and the classifier is updated again. A self-adaptively refined recognition model is established through the acceleration signal vector model SVMA and the angular velocity signal vector model SVMW, and the motion, work and life states are subdivided to obtain motion subdivision links. The environment parameters and the sign-identified life quality scores are input to a supervised classification algorithm, with the environment parameters as the input layer and the life quality scores as the output layer. By comparison with the model formed from the previous environment parameter input (the environment parameters of the historical optimal living state), the quality of the individual living state serves as the training supervision factor, 1 for better and 0 for worse, and the working signal is propagated forward.
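The better/worse supervision in step eight amounts to binary-labelled training with the environment parameters as input. As a minimal sketch, a single-layer perceptron stands in for the unspecified classifier; all names and the learning rate are illustrative assumptions.

```python
# Sketch of step-eight supervision: environment parameters in, a binary
# label (1 = better than the historical optimum, 0 = worse) as supervision.
# A single-layer perceptron is an assumed stand-in for the classifier.

def train_perceptron(samples, labels, lr=0.1, epochs=50):
    """samples: list of feature tuples; labels: 1 (better) or 0 (worse)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

On a linearly separable labelling (e.g. "better" only when both parameters are high), the perceptron converges to a separating rule.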
Step nine: the sampling is repeated continuously. As the number of sampled examples grows, the SVM classifier is adaptively optimized and refined with each newly input sample. The recognition rate of the SVM classifier is calculated on the principle of cross validation and used for fitness evaluation. No termination value is set for the genetic algorithm; the termination condition takes whichever is higher: if the recognition rate after training is higher than that of the existing SVM classifier, the trained parameters are set as the optimal parameters; otherwise the parameters are further optimized through operations such as selection, crossover and mutation. The interactive projection process among the operator's individual feature information, the environment and the projection equipment is thus improved continuously, the smartphone projection adaptively refines the user's personalized model, and the smartphone provides a more comfortable projection.
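The step-nine loop, accept a new parameter set only when its cross-validated recognition rate beats the current best, can be sketched as a genetic-style search. The fitness function below is a placeholder for the classifier recognition rate; crossover and selection are omitted and only mutation is shown, so this is an assumption-laden simplification of the full algorithm.

```python
# Sketch of the step-nine "keep whichever is higher" optimization loop over
# classifier parameters. fitness() stands in for the cross-validated
# recognition rate; only mutation is modelled here.
import random

def evolve(fitness, initial, generations=100, sigma=0.5, seed=1):
    rng = random.Random(seed)
    best = list(initial)
    best_fit = fitness(best)
    for _ in range(generations):
        # mutation: perturb each parameter (crossover/selection omitted)
        child = [p + rng.gauss(0.0, sigma) for p in best]
        f = fitness(child)
        if f > best_fit:          # adopt only if the recognition rate improves
            best, best_fit = child, f
    return best, best_fit
```

Because candidates are adopted only on strict improvement, the returned fitness never falls below that of the initial parameters.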
In this embodiment, a heart rate detection device, an acceleration sensing unit and a communication module are arranged in the smart bracelet of step one.
In this embodiment, the sampling calculation formula of the motion state in step six is |a| = sqrt(a² + b² + c²), where a, b and c are the user's acceleration/angular velocity values in the three directions.
In this embodiment, the number m of components in the motion vector set (F1, F2, …, Fm) extracted from the characteristic parameters of step six is smaller than 9, and each component is extracted as a weighted combination Fi = ai1X1 + ai2X2 + … + ai9X9 of the nine original variables. F1 contains the most information and has the largest variance and is called the first principal component; F2, …, Fm decrease in turn and are called the second, …, m-th principal components. The principal component analysis process can therefore be regarded as the process of determining the weighting coefficients aik (i = 1, …, m; k = 1, …, 9).
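Determining the first row of weighting coefficients a1k amounts to finding the direction of largest variance. A minimal sketch, assuming power iteration on the sample covariance matrix as a simplified stand-in for a full principal component analysis:

```python
# Sketch: first principal component (the weights a_1k) via power iteration
# on the sample covariance matrix. A simplified stand-in for full PCA.
import math

def covariance_matrix(rows):
    """Sample covariance (divisor n) of rows of equal-length observations."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[k] for r in rows) / n for k in range(d)]
    return [[sum((r[i] - means[i]) * (r[j] - means[j]) for r in rows) / n
             for j in range(d)] for i in range(d)]

def first_principal_component(rows, iters=200):
    """Unit vector along the direction of maximum variance."""
    cov = covariance_matrix(rows)
    d = len(cov)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

For data spread purely along one axis, the recovered component aligns with that axis; for data on the diagonal, the two weights come out equal.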
In this embodiment, the smart phone in the first step is connected to the projection device through bluetooth.
In this embodiment, the color spectrum control module in the first step adjusts the generated light beam frequency to generate different colors.
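The color spectrum control module maps the generated beam frequency (equivalently, wavelength) to a perceived color. As an illustration of that mapping, not taken from the patent, here is a coarse visible-spectrum lookup with approximate band boundaries:

```python
# Illustrative wavelength-to-color-name mapping for the visible spectrum.
# Band boundaries (in nanometres) are approximate conventions, not patent values.

def wavelength_to_color(nm):
    """Return a coarse color name for a wavelength in nm, or None if invisible."""
    bands = [(380, "violet"), (450, "blue"), (495, "green"),
             (570, "yellow"), (590, "orange"), (620, "red"), (750, None)]
    for (lo, name), (hi, _) in zip(bands, bands[1:]):
        if lo <= nm < hi:
            return name
    return None
```
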
In this embodiment, an acceleration acquisition unit is arranged in the smart bracelet of step one. The acceleration acquisition unit adopts a MEMS device whose key part is a middle capacitor plate on a cantilever structure: when the speed change or acceleration reaches a sufficient value, the inertial force on the middle plate exceeds the force fixing or supporting it, the middle plate moves, the distance between it and the upper capacitor plate changes, and the capacitances with the upper and lower plates change accordingly.
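The sensing principle just described can be put in numbers with the parallel-plate capacitance C = ε0·A/d: displacing the middle plate by x changes the two gaps in opposite directions, producing a differential capacitance. The geometry values below are illustrative, not from the patent.

```python
# Illustration of the MEMS differential-capacitance principle: a middle
# plate displaced by x between fixed upper and lower plates. Parallel-plate
# model only; plate area and gap values in the tests are assumptions.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area, gap):
    """Parallel-plate capacitance C = eps0 * A / d (air dielectric)."""
    return EPS0 * area / gap

def differential_capacitance(area, gap, x):
    """C_upper - C_lower when the middle plate moves x toward the upper plate."""
    return capacitance(area, gap - x) - capacitance(area, gap + x)
```

At rest (x = 0) the differential reading is zero; any displacement produces a signed capacitance difference proportional, to first order, to the acceleration.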
The invention has the beneficial effects that:
the intelligent mobile phone intelligent learning system collects the action and the heart rate of a user through the intelligent bracelet, performs color light projection through the intelligent projection mobile phone, adaptively improves learning and working environments, combines the conditions of light with different wavelengths, seasons and personal physical signs, predicts the most suitable color emotion adjusting model for each person through the corresponding portrait model in the whole crowd, performs supervised learning according to the individual emotion effect evaluation model established by the induced heart rate and action change, thereby establishing the most suitable color emotion mobile phone projection model for the individual, and continuously corrects the model according to a genetic algorithm.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings in which:
the mobile phone projection technology with the function of adjusting the color emotion is sequentially carried out according to the following steps:
the method comprises the following steps: a user wears the smart bracelet, then the projection device is controlled by the smart phone to project the colored light beam to the area where the sampled person works or lives, then the color of the projected light beam is adjusted by adjusting the chromatographic control module on the projection device, and the action state data and the heart rate data of the user are collected by the smart bracelet;
step two: and (3) establishing a primary static feature tag library according to the age, the gender, the academic calendar and the occupation of an operator, representing the habit attribute features of the corresponding portrait population in the whole population, and establishing an earphone dynamic feature tag library according to the acceleration data, the heart rate data and the heart rate variability rate which are acquired in the step one and serve as dynamic features.
Step three: a wavelet transform threshold method is selected to denoise electromagnetic interference (namely high-frequency noise) generated by the circuit due to the twitching of a user in the acquisition process of the step one;
step four: extracting frequency domain time domain characteristics from three directions (left and right, front and back and vertical) of pressure respectively by wavelet packet decomposition and difference algorithm, and identifying by SVM, wherein the time domain characteristic extraction refers to detecting the peak point and valley point of the front and back and vertical curves by a first order difference method for the vertical force curve in the denoised acting force as the key point of the acting force curve, using the valley point of the vertical direction curve as the reference point of the acting force curve, and using the force value of the key point of the vertical direction curve and the time phase of the force value, the acting force change rate and impulse of the adjacent key points and the force value at the corresponding key point on the front and back direction curve, driving impulse (integral of force above 0 point and time on the force-time curve) and braking impulse (integral of force below 0 point and time on the force-time curve), and the frequency domain characteristic extraction is to align the acting force automatically according to the reference point on the vertical force curve, to improve frequency domain feature contrast and classification capability: normalizing the dimension of the acting force to the same value by using a linear interpolation algorithm, searching valley points on a force curve in the vertical direction of the normalized acting force by using a first-order difference algorithm, taking the valley points as reference points for reference, and aligning the left and right, front and back and vertical direction curve waveforms in the acting force by using a linear interpolation method; detecting a valley point in the vertical direction of the vertical force curve in the denoised acting force by using a first-order difference algorithm, and using the valley point as a reference point of the acting force curve; using the 
reference point as a reference, and carrying out waveform alignment on the acting force by using a linear interpolation method to obtain the aligned acting force; extracting the acting force by using an L-layer wavelet packet decomposition algorithm;
step five: and selecting a minimum optimal wavelet packet set from a plurality of wavelet packets of the action frequency domain characteristics extracted in the fourth step according to a fuzzy C mean method, selecting a minimum optimal wavelet packet decomposition coefficient from the selected set based on fuzzy membership ranking by using the fuzzy C mean method to obtain a minimum optimal action frequency domain characteristic subset, combining the minimum optimal action frequency domain characteristic subset with the action time domain characteristics to obtain a fused action characteristic set, then adopting an SVM (support vector machine) to carry out action recognition, and adopting a nonlinear mapping radial basis function to map a linear inseparable low-dimensional space to a linear separable high-dimensional space. Training a classifier, and then identifying the motion sample by the classifier. Supposing that n types of personal action samples are registered in the action database, inputting the samples into a classifier for training, judging which type is 1-n according to an input value, if the input value exceeds the range of 1-n, newly registering a type n +1, and then updating the classifier again;
step six: removing noise generated by the smart phone due to factors of jitter and electromagnetic interference in the projection process, using layered and graded dimensionality reduction modeling, judging the motion type of a human body by using output data of the acceleration sensor and using median filtering, judging whether the human body moves statically or not, moving parts and types hierarchically, judging main characteristics by sampling in a graded mode, comprehensively verifying the influence of key characteristics, further judging the characteristics of sleep such as turning over, pushing, getting up and the like, and when modeling, firstly outputting a synthesized amplitude value through an accelerometer, and judging that the human body is static when the synthesized amplitude value is between given upper and lower thresholds; otherwise, judging the motion of the person, wherein the output composite amplitude of the accelerometer is as follows:
the upper and lower thresholds are respectively: th (h)a min=8m/s,tha maxThe first condition is that:
if the first condition is judged to be static, the second condition and the third condition are not judged, and the local variance output by the accelerometer is lower than a given threshold value, and the body part is judged to be static; otherwise, the body part is judged to move, and the second condition calculation formula is as follows:
therein, thσaIf the second condition is judged that the body part is still, the third condition is not judged, otherwise, the third condition calculation formula is as follows:
wherein,thamaxsampling and calculating the motion state, and extracting characteristic parameters and key characteristics;
step seven: modeling the characteristic fusion, evaluating the light value grade and the motion type obtained by a light sensor of the camera and the important type set by a user to obtain a light sensation quality user 5-point evaluation system, and establishing a subjective feeling and environment parameter self-adaptive projection regulation model by using a supervised classification algorithm and comparing historical optimal data as a supervision factor;
step eight: establishing a depth intercourse mode identification of a projection portrait model corresponding to the whole crowd, classifying the acceleration and heart rate variability values after dimensionality reduction, inputting the samples into a classifier for training according to N types of samples registered in a database, judging which type is (1, N) according to the input values, newly registering the type N +1 if the type exceeds the range of (1, N), then updating the classifier again, establishing an identification model which is perfect in self-adaptation through an acceleration signal vector model SVMA and an angular velocity signal vector model SVMW, subdividing the motion state and the working state life state to obtain a motion subdivision link, inputting environment parameters and sign identification type life filling quality scores, using a supervised classification algorithm, using the environment parameters as an input layer, and using the life filling scores as an output layer. By comparing with the model formed by the last environmental parameter input (the environmental parameter of the historical optimal living state), the quality of the individual living state is used as a training supervision factor, better 1, worse 0, and the working signal is propagated in the forward direction.
Step nine: the sampling times are repeated continuously, the SVM classifier can be optimized and perfected to input new samples each time in a self-adaptive mode along with the increase of the amount of the sampled samples, the recognition rate of the SVM classifier is calculated according to the principle of a cross verification method, fitness evaluation is carried out, the termination value of a genetic algorithm is not set, the termination condition adopts a higher method, if the recognition rate of training is higher than that of the existing SVM classifier, the SVM classifier is set as an optimal parameter, otherwise, the parameters are further optimized by performing operations such as selection, crossing and variation, the interactive projection process among the individual characteristic information, the environment and projection equipment of an operator is improved continuously, the self-adaptive perfection of the personalized model of the user by the projection of the smart phone is realized, and the smart phone provides a more comfortable projection.
In this embodiment, be provided with rhythm of the heart detection device, acceleration induction element and communication module in the intelligent bracelet of step one.
In this embodiment, the sampling calculation formula of the motion state in the step six is as followsWherein, a, b and c are acceleration/angular velocity values of three directions of the user respectively.
In this embodiment, the original motion vector set (F1, F2, …, Fm) of the feature parameters extracted in the step six is smaller than 9, and the extraction matrix is:the original vector F1 contains the most information and has the largest variance, and is called as a first principal component, and F2, … and Fm are sequentially decreased and called as a second principal component, "", and an mth principal component. The principal component analysis process can therefore be regarded as a process for determining the weighting factors aik (i ═ 1, "", m; (k ═ 1, "") 9).
In this embodiment, the smart phone in step one is connected to the projection device through Bluetooth.
In this embodiment, the color spectrum control module in step one produces different colors by adjusting the frequency of the generated light beam.
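As a hedged illustration of the frequency-to-color relationship this module relies on, wavelength = c / frequency, with approximate visible-band boundaries (the band limits below are common textbook values, not taken from the patent):

```python
SPEED_OF_LIGHT = 2.998e8  # m/s

def beam_color(frequency_hz):
    """Map a light-beam frequency to a rough color name via its
    vacuum wavelength. Band boundaries are approximate."""
    wavelength_nm = SPEED_OF_LIGHT / frequency_hz * 1e9
    if 620 <= wavelength_nm <= 750:
        return "red"
    if 495 <= wavelength_nm < 570:
        return "green"
    if 450 <= wavelength_nm < 495:
        return "blue"
    return "other"
```

Raising the beam frequency shortens the wavelength, moving the projected color from red toward blue, which is the adjustment direction the module exploits.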
In this embodiment, an acceleration acquisition unit is arranged in the smart bracelet of step one. The unit is a MEMS device whose key part is a middle capacitor plate on a cantilever structure: when the change of speed, i.e. the acceleration, becomes large enough, the inertial force applied to the middle plate exceeds the force fixing or supporting it, so the plate moves, the distances between it and the upper and lower fixed plates change, and the upper and lower capacitances change accordingly.
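The differential-capacitance readout described here can be sketched with the parallel-plate formula C = ε₀·A/d: when the middle plate moves by x, the two gaps become d − x and d + x, so the two capacitances change in opposite directions (the plate area and gap values in the test are illustrative, not from the patent):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def differential_capacitance(area_m2, gap_m, displacement_m):
    """Capacitances between the moving middle plate and the upper/lower
    fixed plates of a MEMS accelerometer, after the middle plate has
    moved by displacement_m toward the upper plate."""
    c_upper = EPS0 * area_m2 / (gap_m - displacement_m)
    c_lower = EPS0 * area_m2 / (gap_m + displacement_m)
    return c_upper, c_lower
```

Reading the two capacitances differentially doubles the sensitivity and cancels common-mode drift, which is why the embodiment tracks both the upper and lower capacitance changes.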
Finally, it should be noted that although the present invention has been described in detail with reference to the above embodiments, those skilled in the art will understand that modifications and equivalent substitutions may be made to the technical solutions of the invention without departing from their spirit and scope, and all such modifications are intended to be covered by the appended claims.

Claims (7)

1. A mobile phone projection technology with color emotion adjustment, characterized in that:
the method comprises the following steps: step one: a user wears the smart bracelet; the projection device is then controlled by the smart phone to project a colored light beam onto the area where the sampled person works or lives; the color of the projected beam is adjusted through the chromatographic control module on the projection device; and the smart bracelet collects the user's motion state data and heart rate data;
step two: establishing a primary static feature tag library according to the operator's age, gender, education background, and occupation, representing the habitual attribute features of the corresponding portrait population within the whole population, and establishing a dynamic feature tag library with the acceleration data, heart rate data, and heart rate variability collected in step one as the dynamic features;
step three: selecting a wavelet-transform threshold method to denoise the electromagnetic interference (i.e., high-frequency noise) introduced into the circuit by the user's twitching during the acquisition in step one;
step four: extracting time-domain and frequency-domain features from the three directions of the acting force (left-right, front-back, and vertical) by wavelet packet decomposition and a difference algorithm, and identifying them with an SVM. Time-domain feature extraction: for the vertical force curve in the denoised acting force, the peak and valley points of the front-back and vertical curves are detected by a first-order difference method as the key points of the acting force curve, with the valley point of the vertical curve as the reference point; the features are the force values at the key points of the vertical curve and their time phases, the rate of change of force and the impulse between adjacent key points, the force values at the corresponding key points on the front-back curve, the driving impulse (the integral over time of the force above the zero point on the force-time curve), and the braking impulse (the integral over time of the force below the zero point). Frequency-domain feature extraction: the acting force is automatically aligned according to the reference point on the vertical force curve to improve the contrast and classification capability of the frequency-domain features: the duration of the acting force is normalized to the same length by a linear interpolation algorithm; the valley points on the vertical force curve of the normalized acting force are detected by a first-order difference algorithm and used as reference points; with these references, the left-right, front-back, and vertical curve waveforms in the acting force are aligned by linear interpolation to obtain the aligned acting force, whose features are then extracted by an L-layer wavelet packet decomposition algorithm;
step five: selecting a minimum optimal wavelet packet set from the wavelet packets of the action frequency-domain features extracted in step four by the fuzzy C-means method; from the selected set, selecting the minimum optimal wavelet packet decomposition coefficients based on fuzzy membership ranking to obtain a minimum optimal action frequency-domain feature subset; combining this subset with the action time-domain features to obtain a fused action feature set; then performing action recognition with an SVM, using a nonlinear radial basis function to map the linearly inseparable low-dimensional space to a linearly separable high-dimensional space, training a classifier, and identifying motion samples with it. Supposing n types of personal action samples are registered in the action database, the samples are input into the classifier for training, and each input is judged to belong to one of classes 1 to n; if the input falls outside the range 1 to n, a new class n+1 is registered and the classifier is updated again;
step six: removing the noise generated by the smart phone due to jitter and electromagnetic interference during the projection process; using hierarchical, staged dimensionality-reduction modeling, the motion type of the human body is judged from the output data of the acceleration sensor with median filtering: whether the body is static or moving, and the moving parts and motion types, are judged hierarchically; the main features are judged by staged sampling and the influence of the key features is comprehensively verified, further distinguishing sleep characteristics such as turning over, pushing, and getting up. During modeling, the accelerometer first outputs a composite amplitude; the human body is judged static when the composite amplitude lies between given upper and lower thresholds, otherwise the person is judged to be moving. The composite amplitude output by the accelerometer is √(a² + b² + c²);
the lower and upper thresholds are respectively th_a,min = 8 m/s² and th_a,max, and the first condition is that the composite amplitude lies between th_a,min and th_a,max. If the first condition judges the body static, the second and third conditions are not evaluated. The second condition is that the local variance σ_a of the accelerometer output is lower than a given threshold th_σa, in which case the body part is judged static; otherwise the body part is judged to be moving. If the second condition judges the body part still, the third condition is not evaluated; otherwise the third condition, a further threshold comparison against th_a,max (given in the source only as a formula image), is applied. The motion state is then sampled and calculated, and the characteristic parameters and key features are extracted;
step seven: modeling with feature fusion: the light value grade obtained by the light sensor of the camera, the motion type, and the importance type set by the user are evaluated to obtain a five-point user evaluation of light-sensation quality; a supervised classification algorithm is then used, with the historical optimal data as the supervision factor, to establish an adaptive projection regulation model of subjective feeling and environmental parameters;
step eight: establishing deep interaction pattern recognition for a projection portrait model of the whole population: the dimension-reduced acceleration and heart-rate-variability values are classified; with N types of samples registered in the database, samples are input into the classifier for training, and each input is judged to belong to one of classes (1, N); if it falls outside (1, N), a new class N+1 is registered and the classifier is updated again. A self-adaptively refined recognition model is established through the acceleration signal vector model SVMA and the angular-velocity signal vector model SVMW; the motion state and the working and life states are subdivided to obtain motion subdivision links; the environmental parameters, the sign recognition type, and the life-quality scores are input, and a supervised classification algorithm is used with the environmental parameters as the input layer and the life scores as the output layer. By comparison with the model formed by the previous environmental parameter input (the environmental parameters of the historically optimal living state), the quality of the individual living state serves as the training supervision factor (better = 1, worse = 0), and the working signal is propagated forward;
step nine: the sampling steps are repeated continuously; as the number of sampled examples grows, the SVM classifier adaptively optimizes and refines itself with each newly input sample; the recognition rate of the SVM classifier is calculated on the principle of cross-validation and used for fitness evaluation; no termination value is set for the genetic algorithm, and a "keep the better" termination rule is adopted instead: if the recognition rate obtained in training is higher than that of the existing SVM classifier, the trained parameters are set as the optimal parameters; otherwise, the parameters are further optimized through selection, crossover, and mutation operations; the interactive projection process among the operator's individual characteristic information, the environment, and the projection equipment is thereby continuously improved, the smartphone projection adaptively refines a personalized model of the user, and the smartphone provides a more comfortable projection.
2. The mobile phone projection technology with color emotion adjustment as claimed in claim 1, wherein: the smart bracelet in step one is provided with a heart rate detection device, an acceleration sensing unit, and a communication module.
3. The mobile phone projection technology with color emotion adjustment as claimed in claim 1, wherein: the sampling calculation formula of the motion state in step six is √(a² + b² + c²), where a, b, and c are respectively the acceleration/angular-velocity values of the user in three directions.
4. The mobile phone projection technology with color emotion adjustment as claimed in claim 2, wherein: the original motion vector group (F1, F2, …, Fm) of the feature parameters extracted in step six has m smaller than 9, and the extraction takes the form Fᵢ = aᵢ₁X₁ + aᵢ₂X₂ + … + aᵢ₉X₉, where X₁, …, X₉ are the nine raw features. The transformed vector F1 contains the most information and has the largest variance and is called the first principal component; F2, …, Fm decrease in turn and are called the second through m-th principal components. The principal component analysis process can therefore be regarded as the process of determining the weighting coefficients aᵢₖ (i = 1, …, m; k = 1, …, 9).
5. The mobile phone projection technology with color emotion adjustment as claimed in claim 1, wherein: and the smart phone in the first step is connected with the projection device through Bluetooth.
6. The mobile phone projection technology with color emotion adjustment as claimed in claim 1, wherein: the chromatographic control module in step one produces different colors by adjusting the frequency of the generated light beam.
7. The mobile phone projection technology with color emotion adjustment as claimed in claim 1, wherein: an acceleration acquisition unit is arranged in the smart bracelet of step one; the unit is a MEMS device whose key part is a middle capacitor plate on a cantilever structure; when the change of speed, i.e. the acceleration, becomes large enough, the inertial force on the middle plate exceeds the force fixing or supporting it, so the plate moves, its distance to the upper capacitor plate changes, and the upper and lower capacitances change accordingly.
CN201810998874.9A 2018-08-29 2018-08-29 The mobile phone projection technology for having color mood regulation Pending CN109285598A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810998874.9A CN109285598A (en) 2018-08-29 2018-08-29 The mobile phone projection technology for having color mood regulation


Publications (1)

Publication Number Publication Date
CN109285598A true CN109285598A (en) 2019-01-29

Family

ID=65184253


Country Status (1)

Country Link
CN (1) CN109285598A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111830834A (en) * 2019-04-15 2020-10-27 泰州市康平医疗科技有限公司 Equipment control method based on environment analysis
CN112860170A (en) * 2021-02-23 2021-05-28 深圳市沃特沃德信息有限公司 Smart watch control method, smart watch, computer device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1787454A1 (en) * 2004-09-08 2007-05-23 Sony Ericsson Mobile Communications AB Changeable soft cover for mobile devices
CN103584840A (en) * 2013-11-25 2014-02-19 天津大学 Automatic sleep stage method based on electroencephalogram, heart rate variability and coherence between electroencephalogram and heart rate variability
CN105652571A (en) * 2014-11-14 2016-06-08 中强光电股份有限公司 Projection device and projection system thereof
CN106971059A (en) * 2017-03-01 2017-07-21 福州云开智能科技有限公司 A kind of wearable device based on the adaptive health monitoring of neutral net
CN107102728A (en) * 2017-03-28 2017-08-29 北京犀牛数字互动科技有限公司 Display methods and system based on virtual reality technology
CN107753026A (en) * 2017-09-28 2018-03-06 古琳达姬(厦门)股份有限公司 For the intelligent shoe self-adaptive monitoring method of backbone leg health




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190129