CN117503062B - Neural detection control method, device, equipment and storage medium of beauty instrument - Google Patents
- Publication number
- CN117503062B CN117503062B CN202311563390.9A CN202311563390A CN117503062B CN 117503062 B CN117503062 B CN 117503062B CN 202311563390 A CN202311563390 A CN 202311563390A CN 117503062 B CN117503062 B CN 117503062B
- Authority
- CN
- China
- Prior art keywords
- facial
- data
- nerve
- beauty instrument
- image information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- A61B5/4029 — Detecting, measuring or recording for evaluating the peripheral nervous systems
- A61B5/0048 — Detecting, measuring or recording by applying mechanical forces or stimuli
- A61B5/0059 — Measuring for diagnostic purposes using light
- A61B5/0077 — Devices for viewing the surface of the body, e.g. camera, magnifying lens
- A61B5/01 — Measuring temperature of body parts
- A61B5/0531 — Measuring skin impedance
- A61B5/0533 — Measuring galvanic skin response
- A61B5/316 — Bioelectric signals; modalities, i.e. specific diagnostic methods
- A61B5/388 — Nerve conduction study, e.g. detecting action potential of peripheral nerves
- A61B5/389 — Electromyography [EMG]
- A61B5/4519 — Muscles
- A61B5/4887 — Locating particular structures in or on the body
- A61B5/4893 — Nerves
- A61B5/7203 — Signal processing for noise prevention, reduction or removal
- A61B5/726 — Waveform analysis using wavelet transforms
- A61B5/7264 — Classification of physiological signals or data, e.g. using neural networks
- A61B5/7267 — Classification involving training the classification device
- A61N1/40 — Applying electric fields by inductive or capacitive coupling; applying radio-frequency signals
- A61N5/0616 — Radiation therapy using light; skin treatment other than tanning
- A61N5/067 — Radiation therapy using laser light
- A61N2005/0626 — Monitoring, verifying, controlling systems and methods
Abstract
The invention relates to the technical field of beauty instruments, and in particular to a nerve detection control method, device, equipment and storage medium for a beauty instrument. The method comprises: obtaining face data from a detection assembly, the face data comprising facial nerve data and facial muscle data; extracting the facial nerve data and preprocessing it; inputting the preprocessed facial nerve data into a trained model, which recognizes the data and converts it into image information; identifying the image information and dividing it hierarchically to obtain a superficial-layer image and a deep-layer image; and inputting the deep-layer image into a trained neural network, which identifies the coordinate parameters of the important facial nerves in the deep-layer image and determines the working state of the beauty instrument. The beauty instrument is controlled according to these coordinate parameters: when it passes over an important facial nerve, it is stopped, preventing the instrument from causing unnecessary stimulation to the important facial nerves.
Description
Technical Field
The application relates to the technical field of beauty instruments, and in particular to a nerve detection control method, device, equipment and storage medium for a beauty instrument.
Background
A beauty instrument is a machine for adjusting and improving the body and face based on human physiology. By working principle, beauty instruments can be divided into laser instruments, radio-frequency instruments and the like, and they are usually provided with several working gears to suit the needs of different users. During use, a user selects the working gear appropriate to the actual condition of their skin to achieve the desired effect.
Patent document CN114917472A discloses a method and system for guiding the use of a beauty instrument. The method is performed by an electronic device on which an application is running and which is communicatively connected to the beauty instrument, and comprises: acquiring the current working mode of the beauty instrument over the communication connection; collecting a plurality of first images of the user operating the instrument when the working mode is the use mode; processing the first images to determine a first movement track of the instrument; and comparing the first movement track with a preset standard track to determine whether the user's current action is standard, reminding the user to operate the instrument correctly when it is not. That patent, however, cannot accurately determine the positions of important facial nerves. Nerves such as the trigeminal nerve and the masseter nerve lie in the deep facial layer, are widely distributed around the deep facial muscles and bones, and control facial sensation and movement such as expression, chewing and eye movement. If these nerves are damaged or compressed, facial pain, numbness or movement disorders may result, so a beauty instrument may stimulate them unnecessarily and cause discomfort or even injury to the user.
Accordingly, the prior art has drawbacks and needs improvement.
Disclosure of Invention
In order to solve one or more problems in the prior art, a main object of the present application is to provide a neural detection control method, apparatus, device and storage medium for a cosmetic instrument.
In order to achieve the above object, the present application proposes a nerve detection control method of a cosmetic instrument, the method comprising:
acquiring face data of the detection component, wherein the face data comprises facial nerve data and facial muscle data;
extracting the facial nerve data and preprocessing the facial nerve data;
inputting the preprocessed facial nerve data into a trained model, and identifying and converting the facial nerve data into image information through the trained model;
identifying the image information, and dividing it hierarchically to obtain a superficial-layer image and a deep-layer image;
inputting the deep-layer image into a trained neural network, and identifying the coordinate parameters of important facial nerves in the deep-layer image through the neural network, wherein the important nerves comprise the trigeminal nerve and the masseter nerve;
and determining the working state of the beauty instrument based on the coordinate parameters of the important facial nerves, wherein the working state comprises a stopped state and a normal working state.
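Taken together, the claimed steps form a simple sense-analyze-decide loop. The sketch below is a minimal, runnable illustration of that flow; every function body is a stub and every name is a hypothetical placeholder, not an API defined by this application:

```python
# Illustrative sketch of the claimed control flow; all functions are
# hypothetical stubs, not interfaces defined by the patent.

def acquire_face_data():
    # Step S1: detection component returns raw nerve and muscle data (stubbed).
    return {"nerve": [0.2, 0.5, 0.4], "muscle": [0.1, 0.3]}

def preprocess(nerve_data):
    # Step S2: denoise / normalize the raw nerve signal (stubbed as identity).
    return nerve_data

def to_image(nerve_data):
    # Step S3: trained model converts nerve data into image information (stubbed).
    return [[v] for v in nerve_data]

def split_layers(image):
    # Step S4: hierarchical division into superficial-layer and deep-layer images.
    return image[:1], image[1:]

def locate_nerves(deep_image):
    # Step S5: trained network regresses trigeminal / masseter coordinates (stubbed).
    return {"trigeminal": (10.0, 12.0), "masseter": (30.0, 8.0)}

def decide_state(instrument_pos, nerve_coords, threshold=5.0):
    # Step S6: stop work if the head is within the threshold of any nerve.
    for nx, ny in nerve_coords.values():
        if ((instrument_pos[0] - nx) ** 2 + (instrument_pos[1] - ny) ** 2) ** 0.5 < threshold:
            return "stop"
    return "normal"

def run_cycle(instrument_pos):
    face = acquire_face_data()
    nerve = preprocess(face["nerve"])
    image = to_image(nerve)
    _shallow, deep = split_layers(image)
    coords = locate_nerves(deep)
    return decide_state(instrument_pos, coords)

print(run_cycle((11.0, 11.0)))   # close to the stubbed trigeminal coordinate
print(run_cycle((50.0, 50.0)))   # clear of both nerves
```

The stub coordinates and the 5.0 distance threshold are arbitrary; in the described system they would come from the trained network and the preset threshold of the claims.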
Further, the inputting the deep-layer image into a trained neural network and identifying the coordinate parameters of important facial nerves in the deep-layer image through the neural network comprises:
inputting the deep-layer image into a trained neural network, wherein the trained neural network comprises a convolution layer, a pooling layer and a fully connected layer;
extracting the features of the trigeminal nerve and the masseter nerve from the image information through the convolution layer, reducing the dimensionality of these features through the pooling layer to obtain an abstract feature vector, and mapping the abstract feature vector to a coordinate reference space through the fully connected layer;
and obtaining the coordinate parameters of the trigeminal nerve and the masseter nerve in the coordinate reference space based on the result of the mapping.
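The convolution, pooling and full-connection steps above can be sketched as a forward pass in plain NumPy. The weights are random and the layer sizes are assumptions, so the printed coordinates are illustrative only:

```python
# Minimal numpy sketch of conv -> pool -> fully connected coordinate
# regression; untrained random weights, layer sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    # 'Valid' 2-D convolution (cross-correlation, as in most CNN libraries).
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2(x):
    # 2x2 max pooling: dimensionality reduction toward an abstract feature vector.
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

kernel = rng.standard_normal((3, 3))
image = rng.standard_normal((16, 16))           # stand-in deep-layer image
feat_len = ((16 - 3 + 1) // 2) ** 2             # 7 * 7 = 49 pooled features
w_fc = rng.standard_normal((4, feat_len))       # maps features -> 4 coordinates
b_fc = np.zeros(4)

def locate_nerves(deep_image):
    feat = np.maximum(conv2d(deep_image, kernel), 0.0)   # convolution + ReLU
    vec = maxpool2(feat).ravel()                         # abstract feature vector
    coords = w_fc @ vec + b_fc                           # map to coordinate space
    return {"trigeminal": tuple(coords[:2]), "masseter": tuple(coords[2:])}

print(locate_nerves(image))
```

A trained network would learn `kernel`, `w_fc` and `b_fc` from labelled images; only the shapes of the data flow are shown here.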
Further, the inputting the preprocessed facial nerve data into a trained model and identifying and converting the facial nerve data into image information through the trained model comprises:
training a VAE model, and constructing a generating model from the trained VAE model, wherein the generating model comprises an encoder and a decoder;
inputting the preprocessed facial nerve data into the encoder;
analyzing the facial nerve data through the encoder, and mapping the analyzed facial nerve data into a construction space according to a preset ordering;
reconstructing the facial nerve data in the construction space through the decoder;
and converting the reconstructed facial nerve data into image information based on the result of the reconstruction by the decoder.
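The encode, latent-space, decode flow of the generating model can be sketched as a toy, untrained VAE forward pass; all layer sizes and weights below are assumptions made for illustration:

```python
# Toy VAE forward pass: encoder -> latent sample -> decoder -> 2-D "image
# information". Random, untrained weights; sizes are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_latent = 32, 4                 # nerve-signal length and latent size (assumed)

W_enc = rng.standard_normal((2 * n_latent, n_in)) * 0.1   # outputs mu and log-variance
W_dec = rng.standard_normal((n_in, n_latent)) * 0.1

def encode(x):
    # Encoder maps the nerve signal to a latent mean and log-variance.
    h = W_enc @ x
    return h[:n_latent], h[n_latent:]

def reparameterize(mu, logvar):
    # Sample the latent point that places the data in the construction space.
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(n_latent)

def decode(z):
    # Decoder reconstructs the signal, then lays it out as image information.
    return np.tanh(W_dec @ z).reshape(8, 4)

nerve_signal = rng.standard_normal(n_in)
mu, logvar = encode(nerve_signal)
image_info = decode(reparameterize(mu, logvar))
print(image_info.shape)
```

In the described system the VAE would be trained on paired sensor readings and images so that the decoder output is a meaningful facial nerve image rather than noise.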
Further, the preprocessing the facial nerve data includes:
analyzing the facial nerve data to obtain a noise signal and a non-noise signal;
extracting the non-noise signal, extracting the frequency parameter of the non-noise signal through wavelet transformation, and extracting the amplitude parameter of the non-noise signal through a differential algorithm;
judging whether the frequency parameter and the amplitude parameter of the non-noise signal meet the optimization condition;
and when the frequency parameter and the amplitude parameter of the non-noise signal meet the optimization condition, iteratively optimizing them in a preset manner.
Further, the extracting the frequency parameter of the non-noise signal through wavelet transformation includes:
performing wavelet transformation on the non-noise signal to obtain wavelet transform coefficients;
selecting a threshold coefficient, by analyzing the wavelet transform coefficients, according to the characteristic change produced as the decomposition scale of the coefficients increases;
and filtering the wavelet transform coefficients with the threshold coefficient to obtain the frequency parameter.
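As an illustration of this kind of wavelet-based filtering, the sketch below performs a single-level Haar decomposition in plain NumPy and soft-thresholds the detail coefficients. The Haar basis and the threshold value are assumptions chosen for brevity, not the transform specified by the application:

```python
# Single-level Haar wavelet decomposition with soft thresholding of the
# detail (high-frequency) coefficients; an illustrative stand-in for the
# wavelet-based frequency extraction described above.
import numpy as np

def haar_decompose(x):
    # Approximation (low-frequency) and detail (high-frequency) coefficients.
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def soft_threshold(c, t):
    # Shrink coefficients toward zero; small (noisy) coefficients are removed.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def haar_reconstruct(approx, detail):
    # Inverse of haar_decompose for an even-length signal.
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

signal = np.array([1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0])
approx, detail = haar_decompose(signal)
denoised = haar_reconstruct(approx, soft_threshold(detail, t=0.2))
print(np.round(denoised, 2))
```

A multi-scale version (e.g. PyWavelets' `wavedec`) would repeat the decomposition on the approximation coefficients and pick the threshold from the coefficient statistics at each scale.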
Further, the extracting the amplitude parameter of the non-noise signal through the differential algorithm includes:
converting the non-noise signal into an array of consecutive sampling points, and calculating the difference between each pair of adjacent sampling points to obtain a differential signal;
taking the absolute value of the differential signal to obtain the amplitude parameter;
wherein the difference between two adjacent sampling points is calculated as Δx(t) = x(t+1) − x(t), where x(t) is a sampling point and x(t+1) is the next, adjacent sampling point.
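The same difference-and-absolute-value computation can be expressed directly with NumPy, whose `diff` function implements Δx(t) = x(t+1) − x(t); the sample values are made up for illustration:

```python
# Differential amplitude extraction: delta_x[t] = x[t+1] - x[t], then |.|.
import numpy as np

samples = np.array([0.0, 0.8, 0.3, 1.5, 1.4])   # non-noise signal as sampling points
diff_signal = np.diff(samples)                   # adjacent-point differences
amplitude = np.abs(diff_signal)                  # amplitude parameter
print(amplitude)
```

Note the differential signal has one fewer element than the input, since each value needs a pair of adjacent points.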
Further, the determining the working state of the beauty instrument based on the coordinate parameters of the important facial nerves comprises:
acquiring the position information of the beauty instrument;
judging, based on the coordinate parameters of the important facial nerves, whether the position information of the beauty instrument meets the stop-work condition;
and when the distance between the position of the beauty instrument and a nerve coordinate parameter is smaller than a preset distance threshold, judging that the stop-work condition is met, and controlling the beauty instrument to stop working.
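The stop-work rule above amounts to a point-to-nerve distance test. A minimal sketch, assuming 2-D coordinates in a shared reference space and a hypothetical millimetre threshold (the class and all names are illustrative, not from the application):

```python
# Distance-threshold stop-work rule: halt while the instrument head is
# within the threshold of any important-nerve coordinate.
import math

class NerveAvoidanceController:
    def __init__(self, nerve_coords, threshold_mm=5.0):
        self.nerve_coords = nerve_coords    # [(x, y), ...] from the neural network
        self.threshold = threshold_mm       # preset distance threshold (assumed unit)

    def update(self, position):
        # Return the working state for the current instrument position.
        for x, y in self.nerve_coords:
            if math.hypot(position[0] - x, position[1] - y) < self.threshold:
                return "stopped"
        return "normal"

ctrl = NerveAvoidanceController([(10.0, 12.0), (30.0, 8.0)])
print(ctrl.update((11.0, 12.5)))   # near the first nerve coordinate
print(ctrl.update((50.0, 40.0)))   # clear of both nerves
```

Calling `update` on every position sample also gives the restart behaviour described later: once the head moves back outside the threshold, the state returns to normal.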
The embodiment of the application also provides a nerve detection control device of a beauty instrument, which comprises:
an acquisition module for acquiring facial data of the detection component, wherein the facial data includes facial nerve data and facial muscle data;
the processing module is used for extracting the facial nerve data and preprocessing the facial nerve data;
the conversion module is used for inputting the preprocessed facial nerve data into a trained model, and identifying and converting the facial nerve data into image information through the trained model;
the dividing module is used for identifying the image information, and dividing it hierarchically to obtain a superficial-layer image and a deep-layer image;
the identification module is used for inputting the deep-layer image into a trained neural network, and identifying the coordinate parameters of important facial nerves through the neural network;
and the control module is used for determining the working state of the beauty instrument based on the facial important nerve coordinate parameters, wherein the working state comprises stop work and normal work.
The present application also provides a computer device comprising a memory storing a computer program and a processor implementing the steps of any of the methods described above when the computer program is executed by the processor.
The present application also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of any of the above.
According to the nerve detection control method, device, equipment and storage medium of the beauty instrument, the facial nerve data of the detection component are obtained and preprocessed. Preprocessing optimizes the facial nerve data, improves their accuracy and reduces interference from other factors. The preprocessed facial nerve data are input into a trained model, which recognizes them against the historically stored sets of facial nerve data and identifies the corresponding facial image information for different facial nerve data. The image information is identified and divided hierarchically to obtain a superficial-layer image and a deep-layer image. The deep-layer image is input into a trained neural network, which analyzes it and identifies the coordinate parameters of the important facial nerves. The operation of the beauty instrument is then controlled according to these coordinate parameters: the instrument is stopped while it passes over an important facial nerve and restarted once its position has moved past the nerve. This provides a more comfortable and appropriate care effect and prevents the beauty instrument from unnecessarily stimulating the important facial nerves at its working position.
Drawings
Fig. 1 is a flow chart of a neural detection control method of a cosmetic instrument according to an embodiment of the present application;
fig. 2 is a flow chart of a neural detection control method of a cosmetic instrument according to an embodiment of the present application;
fig. 3 is a schematic block diagram of a nerve detection control device of a cosmetic apparatus according to an embodiment of the present application;
fig. 4 is a block diagram schematically illustrating a structure of a computer device according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Referring to fig. 1, in an embodiment of the present application, there is provided a nerve detection control method of a beauty treatment device, the method including:
S1, acquiring face data of a detection component, wherein the face data comprises facial nerve data and facial muscle data;
S2, extracting the facial nerve data, and preprocessing the facial nerve data;
S3, inputting the preprocessed facial nerve data into a trained model, and recognizing and converting the facial nerve data into image information through the trained model;
S4, identifying the image information, and dividing it hierarchically to obtain a superficial-layer image and a deep-layer image;
S5, inputting the deep-layer image into a trained neural network, and identifying the coordinate parameters of important facial nerves in the deep-layer image through the neural network;
S6, determining the working state of the beauty instrument based on the coordinate parameters of the important facial nerves, wherein the working state comprises a stopped state and a normal working state.
As described in step S1 above, the detection component is configured to detect the nerve data and facial muscle data of the user's face. The detection component may be implemented as a plurality of sensors distributed at different positions of the beauty instrument to monitor facial nerve activity. These sensors may be electrical-stimulation sensors, infrared sensors, temperature sensors, resistance sensors or the like, and are used to acquire facial nerve data. The nerve data acquired by the sensors are unprocessed amplitudes and frequencies, so the specific position parameters of the nerves must be identified after the amplitudes and frequencies have been processed and optimized.
As described in step S2 above, the facial nerve data are extracted and preprocessed. Preprocessing optimizes the facial nerve data and improves the accuracy of subsequent judgments; in this step the facial nerve data may be denoised, smoothed, or adjusted into a suitable signal format.
As described in step S3 above, the preprocessed facial nerve data are input into the trained model, which recognizes them against the historically stored sets of facial nerve data and identifies the corresponding facial image information for different facial nerve data. When the model is trained, a labelled sensor data set can be collected, comprising the raw sensor readings and the corresponding image data, and stored in a database so that received facial nerve data can be quickly recognized and matched.
As described in steps S4-S5 above, the image information is identified. It contains multiple nerve images of the face, and because the important facial nerves are all distributed in the deep layer, the image information must be divided into layers; the superficial layer and the deep layer can be separated according to depth. The superficial layer contains fewer nerve cells, and the image information can be measured and analyzed quantitatively, for example under magnification, to separate the superficial layer from the deep layer. The deep-layer image is then input into the trained neural network, which analyzes it and identifies the coordinate parameters of the important facial nerves. Specifically, when the model is trained, the preprocessed images are used as inputs and the corresponding nerve coordinate parameters as labels; the model is trained on the labelled data set by minimizing a loss function, with the parameters optimized by a stochastic gradient descent algorithm. The image information to be tested is then input into the trained network, and the coordinates of the important facial nerves are obtained by forward-propagation calculation. These coordinates are used to avoid the corresponding positions, preventing the beauty instrument from unnecessarily stimulating important facial nerves, including the trigeminal nerve and the masseter nerve, during operation. These nerves are widely distributed around the facial muscles and bones and control facial sensation and movement such as expression, chewing and eye movement; if they are damaged or compressed, symptoms such as facial pain, numbness or movement disorders may result.
As described in step S6 above, the operation of the beauty instrument is controlled according to the important facial nerve coordinate parameters: the instrument is stopped when it passes over important facial nerves such as the trigeminal nerve and the masseter nerve, and restarted once its position has moved past them, so as to provide a more comfortable and appropriate care effect. It should be noted that, because there are many types of facial nerves, the instrument can be controlled to avoid important nerves selectively, preventing damage to important nerve cells.
In summary of the above steps: the facial nerve data of the detection component are acquired and preprocessed, which improves the accuracy of the data and reduces interference from other factors; the preprocessed facial nerve data are input into a trained model that has learned from multiple historically stored sets of facial nerve data and can identify the corresponding facial image information; the image information is identified and divided into a superficial-layer image and a deep-layer image; the deep-layer image is input into a trained neural network, which analyzes it and identifies the important facial nerve coordinate parameters; and the operation of the beauty instrument is controlled according to these parameters, stopping the instrument when it passes over an important facial nerve and restarting it once it has moved past, so as to provide a more comfortable and appropriate effect and prevent unnecessary stimulation of the facial nerves during operation.
In a possible embodiment, with the large-scale growth of the internet of things, networked neural detection control plays an increasingly important role in beauty instruments. The internet of things connects, in real time, any object or process that needs to be monitored or interacted with, using devices and technologies such as information sensors, radio-frequency identification, global positioning systems, infrared sensors, and laser scanners. It collects the required information, such as sound, light, heat, electricity, mechanics, chemistry, biology, and position, and through any available network access achieves ubiquitous connection between objects and people, as well as intelligent sensing, identification, and management of objects and processes. The electronic control terminal of the beauty instrument can interact with a cloud platform, which in turn connects to a third-party platform (i.e., a mobile phone), so that facial data and post-treatment results are obtained and operations can be shared and synchronized.
In yet another embodiment, the detection component is an electrical stimulation sensor. A weak electrical stimulus is applied to the user's face; a micro-current passes through the facial tissue and returns to the receiving end of the sensor, which receives the fed-back electrical signal. The facial nerves are assessed by collecting the parameters of this signal, which include the voltage waveform of the current; the amplitude, frequency, and duration of the current can be analyzed from this waveform to obtain the facial nerve data.
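The amplitude, frequency, and duration analysis of the fed-back voltage waveform can be sketched as below. This is a minimal illustration, assuming a uniformly sampled waveform; the function name and the 50 Hz test tone are invented for the example, and a real sensor pipeline would be more involved.

```python
import numpy as np

def waveform_parameters(signal, sample_rate):
    """Estimate amplitude, dominant frequency, and duration of a sampled
    voltage waveform (simplified sketch of the analysis described above)."""
    amplitude = (signal.max() - signal.min()) / 2.0
    # Dominant frequency from the one-sided spectrum of the mean-removed signal.
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    dominant = freqs[np.argmax(spectrum)]
    duration = len(signal) / sample_rate
    return amplitude, dominant, duration

rate = 1000
t = np.arange(0, 1.0, 1.0 / rate)
sig = 0.5 * np.sin(2 * np.pi * 50 * t)   # 50 Hz test tone, 0.5 V amplitude
amp, freq, dur = waveform_parameters(sig, rate)
```

For the test tone, the estimated amplitude, dominant frequency, and duration recover the known values of the synthetic signal.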
In another embodiment, because the areas around the eyes, forehead, nose wings, and lips are sensitive regions of the human face, the coordinates of these areas can be identified from the obtained image information to yield sensitive-position coordinate parameters. The working intensity of the beauty instrument is then controlled with these parameters, reducing the intensity when the instrument passes over a sensitive-position coordinate so as to avoid discomfort or injury.
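The intensity reduction over sensitive regions can be sketched as a simple bounding-box check. The region bounds, intensity values, and function name below are invented for illustration; the real coordinates would come from the identified image information.

```python
# Hypothetical sensitive regions as (x_min, y_min, x_max, y_max) bounding boxes.
SENSITIVE_REGIONS = {
    "eye_left": (10, 20, 30, 40),
    "forehead": (0, 50, 100, 80),
}

def working_intensity(x, y, normal=1.0, reduced=0.4):
    """Return reduced intensity inside any sensitive region, else normal."""
    for x0, y0, x1, y1 in SENSITIVE_REGIONS.values():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return reduced
    return normal
```

A position inside a listed region yields the reduced intensity; elsewhere, the instrument operates at full intensity.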
Referring to fig. 2, in one embodiment, inputting the deep-layer image into a trained neural network and identifying the important facial nerve coordinate parameters in the deep-layer image through the neural network includes:
S41, inputting the deep-layer image into a trained neural network, wherein the trained neural network comprises a convolution layer, a pooling layer, and a fully connected layer;
S42, obtaining the features of the trigeminal nerve and the masseter nerve in the image information through the convolution layer, reducing the dimensionality of these features through the pooling layer to obtain an abstract feature vector, and mapping the abstract feature vector to a coordinate reference space through the fully connected layer;
S43, obtaining the coordinate parameters of the trigeminal nerve and the masseter nerve in the coordinate reference space based on the result of mapping the abstract feature vector to the coordinate reference space.
The trained neural network model converts the deep-layer image into important facial nerve coordinate parameters, so that the beauty instrument can be quickly stopped when it passes over the coordinate position of a nerve. The convolution layer extracts the features of the trigeminal and masseter nerves from the input: it applies a series of learnable convolution kernels (filters) to the input data, capturing local features at different positions. Convolution effectively reduces the number of parameters through weight sharing, preserves spatial structure information, and has translation invariance; its output is called a feature map, which contains the features extracted at different positions of the input. The pooling layer reduces the dimensionality of the feature map and extracts the main features by downsampling each local region, taking either the maximum value (max pooling) or the average value (average pooling); this reduces the size of the feature map and the number of parameters, improves the computational efficiency of the model, and yields more robust main features. The fully connected layer serves classification or regression tasks: the outputs of the preceding convolution and pooling layers are flattened into a vector, which is multiplied by a weight matrix for linear transformation and feature combination, and the result is passed through a nonlinear activation function to produce the final classification or regression output. The features of the trigeminal and masseter nerves are thus obtained through the convolution layer, reduced in dimension through the pooling layer to extract an abstract feature vector, and mapped into the coordinate reference space; the coordinates of the trigeminal and masseter nerves are then read from the positions of the feature vector in that space.
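The convolution → pooling → fully connected pipeline can be sketched numerically as below. This is a toy forward pass with random, untrained weights standing in for the trained network: the shapes and helper names are invented, and it illustrates only the data flow from an image patch to a pair of output coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel, summing elementwise products."""
    h, w = kernel.shape
    out = np.empty((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+h, j:j+w] * kernel)
    return out

def max_pool(fmap, size=2):
    """Downsample by taking the maximum in each size x size block."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

image = rng.random((8, 8))    # a "deep-layer image" patch (toy data)
kernel = rng.random((3, 3))   # one convolution filter
W = rng.random((2, 9))        # fully connected layer: 9 features -> (x, y)

features = max_pool(np.maximum(conv2d(image, kernel), 0))  # conv + ReLU + pool
coords = W @ features.flatten()   # map abstract feature vector to coordinates
```

An 8x8 patch convolved with a 3x3 kernel gives a 6x6 feature map, pooling halves it to 3x3, and the flattened 9-element vector maps to two coordinate values.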
In an embodiment, during training of the neural network, the preprocessed images are input into the deep convolutional neural network. Specifically, the loss function value is used to train the network to be trained: the network is back-propagated according to the final loss function value, and its network parameters, including the learning rate and the weight matrix, are updated. The training process comprises multiple iterations, and the larger the difference between the final loss function values of two adjacent iterations, the faster the network parameters are updated. After each backward pass, it is judged whether the number of back-propagations exceeds a propagation-count threshold; if so, training is stopped and the trained network is obtained.
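The iterate-update-stop loop described above can be sketched as below. This is a minimal sketch under stated assumptions: a one-parameter quadratic loss stands in for the real network loss, and the stop condition is the propagation-count threshold from the text.

```python
def train(w, lr=0.1, max_propagations=50):
    """Gradient descent on the toy loss (w - 3)^2, stopping once the
    number of backward passes exceeds the propagation-count threshold."""
    propagations = 0
    while propagations < max_propagations:
        grad = 2.0 * (w - 3.0)   # gradient of the illustrative loss
        w -= lr * grad           # parameter update (one back-propagation step)
        propagations += 1
    return w

w_trained = train(w=0.0)
```

With these settings the parameter converges close to the loss minimum at 3.0 well before the propagation limit.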
In an embodiment, the inputting the preprocessed facial nerve data into the trained model, and identifying and converting the facial nerve data into image information through the trained model includes:
training a VAE model, and constructing a generative model from the trained VAE model, wherein the generative model comprises an encoder and a decoder;
Inputting the preprocessed facial nerve data into an encoder;
analyzing the facial nerve data through the encoder, and mapping the analyzed facial nerve data in a construction space according to a preset ordering mode;
reconstructing the facial nerve data in a construction space by the decoder;
and based on the result of the reconstruction by the decoder, converting the facial nerve data after the reconstruction into image information.
As described above, the VAE (Variational Autoencoder) is a generative model that combines an autoencoder with variational inference. It has the following functions: the VAE is an unsupervised learning method that models and generates data without labels by learning the underlying distribution of the data. Its encoder compresses the input into a low-dimensional latent representation, enabling efficient compression and feature learning; the latent variables capture the important features and patterns of variation of the data. The decoder generates reconstructed data corresponding to the original data from the encoded vectors in the latent space. This allows the VAE to be used for data reconstruction and denoising, recovering the original clean data from corrupted or noisy data, and to sample from the latent distribution learned by the encoder to generate new synthetic data with a distribution similar to the original, for example for comparison against the image information in the latent database.
Because preprocessing can distinguish corrupted from uncorrupted data, and the uncorrupted data may still contain damaged or otherwise useless values after the preliminary optimization, the preprocessed facial nerve data are further processed through the trained VAE model to obtain reconstructed facial nerve data; useless or damaged values can be optimized away during reconstruction. The encoder network can be a deep neural network comprising fully connected layers and activation functions that progressively encode the features of the facial nerve data, and the decoder takes the encoding vectors in the construction space as input and progressively decodes them into the generated image data.
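The encoder → latent space → decoder flow can be sketched as a toy, untrained VAE forward pass. All weights below are random stand-ins and the dimensions are invented; the sketch shows only the structure: encode to a latent mean and log-variance, sample a latent code, and decode it back into a data vector.

```python
import numpy as np

rng = np.random.default_rng(1)

d_in, d_lat = 6, 2                                  # toy dimensions
W_mu = rng.normal(size=(d_lat, d_in))               # encoder mean weights
W_lv = rng.normal(size=(d_lat, d_in))               # encoder log-variance weights
W_dec = rng.normal(size=(d_in, d_lat))              # decoder weights

def encode(x):
    """Map input data to latent mean and log-variance."""
    return W_mu @ x, W_lv @ x

def reparameterize(mu, logvar):
    """Sample a latent code: mu + sigma * epsilon (reparameterization trick)."""
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def decode(z):
    """Reconstruct a data vector from the latent code."""
    return np.tanh(W_dec @ z)

x = rng.random(d_in)                                # preprocessed nerve data (toy)
mu, logvar = encode(x)
recon = decode(reparameterize(mu, logvar))
```

A trained model would learn these weights by maximizing the evidence lower bound; here the point is only that the reconstruction has the same dimensionality as the input, matching the reconstruction step described above.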
In an embodiment, the preprocessing the facial nerve data includes:
analyzing the facial nerve data to obtain a noise signal and a non-noise signal;
extracting the non-noise signal, extracting the frequency parameter of the non-noise signal through wavelet transformation, and extracting the amplitude parameter of the non-noise signal through a differential algorithm;
judging whether the frequency parameter and the amplitude parameter of the non-noise signal meet the optimization condition;
and when the frequency parameter and the amplitude parameter of the non-noise signal meet the optimization conditions, carrying out iterative optimization on the frequency parameter and the amplitude parameter of the non-noise signal.
As described above, when preprocessing the facial nerve data, noise signals and non-noise signals can be separated. The noise signals are abnormal, useless, or damaged data; the non-noise signals may still be affected by interference from other electrical signals or from the environment. Extracting the non-noise signal preliminarily removes the noise and discards data that can be confirmed useless. The non-noise signal may then need further optimization: for example, when a curve or waveform of the nerve signal is constructed from the amplitude and frequency of unoptimized data, the line may appear jagged rather than smooth, so the signal must be preprocessed. The preliminarily optimized frequency parameters of the non-noise signal are extracted through wavelet transformation, and the preliminarily optimized amplitude parameters through a differential algorithm. Whether further optimization is needed is judged by setting a threshold on the trend of change: when the trend in any group of frequency or amplitude parameters exceeds the preset threshold, the optimization condition is met, and the frequency and amplitude parameters are iteratively optimized in a preset manner, for example by a stochastic gradient descent algorithm.
In an embodiment, the extracting the frequency parameter of the non-noise signal by wavelet transformation includes:
performing wavelet transformation on the non-noise signals to obtain wavelet transformation coefficients;
selecting a threshold coefficient according to characteristic change generated in the process of increasing the decomposition scale of the wavelet transform coefficient by analyzing the wavelet transform coefficient;
and filtering the threshold coefficient and the wavelet transform coefficients to obtain the frequency parameter.
As mentioned above, wavelet transformation is a signal-analysis technique that decomposes a signal into components at different scales and frequencies, providing better joint time-domain and frequency-domain resolution. The obtained wavelet transform coefficients are analyzed, and a threshold coefficient is selected from the feature changes that appear as the decomposition scale increases, which improves computational efficiency; filtering with the threshold coefficient and the wavelet transform coefficients achieves noise reduction. In the wavelet transform, a signal can be expressed as a linear combination of wavelet coefficients at different scales and frequencies. By thresholding the wavelet coefficients, the smaller coefficients can be zeroed or shrunk toward zero, thereby removing the noise and obtaining the optimized frequency parameters.
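The decompose-threshold-reconstruct idea can be sketched with a single-level Haar wavelet, the simplest wavelet basis. This is an illustrative sketch, assuming an even-length signal and hard thresholding; the patent does not name a specific wavelet, and a real implementation would typically use a multi-level transform.

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet denoising: split the signal into approximation
    and detail coefficients, zero the small detail coefficients (hard
    thresholding), then reconstruct."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)     # low-frequency content
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)     # high-frequency content
    detail[np.abs(detail) < threshold] = 0.0      # suppress small (noisy) details
    out = np.empty_like(s)
    out[0::2] = (approx + detail) / np.sqrt(2)    # inverse transform
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out
```

With a zero threshold the transform is exactly invertible; with a positive threshold, small high-frequency wiggles are smoothed away while the overall shape of the signal is kept.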
In an embodiment, the extracting the amplitude parameter of the non-noise signal by a differential algorithm includes:
converting the non-noise signal into an array model of continuous sampling points, and performing difference calculation on two adjacent sampling points to obtain a differential signal;
taking the absolute value of the differential signal to obtain an amplitude parameter;
the formula for calculating the difference value between the two adjacent sampling points is as follows: Δx (t) =x (t+1) -x (t), x (t+1) being a first sampling point and x (t) being a second sampling point adjacent to the first sampling point.
As described above, when extracting the amplitude parameter of the non-noise signal through the differential algorithm, the non-noise signal is sampled and converted into an array of consecutive sampling points, and the difference between two adjacent sampling points is computed; the difference may be a forward, backward, or central difference. The absolute value of the resulting differential signal then gives the amplitude parameter, that is: amplitude(t) = |Δx(t)|. The differential algorithm captures the trend of the signal while removing the influence of noise.
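The formula above, Δx(t) = x(t+1) − x(t) followed by amplitude(t) = |Δx(t)|, can be written directly with a forward difference. The function name is illustrative.

```python
import numpy as np

def amplitude_parameter(samples):
    """Forward difference of adjacent samples, then absolute value:
    amplitude(t) = |x(t+1) - x(t)|, matching the formula above."""
    diff = np.diff(samples)   # Δx(t) = x(t+1) - x(t)
    return np.abs(diff)

amp = amplitude_parameter([0.0, 0.5, -0.3, -0.3])
```

For n sampling points this yields n − 1 amplitude values, one per adjacent pair.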
In an embodiment, controlling the beauty instrument based on the important facial nerve coordinate parameters includes:
Acquiring position information of the beauty instrument;
judging whether the position information of the beauty instrument meets the condition of stopping work or not based on the facial important nerve coordinate parameters;
when the distance between the position information of the beauty instrument and the nerve coordinate parameter is smaller than a preset distance threshold value, judging that the position information of the beauty instrument meets the condition of stopping work, and controlling the beauty instrument to stop work.
As described above, the position of the beauty instrument is acquired, and based on the important facial nerve coordinate parameters it is judged whether the instrument is close to the coordinate position of a facial nerve. When it is, the instrument is stopped, preventing damage to the facial nerve during operation; once the instrument has moved past the nerve coordinate position, it is controlled to resume working.
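The stop/resume decision against the preset distance threshold can be sketched as below. The nerve coordinates and threshold value are invented for illustration; in the described system they would come from the neural network's output and the device configuration.

```python
import math

# Hypothetical identified nerve coordinates and preset distance threshold.
NERVE_COORDS = [(12.0, 30.0), (40.0, 18.0)]
STOP_DISTANCE = 5.0

def instrument_state(x, y):
    """Stop when the instrument is within STOP_DISTANCE of any nerve
    coordinate; otherwise keep working."""
    for nx, ny in NERVE_COORDS:
        if math.hypot(x - nx, y - ny) < STOP_DISTANCE:
            return "stopped"
    return "working"
```

Positions near a nerve coordinate return "stopped"; once the instrument moves beyond the threshold distance, the state returns to "working", which is the resume behavior described above.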
In a possible embodiment, by providing independent control of the plurality of driving components of the beauty instrument, when the instrument passes over a facial nerve position, the driving components close to that position can be stopped or operated at an intensity reduced to a range acceptable to the nerves, while the other driving components farther from the facial nerve keep working.
According to the neural detection control method of the beauty instrument described above, the facial nerve data of the detection component are acquired and preprocessed, which optimizes the data, improves their accuracy, and reduces interference from other factors. The preprocessed facial nerve data are input into a trained model that has learned from multiple historically stored sets of facial nerve data and identifies the corresponding facial image information. The image information is input into a trained neural network, which analyzes it and identifies the important facial nerve coordinate parameters. The operation of the beauty instrument is then controlled according to these parameters: the instrument is stopped when it passes over a facial nerve and restarted once it has moved past, providing a more comfortable and appropriate care effect and preventing unnecessary stimulation of the facial nerves during operation. From the above analysis, the embodiments of the present application can effectively determine the positions of the facial nerves and prevent unnecessary stimulation of them.
Referring to fig. 3, in an embodiment of the present application, there is further provided a nerve detection control device of a cosmetic apparatus, including:
an acquisition module 1 for acquiring face data of the detection component, wherein the face data includes facial nerve data and facial muscle data;
the processing module 2 is used for extracting the facial nerve data and preprocessing the facial nerve data;
a conversion module 3, configured to input the preprocessed facial nerve data into a trained model, and identify and convert the facial nerve data into image information through the trained model;
the dividing module 4 is used for identifying the image information and dividing it into layers to obtain a superficial-layer image and a deep-layer image;
the recognition module 5 is used for inputting the deep-layer image into a trained neural network and identifying the important facial nerve coordinate parameters in the deep-layer image through the neural network, wherein the important nerves comprise the trigeminal nerve and the masseter nerve;
and the control module 6 is used for determining the working state of the beauty instrument based on the facial important nerve coordinate parameters, wherein the working state comprises stop work and normal work.
As described above, it may be understood that each component of the neural detection control device of the beauty instrument set forth in the present application may implement the function of any one of the neural detection control methods of the beauty instrument as described above, and the specific structure will not be described again.
Referring to fig. 4, a computer device is further provided in an embodiment of the present application; the computer device may be a server, and its internal structure may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus, wherein the processor provides computing and control capabilities. The memory includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system, computer programs, and a database, and the internal memory provides an environment for running the operating system and computer programs in the non-volatile storage medium. The database of the computer device stores data such as monitoring data, and the network interface communicates with an external terminal through a network connection. The computer program, when executed by the processor, implements the neural detection control method of the beauty instrument.
The processor executes the neural detection control method of the beauty instrument, which comprises the following steps: acquiring face data of the detection component, wherein the face data comprises facial nerve data and facial muscle data; extracting the facial nerve data and preprocessing the facial nerve data; inputting the preprocessed facial nerve data into a trained model, and identifying and converting the facial nerve data into image information through the trained model; identifying the image information, and dividing it into layers to obtain a superficial-layer image and a deep-layer image; inputting the image information into a trained neural network, and identifying the important facial nerve coordinate parameters in the image information through the neural network; and determining the working state of the beauty instrument based on the important facial nerve coordinate parameters, wherein the working state comprises stopped operation and normal operation.
According to the neural detection control method of the beauty instrument executed above, the facial nerve data of the detection component are acquired and preprocessed, which optimizes the data, improves their accuracy, and reduces interference from other factors. The preprocessed facial nerve data are input into a trained model that has learned from multiple historically stored sets of facial nerve data and identifies the corresponding facial image information. The image information is input into a trained neural network, which analyzes it and identifies the important facial nerve coordinate parameters. The operation of the beauty instrument is then controlled according to these parameters: the instrument is stopped when it passes over a facial nerve and restarted once it has moved past, providing a more comfortable and appropriate care effect and preventing unnecessary stimulation of the facial nerves during operation.
An embodiment of the present application further provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the nerve detection control method of the beauty instrument, comprising the steps of: acquiring face data of the detection component, wherein the face data comprises facial nerve data and facial muscle data; extracting the facial nerve data and preprocessing the facial nerve data; inputting the preprocessed facial nerve data into a trained model, and identifying and converting the facial nerve data into image information through the trained model; identifying the image information, and dividing it into layers to obtain a superficial-layer image and a deep-layer image; inputting the image information into a trained neural network, and identifying the important facial nerve coordinate parameters in the image information through the neural network; and determining the working state of the beauty instrument based on the important facial nerve coordinate parameters, wherein the working state comprises stopped operation and normal operation.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the steps of the method embodiments described above. Any reference to memory, storage, database, or other medium provided herein and used in the embodiments may include non-volatile and/or volatile memory. The non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, apparatus, article or method that comprises the element.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims of the present application.
Claims (10)
1. A nerve detection control method of a cosmetic apparatus, wherein the cosmetic apparatus includes a detection assembly, the method comprising:
acquiring face data of the detection component, wherein the face data comprises facial nerve data and facial muscle data;
Extracting the facial nerve data and preprocessing the facial nerve data;
inputting the preprocessed facial nerve data into a trained model, and identifying and converting the facial nerve data into image information through the trained model;
identifying the image information, and dividing the image information into layers to obtain a superficial-layer image and a deep-layer image;
inputting the deep-layer image into a trained neural network, and identifying facial important nerve coordinate parameters in the deep-layer image through the neural network, wherein the important nerves comprise the trigeminal nerve and the masseter nerve;
and determining the working state of the beauty instrument based on the facial important nerve coordinate parameters, wherein the working state comprises stop work and normal work.
2. The neural detection control method of a cosmetic instrument according to claim 1, wherein inputting the deep-layer image into a trained neural network and identifying facial important nerve coordinate parameters in the deep-layer image through the neural network comprises:
inputting the deep-layer image into a trained neural network, wherein the trained neural network comprises a convolution layer, a pooling layer, and a fully connected layer;
obtaining the features of the trigeminal nerve and the masseter nerve in the image information through the convolution layer, reducing the dimensionality of these features through the pooling layer to obtain an abstract feature vector, and mapping the abstract feature vector to a coordinate reference space through the fully connected layer;
and obtaining the coordinate parameters of the trigeminal nerve and the masseter nerve in the coordinate reference space based on the result of mapping the abstract feature vector to the coordinate reference space.
3. The neural detection control method of a cosmetic instrument according to claim 1, wherein the inputting the preprocessed facial neural data into a trained model, and the recognizing and converting the facial neural data into image information by the trained model, comprises:
training a VAE model, and constructing a generative model from the trained VAE model, wherein the generative model comprises an encoder and a decoder;
inputting the preprocessed facial nerve data into an encoder;
analyzing the facial nerve data through the encoder, and mapping the analyzed facial nerve data in a construction space according to a preset ordering mode;
Reconstructing the facial nerve data in a construction space by the decoder;
and based on the result of the reconstruction by the decoder, converting the facial nerve data after the reconstruction into image information.
4. The neural detection control method of a cosmetic instrument according to claim 1, wherein the preprocessing of the facial nerve data comprises:
analyzing the facial nerve data to obtain a noise signal and a non-noise signal;
extracting the non-noise signal, extracting the frequency parameter of the non-noise signal through wavelet transformation, and extracting the amplitude parameter of the non-noise signal through a differential algorithm;
judging whether the frequency parameter and the amplitude parameter of the non-noise signal meet an optimization condition;
and when the frequency parameter and the amplitude parameter of the non-noise signal meet the optimization conditions, iteratively optimizing the frequency parameter and the amplitude parameter of the non-noise signal according to a preset mode.
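Claim 4 does not say how the noise and non-noise components are separated; a simple amplitude gate is one possible reading. The threshold below is a hypothetical value, not one from the patent:

```python
# Hypothetical noise floor: samples below it are treated as noise, the
# rest form the non-noise signal analysed for frequency and amplitude.
NOISE_FLOOR = 0.05

samples = [0.01, 0.30, -0.02, 0.45, 0.03, -0.40, 0.02, 0.25]
noise = [s for s in samples if abs(s) < NOISE_FLOOR]    # discarded part
signal = [s for s in samples if abs(s) >= NOISE_FLOOR]  # analysed part
```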
5. The neural detection control method of a cosmetic instrument according to claim 4, wherein the extracting the frequency parameter of the non-noise signal by wavelet transform comprises:
performing wavelet transformation on the non-noise signals to obtain wavelet transformation coefficients;
selecting a threshold coefficient, by analyzing the wavelet transform coefficients, according to the characteristic changes generated as the decomposition scale of the wavelet transform increases;
and filtering with the threshold coefficient and the wavelet transform coefficients to obtain the frequency parameter.
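Claim 5 specifies neither the wavelet basis nor the filtering rule. As one hedged reading, the sketch below uses a single-level Haar transform and soft thresholding; both choices and the threshold value are stand-ins:

```python
# One level of a Haar wavelet decomposition followed by soft-threshold
# filtering of the detail coefficients. The Haar basis and the 0.2
# threshold are illustrative assumptions, not disclosed by the patent.
import math

def haar_step(signal):
    """One Haar decomposition level: approximation + detail coefficients."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / math.sqrt(2)
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2)
              for i in range(len(signal) // 2)]
    return approx, detail

def soft_threshold(coeffs, thresh):
    """Shrink coefficients toward zero; small ones are filtered out."""
    return [math.copysign(max(abs(c) - thresh, 0.0), c) for c in coeffs]

signal = [1.0, 1.2, 0.9, 1.1, 3.0, 3.2, 1.0, 0.8]  # toy non-noise signal
approx, detail = haar_step(signal)
filtered_detail = soft_threshold(detail, thresh=0.2)
```

Libraries such as PyWavelets provide multi-level decompositions, which would correspond to the claim's "increasing decomposition scale".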
6. The neural detection control method of a cosmetic instrument according to claim 4, wherein the extracting the amplitude parameter of the non-noise signal by the differential algorithm includes:
converting the non-noise signal into an array model of continuous sampling points, and performing difference calculation on two adjacent sampling points to obtain a differential signal;
taking the absolute value of the differential signal to obtain an amplitude parameter;
the formula for calculating the difference between two adjacent sampling points being Δx(t) = x(t+1) − x(t), where x(t+1) is a first sampling point and x(t) is a second sampling point adjacent to the first sampling point.
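The claim-6 difference calculation maps directly to code; the sample values below are arbitrary illustrations:

```python
def amplitude_parameters(samples):
    """Claim-6 computation: difference adjacent sampling points with
    delta_x(t) = x(t+1) - x(t), then take absolute values."""
    diffs = [samples[t + 1] - samples[t] for t in range(len(samples) - 1)]
    return [abs(d) for d in diffs]

x = [0.0, 0.5, 0.2, 0.9, 0.9]  # toy array model of sampling points
amps = amplitude_parameters(x)
```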
7. The neural detection control method of a cosmetic instrument according to claim 1, wherein determining the working state of the cosmetic instrument based on the important facial nerve coordinate parameters comprises:
acquiring position information of the beauty instrument;
judging, based on the important facial nerve coordinate parameters, whether the position information of the beauty instrument meets a stop-work condition;
and when the distance between the position of the beauty instrument and a nerve coordinate parameter is less than a preset distance threshold, judging that the position information of the beauty instrument meets the stop-work condition, and controlling the beauty instrument to stop working.
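The claim-7 safety check reduces to a distance comparison against a preset threshold. In the sketch below, the threshold value and the 2-D coordinates are hypothetical; the patent leaves both to the implementation:

```python
# Stop the instrument when it comes within a preset distance of any
# detected facial nerve coordinate; otherwise keep working normally.
import math

STOP_DISTANCE = 5.0  # hypothetical preset threshold

def working_state(instrument_pos, nerve_coords):
    """Return 'stop' if the instrument is within STOP_DISTANCE of any
    important facial nerve coordinate, else 'normal'."""
    for nerve in nerve_coords:
        if math.dist(instrument_pos, nerve) < STOP_DISTANCE:
            return "stop"
    return "normal"

nerves = [(10.0, 20.0), (42.0, 13.0)]  # toy nerve coordinate parameters
state_near = working_state((12.0, 23.0), nerves)  # ~3.6 units from a nerve
state_far = working_state((30.0, 30.0), nerves)
```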
8. A nerve detection control device of a beauty instrument, comprising:
an acquisition module for acquiring facial data from a detection component, wherein the facial data includes facial nerve data and facial muscle data;
the processing module is used for extracting the facial nerve data and preprocessing the facial nerve data;
the conversion module is used for inputting the preprocessed facial nerve data into a trained model, and identifying and converting the facial nerve data into image information through the trained model;
the dividing module is used for identifying the image information, and carrying out hierarchical division on the image information to obtain a surface shallow image and a smooth deep image;
the identification module is used for inputting the image information into a trained neural network, and identifying the important facial nerve coordinate parameters in the image information through the neural network;
and the control module is used for determining the working state of the beauty instrument based on the important facial nerve coordinate parameters, wherein the working state comprises stopping work and working normally.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311563390.9A CN117503062B (en) | 2023-11-21 | 2023-11-21 | Neural detection control method, device, equipment and storage medium of beauty instrument |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117503062A (en) | 2024-02-06 |
CN117503062B (en) | 2024-04-09 |
Family
ID=89764120
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311563390.9A Active CN117503062B (en) | 2023-11-21 | 2023-11-21 | Neural detection control method, device, equipment and storage medium of beauty instrument |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117503062B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102009038976A1 (en) * | 2008-12-15 | 2010-07-15 | Siemens Aktiengesellschaft | Sensor arrangement for medical diagnostic system for detecting action potentials of nerve cells in e.g. animal body, has scanning unit including communication module for transmission of information to evaluation system |
CN105873506A (en) * | 2013-11-07 | 2016-08-17 | 赛佛欧普手术有限公司 | Systems and methods for detecting nerve function |
CN107106840A (en) * | 2014-10-07 | 2017-08-29 | 纽罗路普有限公司 | The component that can be implanted into |
CN107145833A (en) * | 2017-04-11 | 2017-09-08 | 腾讯科技(上海)有限公司 | The determination method and apparatus of human face region |
CN108523887A (en) * | 2017-03-02 | 2018-09-14 | Smk株式会社 | The guide device of organism electrode |
CN209203263U (en) * | 2018-06-05 | 2019-08-06 | 南京仁康医院有限公司 | A kind of MTN ganglioside GM_3 repairing and treating facial nerve disease equipment |
CN110705428A (en) * | 2019-09-26 | 2020-01-17 | 北京智能工场科技有限公司 | Facial age recognition system and method based on impulse neural network |
CN113095310A (en) * | 2021-06-10 | 2021-07-09 | 杭州魔点科技有限公司 | Face position detection method, electronic device and storage medium |
CN114711959A (en) * | 2022-04-24 | 2022-07-08 | 史四季 | Skin surgery laser nursing device and system |
CN115227215A (en) * | 2022-07-27 | 2022-10-25 | 西安科悦医疗技术有限公司 | Resonance respiration-based non-invasive vagal nerve stimulation method and related device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150142079A1 (en) * | 2013-11-20 | 2015-05-21 | Jay Pensler | Method for stimulating facial muscles |
US10478623B2 (en) * | 2014-10-20 | 2019-11-19 | Indiana University Research And Technology Corporation | System and method for non-invasively controlling autonomic nerve activity |
US20230072423A1 (en) * | 2018-01-25 | 2023-03-09 | Meta Platforms Technologies, Llc | Wearable electronic devices and extended reality systems including neuromuscular sensors |
CA3187876A1 (en) * | 2022-02-09 | 2023-08-09 | Little Angel Medical Inc. | System and method for automatic personalized assessment of human body surface conditions |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Seal et al. | DeprNet: A deep convolution neural network framework for detecting depression using EEG | |
WO2021143353A1 (en) | Gesture information processing method and apparatus, electronic device, and storage medium | |
Abbaspour et al. | Evaluation of surface EMG-based recognition algorithms for decoding hand movements | |
Geethanjali | Myoelectric control of prosthetic hands: state-of-the-art review | |
Del Testa et al. | Lightweight lossy compression of biometric patterns via denoising autoencoders | |
CN109165556B (en) | Identity recognition method based on GRNN | |
Mane et al. | Hand motion recognition from single channel surface EMG using wavelet & artificial neural network | |
Ko et al. | Emotion recognition using EEG signals with relative power values and Bayesian network | |
CN111881812A (en) | Multi-modal emotion analysis method and system based on deep learning for acupuncture | |
CN111954250A (en) | Lightweight Wi-Fi behavior sensing method and system | |
Jothiraj et al. | Classification of EEG signals for detection of epileptic seizure activities based on feature extraction from brain maps using image processing algorithms | |
KR20200108969A (en) | Method and Apparatus for Cyclic Time Series Data Feature Extraction | |
Huang et al. | Robust multi-feature collective non-negative matrix factorization for ECG biometrics | |
Hwaidi et al. | A noise removal approach from eeg recordings based on variational autoencoders | |
CN113143261B (en) | Myoelectric signal-based identity recognition system, method and equipment | |
CN115211858A (en) | Emotion recognition method and system based on deep learning and storable medium | |
CN117503062B (en) | Neural detection control method, device, equipment and storage medium of beauty instrument | |
CN116392087B (en) | Sleep stability quantification and adjustment method, system and device based on modal decomposition | |
Benitez et al. | Robust unsupervised detection of action potentials with probabilistic models | |
CN116712056A (en) | Characteristic image generation and identification method, equipment and storage medium for electrocardiogram data | |
CN115813409A (en) | Ultra-low-delay moving image electroencephalogram decoding method | |
Vijayvargiya et al. | PC-GNN: Pearson Correlation-Based Graph Neural Network for Recognition of Human Lower Limb Activity Using sEMG Signal | |
CN111789592B (en) | Electroencephalogram recognition method based on topological feature fusion | |
Emara et al. | A Hybrid Compressive Sensing and Classification Approach for Dynamic Storage Management of Vital Biomedical Signals | |
Al Nazi et al. | Motor Imagery EEG Classification Using Random Subspace Ensemble Network with Variable Length Features. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||