CN114821115A - APP-based multifunctional electronic acupuncture system - Google Patents

APP-based multifunctional electronic acupuncture system

Info

Publication number
CN114821115A
CN114821115A (publication) · CN202210407914.4A (application)
Authority
CN
China
Prior art keywords
acupuncture
module
simulation
data
electronic acupuncture
Prior art date
Legal status
Pending
Application number
CN202210407914.4A
Other languages
Chinese (zh)
Inventor
刘方铭
Current Assignee
First Affiliated Hospital of Shandong First Medical University
Original Assignee
First Affiliated Hospital of Shandong First Medical University
Priority date
Filing date
Publication date
Application filed by First Affiliated Hospital of Shandong First Medical University filed Critical First Affiliated Hospital of Shandong First Medical University
Priority claimed from application CN202210407914.4A
Publication of CN114821115A
Legal status: Pending

Classifications

    • G06V10/467 — Encoded features or binary features, e.g. local binary patterns [LBP]
    • A61N1/36014 — External stimulators, e.g. with patch electrodes
    • A61N1/36031 — Control systems using physiological parameters for adjustment
    • G06F18/2411 — Classification based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/2415 — Classification based on parametric or probabilistic models
    • G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 — Input arrangements based on nervous system activity detection, e.g. brain waves [EEG], electromyograms [EMG], electrodermal response detection
    • G06N3/044 — Recurrent networks, e.g. Hopfield networks
    • G06N3/045 — Combinations of networks
    • G06N3/047 — Probabilistic or stochastic networks
    • G06V10/454 — Integrating biologically inspired filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/54 — Extraction of image or video features relating to texture
    • G06V10/764 — Image or video recognition using pattern recognition or machine learning, using classification
    • G06V10/7715 — Feature extraction, e.g. by transforming the feature space; mappings, e.g. subspace methods
    • G06V10/806 — Fusion of extracted features
    • G06V10/82 — Image or video recognition using neural networks
    • G06V40/70 — Multimodal biometrics, e.g. combining information from different biometric modalities
    • G16H20/30 — ICT for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G16H40/63 — ICT for the operation of medical equipment or devices, for local operation
    • G06F2203/011 — Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, skin temperature, facial expressions, iris, voice pitch, brain activity patterns

Abstract

The invention belongs to the technical field of electronic acupuncture and discloses an APP-based multifunctional electronic acupuncture system comprising a power supply module, a body information acquisition module, a main control module, a pulse module, an acupuncture module, an emotion analysis module, an acupuncture simulation module and a display module. The emotion analysis module weakens the interference of abnormal stimulation; the acupuncture simulation module allows the electronic acupuncture simulation effect to be evaluated objectively; and the electronic acupuncture simulation data can be adjusted in a targeted manner according to the acquired second data parameter, improving the controllability of the simulation effect.

Description

APP-based multifunctional electronic acupuncture system
Technical Field
The invention belongs to the technical field of electronic acupuncture, and particularly relates to an APP-based multifunctional electronic acupuncture system.
Background
Electronic acupuncture differs from traditional acupuncture in that it stimulates acupuncture points with audio-frequency pulses, and differs from ordinary electro-acupuncture in that it uses no silver needle, does not puncture the skin, and causes no wound or side effect. It discards the drawbacks of traditional acupuncture while inheriting and developing its advantages, and it can treat the diseases treatable by traditional acupuncture and moxibustion. However, in existing emotion recognition methods for APP-based multifunctional electronic acupuncture systems, although non-physiological signals such as facial expressions and voice are combined with physiological signals such as the electroencephalogram (EEG) and heart rate, acquiring the voice and similar signals interferes with the EEG and heart-rate measurements, degrading emotion-recognition accuracy. Meanwhile, existing electronic acupuncture simulation cannot be controlled precisely, so the controllability of the simulation effect is low.
In summary, the prior art suffers from two problems: signal-acquisition interference that degrades emotion-recognition accuracy, and low controllability of the electronic acupuncture simulation effect.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an APP-based multifunctional electronic acupuncture system.
The invention is realized as follows. An APP-based multifunctional electronic acupuncture system comprises:
a power supply module, a body information acquisition module, a main control module, a pulse module, an acupuncture module, an emotion analysis module, an acupuncture simulation module and a display module;
the power supply module is connected with the main control module and used for supplying power to the APP-based multifunctional electronic acupuncture system;
the body information acquisition module is connected with the main control module and is used for acquiring body contour information and weight information of the user;
the main control module is connected with the power supply module, the body information acquisition module, the pulse module, the acupuncture module, the emotion analysis module, the acupuncture simulation module and the display module and is used for controlling the normal work of each module;
the pulse module is connected with the main control module and used for generating pulses to stimulate acupuncture points through a pulse circuit;
the method for generating the pulse by the pulse circuit comprises the following steps: establishing a degradation mechanism model of the pulse and determining an optimization frame;
constructing a data item for modeling pulsed anomalous stimulation; selecting a prior item, and combining the prior item with a data item to construct a pulse non-blind deconvolution model; carrying out numerical optimization solution on the non-blind deconvolution model by using an iterative reweighted least square algorithm and a conjugate gradient method to obtain a stimulation track; the method for constructing the data item to model the pulse abnormal stimulation specifically comprises the following steps:
(1) the following nonlinear function is selected to suppress the influence of salt-and-pepper-type abnormal stimulation:
[equation image in the original filing; not reproduced here]
wherein a and b are function parameters: a controls the degree of nonlinearity at the truncation point, b controls the truncation point, and x is the gray value of the track point; the original filing plots the curve of t(x) for a = 5e2 and b = 0.1;
(2) combining the L1 norm with the nonlinear function above yields the data term shown below:
[equation image in the original filing; not reproduced here]
where i is the stimulation trajectory index.
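The exact form of t(x) is given only as an image in the original filing. As an illustration, a sigmoid-style truncation is consistent with the stated roles of the parameters (a sets how sharply the function drops at the truncation point, b sets the truncation point): residuals below b keep near-full influence, while large salt-and-pepper-like values are suppressed. The function below is an assumption for illustration, not the patent's actual formula.

```python
import math

def t(x, a=5e2, b=0.1):
    """Hypothetical sigmoid-style truncation: close to 1 for x < b, close to 0 for x > b.

    'a' sets how sharply the function drops at the truncation point 'b';
    'x' is the (normalized) gray value of a track point, as in the text.
    This form is an illustrative assumption, not the patent's equation.
    """
    return 1.0 / (1.0 + math.exp(min(a * (x - b), 700.0)))  # clamp exponent to avoid overflow

# Residuals below the truncation point keep near-full weight; large
# salt-and-pepper-like values are driven toward zero weight.
small, large = t(0.05), t(0.5)
```

With a = 5e2 and b = 0.1 as in the text, t(0.05) is close to 1 while t(0.5) is close to 0, which is the truncation behaviour the data term relies on.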
The non-blind deconvolution model solving method specifically comprises the following steps:
(1) the non-blind deconvolution model is rewritten in the weighted least squares form shown below:
[equation image in the original filing; not reproduced here]
wherein W_h and W_v are diagonal weight matrices whose diagonal elements are computed as follows:
[equation images in the original filing; not reproduced here]
where i is the matrix element index and T′(|(Kx−y)_i|) is the derivative of the nonlinear function T(|(Kx−y)_i|);
(2) the energy function in the least-squares form of the non-blind deconvolution model is differentiated and the derivative set equal to zero, yielding the following linear equation:
[equation image in the original filing; not reproduced here]
order to
Figure BDA0003602814260000034
b=K T W 2 y and A are symmetric positive definite matrixes, and solving by the upper yoke gradient method;
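The solver loop described above (rebuild the diagonal weights from the current residual, then solve the resulting symmetric positive-definite normal equations by conjugate gradient) can be sketched generically. The patent's operator K, prior term, and exact weight formulas are given only as images, so the sketch below uses the standard IRLS weights for an L1 data term; every name and formula here is an illustrative assumption, not the patent's exact scheme.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def irls(K, y, n_iter=10, eps=1e-6):
    """Toy IRLS for min ||Kx - y||_1: each outer iteration rebuilds the
    squared diagonal weights W^2 from the current residual and solves the
    normal equations K^T W^2 K x = K^T W^2 y by conjugate gradient."""
    x = conjugate_gradient(K.T @ K, K.T @ y)        # plain least-squares start
    for _ in range(n_iter):
        w2 = 1.0 / (np.abs(K @ x - y) + eps)        # standard L1 IRLS weights (squared)
        A = K.T @ (w2[:, None] * K)                 # symmetric positive definite
        bb = K.T @ (w2 * y)
        x = conjugate_gradient(A, bb)
    return x
```

On a toy problem (fitting a constant to data containing one gross outlier), the L1/IRLS estimate converges to the median-like value rather than the outlier-contaminated mean, which is the robustness the data term is designed to provide.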
the acupuncture module is connected with the main control module and is used for connecting to the pulse circuit through electronic contacts to perform the acupuncture operation on acupuncture points;
the emotion analysis module is connected with the main control module and is used for analyzing emotion changes of the user in the acupuncture process;
the acupuncture simulation module is connected with the main control module and is used for simulating acupuncture;
and the display module is connected with the main control module and is used for displaying body information, emotion analysis results and acupuncture simulation information.
Further, the emotion analysis module analysis method is as follows:
(1) acquiring physical-condition information of the treated user; collecting facial-expression images of the user during electronic acupuncture, processing the collected images, and extracting the Gabor wavelet features of the expressions through Gabor wavelet transformation;
(2) acquiring an electroencephalogram curve image of the user during electronic acupuncture, extracting its texture features with the uniform local binary pattern (UPLBP), and reducing the dimensionality of the image;
(3) performing multi-modal feature fusion on the features extracted in steps (1) and (2) with a CNN-LSTM network and classifying the emotion;
(4) giving an operation suggestion for the electronic acupuncture system according to the emotion classification result and adjusting the strength or frequency of the electronic acupuncture.
further, the specific processing process of the multi-modal feature fusion and emotion classification comprises:
fusing the Gabor wavelet characteristics of the obtained expression image and the characteristics of the electroencephalogram curve image obtained in the second step into a characteristic vector;
the merging into one feature vector specifically comprises: weighting the features extracted from the images of the different modalities by variable-weight sparse linear fusion to synthesize one feature vector, with the feature-fusion weighting formula:
O(x) = γK(x) + (1 − γ)F(x)    (1)
wherein:
K(x) denotes the features of the electroencephalogram curve image;
F(x) denotes the facial-expression features;
γ is an empirical weight coefficient for the influence of the different characteristics on the electroencephalogram curve;
converting the fused feature vector into tensor form; iterating over different values of batch_size (the number of samples selected per training pass), randomly drawing training samples at each iteration as input data for the CNN-LSTM network, and feeding them into the CNN-LSTM network;
adjusting the initial structure and network parameters of the CNN and iterating; the adjustment specifically comprises: tuning the number of convolutional layers in the initial CNN structure and the learning rate, and selecting the initial structure and parameters that give the best network accuracy and elapsed time;
extracting image features in the CNN through multiple convolution-pooling layers to obtain a five-dimensional tensor feature map;
without changing any value in the feature map, converting the five-dimensional tensor feature map into the three-dimensional tensor feature map that meets the LSTM input requirement and feeding it into the LSTM layer for processing;
feeding the output of the LSTM layer into a fully connected layer and a function layer for SVM classification;
in the SVM classification, training is performed on the emotion classes corresponding to the SVM input feature vectors; the feature whose class yields the minimum loss-function value is selected to represent the emotion category, producing the emotion classification result as a one-dimensional array, and the trained neural network is saved; the array contains the predicted emotion-class information for each trained sample;
finally, the predicted emotion-class information is compared with the actual emotion classes to obtain the prediction accuracy of the trained network, and the weight ratio between the electroencephalogram-image features and the facial-expression features in the feature-fusion weighting formula is corrected continually according to the recognition accuracy.
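Two of the steps above, the variable-weight fusion of formula (1) and the value-preserving conversion of the 5-D feature map to the 3-D tensor an LSTM expects, can be sketched with numpy. The feature-map layout (batch, timesteps, height, width, channels) and all names are illustrative assumptions, not the patent's.

```python
import numpy as np

def fuse_features(eeg_feat, expr_feat, gamma=0.6):
    """Variable-weight linear fusion O = gamma*K + (1-gamma)*F, as in formula (1).

    eeg_feat (K) and expr_feat (F) must already share one dimensionality;
    gamma is the empirical weight of the electroencephalogram-curve features.
    """
    return gamma * eeg_feat + (1.0 - gamma) * expr_feat

def to_lstm_input(feature_map):
    """Flatten an assumed 5-D CNN feature map (batch, t, h, w, c) into the
    3-D (batch, timesteps, features) tensor an LSTM layer expects, without
    changing any value, only the shape."""
    b, t, h, w, c = feature_map.shape
    return feature_map.reshape(b, t, h * w * c)
```

The reshape preserves every value (only the layout changes), which is exactly the "without changing the numerical values" requirement stated in the text.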
Further, the loss function adopted by the function layer is a softmax function.
Further, the simulation method of the acupuncture simulation module comprises the following steps:
1) building a simulation database through a database program; acquiring a first data parameter, which comprises environmental data and/or physiological data of the user; storing the acquired data in the simulation database; and determining, according to the first data parameter, which electronic acupuncture simulators arranged on a wearable device to activate and the simulation mode of those simulators;
2) controlling the electronic acupuncture simulators to output electronic acupuncture simulation signals to the attached body positions according to the simulation mode; acquiring a second data parameter of the user during the electronic acupuncture simulation, performing preset processing based on the second data parameter, and storing the acquired data in the simulation database.
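Steps 1)-2) amount to comparing acquired parameters against pre-stored references to choose which simulators to activate and which mode to run, then adjusting the mode from feedback. A hypothetical sketch follows; all parameter names, thresholds, simulator positions, and mode labels are invented for illustration and do not come from the patent.

```python
# Illustrative sketch: compare a first data parameter with pre-stored
# reference values to decide which simulators on the wearable device to
# activate and which simulation mode to use, then adjust the mode from a
# second (feedback) data parameter. Everything here is hypothetical.

REFERENCE = {"heart_rate": 75.0, "skin_temp": 33.0}  # pre-stored reference parameters

def select_simulation(first_params):
    """Step 1): choose activated simulator positions and a mode by comparison."""
    active, mode = [], "gentle"
    if first_params.get("heart_rate", 0.0) > REFERENCE["heart_rate"]:
        active.append("wrist_simulator")       # position chosen from the comparison
        mode = "calming_low_frequency"
    if first_params.get("skin_temp", 99.0) < REFERENCE["skin_temp"]:
        active.append("neck_simulator")
    return {"active_simulators": active, "mode": mode}

def adjust_mode(state, second_params):
    """Step 2): tune the simulation mode from the second data parameter."""
    if second_params.get("discomfort", False):
        state["mode"] = "gentle"
    return state
```

The comparison-then-lookup structure is the design point: the reference parameters live in the simulation database, so the activated positions and mode follow deterministically from the acquired data.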
Further, before obtaining the first data parameter, the method further comprises:
acquiring characteristic information of the user;
and determining the first physiological data to be acquired according to the characteristic information.
Further, determining, according to the first data parameter, the activated electronic acupuncture simulator arranged on the wearable device and its simulation mode comprises:
comparing the first data parameter with a pre-stored reference parameter and, according to the comparison result, determining the positions of one or more activated electronic acupuncture simulators arranged on the wearable device;
and determining the simulation mode of the activated electronic acupuncture simulator according to the comparison result.
Further, the simulation mode includes: the type of the analog signal and the corresponding electronic acupuncture simulation parameters.
Further, the preset processing of the second data parameter comprises:
analyzing the second data parameter to update the use-effect data of the user.
Further, the acquiring a second data parameter of the user during the electronic acupuncture simulation and performing the preset processing based on the second data parameter includes:
acquiring the second data parameter of the user;
and adjusting the simulation mode based on the acquired second data parameter.
The invention has the following advantages and positive effects. The uniform local binary pattern (UPLBP) adopted by the emotion analysis module extracts features from the electroencephalogram signal and avoids the excessive number of binary patterns of other schemes: UPLBP reduces the dimensionality so that the number of binary pattern types drops sharply, from 2^p to p(p−1)+2, without changing the stored texture-feature data; under this feature-vector dimension reduction the interference of abnormal stimulation is weakened markedly. Meanwhile, the acupuncture simulation module allows the electronic acupuncture simulation effect to be evaluated objectively, and the electronic acupuncture simulation data can be adjusted in a targeted manner according to the acquired second data parameter, improving the controllability of the simulation effect.
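The claimed reduction from all 2^p raw binary patterns to p(p−1)+2 uniform patterns can be checked by direct enumeration: a uniform LBP pattern is one with at most two 0/1 transitions when the p bits are read circularly, and for p = 8 exactly 8×7+2 = 58 of the 256 patterns qualify.

```python
def circular_transitions(pattern, p=8):
    """Number of 0/1 transitions when the p-bit pattern is read circularly."""
    bits = [(pattern >> i) & 1 for i in range(p)]
    return sum(bits[i] != bits[(i + 1) % p] for i in range(p))

def count_uniform(p=8):
    """Uniform LBP patterns are those with at most 2 circular transitions."""
    return sum(1 for v in range(2 ** p) if circular_transitions(v, p) <= 2)
```

In practice the remaining non-uniform patterns are merged into one extra histogram bin, which is why the texture information survives the dimension reduction.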
The method by which the pulse circuit generates the pulse comprises: establishing a degradation-mechanism model of the pulse and determining an optimization framework;
constructing a data term to model abnormal pulse stimulation; selecting a prior term and combining it with the data term to construct a pulse non-blind deconvolution model; and solving the non-blind deconvolution model numerically with an iteratively reweighted least squares algorithm and the conjugate gradient method to obtain the stimulation trajectory. This enables accurate control of the stimulation trajectory.
Drawings
Fig. 1 is a block diagram of a multifunctional electronic acupuncture system based on APP according to an embodiment of the present invention.
Fig. 2 is a flowchart of an emotion analysis module analysis method provided in an embodiment of the present invention.
Fig. 3 is a flowchart of a simulation method of an acupuncture simulation module according to an embodiment of the present invention.
In fig. 1: 1. power supply module; 2. body information acquisition module; 3. main control module; 4. pulse module; 5. acupuncture module; 6. emotion analysis module; 7. acupuncture simulation module; 8. display module.
Detailed Description
To further explain the contents, features and effects of the present invention, the following embodiments are described in detail with reference to the accompanying drawings.
The structure of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the APP-based multifunctional electronic acupuncture system provided by the embodiment of the present invention comprises: a power supply module 1, a body information acquisition module 2, a main control module 3, a pulse module 4, an acupuncture module 5, an emotion analysis module 6, an acupuncture simulation module 7 and a display module 8.
The power supply module 1 is connected with the main control module 3 and used for supplying power to the APP-based multifunctional electronic acupuncture system;
the body information acquisition module 2 is connected with the main control module 3 and is used for acquiring body contour information and weight information of the user;
the main control module 3 is connected with the power supply module 1, the body information acquisition module 2, the pulse module 4, the acupuncture module 5, the emotion analysis module 6, the acupuncture simulation module 7 and the display module 8 and is used for controlling the normal work of each module;
the pulse module 4 is connected with the main control module 3 and used for generating pulses through a pulse circuit to stimulate acupuncture points;
the method for generating the pulse by the pulse circuit comprises the following steps: establishing a degradation mechanism model of the pulse and determining an optimization frame;
constructing a data item for modeling pulsed anomalous stimulation; selecting a prior item, and combining the prior item with a data item to construct a pulse non-blind deconvolution model; carrying out numerical optimization solution on the non-blind deconvolution model by using an iterative reweighted least square algorithm and a conjugate gradient method to obtain a stimulation track; the method for constructing the data item to model the pulse abnormal stimulation specifically comprises the following steps:
(1) the following nonlinear function is selected to eliminate the influence of salt-and-pepper abnormal stimulation:
[Equation image BDA0003602814260000071 in the original: definition of the nonlinear function T(x).]
where a and b are function parameters: a controls the degree of nonlinearity of the function at the truncation point, b controls the truncation point, and x is the gray value of the trajectory point; the curve of the function T(x) is shown for a = 5e2 and b = 0.1;
(2) the L1 norm is combined with the nonlinear function above to construct the following data term:
[Equation image BDA0003602814260000072 in the original: the data term built from T and the residual (Kx − y).]
where i is the stimulation trajectory index.
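The exact form of T(x) appears only as an image in the original. As a hedged illustration, the sketch below uses a hypothetical smooth truncation consistent with the stated roles of a (nonlinearity at the truncation point) and b (truncation point), together with a data term of the form Σ_i T(|(Kx − y)_i|), which is inferred from the weight formulas that follow rather than taken from the patent.

```python
import numpy as np

def T(x, a=5e2, b=0.1):
    """Hypothetical smooth truncation: approximately x for residuals below
    the truncation point b, decaying toward 0 above it, so salt-and-pepper
    outliers contribute little. a controls how sharply the curve bends at b."""
    return x / (1.0 + np.exp(a * (x - b)))

def data_term(K, x, y, a=5e2, b=0.1):
    """Assumed data term: sum_i T(|(Kx - y)_i|) over trajectory indices i."""
    r = np.abs(K @ x - y)
    return float(np.sum(T(r, a, b)))
```

With the default a = 5e2 and b = 0.1, residuals below 0.1 are penalized roughly like the L1 norm, while larger (outlier) residuals are effectively ignored.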
The non-blind deconvolution model is solved as follows:
(1) the non-blind deconvolution model is recast into a weighted least-squares form:
[Equation image BDA0003602814260000081 in the original: the weighted least-squares form of the model.]
where W_h and W_v are diagonal weight matrices whose diagonal elements are computed as follows:
[Equation images BDA0003602814260000082 and BDA0003602814260000083 in the original: formulas for the diagonal elements of the weight matrices.]
where i is the matrix element index and T'(|(Kx − y)_i|) is the derivative of the nonlinear function T(|(Kx − y)_i|);
(2) the energy function of the least-squares form is differentiated and the derivative set equal to 0, yielding the following linear equation:
[Equation image BDA0003602814260000084 in the original: the linear system obtained from the zero-derivative condition.]
Let
[Equation image BDA0003602814260000085 in the original: the definition of the coefficient matrix A.]
and b = K^T W^2 y; A is a symmetric positive definite matrix, and the system is solved with the conjugate gradient method;
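Since the concrete matrices are only shown as images, the following sketch illustrates the overall solving scheme: an outer IRLS loop that rebuilds the diagonal weights from the current residual, and an inner conjugate gradient solve of the symmetric positive definite system A x = b with b = K^T W² y. The residual weight (a Huber-style min(1, 0.1/|r|)) and the simple gradient prior are stand-ins for the patent's unshown T'-based formulas.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=200):
    """Solve A x = b for a symmetric positive definite A (the machine
    translation's 'upper yoke gradient' is the conjugate gradient method)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def irls_deconv(K, y, lam=1e-2, n_outer=10, eps=1e-6):
    """Iteratively reweighted least squares for the robust data term.
    The true residual weights come from T' (shown only as an image in the
    patent); a Huber-style weight min(1, 0.1/|r_i|) is assumed here."""
    n = K.shape[1]
    D = np.diff(np.eye(n), axis=0)          # simple 1-D gradient prior operator
    x = np.zeros(n)
    for _ in range(n_outer):
        r = K @ x - y
        W2 = np.diag(np.minimum(1.0, 0.1 / (np.abs(r) + eps)))  # plays the role of W^2
        A = K.T @ W2 @ K + lam * D.T @ D    # symmetric positive definite
        b = K.T @ W2 @ y
        x = conjugate_gradient(A, b, x0=x)
    return x
```

Here `K` plays the role of the stimulation (blur) operator and `D` a gradient operator for the prior term; both the weight rule and the prior are assumptions for illustration.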
the acupuncture module 5 is connected with the main control module 3 and is used for carrying out acupuncture operation on acupuncture points by connecting an electronic contact with a pulse circuit;
the emotion analysis module 6 is connected with the main control module 3 and is used for analyzing emotion changes of the user in the acupuncture process;
the acupuncture simulation module 7 is connected with the main control module 3 and is used for simulating acupuncture;
and the display module 8 is connected with the main control module 3 and is used for displaying body information, emotion analysis results and acupuncture simulation information.
As shown in fig. 2, the analysis method of the emotion analysis module 6 provided by the present invention is as follows:
S101, acquiring the physical condition information of the treated user; collecting expression images of the user during electronic acupuncture, processing the collected expression images, and extracting Gabor wavelet features of the expression through the Gabor wavelet transform;
S102, acquiring an electroencephalogram (EEG) curve image of the user during electronic acupuncture, extracting texture features of the EEG image using the uniform local binary pattern (UPLBP), and performing dimensionality reduction on the image;
S103, performing multi-modal feature fusion on the features extracted in S101 and S102 using a CNN-LSTM network, and performing emotion classification;
S104, giving an operation suggestion for the electronic acupuncture system according to the emotion classification result, and adjusting the intensity or frequency of the electronic acupuncture.
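As one illustration of the UPLBP texture feature in step S102, a 59-bin uniform-pattern LBP histogram can be computed as below. This is a generic UPLBP sketch, not the patent's exact implementation; the "EEG curve image" here is simply any grayscale array.

```python
import numpy as np

def uniform_lbp_hist(img):
    """59-bin uniform-pattern LBP histogram ('UPLBP'): the 58 codes with at
    most two 0/1 transitions each get their own bin, and all non-uniform
    codes share the last bin. Returns a normalised 59-dim descriptor."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    # one binary plane per neighbour: neighbour >= centre
    bits = [(img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx] >= center).astype(int)
            for dy, dx in offsets]
    codes = sum(b << k for k, b in enumerate(bits))

    def transitions(c):
        s = [(c >> k) & 1 for k in range(8)]
        return sum(s[k] != s[(k + 1) % 8] for k in range(8))

    uniform = sorted(c for c in range(256) if transitions(c) <= 2)  # 58 codes
    bin_of = {c: i for i, c in enumerate(uniform)}
    hist = np.zeros(59)
    for c in codes.ravel():
        hist[bin_of.get(int(c), 58)] += 1
    return hist / hist.sum()
```

The histogram itself is the dimension-reduced texture descriptor: whatever the image size, the feature has a fixed length of 59.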
The specific process of multi-modal feature fusion and emotion classification provided by the invention comprises the following steps:
fusing the Gabor wavelet features of the expression image with the features of the EEG curve image obtained in S102 into one feature vector;
the merging into one feature vector specifically comprises: weighting the features extracted from images of the different modalities using variable-weight sparse linear fusion to synthesize one feature vector, with the feature-fusion weighting formula expressed as:
O(x) = γK(x) + (1 − γ)F(x)    (1)
wherein:
K(x) represents the features of the EEG curve image;
F(x) represents the facial expression features;
γ is an empirical weight coefficient for the influence of different characters on the EEG curve;
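Formula (1) itself is straightforward to implement. The sketch below assumes the two feature vectors have already been aligned to the same length; the starting value γ = 0.6 is an arbitrary assumption, to be corrected later from the recognition accuracy.

```python
import numpy as np

def fuse_features(eeg_feat, expr_feat, gamma=0.6):
    """Variable-weight linear fusion O(x) = gamma*K(x) + (1-gamma)*F(x),
    where K(x) are the EEG-curve features and F(x) the expression features.
    gamma = 0.6 is an assumed starting value, not from the patent."""
    eeg_feat = np.asarray(eeg_feat, dtype=float)
    expr_feat = np.asarray(expr_feat, dtype=float)
    assert eeg_feat.shape == expr_feat.shape, "features must be aligned first"
    return gamma * eeg_feat + (1 - gamma) * expr_feat
```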
converting the fused feature vector into tensor form, iterating with different values of batch_size (the number of samples selected per training pass), randomly drawing training samples in each iteration as input data for the CNN-LSTM network, and feeding them into the CNN-LSTM network;
adjusting the initial structure and network parameters of the CNN and iterating; the adjustment of the initial structure and network parameters specifically comprises: adjusting the number of convolutional layers in the initial CNN structure and the learning rate among the network parameters, and selecting the initial CNN structure and network parameters that yield the best network accuracy and elapsed time;
extracting the characteristics of the picture in a CNN network through multilayer convolution pooling to obtain a five-dimensional tensor characteristic diagram;
on the premise of not changing the numerical value in the characteristic diagram, converting the five-dimensional tensor characteristic diagram into a three-dimensional tensor characteristic diagram which meets the input requirement of the LSTM, and inputting the three-dimensional tensor characteristic diagram into an LSTM layer for processing;
inputting the output of the LSTM layer into a full-connection layer and a function layer for SVM classification;
through SVM classification, training is performed according to the emotion class corresponding to the input feature vector of the SVM; the feature with the minimum loss-function value for the emotion class of that vector is selected to represent the emotion category, yielding the emotion classification result as a one-dimensional array, and the trained neural network is saved, wherein the one-dimensional array contains the predicted emotion-class information corresponding to each trained sample;
comparing the predicted emotion-class information with the actual emotion-class information to obtain the prediction accuracy of the trained neural network, and continuously correcting, according to this recognition accuracy, the weights of the EEG-image features and the facial-expression features in the feature-fusion weighting formula.
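The patent only states that the fusion weight γ is continuously corrected according to the recognition accuracy, without giving the rule. One hypothetical realization is a simple hill-climbing update that keeps stepping γ in the same direction while accuracy improves and backtracks with a halved step when it drops:

```python
import numpy as np

def prediction_accuracy(predicted, actual):
    """Fraction of samples whose predicted emotion class matches the label."""
    predicted = np.asarray(predicted)
    actual = np.asarray(actual)
    return float(np.mean(predicted == actual))

def correct_gamma(gamma, last_step, acc, prev_acc, lo=0.0, hi=1.0):
    """One correction step for the fusion weight gamma (hypothetical rule):
    keep the same step while accuracy improves, otherwise reverse and halve
    it; gamma stays clipped to [lo, hi]. Returns (new_gamma, step_used)."""
    step = last_step if acc >= prev_acc else -0.5 * last_step
    new_gamma = min(hi, max(lo, gamma + step))
    return new_gamma, step
```

Any monotone rule driven by the accuracy comparison would fit the text equally well; this one is only a sketch.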
The loss function adopted by the function layer provided by the invention is a softmax function.
As shown in fig. 3, the simulation method of the acupuncture simulation module 7 provided by the present invention is as follows:
S201, establishing a simulation database through a database program; acquiring a first data parameter, wherein the data parameter comprises environmental data and/or physiological data of the user; storing the acquired data in the simulation database; determining, according to the first data parameter, the activated electronic acupuncture simulators arranged on a wearable device and the simulation mode of the acupuncture simulator;
S202, controlling the electronic acupuncture simulator to output electronic acupuncture simulation signals to the attached human body position according to the simulation mode; acquiring a second data parameter of the user during the electronic acupuncture simulation, performing preset processing based on the second data parameter, and storing the acquired data in the simulation database.
Before the first data parameter is obtained, the method further comprises the following steps:
acquiring characteristic information of the user;
and determining the first physiological data to be acquired according to the characteristic information.
The invention provides a method for determining, according to the first data parameter, the activated electronic acupuncture simulator arranged on the wearable device and the simulation mode of the electronic acupuncture simulator, comprising the following steps:
comparing the first data parameter with a pre-stored reference parameter, and determining the position of one or more activated electronic acupuncture simulators arranged on the wearable device according to the comparison result;
according to the comparison result, the simulation mode of the activated electronic acupuncture simulator is determined.
The simulation mode provided by the invention comprises: the type of the simulation signal and the corresponding electronic acupuncture simulation parameters.
The preset processing of the second data parameter provided by the invention comprises:
analyzing the second data parameter to update the usage-effect data of the user.
The method provided by the invention for acquiring a second data parameter of the user during electronic acupuncture simulation and performing preset processing based on the second data parameter comprises the following steps:
acquiring the second data parameter of the user;
and adjusting the simulation mode based on the acquired second data parameter.
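The steps above compare the first data parameter with pre-stored reference parameters to pick the activated simulator positions and the simulation mode. A minimal sketch follows; the acupoint names, thresholds, and mode parameters are entirely illustrative and do not come from the patent.

```python
# Hypothetical pre-stored reference table: upper threshold of the first
# data parameter -> activated simulator positions and simulation mode.
REFERENCE_MODES = [
    (0.3, ["neiguan"],
     {"signal_type": "continuous", "intensity": 2, "frequency_hz": 2}),
    (0.6, ["neiguan", "hegu"],
     {"signal_type": "intermittent", "intensity": 4, "frequency_hz": 10}),
    (1.0, ["neiguan", "hegu", "zusanli"],
     {"signal_type": "burst", "intensity": 6, "frequency_hz": 50}),
]

def select_simulation(first_param: float):
    """Compare the first data parameter with the pre-stored reference
    parameters and return (activated positions, simulation mode)."""
    for threshold, positions, mode in REFERENCE_MODES:
        if first_param <= threshold:
            return positions, mode
    # out-of-range values fall back to the strongest stored mode
    return REFERENCE_MODES[-1][1], REFERENCE_MODES[-1][2]
```

The second data parameter collected during simulation would then feed back into the same table lookup to adjust the mode.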
When the multifunctional electronic acupuncture system works, firstly, the multifunctional electronic acupuncture system based on APP is powered through the power supply module 1; the body contour information and the weight information of the user are collected through a body information collecting module 2; secondly, the main control module 3 utilizes the pulse circuit to generate pulses to stimulate acupuncture points through the pulse module 4; the acupuncture module 5 is connected with the pulse circuit by the electronic contact to perform acupuncture operation on acupuncture points; the emotion change of the user in the acupuncture process is analyzed through an emotion analysis module 6; then, the acupuncture is simulated through an acupuncture simulation module 7; and finally, displaying the body information, the emotion analysis result and the acupuncture simulation information through the display module 8.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent changes and modifications made to the above embodiment according to the technical spirit of the present invention are within the scope of the technical solution of the present invention.

Claims (10)

1. An APP-based multifunctional electronic acupuncture system, characterized in that the APP-based multifunctional electronic acupuncture system comprises:
the power supply module is connected with the main control module and used for supplying power to the APP-based multifunctional electronic acupuncture system;
the body information acquisition module is connected with the main control module and is used for acquiring body contour information and weight information of the user;
the main control module is connected with the power supply module, the body information acquisition module, the pulse module, the acupuncture module, the emotion analysis module, the acupuncture simulation module and the display module and is used for controlling the normal work of each module;
the pulse module is connected with the main control module and used for generating pulses through a pulse circuit to stimulate acupuncture points; the method by which the pulse circuit generates the pulse comprises the following steps: establishing a degradation-mechanism model of the pulse and determining an optimization framework;
constructing a data term that models abnormal pulse stimulation; selecting a prior term and combining it with the data term to construct a pulse non-blind deconvolution model; numerically solving the non-blind deconvolution model with the iteratively reweighted least-squares (IRLS) algorithm and the conjugate gradient method to obtain the stimulation trajectory. Constructing the data term to model abnormal pulse stimulation specifically comprises the following steps:
(1) the following nonlinear function is selected to eliminate the influence of salt-and-pepper abnormal stimulation:
[Equation image FDA0003602814250000011 in the original: definition of the nonlinear function T(x).]
where a and b are function parameters: a controls the degree of nonlinearity of the function at the truncation point, b controls the truncation point, and x is the gray value of the trajectory point; the curve of the function T(x) is shown for a = 5e2 and b = 0.1;
(2) the L1 norm is combined with the nonlinear function above to construct the following data term:
[Equation image FDA0003602814250000012 in the original: the data term built from T and the residual (Kx − y).]
where i is the stimulation trajectory index.
The non-blind deconvolution model is solved as follows:
(1) the non-blind deconvolution model is recast into a weighted least-squares form:
[Equation image FDA0003602814250000013 in the original: the weighted least-squares form of the model.]
where W_h and W_v are diagonal weight matrices whose diagonal elements are computed as follows:
[Equation images FDA0003602814250000021 and FDA0003602814250000022 in the original: formulas for the diagonal elements of the weight matrices.]
where i is the matrix element index and T'(|(Kx − y)_i|) is the derivative of the nonlinear function T(|(Kx − y)_i|);
(2) the energy function of the least-squares form is differentiated and the derivative set equal to 0, yielding the following linear equation:
[Equation image FDA0003602814250000023 in the original: the linear system obtained from the zero-derivative condition.]
Let
[Equation image FDA0003602814250000024 in the original: the definition of the coefficient matrix A.]
and b = K^T W^2 y; A is a symmetric positive definite matrix, and the system is solved with the conjugate gradient method;
the acupuncture module is connected with the main control module and is used for connecting the pulse circuit through the electronic contact to perform acupuncture operation on acupuncture points;
the emotion analysis module is connected with the main control module and is used for analyzing emotion changes of the user in the acupuncture process;
the acupuncture simulation module is connected with the main control module and is used for simulating acupuncture;
and the display module is connected with the main control module and is used for displaying body information, emotion analysis results and acupuncture simulation information.
2. The APP-based multifunctional electronic acupuncture system of claim 1, wherein the analysis method of the emotion analysis module is as follows:
(1) acquiring the physical condition information of the treated user; collecting expression images of the user during electronic acupuncture, processing the collected expression images, and extracting Gabor wavelet features of the expression through the Gabor wavelet transform;
(2) acquiring an electroencephalogram (EEG) curve image of the user during electronic acupuncture, extracting texture features of the EEG image using the uniform local binary pattern (UPLBP), and performing dimensionality reduction on the image;
(3) performing multi-modal feature fusion on the features extracted in steps (1) and (2) using a CNN-LSTM network, and performing emotion classification;
(4) giving an operation suggestion for the electronic acupuncture system according to the emotion classification result, and adjusting the intensity or frequency of the electronic acupuncture.
3. The APP-based multifunctional electronic acupuncture system of claim 2, wherein the specific processes of multi-modal feature fusion and emotion classification include:
fusing the Gabor wavelet features of the expression image with the features of the EEG curve image obtained in step (2) into one feature vector;
the merging into one feature vector specifically comprises: weighting the features extracted from images of the different modalities using variable-weight sparse linear fusion to synthesize one feature vector, with the feature-fusion weighting formula expressed as:
O(x)=γK(x)+(1-γ)F(x) (1)
wherein:
K(x) represents the features of the EEG curve image;
F(x) represents the facial expression features;
γ is an empirical weight coefficient for the influence of different characters on the EEG curve;
converting the fused feature vector into tensor form, iterating with different values of batch_size (the number of samples selected per training pass), randomly drawing training samples in each iteration as input data for the CNN-LSTM network, and feeding them into the CNN-LSTM network;
adjusting the initial structure and network parameters of the CNN and iterating; the adjustment of the initial structure and network parameters specifically comprises: adjusting the number of convolutional layers in the initial CNN structure and the learning rate among the network parameters, and selecting the initial CNN structure and network parameters that yield the best network accuracy and elapsed time;
extracting the characteristics of the picture in a CNN network through multilayer convolution pooling to obtain a five-dimensional tensor characteristic diagram;
on the premise of not changing the numerical value in the characteristic diagram, converting the five-dimensional tensor characteristic diagram into a three-dimensional tensor characteristic diagram which meets the input requirement of the LSTM, and inputting the three-dimensional tensor characteristic diagram into an LSTM layer for processing;
inputting the output of the LSTM layer into a full-connection layer and a function layer for SVM classification;
through SVM classification, training is performed according to the emotion class corresponding to the input feature vector of the SVM; the feature with the minimum loss-function value for the emotion class of that vector is selected to represent the emotion category, yielding the emotion classification result as a one-dimensional array, and the trained neural network is saved, wherein the one-dimensional array contains the predicted emotion-class information corresponding to each trained sample;
comparing the predicted emotion-class information with the actual emotion-class information to obtain the prediction accuracy of the trained neural network, and continuously correcting, according to this recognition accuracy, the weights of the EEG-image features and the facial-expression features in the feature-fusion weighting formula.
4. The APP-based multifunctional electronic acupuncture system of claim 2, wherein the loss function adopted by the function layer is a softmax function.
5. The APP-based multifunctional electronic acupuncture system of claim 1, wherein the acupuncture simulation module simulation method is as follows:
1) establishing a simulation database through a database program; acquiring a first data parameter, wherein the data parameter comprises environmental data and/or physiological data of the user; storing the acquired data in the simulation database; determining, according to the first data parameter, the activated electronic acupuncture simulators arranged on a wearable device and the simulation mode of the acupuncture simulator;
2) controlling the electronic acupuncture simulator to output electronic acupuncture simulation signals to the attached human body position according to the simulation mode; acquiring a second data parameter of the user during the electronic acupuncture simulation, performing preset processing based on the second data parameter, and storing the acquired data in the simulation database.
6. The APP-based multifunctional electronic acupuncture system of claim 5, wherein before the obtaining the first data parameter, further comprising:
acquiring characteristic information of the user;
and determining the first physiological data to be acquired according to the characteristic information.
7. The APP-based multifunctional electronic acupuncture system of claim 5, wherein the determining of the electronic acupuncture simulator disposed on the wearable device and the simulation mode of the electronic acupuncture simulator according to the first data parameter comprises:
comparing the first data parameter with a pre-stored reference parameter, and determining the position of one or more activated electronic acupuncture simulators arranged on the wearable device according to the comparison result;
determining the simulation mode of the activated electro-acupuncture simulator according to the comparison result.
8. The APP-based multifunctional electronic acupuncture system of claim 7, wherein the simulation mode comprises: the type of the analog signal and the corresponding electronic acupuncture simulation parameters.
9. The APP-based multifunctional electronic acupuncture system of claim 5, wherein the pre-processing of the second data parameters comprises:
and analyzing the second data parameter to update the use effect data of the user.
10. The APP-based multifunctional electronic acupuncture system of claim 5, wherein the obtaining of the second data parameter of the user during the electronic acupuncture simulation and the performing of the preset processing based on the second data parameter comprise:
acquiring the second data parameter of the user;
and adjusting the simulation mode based on the acquired second data parameter.
CN202210407914.4A 2022-04-19 2022-04-19 APP-based multifunctional electronic acupuncture system Pending CN114821115A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210407914.4A CN114821115A (en) 2022-04-19 2022-04-19 APP-based multifunctional electronic acupuncture system

Publications (1)

Publication Number Publication Date
CN114821115A true CN114821115A (en) 2022-07-29

Family

ID=82506417

Country Status (1)

Country Link
CN (1) CN114821115A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116077071A (en) * 2023-02-10 2023-05-09 湖北工业大学 Intelligent rehabilitation massage method, robot and storage medium
CN116077071B (en) * 2023-02-10 2023-11-17 湖北工业大学 Intelligent rehabilitation massage method, robot and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination