CN109008952A - Monitoring method and related products based on deep learning - Google Patents
Monitoring method and related products based on deep learning
- Publication number
- CN109008952A (application number CN201810432383.8A)
- Authority
- CN
- China
- Prior art keywords
- data
- physiological
- target data
- physiological status
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4803—Speech analysis specially adapted for diagnostic purposes
Abstract
Embodiments of the present application disclose a monitoring method based on deep learning, applied to an intelligent monitoring device. The method comprises: acquiring a face image, voice information and physiological data; processing the face image to obtain first target data, processing the voice information to obtain second target data, and processing the physiological data to obtain third target data; inputting the first target data, the second target data and the third target data into corresponding preset network models to perform a forward operation and obtain an output result, and determining a physiological status according to the output result; and determining a corresponding monitoring strategy according to the physiological status. The application facilitates targeted care.
Description
Technical field
This application relates to the fields of robotics and the Internet of Things, and in particular to a monitoring method based on deep learning and related products.
Background technique
At present, population aging is worsening in many countries, and with the quickening pace of modern life, caring for the elderly has become an extremely difficult problem to solve. Caring for the elderly requires substantial human resources as well as relatively high standards of medical care and professional nursing. The emergence of the robot family doctor therefore allows the elderly to receive professional and attentive care, both in terms of health and of emotion. However, some current robot family doctors analyze the physiological status of the elderly in a relatively simplistic way; it is difficult for them to treat the elderly comprehensively, their diagnostic efficiency is low, and they delay the monitoring and treatment of the elderly.
Summary of the invention
Embodiments of the present application provide a monitoring method based on deep learning and related products, which determine the physiological status of an elderly person by acquiring multiple kinds of information about them, thereby achieving targeted care.

In a first aspect, an embodiment of the present application provides a monitoring method based on deep learning, comprising:

acquiring a face image, voice information and physiological data;

processing the face image to obtain first target data, processing the voice information to obtain second target data, and processing the physiological data to obtain third target data;

inputting the first target data, the second target data and the third target data into corresponding preset network models to perform a forward operation and obtain an output result, and determining a physiological status according to the output result; and

determining a corresponding monitoring strategy according to the physiological status.
In a second aspect, an embodiment of the present application provides an intelligent monitoring device based on deep learning. The intelligent monitoring device comprises a cloud processor, a local terminal and a wearable device, and the local terminal comprises a camera, an audio collector, an application processor (AP) and a transceiver;

the camera is configured to collect a face image;

the audio collector is configured to collect voice information;

the wearable device is configured to collect physiological data;

the transceiver is configured to receive the face image, the voice information and the physiological data;

the AP is configured to process the face image to obtain first target data, process the voice information to obtain second target data, and process the physiological data to obtain third target data;

the transceiver is configured to transmit the first target data, the second target data and the third target data to the cloud processor;

the cloud processor is configured to input the first target data, the second target data and the third target data into the corresponding preset network models to perform a forward operation and obtain an output result, and to determine a physiological status according to the output result; and

the cloud processor is configured to determine a corresponding monitoring strategy according to the physiological status.
In a third aspect, an embodiment of the present application provides an intelligent monitoring device comprising one or more processors, one or more memories, one or more transceivers, and one or more programs, wherein the one or more programs are stored in the memories and configured to be executed by the one or more processors, the programs including instructions for performing the steps of the method of the first aspect.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program for electronic data interchange, wherein the computer program causes a computer to execute the method described in the first aspect.

In a fifth aspect, an embodiment of the present application provides a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to execute the method described in the first aspect.
Implementing the embodiments of the present application has the following beneficial effects:

It can be seen that in this application a face image, voice information and physiological data are first acquired and then pre-processed to obtain target data; the target data are input into network models to determine the user's physiological status, and a monitoring strategy for the user is determined according to that status. Inputting multiple parameters into an artificial intelligence system to determine the user's physiological status improves the accuracy of the determination, allows a corresponding monitoring strategy to be formulated, achieves targeted care, and makes the judgment of physiological status by artificial intelligence highly efficient.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an intelligent monitoring scenario;
Fig. 2 is a schematic flowchart of an intelligent monitoring method based on deep learning provided by an embodiment of the present application;
Fig. 2A is a schematic flowchart of generating first target data provided by an embodiment of the present application;
Fig. 2B is a schematic diagram of composing an input data matrix from the first target data provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of another intelligent monitoring method based on deep learning provided by an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an intelligent monitoring device disclosed in an embodiment of the present application;
Fig. 5 is a block diagram of the functional units of an intelligent monitoring device disclosed in an embodiment of the present application.
Specific embodiment
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of this application.
The terms "first", "second", "third", "fourth" and the like in the description, claims and drawings of this application are used to distinguish different objects rather than to describe a particular order. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or other steps or units inherent to the process, method, product or device.
Reference herein to an "embodiment" means that a particular feature, result or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearance of this phrase in various places in the description does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
The local terminal in this application may include a smartphone (such as an Android phone, an iOS phone or a Windows Phone device), a tablet computer, a palmtop computer, a laptop, a mobile internet device (MID, Mobile Internet Devices) or a wearable device. The above electronic devices are only examples and are not exhaustive; the local terminal includes but is not limited to them. For convenience of description, the electronic device is referred to as user equipment (UE) in the following examples. In practical applications, the user equipment is not limited to the above forms and may also include, for example, an intelligent vehicle-mounted terminal or a computer device.
The local terminal mentioned in the embodiments of the present application may be a robot that receives instructions, where the robot may have a fully humanoid form or the form of a pillow; the size and type of the robot are not specifically limited. Wherever a local terminal is mentioned below, it refers to this robot, and this is not repeated.
As population aging becomes more serious, monitoring the health of the elderly has become an important topic. A popular approach to this problem is online medicine, but online medicine mainly provides consulting services: it can give accurate advice for some simple illnesses, but for complex diseases an online doctor cannot make a timely and accurate diagnosis and may even misdiagnose. Moreover, online medicine requires medical practitioners to work around the clock and does not free up medical resources, so it does not improve the efficiency of diagnosis and treatment. The intelligent monitoring method and device based on deep learning provided herein better solve the problems of monitoring the elderly and diagnosing their diseases, providing a robot with "professional skill" to monitor and diagnose the elderly, detecting symptoms that manual monitoring and manual diagnosis cannot find, and protecting the health of the elderly more accurately and professionally.
Referring to Fig. 1, Fig. 1 is a schematic diagram of an application scenario of intelligent monitoring of this application. As shown in Fig. 1, the intelligent monitoring device in the application scenario includes a cloud processor 10, a local terminal 20 and a wearable device 30. The artificial intelligence system in the cloud processor is a strong artificial intelligence, i.e. an artificial general intelligence (AGI). Compared with ordinary artificial intelligence, an AGI can learn autonomously and has the ability to independently seek knowledge, ask questions, retrieve, summarize and deduce; it can convert picture files into a high-level data format and recognize facial expressions, and it can process voice information directly, recognizing mood without first converting it into text.

Optionally, the AGI in this application includes at least a convolutional neural network model, a recurrent neural network model and an adversarial neural network model, where the convolutional neural network model is mainly used for face recognition and facial expression classification, the recurrent neural network model is mainly used for voice and semantic recognition, and the adversarial neural network model is mainly used to periodically obtain feedback results, update weights, learn autonomously and continuously improve the model structure.
The arrows in Fig. 1 indicate interactive operations between the user and the local terminal 20 and wearable device 30, i.e. the user can control the local terminal 20 and the wearable device 30 to collect the user's data; the arrows here are not communication links.

The wearable device 30 is used to collect the user's physiological data, which may include blood oxygen concentration, blood pressure, heart rate, body temperature, etc., and to send the physiological data to the local terminal 20.

The local terminal 20 is used to take pictures to obtain the user's face image and to collect the user's voice information; alternatively, it shoots video and acquires the user's face image and voice information from the video.

The local terminal 20 is also used to pre-process the acquired image, voice and physiological parameters to obtain target data, and to send the target data to the cloud processor 10.

The cloud processor 10 is used to receive the target data, input it into the AGI to obtain an output result, determine the user's current physiological status according to the output result, and formulate a corresponding monitoring strategy according to the physiological status.

Optionally, the local terminal further includes a touch display screen for displaying the monitoring strategy, so that the user knows the current physiological status and can act accordingly. The touch display screen may specifically be a thin-film-transistor liquid crystal display, a light-emitting diode (LED) display screen, an organic light-emitting diode (OLED) display screen, or the like.
Referring to Fig. 2, Fig. 2 shows an intelligent monitoring method based on deep learning provided by an embodiment of the present application. The method is applied to an intelligent monitoring device and comprises:

Step S201: acquiring a face image, voice information and physiological data.

The face image, voice information and physiological data of the elderly person can be acquired synchronously or asynchronously: the face image is obtained through a camera and the voice information is collected by a microphone, or a video is shot through the camera, each frame of the video is screened to obtain the face image, and the voice information in the video is extracted; the physiological data is obtained through the wearable device. The voice information includes the feedback the user gives after the robot inquires about the user's physical condition based on the user's current facial expression, or voice information collected in real time while the user talks to himself.
Step S202: processing the face image to obtain first target data, processing the voice information to obtain second target data, and processing the physiological data to obtain third target data.

Optionally, face recognition, voice mood recognition, physiological feature recognition and the like are performed in the cloud processor. To reduce the data-transmission pressure on the local terminal, the face image, voice information and physiological data can be pre-processed to obtain the first target data, the second target data and the third target data.
Optionally, processing the face image to obtain the first target data comprises the steps shown in Fig. 2A:

Step S202a: performing gray-scale processing on the face image to obtain a gray-scale image.

Since a color image uses the RGB color model and color has no influence on the features of the image, the image is converted to gray scale to reduce the subsequent computation. The brightness range of the pixels of a color image is 0-255. Commonly used gray-scale processing methods include: 1. the component averaging method, i.e. L = (R + G + B) / 3, where L is the brightness value of a pixel in the gray-scale image, and R, G and B are the brightness values of the pixel's three color components in the color image; 2. the preset-weight method, i.e. L = 0.3R + 0.59G + 0.11B, where 0.3, 0.59 and 0.11 are the preset weight coefficients of R, G and B respectively.
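As an illustration, the two gray-scale formulas above can be sketched as follows (the function and its names are illustrative, not part of the patent):

```python
import numpy as np

def to_gray(rgb, method="weighted"):
    """Convert an H x W x 3 RGB image (values 0-255) to grayscale.

    method="average":  L = (R + G + B) / 3
    method="weighted": L = 0.3*R + 0.59*G + 0.11*B (the preset weights above)
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if method == "average":
        return (r + g + b) / 3.0
    return 0.3 * r + 0.59 * g + 0.11 * b

# A single pure-red pixel: weighted luminance is 0.3 * 255 = 76.5
pixel = np.array([[[255, 0, 0]]])
print(to_gray(pixel)[0, 0])  # 76.5
```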
Step S202b: dividing the gray-scale image into N regions according to a preset rule.

Optionally, for a face image, different regions contribute differently to the recognition of facial expressions, so the preset rule may be to divide according to contribution; for example, the image can be divided into an eyebrow region, a mouth region, a cheek region and so on. Because different regions exhibit their features with different strengths, the face region is divided into N regions and a weight is assigned to each: regions with a large contribution (such as the eyebrow and mouth regions) are given larger weights, while irrelevant regions are given smaller weights or even ignored.
Step S202c: determining characteristic data of the gray-scale image according to the weight coefficient of each region, and taking the characteristic data of the gray-scale image as the first target data.

Optionally, the characteristic data of each region, i.e. the pixel values of each region, are obtained and then weighted according to the region's weight coefficient to determine the final characteristic data of each region, finally yielding the characteristic data of the gray-scale image.
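A minimal sketch of the region weighting just described, assuming hypothetical region names, weight coefficients and a flat-array representation of each region's pixels:

```python
import numpy as np

# Hypothetical weights: eyebrow and mouth regions contribute more to expression
REGION_WEIGHTS = {"brow": 0.4, "mouth": 0.4, "cheek": 0.2}

def weighted_features(regions):
    """regions: dict mapping region name -> 1-D array of gray pixel values.
    Returns the concatenated characteristic data, with each region's pixels
    scaled by its preset weight coefficient."""
    parts = [REGION_WEIGHTS[name] * np.asarray(vals, dtype=float)
             for name, vals in regions.items()]
    return np.concatenate(parts)

feats = weighted_features({"brow": [10, 20], "mouth": [30], "cheek": [40]})
print(feats)  # [ 4.  8. 12.  8.]
```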
Optionally, processing the voice information to obtain the second target data specifically comprises: performing a Fourier transform on the voice information to obtain a frequency-domain spectrum; determining the intonation variation of the voice information in the time domain according to the spectral energy, obtaining an intonation parameter for each period; determining a speech-rate parameter of the voice information according to its total duration; obtaining a volume parameter of the voice information for each period; and combining the intonation parameters, speech-rate parameter and volume parameters into the second target data.
Optionally, processing the physiological data to obtain the third target data specifically comprises: filtering and denoising the physiological data to obtain the third target data; wavelet denoising can generally be used.
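As a minimal stand-in for the wavelet denoising mentioned above, the sketch below hand-rolls a one-level Haar transform with soft thresholding; a real system would likely use a multi-level transform (e.g. via the PyWavelets library):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold denoising of a signal of even
    length: transform, shrink the detail coefficients, inverse-transform."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)                   # approximation coeffs
    d = (x[0::2] - x[1::2]) / np.sqrt(2)                   # detail coeffs
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)                       # inverse transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

noisy = np.array([1.0, 1.1, 1.0, 0.9])   # small jitter around 1.0
print(haar_denoise(noisy, thresh=0.2))   # [1.05 1.05 0.95 0.95]
```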
Optionally, with the arrival of the 5G era and the significant increase in network speed, data transmission between communication devices is no longer an obstacle. As an alternative to the processing of step S202, the method also includes: directly uploading the face image, voice information and physiological data to the cloud processor and pre-processing them there to obtain the first target data, the second target data and the third target data, which reduces the data-processing pressure on the local terminal; or embedding an artificial intelligence (AI) chip in the local terminal, first performing facial expression recognition locally, and uploading the facial-expression recognition result together with the voice information and physiological data to the cloud, so that the recognition result can be used directly in the subsequent comprehensive judgment of physiological status, reducing both the data-processing and the data-transmission pressure on the local terminal.
Step S203: inputting the first target data, the second target data and the third target data into corresponding preset network models to perform a forward operation and obtain an output result, and determining a physiological status according to the output result.

Optionally, the first target data, second target data and third target data are first composed into input data matrices, which are then input into the corresponding network models to perform the forward operation and obtain the output result.

The composition of an input data matrix is illustrated using the first target data; the other target data are composed similarly and are not described again. Composing the first target data into an input data matrix specifically includes: extracting the number a of pixels in the first target data and composing the a values into an initial input data matrix of size CI*H*W, as shown in the left half of Fig. 2B, where H is the height, W the width and CI the depth; then comparing a with CI1*H1*W1 (the input size expected by the network model). If a is greater than or equal to CI1*H1*W1, no data is appended; if a is less than CI1*H1*W1, zeros are added to the first target data so that after the addition a' = CI1*H1*W1. In the example shown in Fig. 2B, the original input data has H = 8, W = 7, CI = 3, and the expected input is H1 = 16, W1 = 7, CI1 = 3; the zeros can then be inserted into the original input data in an interlaced manner, and the gray areas in the right half of Fig. 2B show the positions of the inserted zeros. The zero-addition strategy here is only an example, and other addition strategies are not limited.
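A simplified sketch of this zero-addition step (zeros are appended rather than interlaced as in Fig. 2B, and the function name is illustrative):

```python
import numpy as np

def pad_to_model_input(data, ci, h, w):
    """Pack a flat array of a feature values into a CI x H x W input tensor,
    zero-padding when a is smaller than the model's expected CI1*H1*W1 and
    truncating when it is larger."""
    need = ci * h * w
    flat = np.asarray(data, dtype=float).ravel()
    if len(flat) > need:
        flat = flat[:need]                                         # drop extras
    elif len(flat) < need:
        flat = np.concatenate([flat, np.zeros(need - len(flat))])  # append zeros
    return flat.reshape(ci, h, w)

# 8*7*3 = 168 values padded up to the model's 16*7*3 = 336 slots
x = pad_to_model_input(np.ones(168), ci=3, h=16, w=7)
print(x.shape, int(x.sum()))  # (3, 16, 7) 168
```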
Optionally, the facial expression corresponding to the face image is determined according to the output result of the first target data, the voice mood corresponding to the voice information is determined according to the output result of the second target data, and the physiological feature corresponding to the physiological data is determined according to the output result of the third target data.

Optionally, a feature vector can be set for each facial expression (such as happy, annoyed or angry), i.e. the expression is expressed as a vector; voice moods and physiological features are likewise expressed as vectors.

Further, a first feature vector of the facial expression, a second feature vector of the voice mood and a third feature vector of the physiological feature are determined; the first feature vector, second feature vector and third feature vector are concatenated into a fourth feature vector; matching values between the fourth feature vector and multiple preset physiological statuses are determined; and the preset physiological status with the highest matching value is taken as the physiological status, where the fourth feature vector has the same dimension as the feature vectors of the multiple preset physiological statuses.
The matching value is determined according to a formula (given in the original as an image, not reproduced here) in which β is the fourth feature vector and βi is the feature vector of the i-th preset physiological status among the feature vectors of the multiple preset physiological statuses.
Optionally, if the dimension of the fourth feature vector differs from that of the feature vectors of the multiple preset physiological statuses, zeros are added to make the dimensions identical before the matching value is calculated.
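The patent's matching-value formula did not survive extraction, so purely for illustration the sketch below assumes cosine similarity between β and each βi, and mirrors the zero-padding of mismatched dimensions described above:

```python
import numpy as np

def best_match(query, presets):
    """Score a concatenated feature vector (beta) against preset
    physiological-state vectors (beta_i) and return the best match.
    Cosine similarity is an assumption, not the patent's formula.
    Shorter vectors are zero-padded to a common dimension."""
    dim = max(len(query), *(len(p) for p in presets.values()))
    q = np.pad(np.asarray(query, float), (0, dim - len(query)))
    best_name, best_score = None, -1.0
    for name, vec in presets.items():
        v = np.pad(np.asarray(vec, float), (0, dim - len(vec)))
        score = float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

presets = {"mild": [1.0, 0.0, 0.0], "severe": [0.0, 1.0]}
print(best_match([0.9, 0.1, 0.0], presets)[0])  # mild
```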
Step S204: a corresponding monitoring strategy is determined according to the physiological state.
Optionally, based on the physiological state from step S203, a control instruction corresponding to the physiological state is generated and executed to carry out the corresponding monitoring strategy. Specifically: if the physiological state is a first preset physiological state, the control instruction corresponding to the physiological state is queried in a pre-stored mapping table of physiological states and control instructions, the control instruction being used to control the intelligent monitoring device to execute a corresponding operation, where the first preset physiological state is a mild condition; if the physiological state is a second preset physiological state, the physiological state is sent to a pre-stored contact and an alarm prompt is issued, where the second preset physiological state is a severe condition.
Here, the mild condition may include illnesses that do not threaten life, such as a slight cold, headache, backache, or low mood; the severe condition may include a heart attack, a sudden hypertensive episode, an elderly person falling and being unable to breathe, and the like. The specific classification into mild and severe conditions may follow the advice of medical practitioners and is not specifically limited here.
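The two-tier strategy above can be sketched as a table lookup plus an alarm path. The state names and instruction names below are illustrative placeholders, not taken from the patent.

```python
# Hypothetical mapping table of physiological states to control
# instructions, following the mild/severe split described above.
CONTROL_TABLE = {
    "slight_cold": "remind_to_rest",
    "headache": "play_relaxing_music",
    "low_mood": "open_voice_chat_mode",
}
SEVERE_STATES = {"heart_attack", "hypertension_burst", "fall_cannot_breathe"}

def determine_strategy(state, contact):
    """Pick the monitoring action: mild conditions map to a local
    control instruction; severe conditions trigger an alarm and a
    notification to the pre-stored contact."""
    if state in SEVERE_STATES:
        return ("alarm", f"notify {contact}: {state}")
    return ("control", CONTROL_TABLE.get(state, "request_doctor_review"))
```

Unrecognized states fall through to a doctor-review default, matching the escalation behavior described in the next paragraph.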
Optionally, following the method of step S203, the method further includes: determining the value of the highest matching value. If the highest matching value (e.g., 0.5) is less than a preset threshold (e.g., 0.7), the physiological state obtained at this point may differ substantially from every preset physiological state, meaning either that the AGI may have misjudged the physiological state, or that the user's current physiological state belongs to a new type of disease that has not yet been stored. In this case, the user's current physiological state is sent to a medical practitioner with a request for further diagnosis, so as to avoid a misdiagnosis that would delay the elderly person's treatment. After the medical practitioner makes a diagnosis, the diagnostic result is sent to the AGI in the cloud processor to update the network model in the AGI, and the cloud processor instructs the robot to perform the corresponding operation according to the medical practitioner's diagnosis. Through this interaction between the medical practitioner and the intelligent monitoring device, the intelligent monitoring device's accuracy in judging physiological states becomes higher and higher. Moreover, the medical practitioner receives only those outputs of the intelligent device that differ substantially from the preset physiological states, so one medical practitioner can handle the outputs of multiple intelligent monitoring devices simultaneously, thereby freeing medical resources and alleviating the pressure on medical practitioners.
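The below-threshold escalation just described can be expressed as a small dispatch routine. The threshold value follows the example figures in the text; the callback names are hypothetical.

```python
MATCH_THRESHOLD = 0.7  # example threshold from the text above

def dispatch(state, score, send_to_doctor, act_locally):
    """Route a match result: a score below the threshold is escalated
    to a medical practitioner rather than acted on automatically."""
    if score < MATCH_THRESHOLD:
        return send_to_doctor(state, score)
    return act_locally(state)
```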
For example, if the physiological state is a low mood, the local terminal is controlled to open a voice chat mode to accompany the elderly person in conversation, controlling the speech rate, intonation, and chat content during the chat, for instance chatting about topics the elderly person is interested in; the chat mode may also be set to a humorous mode, thereby relieving the elderly person's low mood. When it is detected that the elderly person's mood has improved, the chat mode may be closed; if it is determined that the elderly person remains in a low mood for a long time, the elderly person's guardian is notified remotely to provide guidance.
As another example, if the physiological state is a heart attack, an alarm mode is opened immediately, care providers nearby are notified through a loudspeaker, and the physiological state is sent to a remote contact to notify family members.
As can be seen, in this embodiment of the present application, the face image, voice information, and physiological data of the elderly person are obtained; the facial expression is determined from the face image, the mood from the voice information, and the physiological characteristic from the physiological data; the facial expression, mood, and physiological characteristic are combined to determine the elderly person's current physiological state, and a corresponding monitoring strategy is determined according to the current physiological state, achieving targeted care. Moreover, the robot can operate around the clock, monitoring the elderly person's physiological state in real time and ensuring timely medical treatment when a physiological disorder occurs, and it can detect sudden illnesses that manual observation might miss, providing more professional and comprehensive care. When the robot cannot make an accurate judgment, it sends the monitored physiological state to a human doctor and obtains an accurate diagnostic result through interaction with the human doctor.
Referring to Fig. 3, Fig. 3 shows another intelligent monitoring method based on deep learning, applied to an intelligent monitoring device, the method comprising:
Step S301: relevant data is screened from a cloud database as training data; the training data is input into an initial network model, which executes a multilayer forward operation to obtain an output result; an output result gradient is obtained from the output result; the output result gradient is passed through a multilayer backward operation to obtain the weight gradient of each layer; the weight of each layer is updated according to its weight gradient; final weights are obtained through multiple iterations, and a preset network model is constructed from the final weights.
The data in the cloud database includes diagnostic results uploaded by home intelligent diagnostic kits, medical data uploaded by various medical institutions, and medical data uploaded by medical technicians after medical simulations and tests. Electronic health records and medical data from wearable devices and biomedical testing are also significant data in the cloud database. Of course, the present application does not limit the type or quantity of medical data in the cloud database.
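The training scheme of step S301 can be sketched with a minimal gradient-descent loop. A single linear layer stands in for the multilayer network described in the patent, and the sample format is an assumption for illustration.

```python
def train_preset_model(samples, lr=0.1, iters=200):
    """Minimal sketch of step S301: forward operation, output-result
    gradient, backward pass to per-weight gradients, and iterative
    weight updates. `samples` is a list of (input_vector, target)."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    for _ in range(iters):
        for x, target in samples:
            out = sum(wi * xi for wi, xi in zip(w, x))    # forward operation
            grad_out = out - target                        # output result gradient
            grads = [grad_out * xi for xi in x]            # per-weight gradients (backward)
            w = [wi - lr * g for wi, g in zip(w, grads)]   # weight update
    return w
```

The returned weights play the role of the "final weights" from which the preset network model is constructed.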
Step S302: a face image, voice information, and physiological data are obtained.
Step S303: the face image is processed to obtain first target data, the voice information is processed to obtain second target data, and the physiological data is processed to obtain third target data.
Step S304: the first target data, the second target data, and the third target data are input into the corresponding preset network models, which execute a forward operation to obtain an output result; a physiological state is determined according to the output result.
Step S305: a corresponding monitoring strategy is determined according to the physiological state.
Step S306: feedback information made by the user based on the monitoring strategy is obtained; the feedback information is compared with the physiological state to obtain a comparison result; the comparison result is input into the preset network model, which executes a backward operation to update the weights of the preset network model.
For example, suppose it is determined that the user's facial expression is pained, the mood determined from the voice information is low, and the physiological parameters indicate a body temperature above normal, and from these three results the current physiological state is determined to be a mild cold; the robot therefore fetches cold medicine for the user and suggests by voice prompt that the user take it. If the user does not actually take the cold medicine but instead calls medical staff, who determine that the user has food poisoning, then the user's behavior conflicts with the robot's judgment; the user's actual behavior is input into the preset network model, which executes a backward operation to update the weights of the preset network model.
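The feedback comparison in step S306 reduces to checking the model's judgment against the observed outcome and triggering a backward update on conflict. The callback-based shape below is an illustrative assumption; the patent does not prescribe an interface.

```python
def feedback_update(predicted_state, observed_state, model_update):
    """Sketch of step S306: compare the model's judgment with the
    user's actual behavior; on a conflict, feed the observation back
    through the (injected) backward-update callback."""
    if predicted_state == observed_state:
        return False  # judgment confirmed, no update needed
    model_update(observed_state)  # conflict: update the preset network model
    return True
```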
As can be seen, in this embodiment of the present application, the face image, voice information, and physiological data of the elderly person are obtained; the facial expression is determined from the face image, the mood from the voice information, and the physiological characteristic from the physiological data; these are combined to determine the current physiological state, and a corresponding monitoring strategy is determined accordingly, achieving targeted care, with the robot operating around the clock and monitoring the elderly person's physiological state in real time so that medical treatment is obtained promptly when a physiological disorder occurs. In addition, the feedback information made by the user based on the monitoring strategy is obtained, and the network model is updated according to the feedback information, gradually improving the accuracy with which the network model judges physiological states through machine learning.
Consistent with the embodiments shown in Fig. 2 and Fig. 3 above, referring to Fig. 4, Fig. 4 is a structural schematic diagram of an intelligent monitoring device 400 based on deep learning provided by an embodiment of the present application. The intelligent monitoring device 400 includes a cloud processor 401, a local terminal 402, and a wearable device 403; the local terminal includes a camera 4021, an audio collection device 4022, an application processor (AP) 4023, and a transceiver 4024;
the camera is configured to collect a face image;
the audio collection device is configured to collect voice information;
the wearable device is configured to collect physiological data;
the transceiver is configured to receive the face image, voice information, and physiological data;
the AP is configured to process the face image to obtain first target data, process the voice information to obtain second target data, and process the physiological data to obtain third target data;
the transceiver is configured to transmit the first target data, the second target data, and the third target data to the cloud processor;
the cloud processor is configured to input the first target data, the second target data, and the third target data into the corresponding preset network models, execute a forward operation to obtain an output result, and determine a physiological state according to the output result;
the cloud processor is further configured to determine a corresponding monitoring strategy according to the physiological state.
Referring to Fig. 5, Fig. 5 shows a block diagram of a possible composition of functional units of the intelligent monitoring device 500 based on deep learning involved in the above embodiments. The intelligent monitoring device 500 includes an acquiring unit 501, a processing unit 502, an input unit 503, and a determination unit 504:
the acquiring unit 501 is configured to obtain a face image, voice information, and physiological data;
the processing unit 502 is configured to process the face image to obtain first target data, process the voice information to obtain second target data, and process the physiological data to obtain third target data;
the input unit 503 is configured to input the first target data, the second target data, and the third target data into the corresponding preset network models, execute a forward operation to obtain an output result, and determine a physiological state according to the output result;
the determination unit 504 is configured to determine a corresponding monitoring strategy according to the physiological state.
In a possible example, the electronic device further includes a feedback unit 505 configured to obtain feedback information made by the user based on the monitoring strategy, compare the feedback information with the physiological state to obtain a comparison result, and input the comparison result into the preset network model to execute a backward operation that updates the weights of the preset network model.
In a possible example, in terms of processing the face image to obtain the first target data, processing the voice information to obtain the second target data, and processing the physiological data to obtain the third target data, the processing unit 502 is specifically configured to: perform grayscale processing on the face image to obtain a grayscale image, divide the grayscale image into N regions according to a preset rule, obtain the characteristic data of each region, determine the characteristic data of the grayscale image according to the weight coefficient of each region, and take the characteristic data of the grayscale image as the first target data; obtain the speech-rate parameter, volume parameter, and intonation parameter of the voice information, and group the speech-rate parameter, volume parameter, and intonation parameter into the second target data; and filter and denoise the physiological data to obtain the third target data.
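The two simpler preprocessing branches can be sketched as follows. Splitting into a rows-by-columns grid and using the mean intensity as each region's characteristic are illustrative assumptions; the patent only specifies "N regions according to a preset rule".

```python
def first_target_data(gray_image, n_rows, n_cols, region_weights):
    """Sketch of the face-image branch: split the grayscale image
    (a list of pixel rows) into N = n_rows * n_cols regions and scale
    each region's characteristic (mean intensity here) by its weight
    coefficient."""
    h, w = len(gray_image), len(gray_image[0])
    rh, rw = h // n_rows, w // n_cols
    features = []
    for r in range(n_rows):
        for c in range(n_cols):
            block = [gray_image[y][x]
                     for y in range(r * rh, (r + 1) * rh)
                     for x in range(c * rw, (c + 1) * rw)]
            features.append(sum(block) / len(block))
    return [f * wgt for f, wgt in zip(features, region_weights)]

def second_target_data(speech_rate, volume, intonation):
    """Group the speech-rate, volume, and intonation parameters into
    the second target data vector."""
    return [speech_rate, volume, intonation]
```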
In a possible example, in terms of inputting the first target data, the second target data, and the third target data into the corresponding preset network models, executing a forward operation to obtain an output result, and determining the physiological state according to the output result, the input unit 503 is specifically configured to: input the first target data, the second target data, and the third target data into their respective corresponding network models and execute forward operations to obtain respective output results; determine the facial expression corresponding to the face image according to the output result of the first target data, the voice mood corresponding to the voice information according to the output result of the second target data, and the physiological characteristic corresponding to the physiological data according to the output result of the third target data; determine a first feature vector of the facial expression, a second feature vector of the voice mood, and a third feature vector of the physiological characteristic; concatenate the first, second, and third feature vectors in series into a fourth feature vector; determine matching values between the fourth feature vector and the feature vectors of multiple preset physiological states; and take the preset physiological state corresponding to the highest matching value as the physiological state, wherein the fourth feature vector and the feature vectors of the multiple preset physiological states have the same dimension.
In a possible example, in terms of determining the corresponding monitoring strategy according to the physiological state, the determination unit 504 is specifically configured to: if the physiological state is a first preset physiological state, query the control instruction corresponding to the physiological state in a pre-stored mapping table of physiological states and control instructions, the control instruction being used to control the intelligent monitoring device to execute a corresponding operation, the first preset physiological state being a mild condition; if the physiological state is a second preset physiological state, send the physiological state to a pre-stored contact and issue an alarm prompt, the second preset physiological state being a severe condition.
An embodiment of the present application further provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any of the deep-learning-based monitoring methods recorded in the above method embodiments.
An embodiment of the present application further provides a computer program product, the computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to execute some or all of the steps of any of the deep-learning-based monitoring methods recorded in the above method embodiments.
It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as a series of action combinations; however, those skilled in the art should understand that the present application is not limited by the described order of actions, because according to the present application some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in this specification are optional embodiments, and the actions and modules involved are not necessarily required by the present application.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed device may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division of units is only a logical functional division, and other divisions may exist in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical or of other forms.
The units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk, or optical disc.
Those of ordinary skill in the art will appreciate that all or some of the steps in the various methods of the above embodiments may be completed by a program instructing relevant hardware; the program may be stored in a computer-readable memory, and the memory may include a flash disk, read-only memory (ROM), random access memory (RAM), magnetic disk, optical disc, or the like.
The embodiments of the present application have been described in detail above; specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core ideas. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the ideas of the present application. In summary, the contents of this specification should not be construed as limiting the present application.
Claims (10)
1. A monitoring method based on deep learning, characterized in that the method is applied to an intelligent monitoring device and comprises:
obtaining a face image, voice information, and physiological data;
processing the face image to obtain first target data, processing the voice information to obtain second target data, and processing the physiological data to obtain third target data;
inputting the first target data, the second target data, and the third target data into corresponding preset network models, executing a forward operation to obtain an output result, and determining a physiological state according to the output result;
determining a corresponding monitoring strategy according to the physiological state.
2. The method according to claim 1, characterized in that processing the face image to obtain the first target data, processing the voice information to obtain the second target data, and processing the physiological data to obtain the third target data comprises:
performing grayscale processing on the face image to obtain a grayscale image, dividing the grayscale image into N regions according to a preset rule, obtaining the characteristic data of each region, determining the characteristic data of the grayscale image according to the weight coefficient of each region, and taking the characteristic data of the grayscale image as the first target data;
obtaining the speech-rate parameter, volume parameter, and intonation parameter of the voice information, and grouping the speech-rate parameter, volume parameter, and intonation parameter into the second target data;
filtering and denoising the physiological data to obtain the third target data.
3. The method according to claim 1 or 2, characterized in that inputting the first target data, the second target data, and the third target data into the corresponding preset network models, executing a forward operation to obtain an output result, and determining the physiological state according to the output result comprises:
inputting the first target data, the second target data, and the third target data into their respective corresponding network models and executing forward operations to obtain respective output results; determining the facial expression corresponding to the face image according to the output result of the first target data, the voice mood corresponding to the voice information according to the output result of the second target data, and the physiological characteristic corresponding to the physiological data according to the output result of the third target data;
determining a first feature vector of the facial expression, a second feature vector of the voice mood, and a third feature vector of the physiological characteristic; concatenating the first feature vector, the second feature vector, and the third feature vector in series into a fourth feature vector; determining matching values between the fourth feature vector and the feature vectors of multiple preset physiological states; and taking the preset physiological state corresponding to the highest matching value as the physiological state, wherein the fourth feature vector and the feature vectors of the multiple preset physiological states have the same dimension.
4. The method according to any one of claims 1-3, characterized in that determining the corresponding monitoring strategy according to the physiological state comprises:
if the physiological state is a first preset physiological state, querying the control instruction corresponding to the physiological state in a pre-stored mapping table of physiological states and control instructions, the control instruction being used to control the intelligent monitoring device to execute a corresponding operation, the first preset physiological state being a mild condition;
if the physiological state is a second preset physiological state, sending the physiological state to a pre-stored contact and issuing an alarm prompt, the second preset physiological state being a severe condition.
5. The method according to claim 3 or 4, characterized in that the method further comprises:
obtaining feedback information made by the user based on the monitoring strategy, comparing the feedback information with the physiological state to obtain a comparison result, and inputting the comparison result into the preset network model to execute a backward operation that updates the weights of the preset network model.
6. An intelligent monitoring device based on deep learning, characterized in that the intelligent monitoring device comprises a cloud processor, a local terminal, and a wearable device, the local terminal comprising a camera, an audio collection device, an application processor (AP), and a transceiver;
the camera is configured to collect a face image;
the audio collection device is configured to collect voice information;
the wearable device is configured to collect physiological data;
the transceiver is configured to receive the face image, voice information, and physiological data;
the AP is configured to process the face image to obtain first target data, process the voice information to obtain second target data, and process the physiological data to obtain third target data;
the transceiver is configured to transmit the first target data, the second target data, and the third target data to the cloud processor;
the cloud processor is configured to input the first target data, the second target data, and the third target data into the corresponding preset network models, execute a forward operation to obtain an output result, and determine a physiological state according to the output result;
the cloud processor is further configured to determine a corresponding monitoring strategy according to the physiological state.
7. The intelligent monitoring device according to claim 6, characterized in that, in terms of inputting the first target data, the second target data, and the third target data into the corresponding preset network models, executing a forward operation to obtain an output result, and determining the physiological state according to the output result, the cloud processor is specifically configured to:
input the first target data, the second target data, and the third target data into their respective corresponding network models and execute forward operations to obtain respective output results; determine the facial expression corresponding to the face image according to the output result of the first target data, the voice mood corresponding to the voice information according to the output result of the second target data, and the physiological characteristic corresponding to the physiological data according to the output result of the third target data;
determine a first feature vector of the facial expression, a second feature vector of the voice mood, and a third feature vector of the physiological characteristic; concatenate the first, second, and third feature vectors in series into a fourth feature vector; determine matching values between the fourth feature vector and the feature vectors of multiple preset physiological states; and take the preset physiological state corresponding to the highest matching value as the physiological state, wherein the fourth feature vector and the feature vectors of the multiple preset physiological states have the same dimension.
8. An intelligent monitoring device based on deep learning, characterized in that the intelligent monitoring device comprises:
an acquiring unit, configured to obtain a face image, voice information, and physiological data;
a processing unit, configured to process the face image to obtain first target data, process the voice information to obtain second target data, and process the physiological data to obtain third target data;
an input unit, configured to input the first target data, the second target data, and the third target data into the corresponding preset network models, execute a forward operation to obtain an output result, and determine a physiological state according to the output result;
a determination unit, configured to determine a corresponding monitoring strategy according to the physiological state.
9. An intelligent monitoring device, characterized by comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for executing the steps of the method in any one of claims 1-5.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program causes a computer to execute the method according to any one of claims 1-5, the computer comprising an electronic device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810432383.8A CN109008952A (en) | 2018-05-08 | 2018-05-08 | Monitoring method and Related product based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109008952A true CN109008952A (en) | 2018-12-18 |
Family
ID=64611430
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810432383.8A Pending CN109008952A (en) | 2018-05-08 | 2018-05-08 | Monitoring method and Related product based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109008952A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109886740A (en) * | 2019-01-31 | 2019-06-14 | 泰康保险集团股份有限公司 | Medical information consultation method, medical information counseling apparatus, storage medium |
CN110047588A (en) * | 2019-03-18 | 2019-07-23 | 平安科技(深圳)有限公司 | Method of calling, device, computer equipment and storage medium based on micro- expression |
CN110124210A (en) * | 2019-05-24 | 2019-08-16 | 雷恩友力数据科技南京有限公司 | A kind of premenstrual syndrome physical therapeutic system based on artificial intelligence |
CN110598611A (en) * | 2019-08-30 | 2019-12-20 | 深圳智慧林网络科技有限公司 | Nursing system, patient nursing method based on nursing system and readable storage medium |
CN110587621A (en) * | 2019-08-30 | 2019-12-20 | 深圳智慧林网络科技有限公司 | Robot, robot-based patient care method and readable storage medium |
CN111724571A (en) * | 2020-08-07 | 2020-09-29 | 新疆爱华盈通信息技术有限公司 | Smart watch, temperature measurement method using smart watch, and body temperature monitoring system |
CN111820872A (en) * | 2020-06-16 | 2020-10-27 | 曾浩军 | User state analysis method and related equipment |
CN112651319A (en) * | 2020-12-21 | 2021-04-13 | 科大讯飞股份有限公司 | Video detection method and device, electronic equipment and storage medium |
CN113748449A (en) * | 2019-03-27 | 2021-12-03 | 人间制造局有限责任公司 | Evaluation and training system |
WO2021253217A1 (en) * | 2020-06-16 | 2021-12-23 | 曾浩军 | User state analysis method and related device |
CN114452133A (en) * | 2022-03-10 | 2022-05-10 | 四川省医学科学院·四川省人民医院 | Nursing method and nursing system based on hierarchical monitoring mode |
CN115271002A (en) * | 2022-09-29 | 2022-11-01 | 广东机电职业技术学院 | Identification method, first-aid decision method, medium and life health intelligent monitoring system |
CN115969356A (en) * | 2022-12-12 | 2023-04-18 | 北京顺源辰辰科技发展有限公司 | Multimode behavior monitoring method and device based on intelligent sliding rail |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105596016A (en) * | 2015-12-23 | 2016-05-25 | 王嘉宇 | Device and method for monitoring and managing human psychological and physical health |
CN105868519A (en) * | 2015-01-20 | 2016-08-17 | 中兴通讯股份有限公司 | Human body characteristic data processing method and apparatus |
CN106127196A (en) * | 2016-09-14 | 2016-11-16 | 河北工业大学 | Facial expression classification and recognition method based on dynamic texture features |
CN106777954A (en) * | 2016-12-09 | 2017-05-31 | 电子科技大学 | Intelligent monitoring system and method for the health of empty-nest elderly |
CN107273845A (en) * | 2017-06-12 | 2017-10-20 | 大连海事大学 | Facial expression recognition method based on confidence regions and weighted multi-feature fusion |
CN107330420A (en) * | 2017-07-14 | 2017-11-07 | 河北工业大学 | Facial expression recognition method incorporating rotation information based on deep learning |
CN107635147A (en) * | 2017-09-30 | 2018-01-26 | 上海交通大学 | Health information management TV based on multi-modal man-machine interaction |
- 2018-05-08: CN application CN201810432383.8A filed; published as CN109008952A; status: Pending
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109886740A (en) * | 2019-01-31 | 2019-06-14 | 泰康保险集团股份有限公司 | Medical information consultation method, medical information counseling apparatus, storage medium |
CN110047588A (en) * | 2019-03-18 | 2019-07-23 | 平安科技(深圳)有限公司 | Method of calling, device, computer equipment and storage medium based on micro- expression |
CN113748449A (en) * | 2019-03-27 | 2021-12-03 | 人间制造局有限责任公司 | Evaluation and training system |
CN113748449B (en) * | 2019-03-27 | 2024-05-14 | 人间制造局有限责任公司 | Evaluation and training system |
CN110124210A (en) * | 2019-05-24 | 2019-08-16 | 雷恩友力数据科技南京有限公司 | Premenstrual syndrome physical therapy system based on artificial intelligence |
CN110587621A (en) * | 2019-08-30 | 2019-12-20 | 深圳智慧林网络科技有限公司 | Robot, robot-based patient care method and readable storage medium |
CN110587621B (en) * | 2019-08-30 | 2023-06-06 | 深圳智慧林网络科技有限公司 | Robot, robot-based patient care method, and readable storage medium |
CN110598611A (en) * | 2019-08-30 | 2019-12-20 | 深圳智慧林网络科技有限公司 | Nursing system, patient nursing method based on nursing system and readable storage medium |
CN110598611B (en) * | 2019-08-30 | 2023-06-09 | 深圳智慧林网络科技有限公司 | Nursing system, patient nursing method based on nursing system and readable storage medium |
WO2021253217A1 (en) * | 2020-06-16 | 2021-12-23 | 曾浩军 | User state analysis method and related device |
CN111820872A (en) * | 2020-06-16 | 2020-10-27 | 曾浩军 | User state analysis method and related equipment |
CN111724571B (en) * | 2020-08-07 | 2022-11-04 | 新疆爱华盈通信息技术有限公司 | Smart watch, temperature measurement method using smart watch, and body temperature monitoring system |
CN111724571A (en) * | 2020-08-07 | 2020-09-29 | 新疆爱华盈通信息技术有限公司 | Smart watch, temperature measurement method using smart watch, and body temperature monitoring system |
CN112651319B (en) * | 2020-12-21 | 2023-12-05 | 科大讯飞股份有限公司 | Video detection method and device, electronic equipment and storage medium |
CN112651319A (en) * | 2020-12-21 | 2021-04-13 | 科大讯飞股份有限公司 | Video detection method and device, electronic equipment and storage medium |
CN114452133A (en) * | 2022-03-10 | 2022-05-10 | 四川省医学科学院·四川省人民医院 | Nursing method and nursing system based on hierarchical monitoring mode |
CN115271002A (en) * | 2022-09-29 | 2022-11-01 | 广东机电职业技术学院 | Identification method, first-aid decision method, medium and life health intelligent monitoring system |
CN115271002B (en) * | 2022-09-29 | 2023-02-17 | 广东机电职业技术学院 | Identification method, first-aid decision method, medium and life health intelligent monitoring system |
CN115969356A (en) * | 2022-12-12 | 2023-04-18 | 北京顺源辰辰科技发展有限公司 | Multimode behavior monitoring method and device based on intelligent sliding rail |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109008952A (en) | Monitoring method and Related product based on deep learning | |
Labuguen et al. | MacaquePose: a novel “in the wild” macaque monkey pose dataset for markerless motion capture | |
Breckler et al. | On defining attitude and attitude theory: Once more with feeling | |
Kao et al. | Decision accuracy in complex environments is often maximized by small group sizes | |
Freeman et al. | Will a category cue attract you? Motor output reveals dynamic competition across person construal. | |
Soltani et al. | A range-normalization model of context-dependent choice: a new model and evidence | |
Gilbert | Evolution, social roles, and the differences in shame and guilt | |
CN107799165A (en) | Psychological assessment method based on virtual reality technology |
US20140143183A1 (en) | Hierarchical model for human activity recognition | |
CN109346159A (en) | Case image classification method, device, computer equipment and storage medium | |
Laubu et al. | Pair-bonding influences affective state in a monogamous fish species | |
Nolfi | Emergence of communication in embodied agents: co-adapting communicative and non-communicative behaviours | |
US20190216334A1 (en) | Emotion representative image to derive health rating | |
TW202115622A (en) | Face attribute recognition method, electronic device and computer-readable storage medium | |
US20220207862A1 (en) | Image analysis method, image analysis apparatus, and image analysis system | |
Uher | The Transdisciplinary Philosophy-of-Science Paradigm for Research on Individuals: Foundations for the science of personality and individual differences | |
Li et al. | [Retracted] Deep Learning and Improved HMM Training Algorithm and Its Analysis in Facial Expression Recognition of Sports Athletes | |
Sim et al. | Improving the accuracy of erroneous-plan recognition system for Activities of Daily Living | |
Ma et al. | A prediction method for transport stress in meat sheep based on GA-BPNN | |
US20230278200A1 (en) | System and method to emulate human cognition in artificial intelligence using bio-inspired physiology simulation | |
US20220284649A1 (en) | Virtual Representation with Dynamic and Realistic Behavioral and Emotional Responses | |
US20230136939A1 (en) | User experience modeling system | |
Koenderink | World, environment, Umwelt, and innerworld: A biological perspective on visual awareness | |
CN116994695A (en) | Training method, device, equipment and storage medium of report generation model | |
CN112487980A (en) | Micro-expression-based treatment method, device, system and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20181218 |