CN110096145A - Mental state display method and device based on mixed reality and neural network - Google Patents

Mental state display method and device based on mixed reality and neural network

Info

Publication number
CN110096145A
Authority
CN
China
Prior art keywords
head
display device
image
analysis model
mental state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910290677.6A
Other languages
Chinese (zh)
Inventor
余日季
胡书山
朱天放
王晓晨
戴仕杰
刘舜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei University
Original Assignee
Hubei University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei University
Priority to CN201910290677.6A
Publication of CN110096145A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Social Psychology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Psychology (AREA)
  • Psychiatry (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Epidemiology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Hospice & Palliative Care (AREA)
  • Developmental Disabilities (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Child & Adolescent Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a mental state display method, system, electronic device, and computer-readable storage medium based on mixed reality and a neural network. The method comprises: receiving a drawing image sent by a head-mounted display device, the drawing image being drawn in a preset virtual scene displayed by the head-mounted display device; analyzing the drawing image with a pre-established analysis model to obtain mental state parameters of the user corresponding to the head-mounted display device and to output a state category, wherein the analysis model is trained on an image dataset of paintings drawn under different mental states; generating display data according to the mental state parameters and the state category; and sending the display data to the head-mounted display device, which displays a three-dimensional virtual picture according to the display data. The mental state display method, device, system, and electronic device based on mixed reality and a neural network can improve the accuracy of analyzing drawing images and reduce the cost of analysis.

Description

Mental state display method and device based on mixed reality and neural network
Technical field
This application relates to the field of computer technology, and more particularly to a mental state display method, system, electronic device, and computer-readable storage medium based on mixed reality and a neural network.
Background art
With the rapid development of medical technology, people pay increasing attention to mental health. Many people may suffer from mental health problems, yet often cannot be treated well. On the one hand, the public's understanding of psychological symptoms is incomplete and carries a certain stigma; on the other hand, treatment resources for mental illness are scarce and treatment costs are high, making treatment difficult to promote on a large scale.
Summary of the invention
The embodiments of the present application provide a mental state display method, system, electronic device, and computer-readable storage medium based on mixed reality and a neural network, which can improve the accuracy of analyzing drawing images and reduce the cost of analysis.
A mental state display method based on mixed reality and a neural network comprises:
receiving a drawing image sent by a head-mounted display device, the drawing image being drawn in a preset virtual scene displayed by the head-mounted display device;
analyzing the drawing image with a pre-established analysis model to obtain mental state parameters of a user corresponding to the head-mounted display device and to output a state category, wherein the analysis model is trained on an image dataset of paintings drawn under different mental states;
generating display data according to the mental state parameters and the state category; and
sending the display data to the head-mounted display device, the display data instructing the head-mounted display device to display a three-dimensional virtual picture according to the display data.
A mental state display method based on mixed reality and a neural network, applied to a head-mounted display device, comprises:
receiving a first display instruction and displaying a three-dimensional preset virtual scene according to the first display instruction, the preset virtual scene including one or more drawing aids;
acquiring a drawing operation performed with a selected drawing aid and generating a corresponding drawing image according to the drawing operation;
sending the drawing image to an electronic device, the drawing image instructing the electronic device to analyze the drawing image with a pre-established analysis model and to generate display data, wherein the analysis model is trained on an image dataset of paintings drawn under different mental states; and
receiving the display data sent by the electronic device and displaying a three-dimensional virtual picture according to the display data.
A mental state display system based on mixed reality and a neural network comprises an electronic device and a head-mounted display device that establish a communication connection with each other.
The head-mounted display device is configured to receive a first display instruction and display a three-dimensional preset virtual scene according to the first display instruction, the preset virtual scene including one or more drawing aids; acquire a drawing operation performed with a selected drawing aid and generate a corresponding drawing image according to the drawing operation; and send the drawing image to the electronic device.
The electronic device is configured to receive the drawing image sent by the head-mounted display device, analyze the drawing image with a pre-established analysis model to obtain mental state parameters of a user corresponding to the head-mounted display device and output a state category, wherein the analysis model is trained on an image dataset of paintings drawn under different mental states; generate display data according to the mental state parameters and the state category; and send the display data to the head-mounted display device.
The head-mounted display device is further configured to receive the display data sent by the electronic device and display a three-dimensional virtual picture according to the display data.
An electronic device comprises a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to implement the method described above.
A computer-readable storage medium stores a computer program that, when executed by a processor, implements the method described above.
With the mental state display method, system, electronic device, and computer-readable storage medium based on mixed reality and a neural network of the above embodiments, the head-mounted display device displays a preset virtual scene, the user draws an image in the preset virtual scene, and the electronic device analyzes the drawing image with the analysis model to obtain the user's mental state parameters and state category and generates corresponding display data; the head-mounted display device then displays a three-dimensional virtual picture according to the display data to guide the user. This improves the accuracy of analyzing drawing images and reduces the cost of analysis. In addition, displaying the virtual scene with a head-mounted display device in an augmented reality / mixed reality manner effectively relieves the user's psychological pressure and anxiety.
Brief description of the drawings
Fig. 1 is a system architecture diagram of the mental state display method based on mixed reality and a neural network in one embodiment;
Fig. 2 is a flowchart of the mental state display method based on mixed reality and a neural network in one embodiment;
Fig. 3 is a flowchart of analyzing a drawing image with the analysis model in one embodiment;
Fig. 4 is a schematic diagram of a residual unit in one embodiment;
Fig. 5 is a flowchart of the mental state display method based on mixed reality and a neural network in another embodiment;
Fig. 6 is a block diagram of the mental state display device based on mixed reality and a neural network in one embodiment;
Fig. 7 is a block diagram of the mental state display device based on mixed reality and a neural network in another embodiment;
Fig. 8 is a structural block diagram of the electronic device in one embodiment;
Fig. 9 is a structural block diagram of the head-mounted display device in one embodiment.
Detailed description of embodiments
To make the objects, technical solutions, and advantages of the present application clearer, the application is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the application and not to limit it.
It should be understood that the terms "first", "second", and the like used in this application may describe various elements herein, but these elements are not limited by these terms; the terms are only used to distinguish one element from another. For example, without departing from the scope of the present application, a first client may be referred to as a second client, and similarly a second client may be referred to as a first client. The first client and the second client are both clients, but they are not the same client.
Fig. 1 is a system architecture diagram of the mental state display method based on mixed reality and a neural network in one embodiment. As shown in Fig. 1, the system architecture includes a head-mounted display device 10 and an electronic device 20, which can be connected through a network. The head-mounted display device 10 may be an HMD (Head Mounted Display) or smart glasses, and the electronic device 20 may be a server or a terminal such as a desktop computer or a laptop.
The user wears the head-mounted display device 10. When the head-mounted display device 10 receives a first display instruction, it displays a three-dimensional preset virtual scene according to the first display instruction, and through the head-mounted display device 10 the user sees the preset virtual scene superimposed on the real scene. The preset virtual scene may include one or more drawing aids; the user can select the required drawing aid and draw with it. The head-mounted display device 10 acquires the drawing operation performed by the user with the selected drawing aid, generates a corresponding drawing image according to the drawing operation, and can display the drawing image to the user in real time.
After the user finishes drawing, the head-mounted display device 10 sends the completed drawing image to the electronic device 20. The electronic device 20 receives the drawing image sent by the head-mounted display device 10, analyzes it with a pre-established analysis model to obtain the user's mental state parameters, and outputs the corresponding state category. The electronic device 20 then generates display data according to the user's mental state parameters and state category and sends the display data to the head-mounted display device 10, which receives the display data, displays a three-dimensional virtual picture according to the display data, and guides the user through the displayed picture.
As shown in Fig. 2, in one embodiment, a mental state display method based on mixed reality and a neural network is provided. The method is applicable to the above system architecture and is described from the perspective of the electronic device. It may include the following steps:
Step 110: receive a drawing image sent by the head-mounted display device, the drawing image being drawn in a preset virtual scene displayed by the head-mounted display device.
The user wears the head-mounted display device. In the embodiments of the present application, the user may be a user who needs psychological assessment. The head-mounted display device can display virtual content in an augmented reality (AR) / mixed reality (MR) manner, projecting three-dimensional stereoscopic images into three-dimensional space, and the user can interact with the virtual content by gaze, voice, gestures, and the like. Augmented reality uses computer graphics and visualization technology to construct virtual content that does not exist in the real environment, accurately fuses the virtual content into the real environment through image recognition and localization technology, and combines the virtual content with the real environment through a display device, presenting them to the user with a realistic sensory experience. Mixed reality not only merges virtual content with the real environment for display, but also establishes an interactive feedback loop among the real world, the virtual world, and the user, so that interaction with the real world is obtained in time.
The head-mounted display device can display a three-dimensional preset virtual scene, projecting a three-dimensional stereoscopic image of the preset virtual scene into three-dimensional space, and the user sees the preset virtual scene superimposed on the real scene through the head-mounted display device. In some embodiments, the preset virtual scene may be a virtual painting scene that includes a virtual drawing board and one or more drawing aids. The drawing aids may include different drawing tools and drawing materials, for example different types of brushes, erasers, lines, figures, and patterns, but are not limited thereto. The user can interact with the displayed virtual content by gaze, voice, gestures, a controller, and the like. As a specific implementation, the user selects the required drawing aid with gestures, a controller, or the like, and performs drawing operations on the displayed virtual drawing board with the drawing aid. The head-mounted display device receives the user's drawing operations and displays the drawn image in real time according to the operations. Augmented reality gives the user an immersive experience and allows diagnosis and treatment to be carried out in an immersive virtual scene, which effectively relieves the patient's psychological pressure and anxiety; painting creation is not constrained by time, place, or other real-world conditions, so the user can express his or her emotions and problems more freely.
After the drawing is completed, the head-mounted display device sends the drawing image to the connected electronic device. In some implementations, the electronic device may be a server, or a terminal such as a desktop computer or a laptop. The head-mounted display device may connect to the electronic device through a wireless network, for example via Bluetooth or a Wireless Fidelity (WiFi) network, or through a wired communication connection, which is not limited here.
Step 120: analyze the drawing image with a pre-established analysis model to obtain mental state parameters of the user corresponding to the head-mounted display device, and output a state category.
After the electronic device receives the drawing image sent by the head-mounted display device, it analyzes the drawing image with the pre-established analysis model to obtain the mental state parameters of the user wearing the head-mounted display device, and the analysis model can simultaneously output the user's state category. In one embodiment, the state categories indicate mental states of different degrees and may include several classes; for example, the state categories may be psychological symptom grades such as normal, mild, moderate, and severe, but are not limited thereto. The mental state parameters can be set as needed to describe the user's mental state, such as tension, frustration, impulsivity, harm, and irritability parameters, and can be used to help quantify the user's state category. In some implementations, the mental state parameters can be set according to quantitative psychological assessment scales, such as the Beck Depression Inventory or the Hamilton Depression Rating Scale, but are not limited thereto.
In one embodiment, the analysis model is trained on an image dataset of paintings drawn under different mental states. In some implementations, the dataset is built from a large number of paintings by patients with mental illness and may include paintings of different patient groups at different mental-state stages. The analysis model may be a convolutional neural network, which provides an end-to-end learning model whose parameters can be trained by conventional gradient descent. Through training, the analysis model learns the image features of the different paintings in the dataset and can classify the psychological symptom grade corresponding to a painting. In some implementations, the analysis model may be a ResNet neural network, but is not limited thereto.
In one embodiment, the analysis model is obtained by two-stage training. The analysis model is first pre-trained with an image database to initialize its parameters, where the image database contains multiple different image datasets, that is, a large number of different images not limited to paintings by patients with mental illness; this improves the general discrimination ability of the model and alleviates the shortage of paintings by patients with mental illness. The pre-trained analysis model is then trained a second time with the image dataset of paintings drawn under different mental states, that is, the dataset containing a large number of paintings by patients with mental illness, so that the initialized parameters are fine-tuned. The analysis model after the second training both possesses strong general image discrimination ability and can accurately discriminate the paintings of patients with mental illness; the trained analysis model can accurately output psychological symptom grades and effectively improve the diagnosis of mental illness.
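A minimal sketch of this two-stage training, under assumptions not stated in the patent: a PyTorch ResNet backbone with ImageNet weights stands in for pre-training on a general image database, paintings_loader is a hypothetical DataLoader over the paintings dataset, and four symptom grades are assumed from the categories listed earlier.

```python
# Hedged sketch: pre-train on a general image database, then fine-tune (second
# training) on the paintings dataset. Loader names are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_CATEGORIES = 4  # assumed: normal, mild, moderate, severe

# Stage 1: parameters initialized by pre-training on a general image database
# (ImageNet weights are used here as a stand-in for that database).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Replace the final fully connected layer so the features map to the state categories.
model.fc = nn.Linear(model.fc.in_features, NUM_CATEGORIES)

def fine_tune(model, paintings_loader, epochs=10, lr=1e-4):
    """Stage 2: second training on paintings drawn under different mental states."""
    criterion = nn.CrossEntropyLoss()  # applies the softmax step internally
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, labels in paintings_loader:  # hypothetical DataLoader
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()  # fine-tunes the initialized parameters
    return model
```

A small learning rate is used so the second training only fine-tunes, rather than overwrites, the parameters initialized in the first stage.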
Step 130: generate display data according to the mental state parameters and the state category.
After the electronic device analyzes the drawing image and obtains the user's mental state parameters and state category, it generates display data according to the mental state parameters and the state category and sends the display data to the head-mounted display device. As a specific implementation, the display data may include a symptom description and treatment suggestions, where the symptom description may include the user's mental state, psychological symptom grade, and the like. In some implementations, the treatment suggestions are obtained by artificial intelligence: the user's mental state parameters and state category are input into a pre-established treatment model, which analyzes them and outputs the corresponding treatment suggestions. The treatment model can be trained on a large number of treatment records, each of which may be the treatment suggestion a doctor gave for a patient with particular symptoms and a particular degree of mental illness. Generating treatment suggestions directly by artificial intelligence alleviates, to a certain extent, the imbalance of medical resources and reduces the cost of treating mental illness.
In some implementations, after the electronic device analyzes the drawing image and obtains the user's mental state parameters and state category, it sends them to a client. The client may be installed on a terminal such as a computer, smartphone, or tablet and can establish communication connections with the electronic device and the head-mounted display device. The client presents the received mental state parameters and state category to a doctor, who reviews and analyzes them and gives corresponding treatment suggestions. The client acquires the entered treatment suggestions, generates corresponding display data based on the treatment suggestions, the user's mental state parameters, and the state category, and sends the display data to the head-mounted display device.
In some implementations, the electronic device can also first analyze the mental state parameters and state category to obtain preliminary treatment suggestions and send the mental state parameters, state category, and preliminary treatment suggestions to the client, which presents them to the doctor. The doctor can modify and refine the preliminary treatment suggestions, and the client then generates corresponding display data based on the modified treatment suggestions, the user's mental state parameters, and the state category and sends the display data to the head-mounted display device. Obtaining medical suggestions by artificial intelligence and then having professional medical staff modify and refine them can improve the therapeutic effect for mental illness.
Step 140: send the display data to the head-mounted display device, the display data instructing the head-mounted display device to display a three-dimensional virtual picture according to the display data.
After the head-mounted display device receives the display data, it generates a corresponding three-dimensional virtual picture according to the display data and displays it. The three-dimensional virtual picture may include the user's symptom description and treatment suggestions, and may also include the drawing image drawn by the user. Through the head-mounted display device, the user sees the holographically displayed three-dimensional virtual picture superimposed on the real scene. Augmented reality allows the user to undergo immersive diagnosis and treatment, effectively relieving the patient's psychological pressure and anxiety.
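A minimal sketch of the electronic-device side of steps 110 to 140 under stated assumptions: the analysis_model wrapper, the hmd_connection transport, and the helper names below are hypothetical and only illustrate the receive, analyze, generate, and send flow described above.

```python
# Hedged sketch of steps 110-140 on the electronic device.
# All class, function, and attribute names are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class DisplayData:
    symptom_description: str
    treatment_suggestion: str

def suggest_treatment(parameters, category) -> str:
    # Placeholder: in the embodiments this comes from a treatment model,
    # or from a doctor reviewing the results through a client.
    return f"Suggested follow-up for state category '{category}'"

def handle_drawing_image(image, analysis_model, hmd_connection) -> None:
    # Step 110: the drawing image has already been received from the head-mounted display device.
    # Step 120: analyze it with the pre-established analysis model.
    parameters, category = analysis_model.analyze(image)

    # Step 130: generate display data from the mental state parameters and state category.
    display_data = DisplayData(
        symptom_description=f"State category: {category}; parameters: {parameters}",
        treatment_suggestion=suggest_treatment(parameters, category),
    )

    # Step 140: send the display data back so the HMD can render a three-dimensional picture.
    hmd_connection.send(display_data)
```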
In this embodiment, the head-mounted display device displays a preset virtual scene, the user draws an image in the preset virtual scene, and the electronic device analyzes the drawing image with the analysis model to obtain the user's mental state parameters and state category and generates corresponding display data; the head-mounted display device then displays a three-dimensional virtual picture according to the display data to treat the user. This improves the accuracy of analyzing drawing images and reduces the cost of analysis. In addition, displaying the virtual scene with a head-mounted display device in an augmented reality / mixed reality manner effectively relieves the user's psychological pressure and anxiety.
As shown in Fig. 3, in one embodiment, step 120, in which the drawing image is analyzed with the pre-established analysis model to obtain the mental state parameters of the user corresponding to the head-mounted display device and output a state category, includes the following steps:
Step 202: extract image features of the drawing image with the pre-established analysis model.
After the electronic device receives the drawing image sent by the head-mounted display device, it can extract the image features of the drawing image with the analysis model and obtain the user's state category from the image features. The analysis model can extract image features at different levels. In some implementations, the analysis model includes multiple convolutional layers that extract image features of different dimensions from the drawing image; the number of convolutional layers can be set as needed, and more layers extract more, and more accurate, features.
In one embodiment, the analysis model may include multiple residual units, and each residual unit may include several convolutional layers, for example three or more. In a residual unit, the output of one convolutional layer can be used directly as the input of a later layer, skipping the intermediate convolutional layers between them, rather than strictly feeding each layer's output only into the next layer; this realizes skip connections between convolutional layers.
Fig. 4 is a schematic diagram of a residual unit in one embodiment. Assume the input of the residual unit is x and that x is connected directly to the output of the residual unit, that is, the input of the residual unit participates in the computation of the last convolutional layer. If the desired output of the residual unit is H(x), the residual unit only needs to learn the target F(x) = H(x) - x instead of learning H(x) directly, which allows the depth of the analysis model to keep increasing.
In one embodiment, each residual unit of the analysis model may include A convolutional layers, and the input of the B-th convolutional layer in the residual unit is used as an input of the (B+C)-th layer, where A is an integer greater than or equal to 3, B is an integer greater than 0 and less than A-1, and C is an integer greater than 1. For example, if A is 3, the residual unit includes 3 convolutional layers; with B being 1 and C being 2, the input of the first convolutional layer is used as an input of the third convolutional layer, that is, the input of the residual unit is connected directly to its output for learning. Residual units break the limitation that the degradation phenomenon places on the number of layers of a deep neural network while adding few parameters, making the diagnosis results of the analysis model more accurate.
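A minimal PyTorch sketch of such a residual unit with A = 3, B = 1, and C = 2, following the standard ResNet form of the skip connection, in which the identity is added around the stacked convolutions so the unit only has to learn F(x) = H(x) - x; channel counts and kernel sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Residual unit with A = 3 convolutional layers; the input of layer B = 1
    is carried by a skip connection to layer B + C = 3 (added to its output)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x                       # input of the first layer, reused at the third
        out = self.relu(self.conv1(x))
        out = self.relu(self.conv2(out))
        out = self.conv3(out)
        return self.relu(out + identity)   # the unit learns F(x) = H(x) - x
```

Because the identity path adds no trainable parameters, stacking such units deepens the analysis model without the parameter growth or degradation the passage describes.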
Step 204: obtain the mental state parameters of the user corresponding to the head-mounted display device according to the image features.
The electronic device can obtain the mental state parameters of the user corresponding to the head-mounted display device from the extracted image features. The mental state parameters describe the user's mental state and can be used to help determine the user's state category, that is, the user's psychological symptom grade. In some implementations, after the convolutional layers of the analysis model extract the image features of the drawing image, psychological features corresponding to the image features are obtained by mapping the image features into a preset psychological feature space, and the user's mental state parameters are determined from the psychological features.
Step 206: determine the probability of each state category in the analysis model according to the mental state parameters, and output the state category with the largest probability.
In some implementations, after the analysis model of the electronic device extracts the image features of the drawing image, the state category corresponding to the drawing image can be determined directly from the image features: a classifier assigns the image features to the different state categories, which may be preset or learned from the painting data of patients with mental illness when the analysis model is trained. The classifier computes the probability of the image features for each state category and outputs the state category with the largest probability, which is the state category of the user who drew the image. As a specific implementation, in addition to the convolutional layers, the analysis model may include a fully connected layer and a normalization layer, where the fully connected layer assigns the image features to the state categories and the normalization layer acts as the above classifier, computing the probability of the image features for each state category and outputting the state category with the largest probability.
In one embodiment, a Softmax layer (i.e., the normalization layer in the above embodiment) can be provided after the fully connected layer. The Softmax formula can be as follows:
S_i = e^{z_i} / ∑_{j=1}^{n} e^{z_j}, i = 1, ..., n
where n is the number of state categories and z_i is the score of the i-th category produced by the fully connected layer. The Softmax layer maps the image features extracted by the convolutional layers into an n-dimensional probability distribution, and the category with the largest probability is the output of the analysis model.
In some implementations, after the electronic device obtains the user's psychological features from the image features, the probability of each state category in the analysis model is determined from the psychological features. The fully connected layer of the analysis model assigns the psychological features to the state categories, and the probability of the psychological features for each state category is computed based on the normalization layer. As a specific implementation, the Softmax layer of the above embodiment can be used to compute the probability of the psychological features for each state category and output the state category with the largest probability as the user's state category, namely the user's psychological symptom grade.
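A minimal sketch of this fully connected layer plus Softmax normalization layer, assuming a 2048-dimensional feature vector from the convolutional backbone and four state categories; both numbers are assumptions for illustration.

```python
import torch
import torch.nn as nn

FEATURE_DIM = 2048   # assumed size of the feature vector from the convolutional layers
NUM_CATEGORIES = 4   # assumed: normal, mild, moderate, severe

class ClassifierHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(FEATURE_DIM, NUM_CATEGORIES)  # fully connected layer
        self.softmax = nn.Softmax(dim=1)                  # normalization (Softmax) layer

    def forward(self, features: torch.Tensor):
        scores = self.fc(features)           # assign the features to each state category
        probs = self.softmax(scores)         # n-dimensional probability distribution
        return probs, probs.argmax(dim=1)    # category with the largest probability

# Usage sketch on a single batched feature vector:
head = ClassifierHead()
probs, category = head(torch.randn(1, FEATURE_DIM))
```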
In some implementations, other methods can also be used to compute the state category corresponding to the psychological features. For example, after the electronic device obtains the mental state parameters from the psychological features, different weights can be assigned to the different mental state parameters; a mental state value is computed from the value and the weight of each parameter, for example by a weighted sum, and the corresponding state category is determined from the mental state value. The mental state values corresponding to the different state categories can be preset, for example according to cases in a pathology library, and the setting method is not limited here.
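A small sketch of this weighted-sum alternative; the parameter names, weights, and thresholds below are assumptions used only to illustrate mapping mental state parameters to a state category, not values taken from the patent.

```python
# Hedged sketch: weighted sum of mental state parameters mapped to a state category.
# Weights and thresholds are illustrative assumptions.
WEIGHTS = {"tension": 0.3, "frustration": 0.25, "impulsivity": 0.2, "irritability": 0.25}
THRESHOLDS = [(0.25, "normal"), (0.5, "mild"), (0.75, "moderate")]

def state_category(parameters: dict) -> str:
    # Weighted sum of the parameters (each assumed to lie in [0, 1]).
    value = sum(WEIGHTS[name] * parameters.get(name, 0.0) for name in WEIGHTS)
    for upper_bound, category in THRESHOLDS:
        if value < upper_bound:
            return category
    return "severe"

print(state_category({"tension": 0.8, "frustration": 0.6, "impulsivity": 0.4, "irritability": 0.7}))
```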
In this embodiment, high-dimensional image features can be extracted from the drawing image by the analysis model, and the user's mental state parameters and state category are obtained based on the image features. This provides objective and accurate parameters and mental state classification, improves the detection rate of mental states, and reduces the cost of analyzing psychological paintings.
As shown in Fig. 5, in one embodiment, a mental state display method based on mixed reality and a neural network is provided. The method is applicable to the above system architecture and is described from the perspective of the head-mounted display device. It may include the following steps:
Step 510: receive a first display instruction and display a three-dimensional preset virtual scene according to the first display instruction, the preset virtual scene including one or more drawing aids.
The user wears the head-mounted display device. In the embodiments of the present application, the user may be a user who needs psychological assessment. The head-mounted display device can display virtual content in an augmented reality / mixed reality manner, projecting three-dimensional stereoscopic images into three-dimensional space, and the user can interact with the virtual content by gaze, voice, gestures, and the like.
In one embodiment, when the head-mounted display device receives the first display instruction, it displays the three-dimensional preset virtual scene according to the first display instruction, where the first display instruction is used to open the display of the preset virtual scene and can be triggered by the user through gaze, voice, gestures, and the like. The preset virtual scene may be a virtual painting scene that includes a virtual drawing board and one or more drawing aids, and the drawing aids may include different drawing tools and drawing materials.
Step 520: acquire the drawing operation performed with the selected drawing aid, and generate a corresponding drawing image according to the drawing operation.
The user can select the required drawing aid with gestures, a controller, or the like, and perform drawing operations on the displayed virtual drawing board with the drawing aid. The head-mounted display device receives the user's drawing operations and displays the drawn image in real time according to the operations. Augmented reality allows the user to undergo immersive diagnosis and treatment, effectively relieving the patient's psychological pressure and anxiety; painting creation is not constrained by time, place, or other real-world conditions, so the user can express his or her emotions and problems more freely.
In one embodiment, before the head-mounted display device receives the first display instruction, when a second display instruction is received, a virtual guide picture is displayed according to the second display instruction. The second display instruction is used to display a virtual guide scene, and the virtual guide picture may include an introduction to the basic operations of the head-mounted display device, the interaction methods of the subsequent preset virtual scene, and the like. The virtual guide scene can guide the user to learn and master the basic operation methods through animation, text, and the like, and, aided by refined animation effects, relieve the user's sense of resistance and achieve the effect of an immersive consultation.
Step 530: send the drawing image to the electronic device, the drawing image instructing the electronic device to analyze the drawing image with the pre-established analysis model and generate display data.
After the drawing is completed, the head-mounted display device sends the drawing image to the connected electronic device. The electronic device analyzes the drawing image with the pre-established analysis model to obtain the mental state parameters of the user wearing the head-mounted display device, and the analysis model can simultaneously output the user's state category. For the description of the analysis model, refer to the above embodiments, which is not repeated here.
The electronic device can generate display data according to the mental state parameters and the state category and send the display data to the head-mounted display device; the display data may include a symptom description and treatment suggestions. In some implementations, after the electronic device analyzes the drawing image and obtains the user's mental state parameters and state category, it sends them to a client, which presents them to a doctor; the client acquires the entered treatment suggestions, generates corresponding display data based on the treatment suggestions, the user's mental state parameters, and the state category, and sends the display data to the head-mounted display device.
Step 540: receive the display data sent by the electronic device, and display a three-dimensional virtual picture according to the display data.
After the head-mounted display device receives the display data, it generates a corresponding three-dimensional virtual picture according to the display data and displays it. Through the head-mounted display device, the user sees the holographically displayed three-dimensional virtual picture superimposed on the real scene; the user is guided collaboratively through voice, three-dimensional animation, and the like, encouraged in his or her behavior, and led to develop artistic creation in a positive, optimistic direction.
In this embodiment, the head-mounted display device displays a preset virtual scene, the user draws an image in the preset virtual scene, and the electronic device analyzes the drawing image with the analysis model to obtain the user's mental state parameters and state category and generates corresponding display data; the head-mounted display device then displays a three-dimensional virtual picture according to the display data to treat the user. This improves the accuracy of analyzing drawing images and reduces the cost of analysis. In addition, displaying the virtual scene with a head-mounted display device in an augmented reality / mixed reality manner effectively relieves the user's psychological pressure and anxiety.
In one embodiment, a mental state display method based on mixed reality and a neural network is provided and may include the following steps:
Step (1): the head-mounted display device receives a first display instruction and displays a three-dimensional preset virtual scene according to the first display instruction, the preset virtual scene including one or more drawing aids.
In one embodiment, before step (1), the method further includes: the head-mounted display device receives a second display instruction and displays a virtual guide picture according to the second display instruction.
Step (2): the head-mounted display device acquires the drawing operation performed with the selected drawing aid and generates a corresponding drawing image according to the drawing operation.
Step (3): the head-mounted display device sends the drawing image to the electronic device.
Step (4): the electronic device receives the drawing image sent by the head-mounted display device.
Step (5): the electronic device analyzes the drawing image with the pre-established analysis model to obtain the mental state parameters of the user corresponding to the head-mounted display device and outputs a state category, wherein the analysis model is trained on an image dataset of paintings drawn under different mental states.
In one embodiment, before step (4), the method further includes: establishing the analysis model for analyzing drawing images.
In one embodiment, establishing the analysis model for analyzing drawing images includes: the electronic device pre-trains the analysis model with an image database to initialize the parameters of the analysis model, the image database including multiple different image datasets; and the pre-trained analysis model is trained a second time with the image dataset of paintings drawn under different mental states to fine-tune the initialized parameters of the analysis model.
In one embodiment, step (5) includes: the electronic device extracts image features of the drawing image with the pre-established analysis model; obtains the mental state parameters of the user corresponding to the head-mounted display device according to the image features; determines the probability of each state category in the analysis model according to the mental state parameters; and outputs the state category with the largest probability.
In one embodiment, the analysis model includes convolutional layers, a fully connected layer, and a normalization layer. Step (5) includes: the electronic device extracts image features of the drawing image with the convolutional layers; obtains psychological features corresponding to the image features according to the image features and uses the psychological features as the mental state parameters; assigns the psychological features to the state categories with the fully connected layer; computes the probability of the psychological features for each state category based on the normalization layer; and outputs the state category with the largest probability.
In one embodiment, the analysis model includes multiple residual units, each residual unit includes A convolutional layers, and the input of the B-th convolutional layer in the residual unit is used as an input of the (B+C)-th layer, where A is an integer greater than or equal to 3, B is an integer greater than 0 and less than A-1, and C is an integer greater than 1.
Step (6): the electronic device generates display data according to the mental state parameters and the state category.
Step (7): the electronic device sends the display data to the head-mounted display device.
Step (8): the head-mounted display device receives the display data sent by the electronic device and displays a three-dimensional virtual picture according to the display data.
In one embodiment, the above method further includes: the electronic device sends the mental state parameters and the state category to a client, and the client generates display data according to the mental state parameters and the state category and sends the display data to the head-mounted display device.
In this embodiment, the head-mounted display device displays a preset virtual scene, the user draws an image in the preset virtual scene, and the electronic device analyzes the drawing image with the analysis model to obtain the user's mental state parameters and state category and generates corresponding display data; the head-mounted display device then displays a three-dimensional virtual picture according to the display data to treat the user. This improves the accuracy of analyzing drawing images and reduces the cost of analysis. In addition, displaying the virtual scene with a head-mounted display device in an augmented reality / mixed reality manner effectively relieves the user's psychological pressure and anxiety.
It should be understood that although the steps in the above flow diagrams are shown sequentially in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be executed in other orders. Moreover, at least some of the steps in the above flow diagrams may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and the execution order of these sub-steps or stages is not necessarily sequential; they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
As shown in Fig. 6, in one embodiment, a mental state display device 600 based on mixed reality and a neural network is provided, including a receiving module 610, an analysis module 620, a data generation module 630, and a sending module 640.
The receiving module 610 is configured to receive the drawing image sent by the head-mounted display device, the drawing image being drawn in the preset virtual scene displayed by the head-mounted display device.
The analysis module 620 is configured to analyze the drawing image with the pre-established analysis model to obtain the mental state parameters of the user corresponding to the head-mounted display device and output a state category, wherein the analysis model is trained on an image dataset of paintings drawn under different mental states.
The data generation module 630 is configured to generate display data according to the mental state parameters and the state category.
The sending module 640 is configured to send the display data to the head-mounted display device, the display data instructing the head-mounted display device to display a three-dimensional virtual picture according to the display data.
In one embodiment, the mental state display device 600 based on mixed reality and a neural network further includes an establishing module.
The establishing module is configured to pre-train the analysis model with an image database to initialize the parameters of the analysis model, the image database including multiple different image datasets, and to train the pre-trained analysis model a second time with the image dataset of paintings drawn under different mental states to fine-tune the initialized parameters of the analysis model.
In one embodiment, the sending module 640 is further configured to send the mental state parameters and the state category to a client; the mental state parameters and state category instruct the client to generate display data and send the display data to the head-mounted display device.
In this embodiment, the head-mounted display device displays a preset virtual scene, the user draws an image in the preset virtual scene, and the electronic device analyzes the drawing image with the analysis model to obtain the user's mental state parameters and state category and generates corresponding display data; the head-mounted display device then displays a three-dimensional virtual picture according to the display data to treat the user. This improves the accuracy of analyzing drawing images and reduces the cost of analysis. In addition, displaying the virtual scene with a head-mounted display device in an augmented reality / mixed reality manner effectively relieves the user's psychological pressure and anxiety.
In one embodiment, the analysis module 620 includes an extraction unit, an acquisition unit, and an output unit.
The extraction unit is configured to extract image features of the drawing image with the pre-established analysis model.
The acquisition unit is configured to obtain the mental state parameters of the user corresponding to the head-mounted display device according to the image features.
The output unit is configured to determine the probability of each state category in the analysis model according to the mental state parameters and output the state category with the largest probability.
In one embodiment, the analysis model includes convolutional layers, a fully connected layer, and a normalization layer.
The extraction unit is further configured to extract image features of the drawing image with the convolutional layers.
The acquisition unit is further configured to obtain psychological features corresponding to the image features according to the image features and use the psychological features as the mental state parameters.
The output unit is further configured to assign the psychological features to the state categories with the fully connected layer, compute the probability of the psychological features for each state category based on the normalization layer, and output the state category with the largest probability.
In one embodiment, the analysis model includes multiple residual units, each residual unit includes A convolutional layers, and the input of the B-th convolutional layer in the residual unit is used as an input of the (B+C)-th layer, where A is an integer greater than or equal to 3, B is an integer greater than 0 and less than A-1, and C is an integer greater than 1.
In this embodiment, high-dimensional image features can be extracted from the drawing image by the analysis model, and the user's mental state parameters and state category are obtained based on the image features. This provides objective and accurate parameters and mental state classification, improves the detection rate of mental states, and reduces the cost of analyzing psychological paintings.
As shown in Fig. 7, in one embodiment, a mental state display device 700 based on mixed reality and a neural network is provided, including a display module 710, an image generation module 720, and a sending module 730.
The display module 710 is configured to receive a first display instruction and display a three-dimensional preset virtual scene according to the first display instruction, the preset virtual scene including one or more drawing aids.
In one embodiment, the display module 710 is further configured to receive a second display instruction and display a virtual guide picture according to the second display instruction.
The image generation module 720 is configured to acquire the drawing operation performed with the selected drawing aid and generate a corresponding drawing image according to the drawing operation.
The sending module 730 is configured to send the drawing image to the electronic device, the drawing image instructing the electronic device to analyze the drawing image with the pre-established analysis model and generate display data, wherein the analysis model is trained on an image dataset of paintings drawn under different mental states.
The display module 710 is further configured to receive the display data sent by the electronic device and display a three-dimensional virtual picture according to the display data.
In this embodiment, the head-mounted display device displays a preset virtual scene, the user draws an image in the preset virtual scene, and the electronic device analyzes the drawing image with the analysis model to obtain the user's mental state parameters and state category and generates corresponding display data; the head-mounted display device then displays a three-dimensional virtual picture according to the display data to treat the user. This improves the accuracy of analyzing drawing images and reduces the cost of analysis. In addition, displaying the virtual scene with a head-mounted display device in an augmented reality / mixed reality manner effectively relieves the user's psychological pressure and anxiety.
In one embodiment, a mental state display system based on mixed reality and a neural network is provided. The system includes an electronic device and a head-mounted display device that establish a communication connection with each other.
The head-mounted display device is configured to receive a first display instruction and display a three-dimensional preset virtual scene according to the first display instruction, the preset virtual scene including one or more drawing aids; acquire the drawing operation performed with the selected drawing aid and generate a corresponding drawing image according to the drawing operation; and send the drawing image to the electronic device.
The electronic device is configured to receive the drawing image sent by the head-mounted display device, analyze the drawing image with the pre-established analysis model to obtain the mental state parameters of the user corresponding to the head-mounted display device and output a state category, wherein the analysis model is trained on an image dataset of paintings drawn under different mental states; generate display data according to the mental state parameters and the state category; and send the display data to the head-mounted display device.
The head-mounted display device is further configured to receive the display data sent by the electronic device and display a three-dimensional virtual picture according to the display data.
In this embodiment, the head-mounted display device displays a preset virtual scene, the user draws an image in the preset virtual scene, and the electronic device analyzes the drawing image with the analysis model to obtain the user's mental state parameters and state category and generates corresponding display data; the head-mounted display device then displays a three-dimensional virtual picture according to the display data to treat the user. This improves the accuracy of analyzing drawing images and reduces the cost of analysis. In addition, displaying the virtual scene with a head-mounted display device in an augmented reality / mixed reality manner effectively relieves the user's psychological pressure and anxiety.
Fig. 8 is a structural block diagram of the electronic equipment in one embodiment. As shown in Fig. 8, in one embodiment the electronic equipment 20 may be a server, or a terminal device such as a desktop computer or a laptop computer. The electronic equipment 20 may include one or more of the following components: a processor 21 and a memory 23, where one or more application programs may be stored in the memory 23 and configured to be executed by the one or more processors 21, the one or more programs being configured to perform the psychological condition display method based on mixed reality and a neural network described in the above embodiments.
The processor 21 may include one or more processing cores. The processor 21 connects the various parts of the entire electronic equipment 20 through various interfaces and wiring, and performs the various functions of the electronic equipment 20 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 23 and calling the data stored in the memory 23. Optionally, the processor 21 may be implemented in hardware in at least one of the forms of digital signal processing (DSP), field-programmable gate array (FPGA) and programmable logic array (PLA). The processor 21 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem and the like, where the CPU mainly handles the operating system, the user interface, application programs and so on, the GPU is responsible for rendering and drawing display content, and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 21 and may instead be implemented by a separate communication chip.
The memory 23 may include random access memory (RAM) and may also include read-only memory (ROM). The memory 23 may be used to store instructions, programs, code, code sets or instruction sets. The memory 23 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function or an image playback function), and instructions for implementing the method embodiments described above, and the data storage area may store data created by the electronic equipment 20 during use, and so on.
It can be understood that the electronic equipment 20 may include more or fewer structural elements than shown in the above structural block diagram, which is not limited here.
Fig. 9 is a structural block diagram of the head-mounted display device in one embodiment. As shown in Fig. 9, in one embodiment the head-mounted display device 10 may include the following components: a housing (not shown), a processor 11, a memory 13, a display apparatus 15 and an image collection apparatus 17. The image collection apparatus 17 may be provided on the housing, and the processor 11, the memory 13 and the display apparatus 15 may each be connected to the housing.
In one embodiment, the image collection apparatus 17 may be used to collect images of the real world, including images of the real scene, gesture images of the user and the like. The image collection apparatus 17 may be an infrared camera or a color camera; the specific camera type is not limited in the embodiments of the present application. The image collection apparatus 17 may also be used to capture eyeball images of the user so as to perform eye-movement tracking on the user.
The memory 13 stores one or more computer programs, which are configured to be executed by the one or more processors 11 and can perform the following steps:
receiving a first display instruction, and controlling the display apparatus to display a three-dimensional preset virtual scene according to the first display instruction, where the preset virtual scene includes one or more drawing aid tools;
obtaining a drawing operation performed with the selected drawing aid tool, and generating a corresponding drawing image according to the drawing operation;
sending the drawing image to the electronic equipment, where the drawing image instructs the electronic equipment to analyze the drawing image through a pre-established analysis model and generate display data, the analysis model being trained on an image data set of paintings drawn under different mental states; and
receiving the display data sent by the electronic equipment, and displaying a three-dimensional picture through the display apparatus according to the display data.
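As an illustrative aside, the step sequence above can be sketched as a simple device-side routine; render_scene, capture_drawing, send_to_device, receive_display_data and render_3d_picture are hypothetical placeholders for whatever runtime the head-mounted display device 10 actually provides, and are not names used by this application.

```python
# Hypothetical sketch of the head-mounted-display-side flow listed above.
# All methods on `runtime` are assumed placeholders for the device runtime.
def run_drawing_session(runtime):
    runtime.render_scene("preset_virtual_scene")   # step 1: display the preset scene with drawing aid tools
    drawing = runtime.capture_drawing()            # step 2: drawing operations -> drawing image
    runtime.send_to_device(drawing)                # step 3: send the drawing image to the electronic equipment
    display_data = runtime.receive_display_data()  # step 4: analysis result returned as display data
    runtime.render_3d_picture(display_data)        # step 5: display the three-dimensional picture
```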
It can be understood that the head-mounted display device 10 may include more or fewer structural elements than shown in the above structural block diagram, which is not limited here.
In one embodiment, a computer-readable storage medium is also provided, on which a computer program is stored; when the computer program is executed by a processor, the psychological condition display method based on mixed reality and a neural network described in the above embodiments is implemented.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing related hardware. The program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM) or the like.
Any reference to a memory, storage, database or other medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM), which is used as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as such combinations are not contradictory, they should all be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, and such modifications and improvements all fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (10)

1. A psychological condition display method based on mixed reality and a neural network, characterized by comprising:
receiving a drawing image sent by a head-mounted display device, the drawing image being generated by drawing in a preset virtual scene displayed by the head-mounted display device;
analyzing the drawing image through a pre-established analysis model, obtaining psychological state parameters of a user corresponding to the head-mounted display device, and outputting a state category, wherein the analysis model is trained on an image data set of paintings drawn under different mental states;
generating display data according to the psychological state parameters and the state category; and
sending the display data to the head-mounted display device, the display data instructing the head-mounted display device to display a three-dimensional picture according to the display data.
2. The method according to claim 1, characterized in that, before receiving the drawing image sent by the head-mounted display device, the method further comprises:
establishing the analysis model for analyzing drawing images, comprising:
pre-training the analysis model using an image database to initialize the parameters in the analysis model, the image database comprising a plurality of different image data sets; and
performing secondary training on the pre-trained analysis model using the image data set of paintings drawn under different mental states, so as to fine-tune the initialized parameters of the analysis model.
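Outside the claims, the two-stage training of claim 2 can be sketched as follows, assuming a PyTorch model and standard data loaders; the learning rates, epoch counts and the assumption that the model outputs raw class scores during training are illustrative choices, not requirements of the claim (in practice the classification head may also be replaced between the two stages).

```python
# Sketch of claim 2: pre-train on a generic image database to initialize the
# parameters, then fine-tune on paintings drawn under different mental states.
# Hyperparameters are illustrative assumptions only.
from torch import nn, optim


def train(model, loader, lr, epochs, device="cpu"):
    """One training stage; `loader` yields (image batch, label batch) pairs."""
    model.to(device).train()
    criterion = nn.CrossEntropyLoss()  # assumes the model returns raw class scores here
    optimizer = optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()


def pretrain_then_finetune(model, generic_loader, paintings_loader):
    # Stage 1: pre-training initializes the parameters of the analysis model.
    train(model, generic_loader, lr=1e-3, epochs=10)
    # Stage 2: secondary training fine-tunes the initialized parameters
    # on the paintings data set, with a smaller learning rate.
    train(model, paintings_loader, lr=1e-4, epochs=5)
```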
3. The method according to claim 1, characterized in that analyzing the drawing image through the pre-established analysis model, obtaining the psychological state parameters of the user corresponding to the head-mounted display device, and outputting the state category comprises:
extracting image features of the drawing image through the pre-established analysis model;
obtaining the psychological state parameters of the user corresponding to the head-mounted display device according to the image features; and
determining the probability of each state category in the analysis model according to the psychological state parameters, and outputting the state category with the highest probability.
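As an illustrative note only, the three steps of claim 3 map onto a short inference routine such as the following; backbone and classifier_head are assumed sub-modules of the analysis model introduced for this sketch, not terms from the claim.

```python
# Sketch of claim 3: image features -> psychological state parameters ->
# probability per state category -> output the most probable category.
import torch


def classify_drawing(backbone, classifier_head, drawing, categories):
    with torch.no_grad():
        features = backbone(drawing.unsqueeze(0))               # extract image features of the drawing
        params = torch.flatten(features, 1)                     # taken here as the psychological state parameters
        probs = torch.softmax(classifier_head(params), dim=1)   # probability of each state category
    return params.squeeze(0), categories[int(probs.argmax())]   # output the category with the highest probability
```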
4. The method according to claim 1, characterized in that the analysis model comprises convolutional layers, a fully connected layer and a normalization layer;
analyzing the drawing image through the pre-established analysis model, obtaining the psychological state parameters of the user corresponding to the head-mounted display device, and outputting the state category comprises:
extracting image features of the drawing image through the convolutional layers;
obtaining psychological features corresponding to the image features according to the image features, and taking the psychological features as the psychological state parameters;
assigning the psychological features to each state category through the fully connected layer; and
calculating, based on the normalization layer, the probability of the psychological features belonging to each state category, and outputting the state category with the highest probability.
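For illustration only, a minimal analysis model containing the layer types named in claim 4 could look like the sketch below; the channel counts, kernel sizes, input resolution and the choice of five state categories are assumptions made for the example.

```python
# Minimal sketch of claim 4: convolutional layers extract image features, a fully
# connected layer assigns the psychological features to state categories, and a
# normalization (softmax) layer yields per-category probabilities.
from torch import nn


class AnalysisModel(nn.Module):
    def __init__(self, num_categories=5):
        super().__init__()
        self.conv = nn.Sequential(                       # convolutional layers: image features
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, num_categories)  # fully connected layer: features -> categories
        self.norm = nn.Softmax(dim=1)                    # normalization layer: per-category probabilities

    def forward(self, x):
        features = self.conv(x).flatten(1)               # psychological features / state parameters
        return self.norm(self.fc(features))              # probability of each state category
```

Calling probs = AnalysisModel()(image_batch) and then probs.argmax(dim=1) outputs the state category with the highest probability, mirroring the last step of the claim.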
5. The method according to claim 4, characterized in that the analysis model comprises a plurality of residual units, each residual unit comprises A convolutional layers, and the input of the B-th convolutional layer in the residual unit also serves as the input of the (B+C)-th layer, where A is an integer greater than or equal to 3, B is an integer greater than 0 and less than A-1, and C is an integer greater than 1.
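One plausible (assumed) reading of this residual unit, shown here for A = 3, B = 1 and C = 2, is a standard residual connection in which the input of the first convolutional layer is carried forward and added into the input of the third convolutional layer; interpreting "used as the input" as such an addition is an assumption made for this sketch.

```python
# Sketch of a residual unit per claim 5 with A = 3 convolutional layers, B = 1,
# C = 2: the input of layer 1 also feeds (via addition) the input of layer 3.
from torch import nn


class ResidualUnit(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)  # layer B = 1
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)  # layer B + C = 3
        self.relu = nn.ReLU()

    def forward(self, x):
        skip = x                             # input of layer 1
        out = self.relu(self.conv1(x))
        out = self.relu(self.conv2(out))
        out = self.conv3(out + skip)         # input of layer 3 = output of layer 2 + input of layer 1
        return self.relu(out)
```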
6. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
sending the psychological state parameters and the state category to a client, the psychological state parameters and the state category instructing the client to generate display data and send the display data to the head-mounted display device.
7. A psychological condition display method based on mixed reality and a neural network, applied to a head-mounted display device, characterized by comprising:
receiving a first display instruction, and displaying a three-dimensional preset virtual scene according to the first display instruction, the preset virtual scene comprising one or more drawing aid tools;
obtaining a drawing operation performed with the selected drawing aid tool, and generating a corresponding drawing image according to the drawing operation;
sending the drawing image to electronic equipment, the drawing image instructing the electronic equipment to analyze the drawing image through a pre-established analysis model and generate display data, wherein the analysis model is trained on an image data set of paintings drawn under different mental states; and
receiving the display data sent by the electronic equipment, and displaying a three-dimensional picture according to the display data.
8. A psychological condition display system based on mixed reality and a neural network, characterized in that the system comprises electronic equipment and a head-mounted display device, and a communication connection is established between the electronic equipment and the head-mounted display device;
the head-mounted display device is configured to receive a first display instruction, and display a three-dimensional preset virtual scene according to the first display instruction, the preset virtual scene comprising one or more drawing aid tools; obtain a drawing operation performed with the selected drawing aid tool, and generate a corresponding drawing image according to the drawing operation; and send the drawing image to the electronic equipment;
the electronic equipment is configured to receive the drawing image sent by the head-mounted display device; analyze the drawing image through a pre-established analysis model, obtain psychological state parameters of a user corresponding to the head-mounted display device, and output a state category, wherein the analysis model is trained on an image data set of paintings drawn under different mental states; generate display data according to the psychological state parameters and the state category; and send the display data to the head-mounted display device; and
the head-mounted display device is further configured to receive the display data sent by the electronic equipment, and display a three-dimensional picture according to the display data.
9. Electronic equipment, comprising a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor is caused to implement the method according to any one of claims 1 to 6.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN201910290677.6A 2019-04-11 2019-04-11 Psychological condition display methods and device based on mixed reality and neural network Pending CN110096145A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910290677.6A CN110096145A (en) 2019-04-11 2019-04-11 Psychological condition display methods and device based on mixed reality and neural network

Publications (1)

Publication Number Publication Date
CN110096145A true CN110096145A (en) 2019-08-06

Family

ID=67444733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910290677.6A Pending CN110096145A (en) 2019-04-11 2019-04-11 Psychological condition display methods and device based on mixed reality and neural network

Country Status (1)

Country Link
CN (1) CN110096145A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105708436A (en) * 2016-04-18 2016-06-29 深圳竹信科技有限公司 Mental regulation method and device
CN106447042A (en) * 2016-08-31 2017-02-22 广州瑞基信息科技有限公司 Psychoanalysis method and apparatus based on drawing projection
CN107280693A (en) * 2017-06-20 2017-10-24 国网技术学院 Psychoanalysis System and method based on VR interactive electronic sand tables
CN107799165A (en) * 2017-09-18 2018-03-13 华南理工大学 A kind of psychological assessment method based on virtual reality technology
CN108038414A (en) * 2017-11-02 2018-05-15 平安科技(深圳)有限公司 Character personality analysis method, device and storage medium based on Recognition with Recurrent Neural Network
CN108392213A (en) * 2018-03-27 2018-08-14 北京态极科技有限公司 Psychoanalysis method and device based on drawing psychology
CN109102002A (en) * 2018-07-17 2018-12-28 重庆大学 In conjunction with the image classification method of convolutional neural networks and conceptual machine recurrent neural network
CN109215804A (en) * 2018-10-09 2019-01-15 华南理工大学 Mental disorder assistant diagnosis system based on virtual reality technology and physio-parameter detection
CN109543749A (en) * 2018-11-22 2019-03-29 云南大学 Drawing sentiment analysis method based on deep learning
CN109584992A (en) * 2018-11-22 2019-04-05 段新 Exchange method, device, server, storage medium and sand play therapy system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
林封笑等: "基于混合结构卷积神经网络的目标快速检测算法" [Fast target detection algorithm based on a hybrid-structure convolutional neural network], 《计算机工程》 (Computer Engineering) *
段建等: "深度卷积神经网络在Caltech-101图像分类中的相关研究" [Research on deep convolutional neural networks for Caltech-101 image classification], 《计算机应用与软件》 (Computer Applications and Software) *
裴颂文等: "网中网残差网络模型的表情图像识别研究" [Facial expression image recognition with a network-in-network residual network model], 《小型微型计算机系统》 (Journal of Chinese Computer Systems) *
陈宝权等: "混合现实中的虚实融合与人机智能交融" [Virtual-real fusion and human-computer intelligent integration in mixed reality], 《中国科学:信息科学》 (Scientia Sinica Informationis) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402354A (en) * 2020-03-16 2020-07-10 浙江大学 Color contrast enhancement drawing method, device and system suitable for optical transmission type head-mounted display
CN112270281A (en) * 2020-11-02 2021-01-26 深圳市商汤科技有限公司 User psychology analysis system, method, apparatus and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190806)