CN109977101B - Method and system for enhancing memory - Google Patents


Info

Publication number
CN109977101B
CN109977101B (application CN201910221663.9A)
Authority
CN
China
Prior art keywords
information
user
emotion
image
information database
Prior art date
Legal status
Active
Application number
CN201910221663.9A
Other languages
Chinese (zh)
Other versions
CN109977101A (en)
Inventor
安宁
颉云华
Current Assignee
Gansu Baihe Iot Technology Information Co ltd
Original Assignee
Gansu Baihe Iot Technology Information Co ltd
Priority date
Filing date
Publication date
Application filed by Gansu Baihe Iot Technology Information Co ltd
Priority to CN201910221663.9A
Publication of CN109977101A
Application granted
Publication of CN109977101B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition
    • G06V40/176: Dynamic expression

Abstract

A method of enhancing memory, comprising the steps of: establishing an original information database so that the user performs an initial review of the original information; and having a trigger information database extract and generate a recommendation of at least one piece of original information in the original information database, so that the user reviews the original information again. The trigger condition types in the trigger information database include at least an emotion trigger, which is a recommendation of at least one piece of original information generated by the trigger information database based on the user's mood coefficient Y, the mapping relation between the emotion trigger and the psychological state data of the associated information database, and the mapping relation between the psychological state data and the original information. By combining the user's emotion information and date information, the invention intelligently extracts photos for the user to view, which can stimulate and improve the user's memory, effectively predict the user's emotional state, and has a positive effect on stimulating the user's memory and improving the user's mood.

Description

Method and system for enhancing memory
This application is a divisional application of invention application No. CN201610349270.2, filed on May 24, 2016, entitled "Method for constructing a memory chain and using the memory chain to strengthen memory".
Technical Field
The invention relates to a method for selectively presenting associated information based on mood data, and in particular to a method and system for enhancing memory.
Background
Cognitive function is the psychological function by which the human brain recognizes and reflects objective things, including perception, learning and memory, attention, language, and thinking. Cognitive function declines with age; typical manifestations include deteriorating memory as well as declining hearing, vision, and mobility, all of which can seriously affect quality of life. Senile dementia usually presents as a serious loss of cognitive ability and memory, along with a marked decline in reasoning and judgment. Reminiscence (nostalgia) therapy has proven to be an effective approach in the treatment and care of mild cognitive impairment and related conditions; compared with traditional drug therapy, it can reduce fear of and dependence on drugs, as well as the secondary harm caused by drug side effects.
Chinese patent CN104571500A discloses a method for strengthening memory, comprising: displaying content to be memorized to the user; receiving information input by the user for that content; judging, according to the input information, whether the content requires enhanced memorization; and, when it does, controlling its display to the user within a preset time period. That patent periodically displays the content to be reinforced within a preset time so as to strengthen memory. However, it cannot construct a memory network according to the user's psychological changes, cannot adjust the memorized content according to the user's psychological reaction to it, and cannot let the user strengthen memory in a pleasant psychological environment. The market urgently needs a method and product for enhancing the memory of users in different psychological states.
Chinese patent CN102222422A discloses a vocabulary-learning assistance system and method that displays translated vocabulary for the user to browse and provides a confirmation window for the user to set the memory state. When the user has not memorized a word, its interpretation and associated vocabulary are displayed; when the user has memorized it, a vocabulary test is generated to measure the user's familiarity with it, and the translated vocabulary in the test database is updated according to the result, so that interaction follows the user's learning state and vocabulary-learning efficiency improves. That patent does not extract recommendations of original information from at least one original information database by generating a mood coefficient for the user. Moreover, in the first office action on the parent application of the present invention, the examiner raised no inventive-step objection against the technical feature that the emotion trigger is a recommendation of original information extracted by the trigger information database based on the user's mood coefficient Y, the mapping relation between the emotion trigger and the psychological state data of the associated information database, and the mapping relation between the psychological state data and the original information; this feature is therefore not disclosed by any existing patent document and is inventive.
Disclosure of Invention
In view of the deficiencies of the prior art, the present invention provides a method for constructing a memory network and using the memory network to enhance memory, the method at least comprising the following steps:
establishing an original information database, updating data of original information according to different psychological states of a user when the user checks the original information in the original information database, storing the updated data in an associated information database, and establishing a trigger information database;
the triggering information database extracts and generates recommendation of the original information in at least one original information database based on triggering information provided by the user and data information of the associated information database so as to assist the user in establishing a memory network.
According to a preferred embodiment, the trigger condition in the trigger information database is mapped with the mental state data of the associated information database and/or the original information of the original information database.
According to a preferred embodiment, the triggering condition types in the triggering information database at least comprise a time trigger, a place trigger, a person trigger, a content trigger and an emotion trigger;
and the emotion trigger is a recommendation for generating original information in at least one original information database by extracting the triggering information database based on the mood coefficient Y of the user, the mapping relation between the emotion trigger and the psychological state data of the associated information database and the mapping relation between the psychological state data and the original information.
According to a preferred embodiment, the mood coefficient Y is a probabilistic prediction of the user's current emotion information based on the user's historical emotion information, the current emotion information being p(s | E_{t-1}, E_{t-2}, …, E_{t-n}),
where s denotes the user's current emotion information, E_{t-1} is the emotion information at the previous time, E_{t-n} is the emotion information at the n-th previous time, and E_{t-i} denotes the emotion information at the i-th time before the current time t.
According to a preferred embodiment, the mood coefficient Y is analyzed on the basis of the historical emotion information of the n adjacent times. Let t denote the current time and t_i the time point of the i-th adjacent emotion, whose emotion value is e_{t-i}; the time difference between t and t_i is ρt = t − t_i. The emotion value of e_{t-i} at the current time is then E_{t-i} = e_{t-i}(t) = f(e_{t-i}, ρt) = e_{t-i} · exp(−ρt/(24 × 60)). This formula accounts for the exponential time decay of emotion values; the initial value of e_{t-i} is 1;
according to the historical emotion values of the past n times, the predicted mood coefficient is calculated by the following formula:
Y = max_{s∈S} p(s | E_{t-1}, E_{t-2}, …, E_{t-n}), where the emotion set is S = {happy, sad, fear, surprised, anger, jealousy};
if the change of the mood coefficient satisfies the Markov property, then Y = max_{s∈S} p(s | E_{t-1}), i.e. the current mood depends only on the mood at the last time point.
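The mood-coefficient computation can be sketched in Python. This is a minimal illustration under stated assumptions, not the patented implementation: the conditional probability p(s | E_{t-1}, …, E_{t-n}) is approximated here by decay-weighted voting over the historical emotion labels, and all function and variable names are illustrative.

```python
import math

EMOTIONS = ["happy", "sad", "fear", "surprised", "anger", "jealousy"]

def decayed_value(e, minutes_ago):
    """Apply the exponential time decay e * exp(-rho_t / (24*60)),
    with rho_t measured in minutes, as in the formula above."""
    return e * math.exp(-minutes_ago / (24 * 60))

def predict_mood(history):
    """history: list of (emotion_label, minutes_ago) pairs, newest first.
    Stands in for max_{s in S} p(s | E_{t-1}, ..., E_{t-n}) by scoring
    each emotion with the decayed values of its past occurrences."""
    scores = {s: 0.0 for s in EMOTIONS}
    for label, minutes_ago in history:
        scores[label] += decayed_value(1.0, minutes_ago)  # initial emotion value is 1
    return max(scores, key=scores.get)

def predict_mood_markov(history):
    """Under the Markov assumption the current mood depends only on the
    most recent observation."""
    return history[0][0]

history = [("happy", 30), ("sad", 600), ("happy", 1500)]
print(predict_mood(history))         # recent observations dominate the score
print(predict_mood_markov(history))
```

Note the decay horizon of 24 × 60 minutes: an emotion observed a full day ago still contributes exp(−1) ≈ 0.37 of its original weight.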
According to a preferred embodiment, a recommendation of the original information in the at least one original information database is generated by extraction based on the predicted value of the mood coefficient Y.
According to a preferred embodiment, the data of the associated information database is different psychological state data when the user views the original information in the original information database, and the psychological state data can be obtained by recording and analyzing the facial expression of the user.
According to a preferred embodiment, the mental state data further comprises mental state data actively entered by the user based on the original information, the mental state data comprises positive and negative states, and a mapping relation is established between the mental state data and the corresponding original information.
According to a preferred embodiment, the step of enhancing memory comprises at least: establishing an original information database, in which the user enters the basic original information and the related information describing it, thereby performing an initial review of the original information;
based on the trigger information provided by the user and the data of the associated information database, the trigger information database extracts and generates a recommendation of original information in at least one original information database, so that the user reviews the original information again;
based on the depth of the user's impression of the original information, a reminder display of the original information is presented so that the user reviews it once more, achieving the purpose of strengthening the user's memory.
According to a preferred embodiment, the original information of the original information database comprises basic original information and related information describing the basic original information, wherein the basic original information is at least one of images or sounds, and the related information describing the basic original information is at least additional description of one of time, place, people and content of the basic original information.
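The structure of one original-information entry described above can be modeled as follows; the Python dataclass and its field names are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class OriginalInfo:
    """One entry of the original information database: basic original
    information (an image or sound) plus additional descriptions of its
    time, place, people, and content. Field names are hypothetical."""
    media_path: str                       # the image or sound file
    media_kind: str = "image"             # "image" or "sound"
    time: Optional[str] = None            # additional descriptions below
    place: Optional[str] = None
    people: list = field(default_factory=list)
    content: Optional[str] = None

entry = OriginalInfo(
    media_path="photos/family_2016.jpg",  # hypothetical path
    time="2016-05-24",
    place="Lanzhou",
    people=["grandmother"],
    content="birthday dinner",
)
print(entry.media_kind)
```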
According to yet another aspect of the present invention, a method for constructing a memory network and using it to enhance memory is provided: a multimedia information base is established, and the associated information of each image is created and updated according to the psychological information of the test subject when viewing that image in the multimedia information base, so as to build a memory association base for the images and thereby establish a memory network for the test subject.
According to a preferred embodiment, the test subject actively or passively feeds back psychological information generated for the images, including memory information and emotion information, when viewing the respective images in the multimedia information base.
According to a preferred embodiment, the psychological information fed back by the test object is mapped and associated with the image to form associated information of the image, so that a memory association library for storing the associated information is established.
According to a preferred embodiment, the associated information further includes information on the degree of preference of the test object for the image, information on the number of times of sharing, and memory information associated with the image and expressed by the test object in a form of voice, picture, and/or text.
According to a preferred embodiment, image information for adjusting the psychological state of the test subject to a positive psychological state is selected and displayed based on the correlation information and the emotional information of the test subject.
According to a preferred embodiment, the emotion information is analyzed in a probabilistic manner based on the historical emotion information of the test subject, the emotion information Y being
Y = max_{s∈S} p(s | E_{t-1}, E_{t-2}, …, E_{t-n}),
where the emotion set is S = {S_1, S_2, …, S_i}, S_i denotes an emotion category, E_{t-n} denotes the n-th previous piece of historical emotion information with n ≥ 1, and the emotion information Y of the test subject is obtained by analyzing E_{t-n}.
According to a preferred embodiment, the historical emotion information E_{t-n} decays over time, i.e.
E_{t-n} = e_{t-n} · exp(−ρt/(24 × 60)),
where e_{t-n} denotes the emotion value at the n-th previous time, and ρt denotes the time difference between the time t_n at which that emotion occurred and the present time t.
According to a preferred embodiment, the test subject feeds back the emotional information when viewing the image in the form of selectable buttons.
According to a preferred embodiment, the emotional information of the test object when viewing each image in the multimedia information base is analyzed in a mode of collecting the micro expression of the test object.
According to a preferred embodiment, the multimedia information base comprises an image base for storing images and a related information base which establishes a mapping relation with the image base and is used for storing image related information, and the image information is displayed in a mode that the images are combined with the corresponding related information.
The invention has the beneficial technical effects that:
the memory network construction and the method for enhancing the memory by using the memory network construction can predict the psychological information of the user according to the psychological state of the original information checked by the user, and adjust the displayed image information according to the predicted emotional information, so that the memory enhancing effect is achieved by the user in the positive psychological state. The invention selects and displays the image information with positive effect by analyzing the psychological effect of the image information on the user, thereby adjusting the psychological state of the user and leading the life of the user to be healthier.
The method is suitable for strengthening the memory of students, adults and the elderly. In particular, for elderly people whose memory is gradually declining, it can reinforce warm and happy memories while discarding negative ones, helping them enjoy their later years.
Drawings
FIG. 1 is a schematic logic diagram of the method of the present invention;
FIG. 2 is a schematic diagram of a preferred manner of recording psychological information by a test subject in accordance with the invention;
FIG. 3 is a graphical representation of the mood of a test subject of the present invention over time;
FIG. 4 is a schematic diagram of a preferred mode of displaying picture information in a time axis manner according to the present invention; and
FIG. 5 is a block diagram of a system for constructing a memory network and using it to enhance memory.
Detailed Description
The following detailed description is made with reference to the accompanying drawings and examples.
In the invention, the memory chain is a dynamic memory chain formed by arranging a plurality of image information formed after adjustment according to the psychological information of the test object along with time information. The plurality of memory chains form a memory network. The memory network carries memory information that has a positive psychological effect on the test subject. The memory network can be a one-dimensional memory network, a two-dimensional memory network or a three-dimensional memory network. Even more, the dimensionality of the memory network can be increased to form a multi-dimensional memory network.
The invention provides a method for constructing a memory network and using the memory network to strengthen memory, which at least comprises the following steps: establishing an original information database, updating data of the original information according to different psychological states of a user when the user checks the original information in the original information database, storing the updated data in an associated information database, and establishing a trigger information database; the triggering information database extracts and generates recommendation of the original information in at least one original information database based on triggering information provided by the user and data information of the associated information database so as to assist the user in establishing a memory network.
The memory network comprises a multidimensional network. The multidimensional network includes at least a time dimension and a space dimension in terms of longitude, latitude, and altitude. The user can complete the construction of the memory network based on the time information and the space information of the original information, thereby assisting the user to strengthen the related memory of the related original information.
Users in the present invention include patients with impaired memory, adults, children, learners, and robots (AIs).
The original information of the original information database in the invention comprises basic original information and additional original information, wherein the basic original information is at least one of image or sound, and the additional original information is at least additional description of one of time, place, person and content of the basic original information. The images include still images, moving images, and video images.
The psychological information of the present invention includes mood and emotion. Mood refers to an emotional state that is unspecified, general, and capable of widely affecting cognition and behavior. The emotion refers to the experience of the attitude of the external object generated along with the cognitive and consciousness processes, is the reaction of the human brain to the relationship between the objective external object and the main body demand, and is a psychological activity mediated by the individual demand.
The present invention forms a user's unique memory network by associating images and their related information with the user's psychological information, and selectively displays images and related information capable of stimulating positive psychological effects according to changes in the user's psychological information, so that the user can strengthen memory in positive psychological states such as pleasure and warmth.
Example 1
This example further illustrates the present invention in detail.
As shown in FIG. 1, a method for constructing a memory network and using it to enhance memory includes the steps of:
S1: establishing an original information database;
S2: establishing an associated information database related to the corresponding images;
S3: establishing a trigger information database to realize the recommendation of at least one piece of original information, so as to assist the user in building a memory network.
Step S1: establishing an original information database.
The established original information database comprises a basic original information database and a related information database. The basic original information in the basic original information base can be information such as images, sound, characters and the like. This embodiment will be described with basic original information as an image. The basic original information base is used for storing images related to the memory information. The related information base is used for storing character information, graphic information and marking information related to each image.
The basic original information base and the related information base are associated together in a mapping mode. Each image in the basic original information base corresponds to at least one piece of relevant information in the relevant information base. The related information comprises character information describing the content of the image, the time, the place, the content of the event, the related person, the name and the identity of the person. The related information can be input by a user or related personnel in the process of acquiring the images, and can also be input by the user according to the memory information in the process of viewing each image in the graphic information base by the user. According to a preferred embodiment, the related information further includes voice information and graphic information recorded and inputted by the user or related person in a voice, graphic or text manner.
According to a preferred embodiment, the user and the associated personnel label specific labels for images of particular significance and importance. The special label is convenient for a user to view and memorize the image emphasis in the process of viewing each image. The special label may be a graphic having a plurality of colors. The different colors represent the degree of importance of the image. Different shapes of the graphics represent different special meanings. For example, red indicates very important to the user and blue indicates general importance to the user. The pentagram graph represents a family related image and the circle graph represents a friend related image.
And collecting and storing the image related to the memory information and the related information thereof to a basic original information base by a user or related personnel.
According to a preferred embodiment, the manner of capturing the image comprises taking a picture or image by the camera device and storing the picture or image as an image in the basic raw information base. The mode of collecting the image also comprises the step of transmitting and storing the image in the form of electronic digital code into the basic original information base in a wired or wireless mode.
According to a preferred embodiment, the user or the related person can also modify the related information corresponding to the images in the basic original information base, correct errors in the related information or supplement the related information. The related information corresponding to the graphic may be erroneous at the time of the first input. And the user or related personnel can modify the error in the related information in an identity verification manner in the subsequent viewing process. The authentication mode comprises password authentication, voice authentication, fingerprint authentication, lip print authentication and pupil authentication.
According to a preferred embodiment, the images in the basic original information base and the corresponding related information can be displayed simultaneously. When the user views the image, the image is displayed in the screen simultaneously with its related information.
According to a preferred embodiment, the related information corresponding to the image is displayed based on a display instruction issued by the user. The related information may be displayed in the form of a label. When the user remembers the content of the image is unclear, the user clicks a label representing the related information, and the related information is displayed based on a clicked instruction. Preferably, the name of the image of the person in the related information and the identity relative to the user are marked on the image. Preferably, the related information includes identification information of the person or the animal, which is tagged next to the copied head portrait of the face image of the person or the animal. The reproduction avatar is arranged around the image.
The basic original information base is arranged in a storage module of the display device, while the related information base is arranged on a remote server. Because images occupy a large amount of memory, placing them on a remote server would require long caching during transmission; storing them in the display device's storage module allows them to be displayed quickly and reduces transmission time and load. The related information occupies little space and creates no significant transmission pressure, so placing it on the remote server has no adverse effect. The images and the related information are stored in the basic original information base and the related information base, respectively, with a corresponding mapping relation between them. When image information needs to be displayed, the image and its related information are extracted from their respective bases and combined into complete image information for display to the user.
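The split-storage arrangement can be sketched as follows, with in-memory dictionaries standing in for the device's local storage module and the remote related-information server; all identifiers here are hypothetical.

```python
# Hypothetical stores: images live on the device, related info on a server.
LOCAL_IMAGE_STORE = {"img001": "<jpeg bytes>"}
REMOTE_RELATED_INFO = {"img001": {"time": "2016-05-24", "place": "Lanzhou"}}

def compose_display(image_id):
    """Pull the image from the local store and its related information
    from the remote store, then merge both into one record for display,
    mirroring the mapping relation described above."""
    image = LOCAL_IMAGE_STORE.get(image_id)
    if image is None:
        raise KeyError(f"image {image_id} not in the local store")
    related = REMOTE_RELATED_INFO.get(image_id, {})
    return {"image": image, **related}

record = compose_display("img001")
print(record["place"])
```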
When viewing images in the basic original information base, the user can forward the image information to a web page or social software that at least one friend is allowed to browse, thereby sharing it with friends.
The user may view the images in the basic original information base in at least one way. If the user does not restrict the content, the image information is viewed in order of the time at which the events in the images occurred. If the user specifies a time period, the image information within that period can be viewed; if the user specifies image content, a location, or a person, the image information related to that content, location, or person can be viewed.
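The viewing modes listed above amount to filtering by time period, content, place, or person, with a chronological default. A sketch, assuming a simple dictionary schema for each entry (the schema and names are illustrative):

```python
def select_images(entries, time_range=None, content=None, place=None, person=None):
    """Filter image entries as described above: no restriction gives a
    chronological view; otherwise narrow by the given time period,
    content, place, or person."""
    result = entries
    if time_range is not None:
        lo, hi = time_range
        result = [e for e in result if lo <= e["time"] <= hi]
    if content is not None:
        result = [e for e in result if content in e["content"]]
    if place is not None:
        result = [e for e in result if e["place"] == place]
    if person is not None:
        result = [e for e in result if person in e["people"]]
    return sorted(result, key=lambda e: e["time"])  # order by event time

entries = [
    {"time": "2016-05-24", "content": "birthday dinner", "place": "Lanzhou", "people": ["grandmother"]},
    {"time": "2015-10-01", "content": "holiday trip", "place": "Dunhuang", "people": ["friend"]},
]
print([e["time"] for e in select_images(entries)])
```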
Step S2: and establishing an associated information database. And updating the original information according to different psychological states when the user looks at the original information in the original information database, wherein the updated data describes the emotional colors represented by the original information, such as positive emotions or negative emotions, and is stored in the associated information database.
The associated information refers to the information associated with each image in the basic original information base. When a user views an image, the user's mind and mood fluctuate; the psychological information generated at that moment is therefore the information associated with the image, i.e. the associated information. It mainly comprises the user's mood information, emotion-change information, degree of liking of the image, number of "like" endorsements, number of shares, and number of "dislike" responses associated with the image. The associated information is either psychological information actively fed back by the user when viewing each image in the basic original information base, or psychological information obtained by monitoring the user's micro-expressions with related equipment. A corresponding mapping relation is established between the associated information and the image. Preferably, mood fluctuations arise as the user views the images, changing the user's emotion. Emotions include positive emotions, such as peace and joy, and negative emotions, such as fear and sadness; which categories count as positive or negative differs from user to user.
According to a preferred embodiment, the user's emotion category is obtained by micro-expression analysis that monitors the user's face. This approach mainly applies to people with facial micro-expressions. As the user views the images in the basic original information base, emotional expressions form micro-expressions on the user's face. Data analysis of these micro-expressions reveals whether the user's emotion is positive or negative. The micro-expressions can express at least happiness, sadness, fear, anger, disgust, surprise, and contempt: happiness and surprise are positive emotions, while sadness, fear, anger, disgust, and contempt are negative. Micro-expressions can also express thoughtfulness and calm.
Happy facial movements include: the mouth corners are raised, the cheeks are lifted and wrinkled, the eyelids contract, and "crow's feet" form at the outer corners of the eyes;
sad facial movements include: squinting, tightened eyebrows, mouth corners pulled down, and the chin lifted or tightened;
fearful facial movements include: the mouth and eyes open, the eyebrows rise, and the nostrils enlarge;
angry facial movements include: the eyebrows drop, the forehead wrinkles, and the eyelids and lips tense;
disgusted facial movements include: the upper lip lifts, the eyebrows droop, and the eyes squint;
surprised facial movements include: the jaw sags, the lips and mouth relax, the eyes widen, and the eyelids and eyebrows rise slightly;
contemptuous facial movements include: one corner of the mouth lifts in a sneer or smirk.
When a user views image information and the user's face simultaneously shows actions such as raised mouth corners, lifted and wrinkled cheeks, contracted eyelids, and "crow's feet" at the eye corners, these actions indicate that the user is happy at that moment. A mapping relation is established between the image and the emotion category "happiness" in the associated information, i.e., the image is associated with the positive emotion of happiness.
When the next image appears and the user's face simultaneously shows actions such as squinting, tightened eyebrows, mouth corners pulled down, and the chin lifted or tightened, the user's emotion when viewing the image has become sadness. The image likely recalls a sad memory for the user. The image is mapped to the emotion category "sadness" in the associated information, i.e., the image is associated with the negative emotion of sadness.
When the user is in a sad state, the next image is not displayed if it is associated with a negative emotion. If the next image has no associated information record, or has an associated information record linked to a positive emotion, the next image is displayed. This prevents the user's emotion from remaining low during memory strengthening and keeps the user from falling into a pessimistic psychological state.
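The display rule above can be sketched as follows; this is a minimal illustration, and the function and data-structure names are assumptions, not taken from the patent:

```python
# Minimal sketch of the display rule: while the user's current emotion is
# negative, images whose associated information maps them to a negative
# emotion are skipped; images with no record, or with a positive record,
# are displayed. All names here are illustrative assumptions.

def next_image_to_display(images, associations, current_mood):
    """images: image ids in display order; associations: id -> 'positive',
    'negative', or absent when no associated information is recorded."""
    for image_id in images:
        emotion = associations.get(image_id)
        if current_mood == "negative" and emotion == "negative":
            continue  # shield images associated with negative emotion
        return image_id
    return None  # nothing suitable to display
```

When the user's mood is positive, no shielding occurs and the next image in order is always shown.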
According to a preferred embodiment, the user selects the mood or emotion of the moment in the form of selectable buttons when viewing the individual images in the basic original information base. The user feeds back the emotion felt when viewing the image by clicking a button. Emotions such as happiness, sadness, fear, anger, disgust, surprise, and thoughts are each displayed as buttons near the image. These positions include above, below, to the left, or to the right of the image display area, or any user-recognizable position within the image display area. The user only needs to select the emotion button matching his or her current psychology. The buttons may be represented by graphics of different colors.
According to a preferred embodiment, the user selects a "like" or "dislike" attitude toward the image information in the form of selectable buttons when viewing the individual images in the basic original information base. "Like" represents positive emotion and "dislike" represents negative emotion. Alternatively, only a "like" button and/or a "brick" button may be provided near the image display area. The "like" button represents liking the image and the positive emotion felt on seeing it. The "brick" button represents disliking the image: seeing it produces an uncomfortable negative mood such as hurt, thoughts, anger, or sadness. The user indicates liking the image by clicking "like", i.e., expresses that the feeling toward the image is a positive emotion. In the case of only a "like" button, the user indicates dislike by not clicking "like", i.e., expresses that the feeling toward the image is a negative emotion. Where a "brick" button is provided, the user expresses emotional rejection of the image by clicking it. "Like" and "brick" are merely names expressing the user's liking and disliking of the image; the buttons may also express support or non-support, likes or dislikes, under other names.
The more times the user clicks "like" on an image, the stronger the image's influence on the user's positive emotion, indicating a positive psychological effect. The more times the user clicks the "brick" button on an image, the stronger the image's influence on the user's negative emotion, indicating a negative psychological effect. The click counts of the "like" and "brick" buttons are stored as associated information of the corresponding image in the associated information database. Preferably, the user expresses the degree of preference for an image by selecting the length of a bar pattern, the color depth of a graphic, or the number of graphics when viewing each image in the basic original information base.
The longer the bar pattern, the more the user likes the image, i.e., the more positive the psychological effect the image can bring the user. Conversely, the shorter the bar pattern, the less positive the psychological effect the image can bring the user. Alternatively, each click of the "like" button may increase the length of the bar pattern by one step.
A graphic button for expressing liking is provided in the image display area. Each time the button is clicked, its color darkens by one step. The color of the graphic button represents the number of the user's likes, indirectly indicating the image's effect on the user's positive emotion.
At least one graphic representing a positive emotion is provided in the image display area. For example, the graphic may be in the shape of a five-pointed star. When viewing each image in the basic original information base, the user can express the degree of liking the image, or the degree of happiness it brings, by the number of five-pointed star graphics clicked. The number of clicked five-pointed stars is recorded as associated information of the image and stored in the associated information database. The shape of the graphic is not limited to a five-pointed star, and may be circular, rectangular, triangular, hexagonal, or the like.
According to a preferred embodiment, the user records impressions of the image information in the form of voice, text, or graphic input while viewing the individual images in the basic original information base. Whether the user's emotion is positive or negative is obtained by analyzing the user's impression record. For voice input, the user's emotion information is analyzed from the audio frequency, pitch, and speaking speed, associated with the viewed image, and stored in the associated information database as the image's associated information. Alternatively, the user's positive emotion information is extracted from words representing positive emotions, such as "happy", "warm", and "sweet", appearing in the user's recorded text; or the user's negative emotion information is extracted from words representing negative emotions, such as "depressed", "difficult", "awkward", and "uncomfortable". The invention also extracts the user's emotion information from the emotion represented by graphics recorded by the user. For example, a peach heart represents a sweet positive emotion, and a broken heart represents a sad negative emotion. The invention thus extracts the user's emotion when viewing an image from information the user records by voice, text, or graphic input, and stores it as the image's associated information in the associated information database.
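A minimal keyword-matching sketch of the text-based emotion extraction described above; the word lists and function name are illustrative assumptions rather than the patent's actual vocabulary:

```python
# Hypothetical sketch: classify a recorded impression as positive or
# negative by counting emotion keywords, as the text above describes.
POSITIVE_WORDS = {"happy", "warm", "sweet"}
NEGATIVE_WORDS = {"depressed", "difficult", "awkward", "uncomfortable"}

def extract_emotion_from_text(text):
    words = set(text.lower().split())
    positive_hits = len(words & POSITIVE_WORDS)
    negative_hits = len(words & NEGATIVE_WORDS)
    if positive_hits > negative_hits:
        return "positive"
    if negative_hits > positive_hits:
        return "negative"
    return "unknown"  # no clear signal; fall back to other channels
```

A production system would use a proper sentiment model, but the keyword rule matches the mechanism the paragraph describes.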
The user's emotion type, emotion change curve, mood change curve, degree of liking or disliking of the image information, number of likes, and number of shares when viewing each image in the basic original information base constitute associated information having a one-to-one mapping relation with each image. The associated information is stored in the associated information database, classified into positive associated information and negative associated information. The associated information database is provided in a remote server. When the user views each image in the basic original information base, the image information and its corresponding associated information may be displayed simultaneously. Alternatively, the associated information is displayed based on a viewing instruction of the user.
When the user views each image in the basic original information base, the fed-back associated information related to psychological information changes, and the associated information database is updated according to the user's latest feedback.
For example, the associated information of one image in the basic original information base records four "like" clicks by the user. This time, the user clicks the "brick" button once while viewing the image. Since the number of likes is greater than the number of bricks, the image remains associated with positive emotion. If, as the user's viewing count increases, the number of likes for the image becomes less than the number of bricks, the image's mapping to positive emotion in the associated information database is removed, and the image is mapped to negative emotion instead.
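The counting rule in the example above might be sketched as follows; the class and field names are assumptions for illustration:

```python
# Hypothetical sketch: 'like' and 'brick' clicks are accumulated per image,
# and the image is remapped whenever the balance flips, as described above.
class AssociationRecord:
    def __init__(self, likes=0, bricks=0):
        self.likes = likes
        self.bricks = bricks

    def click(self, button):
        if button == "like":
            self.likes += 1
        elif button == "brick":
            self.bricks += 1

    def mapped_emotion(self):
        if self.likes > self.bricks:
            return "positive"
        if self.bricks > self.likes:
            return "negative"
        return "neutral"  # assumption: a tie maps to neither category
```

With four stored likes, a single brick click leaves the image mapped to positive emotion; only when bricks overtake likes does the mapping switch to negative.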
The data of the associated information database is updated based on the user's emotion changes toward each image in the basic original information base.
The user feeds back emotion information every day while viewing each image in the basic original information base, so that emotion information collected over a long time reflects the user's daily emotion changes. If the user forgets to view the image information and does not record emotion information, the user is prompted by a reminder to record it.
The invention also analyzes the emotion information fed back by the user and predicts the user's mood. The mood coefficient is analyzed from the emotion information recorded by the user every day, obtaining the user's emotion change curve and mood change curve over a period of time.
For example, the mood coefficient Y is a probability prediction of the user's current mood based on the user's historical emotion information; the current mood is predicted from p(s | E_{t-1}, E_{t-2}, …, E_{t-n}),
where s denotes the user's current emotion, E_{t-1} is the emotion information at the previous time, E_{t-n} is the emotion information at the n-th previous time, and E_{t-i} denotes the emotion information at the i-th time before the current time t. The emotion information includes an emotion type and an emotion value.
According to a preferred embodiment, the mood coefficient Y is analyzed based on the historical emotion information of the n adjacent times. Let t denote the current time and t_i the time point of the i-th adjacent emotion, whose emotion value is e_{t-i}. The time difference between t and t_i is ρt = t − t_i, and the emotion value of e_{t-i} at the current time is E_{t-i} = e_{t-i}^{(t)} = f(e_{t-i}, ρt) = e_{t-i} · exp(−ρt / (24 × 60)). This formula accounts for the exponential time decay of emotion values, where the initial value of e_{t-i} is 1;
according to the historical emotion values of the past n times, the predicted mood coefficient is calculated as:
Y = max_{s∈S} p(s | E_{t-1}, E_{t-2}, …, E_{t-n}), where S = {happy, relaxed, calm, sad, fear, surprise, anger, jealousy}. If the result is a positive emotion such as happy, relaxed, or calm, the value of Y is defined as 1; if the result is a negative emotion such as sad, fear, surprise, anger, or jealousy, the value of Y is defined as −1. If the change of mood satisfies the Markov property, then Y = max_{s∈S} p(s | E_{t-1}), i.e., the current mood depends only on the mood at the last time point.
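A runnable sketch of the mood-coefficient computation, under the assumption that the probability p(s | E_{t-1}, …, E_{t-n}) is approximated by summing the time-decayed emotion values per category; the patent does not fix this estimator, and all names are illustrative:

```python
import math

POSITIVE_EMOTIONS = {"happy", "relaxed", "calm"}
NEGATIVE_EMOTIONS = {"sad", "fear", "surprise", "anger", "jealousy"}

def decayed_value(rho_t_minutes, e=1.0):
    # e_{t-i} * exp(-rho_t / (24*60)), with rho_t measured in minutes
    return e * math.exp(-rho_t_minutes / (24 * 60))

def mood_coefficient(history, now):
    """history: list of (timestamp_minutes, emotion_category) pairs.
    Returns Y = 1 for a predicted positive emotion, -1 for a negative one."""
    weights = {}
    for t_i, emotion in history:
        weights[emotion] = weights.get(emotion, 0.0) + decayed_value(now - t_i)
    if not weights:
        return 0  # no history, no prediction (an assumption of this sketch)
    predicted = max(weights, key=weights.get)
    return 1 if predicted in POSITIVE_EMOTIONS else -1
```

Under the Markov simplification, `history` would contain only the single most recent emotion record.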
When the calculated mood coefficient Y of the user is greater than 0, the user's mood is in a positive psychological state; the image information can then be displayed without considering the user's psychological factors. When the calculated mood coefficient Y is less than 0, the user's mood is in a negative psychological state; image information with positive associated information is then needed to adjust the user toward a positive psychological state.
Fig. 2 shows a method for recording the user's emotion information. Changes in the user's emotion are recorded in the manner of a calendar. According to a preferred embodiment, the kind of emotion is represented by at least one of a color, a graphic, and/or a combination of color and graphic. For example, emotion categories are expressed by different depths of the same color, and the user expresses an emotion category by selecting the color depth of the day's date. Color A represents joy, color B represents thoughts, color C represents sadness, and color D represents anger. By looking at the color depths of the dates in the calendar, the categories and changes of the user's emotions can be observed intuitively.
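The calendar-style record of Fig. 2 can be sketched as a simple date-to-emotion mapping; the color labels and function names are illustrative assumptions:

```python
# Hypothetical sketch: each calendar date stores the day's emotion
# category, rendered as a color when the calendar is displayed.
EMOTION_TO_COLOR = {
    "joy": "color A",
    "thoughts": "color B",
    "sadness": "color C",
    "anger": "color D",
}

def record_emotion(calendar, date, emotion):
    """calendar: dict mapping date string -> emotion category."""
    calendar[date] = emotion
    return calendar

def color_of(calendar, date):
    # Dates without a record are shown without an emotion color.
    return EMOTION_TO_COLOR.get(calendar.get(date), "no record")
```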
As shown in Fig. 3, the horizontal axis represents time t and the vertical axis represents emotion categories, forming an emotion change curve. Fig. 3 shows the emotion change curves in the morning, at noon, and in the evening, respectively. Emotion A stands for anger, emotion B for thoughts, emotion C for calm, emotion D for happiness, and emotion E for no record. Three different line segments represent the emotions at 8 a.m., at 12 noon, and in the afternoon or evening. As shown in Fig. 3, the user's emotion changes were observed over the test period 2015.12.25-2016.1.3. The user's emotion change at 8 a.m. was happy-calm-happy-no record. The user's emotion change at 12 noon was happy-thoughts-calm-happy-calm-thoughts-calm-no record. The user's emotion change in the afternoon was calm-happy-calm-no record-thoughts.
According to a preferred embodiment, the mood change curve is formed with time t on the horizontal axis and the mood coefficient on the vertical axis.
According to the emotion change curve and the mood change curve, the user or related personnel can intuitively understand the user's emotion changes. When negative emotions appear for several consecutive days, the related personnel can guide the user to take part in positive outdoor activities, or increase the time spent accompanying the user, so as to adjust the mental state of the person doing the memorizing.
Step S3: establishing a trigger information database, completing the association of trigger conditions with the original information database and the associated information database, and completing recommendation of image information in at least one original information database based on trigger information provided by the user.
The data information in the trigger information database comprises trigger condition data displayed by different image information and mapping relation data of the trigger condition and the image information or the associated information. The trigger information database extracts and generates a recommendation of the original information in the at least one original information database based on the trigger information provided by the user and the data information of the associated information database.
According to a preferred embodiment, the trigger condition data for displaying image information comprises time trigger data, place trigger data, content trigger data, person trigger data, and emotion trigger data. The time trigger data enables a user to trigger, at a specific time, the display of image information of a specific time period; for example, on March 1, image information corresponding to March 1 of past years can be triggered for viewing. The place trigger data enables a user at a specified location to trigger the display of image information of that specific location. The content trigger data enables a user experiencing a specific event to trigger the display of image information of the corresponding event. The person trigger data enables a user in contact with a specific person to trigger the display of image information of the designated person. The emotion trigger data enables a user in a specific emotion to trigger the display of image information related to that emotion.
The mapping relation data of trigger conditions and image information in the trigger information database refers to correspondence data established between the trigger conditions and the associated information of each image. In the trigger condition data, time trigger data associates a time condition with the time description of image information in the original information database; place trigger data associates a place condition with the place description; content trigger data associates event content with the event content description; person trigger data associates a person condition with the person description; and emotion trigger data associates an emotion condition with the emotion information in the associated information database. Each image in the basic original information base is associated with emotion information in the associated information database.
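The trigger-condition matching described above amounts to selecting images whose descriptive metadata matches the trigger; a minimal sketch, in which the field names are assumptions:

```python
# Hypothetical sketch: each image carries related-information fields
# (time, place, person, event) plus an associated emotion; a trigger
# query selects the images whose field matches the trigger value.
def match_trigger(images, trigger_type, trigger_value):
    """images: list of dicts with 'time', 'place', 'person', 'event',
    and optionally 'emotion' keys; trigger_type names one of them."""
    return [img for img in images if img.get(trigger_type) == trigger_value]
```

For example, a time trigger of "03-01" would select every image whose time description is March 1, across past years.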
According to a preferred embodiment, assisting the user in establishing the memory network comprises adjusting the display state of the respective images in the basic original information base according to the user's psychological information, thereby preventing the user's psychological state from developing negative emotions and enabling the user to strengthen memory of related information in a positive psychological state. The plurality of pieces of image information formed after adjustment according to the user's psychological information are arranged with time information, forming the user's dynamic memory network. The memory network carries memory information that has a positive psychological effect on the user. When the user's emotion changes, the image information in the memory network is dynamically adjusted.
When the user's mood coefficient Y is less than 0 and the mood is in a negative state, the image information corresponding to positive emotion in the associated information database is extracted and displayed according to the mapping relation, and image information associated with negative emotion is shielded.
Specifically, when the user's mood is analyzed from historical emotion information to be negative, the image information associated with positive emotion is selected according to the mapping relation between the image information in the basic original information database and the associated information in the associated information database. The image information associated with positive emotion is displayed to the tester in the order of occurrence time of the image content, so that the image information forms a memory network carrying positive memory information. Image information associated with negative emotions is not displayed to the tester, so the tester does not recall negative memory information and negative emotions are not aggravated. The tester can memorize in a positive emotion, with a better memory effect.
When the user's mood coefficient Y is greater than 0 and the mood is in a positive or calm state, image information corresponding to positive emotion and/or negative emotion in the associated information database is extracted according to the mapping relation and displayed to the user.
Specifically, when the user's mood is analyzed from historical emotion information to be positive or calm, the image information in the basic original information base need not be filtered according to the associated information database. At least one piece of image information in the basic original information base is displayed to the tester in the chronological order of the image content, so that the image information forms the tester's own memory network displayed in time order. While the tester views the image information, the tester dynamically feeds back the emotion category of each image, and the associated information in the associated information database is updated according to the emotion information fed back. The tester's mood change is analyzed from this dynamically fed-back emotion information. When the tester's mood change is analyzed to be negative, image information associated with negative emotion is shielded according to the updated associated information, i.e., it is not displayed, until the tester's emotion change is monitored to be positive again, at which point the shielding is cancelled. The display rule of the image information is thus adjusted according to the tester's emotion changes, forming a memory network that changes dynamically with the tester's emotion changes.
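The dynamic shielding loop described above can be sketched as follows; all names are illustrative assumptions, and in practice the emotion feedback would come from buttons or micro-expression analysis:

```python
# Hypothetical sketch: replay the memory network in chronological order;
# while the monitored emotion is negative, shield images associated with
# negative emotion, and lift the shield once the emotion turns positive.
def build_display_sequence(images, associations, emotion_feedback):
    """images: ids in chronological order; associations: id -> stored
    emotion; emotion_feedback: id -> emotion the user reports on viewing."""
    shown = []
    current_mood = "positive"
    for image_id in images:
        if current_mood == "negative" and associations.get(image_id) == "negative":
            continue  # shield while the tester's mood is negative
        shown.append(image_id)
        # update mood from the tester's dynamic feedback on this image
        current_mood = emotion_feedback.get(image_id, current_mood)
    return shown
```

In the trace below, image "a" turns the mood negative, so the negatively associated "b" is shielded, and "c" is shown once again.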
Even if the user specifies conditions such as event, time, person, or place in the image, when the user's mood coefficient Y < 0, image information associated with positive emotion is first extracted from the associated information database according to the mapping relation, and image information meeting the tester-specified conditions is then selected from it and displayed.
The image information associated with the user's respective emotion categories is arranged in chronological order to form a memory network belonging exclusively to the user. The positive-emotion memory network and the negative-emotion memory network change the image information to be displayed based on the user's emotion changes, strengthening the user's memory. The invention avoids the conflict caused by forcing a user to strengthen memory.
Example 2
The embodiment provides a system for constructing a memory network and using the memory network to enhance memory. As shown in Fig. 5, the system includes a mobile terminal 10 and a server terminal 20. The mobile terminal 10 is used by the user and related personnel, and is provided on a mobile terminal device or a computer. The mobile terminal device includes a smart phone, a tablet computer, or a notebook computer, and realizes data transmission between the mobile terminal 10 and the server terminal 20 through WiFi, Bluetooth, ZigBee, or a 2G/3G/4G mobile network.
The mobile terminal 10 includes an information input module 101, a psychological information collecting module 102, an original information database module 103, an original information display module 104, and a trigger information database module 105. The server terminal 20 includes a data analysis module 201 and an association information base module 202, and is provided in a remote server. The information input module 101 is used for the user to express feelings about the image information by voice, text, or image input when viewing each image in the original information database module 103. The information input module 101 is also used for the user to feed back moods, emotion categories, degree of liking or disliking of the image information, likes and dislikes, and shares when viewing the respective images of the original information database.
According to a preferred embodiment, the information input module 101 is further adapted for a user or a person concerned to input an image into the raw information database module 103 in order to update the raw information database.
The psychological information collecting module 102 is used for collecting psychological information of the user. The psychological information includes mood and emotion information generated when the user views each image in the original information database.
According to a preferred embodiment, the psychological information collecting module 102 obtains the user's psychological information by collecting and analyzing micro-expressions of the user's face. The psychological information collecting module 102 collects the user's micro-expressions through a camera device and sends them to the micro-expression analysis module 203 of the server terminal 20 for analysis, so as to obtain the user's emotion information when viewing each image.
The raw information database module 103 is used for storing a raw information database for storing image information related to the memory of the user. The image information includes an image and related information. The raw information database includes a basic raw information database and a related information database related to the image. The basic original information base is used for storing at least one image related to information which needs to be memorized by a user. The related information includes information such as the time, place, person, event, etc. at which each image content occurs.
The original information display module 104 is configured to display the image information meeting the display condition to the user. The original information display module 104 selects the image information associated with the associated information in the corresponding associated information base module 202 according to the analysis result of the data analysis module 201 and displays the image information to the user.
The trigger information database module 105 is configured to store a trigger information database, where data information in the trigger information database includes trigger condition data for displaying different image information and mapping relationship data between the trigger condition and the image information or the associated information. The trigger information database module 105 completes matching with the trigger condition information in the trigger information database based on the information provided by the user through the information input module 101, and sends the matching result to the data analysis module 201. After being analyzed and confirmed by the data analysis module 201, the recommendation of the original information in the at least one original information database is extracted and generated.
The data analysis module 201 is used for performing data analysis on the psychological information of the user. Preferably, the data analysis module 201 analyzes the mood of the user according to the collected mood information or the mood information fed back by the user, and predicts the real-time mood of the user.
The association information database of the association information base module 202 stores associated information associated with each image of the original information database. The associated information includes the user's mood, emotion category, emotion change curve, degree of liking or disliking of the image information, number of likes, and number of shares when viewing each image of the original information database.
The micro-expression analysis module 203 is configured to analyze the received micro-expression information and extract the emotion information in the micro-expressions. The micro-expression analysis module 203 analyzes the user's emotion type from the facial actions in the micro-expressions. The emotions include joy, sadness, fear, anger, disgust, surprise, and thoughts. The micro-expression analysis module 203 associates the images viewed by the user with the corresponding emotion information and stores the emotion information as associated information of the images in the association information base module 202, thereby updating the associated information database.
The working principle of the system for constructing the memory network and using the memory network for enhancing the memory provided by the embodiment is as follows:
the original information database module 103 is established by the user or the related person at the mobile terminal 10, and the image and the related information thereof are stored into the basic original information database and the related information database in the original information database module 103, respectively. For example, the electronic digital image is transmitted by the relevant person in a wired or wireless manner and stored in the basic original information library. Or shooting information related to the memory information by related personnel based on a camera device of the mobile terminal and storing the information to the basic original information base.
According to a preferred embodiment, the association information base module 202 is located on a remote server and exchanges data with the original information display module 104 and the information input module 101 in a wired or wireless manner. The associated information occupies little data memory, which facilitates remote transmission. Image information occupies a large amount of data memory, and remote transmission would likely require a long caching time. Therefore, the original information database module 103 is provided in the mobile terminal 10.
The relevant test personnel inputs and stores the relevant information of the image into the relevant information base through the information input module 101. The image and the related information have a one-to-one correspondence relationship. The original information display module 104 arranges and displays the images in time sequence based on the event occurrence time of each image in the original information database to form an initial memory network viewed by the user.
When the user views each image in the initial memory network, the psychological information collecting module 102 collects the user's micro-expressions when viewing each piece of image information. The psychological information collecting module 102 marks each micro-expression as associated with the corresponding image and sends it to the micro-expression analysis module 203 of the server terminal 20. The micro-expression analysis module 203 analyzes the facial actions in the received micro-expressions to obtain the user's emotion information when viewing the corresponding images, and stores the association between the analyzed emotion information and the corresponding image information in the associated information database. Thus, as the user views each image in the initial memory network, emotion information associated with each image is continuously formed and stored into the association information base module 202.
According to a preferred embodiment, when the user views each image in the initial memory network, the user feeds back the emotion category associated with the image by clicking a selectable button, and that emotion category becomes associated information of the image. For example, the user clicks a "like" or "dislike" button on the image information to indicate whether his or her emotion is positive or negative. Alternatively, the user shares the image information; sharing represents a positive emotion associated with the image. The information input module 101 associates the emotion information, share count, like count, or dislike count input by the user through selection buttons with the corresponding image to form relationship information, and stores the relationship information in the association library module 202.
The emotional information fed back by the user is not only the associated information associated with the image, but also forms a daily record of emotional information.
The data analysis module 201 predicts real-time emotional information from the historical emotional information of the user.
The mood coefficient Y predicts the user's current mood probabilistically from the user's historical emotion information; the current emotion information is p(s | E_{t-1}, E_{t-2}, …, E_{t-n}), where s denotes the user's current emotion information, E_{t-1} is the emotion information of the previous time, E_{t-n} is the emotion information of the n-th previous time, and E_{t-i} denotes the emotion information of the i-th previous time relative to the current time t.
According to a preferred embodiment, the mood coefficient Y is analyzed on the basis of the n most recent pieces of historical emotion information. Let t denote the current time and t_i the time point of the i-th previous emotion, whose emotion value is e_{t-i}; the time difference between t and t_i is ρt = t − t_i. The value of e_{t-i} at the current time is then: E_{t-i} = e_{t-i}^{(t)} = f(e_{t-i}, ρt) = e_{t-i} · exp(−ρt / (24 × 60)). This formula accounts for the exponential time decay of emotion values; the initial value of e_{t-i} is 1.
according to the historical emotion values of the past n times, calculating a predicted mood coefficient according to the following formula:
Y = max_{s∈S} p(s | E_{t-1}, E_{t-2}, …, E_{t-n}), where S = {happy, relaxed, calm, sad, fear, surprise, anger, jealousy}. If the result is a positive emotion such as happy, relaxed, or calm, the value of Y is defined as 1; if the result is a negative emotion such as sad, fear, surprise, anger, or jealousy, the value of Y is defined as −1. If the change of the mood coefficient satisfies the Markov property, then Y = max_{s∈S} p(s | E_{t-1}), i.e., the current mood depends only on the mood at the previous time point.
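The prediction above can be sketched in code. The patent does not specify how the conditional probability p(s | E_{t-1}, …, E_{t-n}) is estimated, so the sketch below scores each candidate emotion by summing the exponentially decayed weights of matching historical entries — an assumed stand-in for the estimator, not the patented method; the Observation record and all names are illustrative.

```python
import math
from collections import namedtuple

# Hypothetical record of one historical emotion entry: the category name
# and the minutes elapsed between its recording and the current time t.
Observation = namedtuple("Observation", ["category", "minutes_ago"])

POSITIVE = {"happy", "relaxed", "calm"}

def decayed_value(initial_value, minutes_ago):
    # E_{t-i} = e_{t-i} * exp(-rho_t / (24 * 60)): exponential time decay,
    # with rho_t measured in minutes and a default initial value of 1.
    return initial_value * math.exp(-minutes_ago / (24 * 60))

def mood_coefficient(history, initial_value=1.0):
    # Score each candidate emotion s by the summed decayed weights of the
    # matching entries (a stand-in for p(s | E_{t-1}, ..., E_{t-n})), take
    # the argmax over S, and map positive emotions to Y = 1, negative to -1.
    scores = {}
    for obs in history:
        w = decayed_value(initial_value, obs.minutes_ago)
        scores[obs.category] = scores.get(obs.category, 0.0) + w
    predicted = max(scores, key=scores.get)
    return (1 if predicted in POSITIVE else -1), predicted

history = [
    Observation("happy", 24 * 60),  # one day ago
    Observation("sad", 48 * 60),    # two days ago
    Observation("happy", 72 * 60),  # three days ago
]
Y, predicted = mood_coefficient(history)
print(Y, predicted)  # -> 1 happy (recent "happy" entries outweigh the older "sad")
```

Because recent entries decay less, a single old negative record cannot outweigh a run of recent positive ones, which matches the intent of the decay formula.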
When the mood coefficient Y of the user is less than 0, the user's mood is in a negative state. The data analysis module 201 transmits the user's mood category to the original information display module 104. The original information display module 104 extracts at least one piece of image information corresponding to positive emotion from the associated information database according to the mapping relationship and displays it to the user in the order of event occurrence time. The image information selected and displayed according to the user's mood forms a new memory network corresponding to the user's psychological information.
When the mood coefficient Y of the user is greater than 0, the mood is in a positive state. The data analysis module 201 transmits the user's mood category to the original information display module 104. At this time the user's psychological state is good and negative image information can be accepted, so the original information display module 104 does not need to filter the image information; it displays all the image information in the original information database 103 in the order of image occurrence time.
The psychological information collecting module 102 collects the user's psychological information while the user views each piece of image information displayed by the original information display module 104, or the user feeds back related information at the same time. The association library module 202 continuously updates the associated information database according to the user's psychological information and the fed-back associated information. The data analysis module 201 dynamically analyzes the user's mood from the updated data of the association library module 202. When the data analysis module 201 determines that the user's mood coefficient Y is less than 0, i.e., the mood has changed to a negative state, it sends the user's mood category to the original information display module 104. The original information display module 104 then adjusts the displayed image information again so that image information associated with negative emotions is not displayed, forming a new memory network. In this way the image information in the memory network changes with the user's psychological state, the user memorizes image information in a positive psychological state, and the memory effect is significantly improved.
For example, when the user's mood is happy, the original information display module 104 does not mask image information associated with negative emotions but displays it according to the user's request. When the user's mood is sad, the original information display module 104 masks image information associated with negative emotions and displays image information associated with positive emotions, including images the user has liked and images toward which the user has expressed fondness, in a certain order for the user to view. This further reinforces the user's positive-emotion memories and keeps the user's psychological state more positive.
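The display rule described above can be sketched minimally: mask negative-emotion images when Y < 0, show everything when Y > 0, always ordered by event occurrence time. The dictionary records and field names are assumptions for illustration.

```python
# Hypothetical image records: each carries its event time and the emotion
# label stored for it in the associated information database.
images = [
    {"id": "P3", "event_time": "2015-09-27", "emotion": "positive"},
    {"id": "P1", "event_time": "1985-09-10", "emotion": "positive"},
    {"id": "P2", "event_time": "1992-06-01", "emotion": "negative"},
]

def build_memory_network(images, mood_coefficient):
    # Y < 0: the user's mood is negative, so mask images associated with
    # negative emotions. Y > 0: no filtering is needed. In both cases the
    # result is ordered by event occurrence time.
    if mood_coefficient < 0:
        shown = [img for img in images if img["emotion"] != "negative"]
    else:
        shown = list(images)
    return sorted(shown, key=lambda img: img["event_time"])

print([i["id"] for i in build_memory_network(images, -1)])  # -> ['P1', 'P3']
print([i["id"] for i in build_memory_network(images, 1)])   # -> ['P1', 'P2', 'P3']
```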
Example 3
This embodiment further improves and explains Embodiment 1 and Embodiment 2, taking an elderly person as an example of the user.
An original information database is established. This embodiment is described with image information as the original information. Family members of the elderly person, caregivers, or the elderly person digitize existing non-electronic photos by photographing them and store them as electronic images. Alternatively, electronic images stored on other electronic devices are transferred to the original information database in a wired or wireless manner. Family members, caregivers, or the elderly person can also enter the related information of each image into the original information database. The related information includes the time, place, people, and event of the image, for example, "Mid-Autumn Festival 2015, at home with the two grandchildren".
The original information display module 104 displays each image and its related information in the original information database in chronological order. The interval between images is set by the user and may be, for example, 10 s or 30 s. The user moves an image by touch, and the image moves off the display screen following the user's gesture. According to a preferred embodiment, if the user does not flip the image information within the set time, the display automatically advances to the next image.
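The auto-advance behavior above can be sketched as a pure function of elapsed time. The interval values and the manual_offset parameter modeling touch flips are illustrative assumptions, not part of the patented display module.

```python
def current_slide(images, elapsed_seconds, interval=10, manual_offset=0):
    # The display dwells on each image for a user-set interval (e.g. 10 s
    # or 30 s) and advances automatically; manual_offset models images the
    # user has flipped forward (positive) or back (negative) by touch.
    index = elapsed_seconds // interval + manual_offset
    return images[min(max(index, 0), len(images) - 1)]

photos = ["P1", "P2", "P3"]
print(current_slide(photos, 0))                   # -> P1
print(current_slide(photos, 25))                  # 25 s at 10 s/slide -> P3
print(current_slide(photos, 5, manual_offset=1))  # user flipped ahead -> P2
```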
The associated information of each image is created and updated in a manner related to the user's psychological information when the user views each image in the original information database, and an associated information database related to the corresponding images is created.
As shown in Fig. 4, the original information display module 104 displays two images, P1 and P2, in chronological order. The time t1 of image P1 is earlier than the time t2 of image P2. Related information A1 and related information A2 are arranged on image P1 and image P2, respectively. According to a preferred embodiment, the related information A1 may also be arranged above, below, to the left of, or to the right of the image, or at a free position on the image. According to a preferred embodiment, the related information may be combined with the image as an image with attached text.
Buttons B1 and B2 are "like" buttons. When viewing images, the elderly person expresses emotion by clicking the like button. According to a preferred embodiment, the elderly person expresses an emotion category by clicking different emoticons while viewing images. The number of likes and the selected emotion category are stored in the associated information database as associated information of the image.
When viewing each image in the original information database, the elderly person inputs his or her feelings about the image information and the recalled memory information through the information input module, in the form of voice, text, or images.
If the elderly person has not viewed image information or provided emotion record information within one day, the information input module 101 reminds the elderly person by voice to record emotions or moods.
The data analysis module 201 analyzes the historical emotion information of the elderly person to obtain a curve of emotional change over time, as shown in Fig. 3. The emotion change curve shows the elderly person and his or her children the emotional changes in an intuitive way. When negative emotions appear on several consecutive days, the time spent accompanying the elderly person can be appropriately increased, or the system can display photos associated with positive emotions to the elderly person in various ways according to the mood labels of each photo. This better evokes the elderly person's memories, adjusts his or her emotions, and to a certain extent increases the elderly person's sense of self-worth and happiness.
Meanwhile, the data analysis module 201 predicts real-time emotions according to historical emotion information of the elderly.
The data analysis module 201 counts the number of likes of each image and the number of times each emotion type is clicked, and sends this data to the associated information database to update it.
For example, the emotions of the elderly person are manually limited to four categories: happy, calm, pensive, and angry. The first two are positive emotions, and the latter two are negative emotions. Because many factors influence human emotion, real-time influencing factors are considered on the basis of analyzing historical emotions. The mood coefficient Y predicts the user's current mood probabilistically from the user's historical emotion information; the current emotion information is p(s | E_{t-1}, E_{t-2}, …, E_{t-n}), where s denotes the user's current emotion information, E_{t-1} is the emotion information of the previous time, E_{t-n} is the emotion information of the n-th previous time, and E_{t-i} denotes the emotion information of the i-th previous time relative to the current time t.
According to a preferred embodiment, the mood coefficient Y is analyzed on the basis of the n most recent pieces of historical emotion information. Let t denote the current time and t_i the time point of the i-th previous emotion, whose emotion value is e_{t-i}; the time difference between t and t_i is ρt = t − t_i. The value of e_{t-i} at the current time is then: E_{t-i} = e_{t-i}^{(t)} = f(e_{t-i}, ρt) = e_{t-i} · exp(−ρt / (24 × 60)). This formula accounts for the exponential time decay of emotion values; the initial value of e_{t-i} is 1.
according to the historical emotion values of the past n times, calculating a predicted mood coefficient according to the following formula:
Y = max_{s∈S} p(s | E_{t-1}, E_{t-2}, …, E_{t-n}), where the emotion set is S = {happy, calm, pensive, angry}. If the result is happy or calm, the value of Y is defined as 1; if the result is pensive or angry, the value of Y is defined as −1.
When the mood coefficient Y is less than 0, the elderly person's mood is low. The original information display module 104 displays images of recorded events associated with positive emotion information for the elderly person to view, and the system discards the negative photos. The displayed positive-emotion image information forms a positive memory network for the elderly person, which helps the elderly person remember, stimulates memory ability, and adjusts emotion to a certain degree.
When the mood coefficient Y is greater than 0, the elderly person's mood is relatively happy; the mood factor can be ignored when extracting image information, and only the time factor need be considered. When Y is less than 0, the extracted images must satisfy both the time factor and the mood factor. Displaying different memory networks according to the emotional changes of the elderly person thus keeps negative emotions out of daily life and strengthens memory.
According to a preferred embodiment, the elderly person and related persons can label images of particular significance, such as images of events on particular festivals. When displaying image information with a special label, the original information display module displays a "remember / don't remember" button. The elderly person feeds back whether he or she remembers the special event by clicking the button. If the elderly person selects "don't remember", the reminder is repeated after a period of time; if the feedback is "remember", no reminder is given, or the reminder is given after a longer interval.
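The "remember / don't remember" schedule might be sketched as follows. The text says only "after a period of time" and "a longer time interval", so the one-day reset and the interval doubling are assumptions for illustration.

```python
def next_reminder_days(remembered, previous_interval_days):
    # "Don't remember": remind again after a short fixed interval (one day
    # here -- an assumption, the text says only "after a period of time").
    # "Remember": lengthen the interval; doubling is likewise an assumption.
    if not remembered:
        return 1
    return previous_interval_days * 2

interval = 1
for remembered in (False, True, True):  # the user's successive button presses
    interval = next_reminder_days(remembered, interval)
print(interval)  # -> 4 (1 day after "don't remember", then 2, then 4)
```

This lengthening-on-success pattern is the same idea as spaced repetition: events the person reliably remembers are revisited less often.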
According to a preferred embodiment, the original information display module 104 displays the image information and its associated information along a time axis. When viewing image information, the elderly person mainly views four important items: the image, the related information, the occurrence time, and the emotion information. A simple information display is easier for the elderly to accept and understand. By combining the elderly person's mood and giving targeted association reminders, this embodiment effectively strengthens the elderly person's memory.
According to a preferred embodiment, the original information display module 104 displays the image information in chronological order by time unit. For example, the elderly person specifies viewing image information about the Mid-Autumn Festival. According to the specified conditions, the original information display module 104 displays, in time units of years, image information that is related to the Mid-Autumn Festival and suitable for the elderly person's current mental state. As shown in Fig. 4, time t1 is in the 1980s and time t2 is in the 1990s.
Example 4
According to another aspect of the present invention, the present embodiment provides a method for constructing a memory network and using the memory network to strengthen memory, comprising the steps of:
S11: establishing a multimedia information base; S12: creating and updating the associated information of the corresponding images in a manner related to the psychological information of the test subject when viewing each image in the multimedia information base; S13: establishing a memory association library related to the corresponding images; S14: establishing a memory network for the test subject.
Step S11: and establishing a multimedia information base.
The established multimedia information base comprises an image base and a related information base. The image library is used for storing images related to the memory information. The related information base is used for storing character information, graphic information and marking information related to each image.
The image library and the related information library are associated in a mapping manner. Each image in the image library corresponds to at least one piece of related information in the related information library. The related information comprises text describing the image content, the time, place, and persons involved, the names and identities of those persons, and the occurrence time of the image content. The related information can be input by the test subject or related personnel while the images are acquired, or input by the test subject from memory while viewing each image in the information base.
According to a preferred embodiment, the means for capturing images includes photographing a picture or scene with a camera device and storing it as an image in the multimedia information base. Images may also be transmitted in electronic digital form and stored in the multimedia information base in a wired or wireless manner. Preferably, the test subject or related personnel can also modify the related information corresponding to an image in the multimedia information base, correcting errors or supplementing it.
According to a preferred embodiment, the images in the multimedia information base and their corresponding related information can be displayed simultaneously: when the test subject views an image, the image and its related information appear on the screen at the same time. Preferably, the related information corresponding to an image is displayed in response to a display instruction issued by the test subject, and may be displayed in the form of a label. The test subject may view the individual images in the multimedia information base in at least one manner: if the test subject does not restrict the content, the images are viewed in the order of the occurrence time of the events they depict; if the test subject specifies a time period, content, place, or person, the image information matching those conditions within the specified time period can be viewed.
Step S12: the associated information of the respective images is created and updated in a manner related to the psychological information of the respective test subjects when viewing the respective images in the multimedia information base.
The associated information refers to information related to each image in the multimedia information base. When a test subject views an image, the subject's psychology fluctuates and the mood or emotion changes. The psychological information generated when the test subject views an image is the psychological information associated with that image, i.e., the associated information. The associated information mainly comprises the test subject's emotion information, emotion change information, degree of fondness for the image information, number of likes, number of shares, and number of dislikes associated with the image. The associated information is psychological information actively fed back by the test subject while viewing each image in the multimedia information base, or psychological information obtained by relevant equipment monitoring the test subject's micro-expressions. A mapping relationship is established between the associated information and the images; one image corresponds to at least one piece of associated information.
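The one-image-to-many-associations mapping can be illustrated with a toy association store; the class name, field names, and the rule that sharing is recorded as a positive emotion (stated elsewhere in the description) are sketched here under those assumptions.

```python
from collections import defaultdict

class AssociationLibrary:
    """Toy memory association library: each image maps to at least one
    piece of associated information (emotion labels, like and share counts),
    updated whenever the test subject gives feedback."""

    def __init__(self):
        self._assoc = defaultdict(
            lambda: {"emotions": [], "likes": 0, "shares": 0}
        )

    def record_emotion(self, image_id, emotion):
        self._assoc[image_id]["emotions"].append(emotion)

    def record_like(self, image_id):
        self._assoc[image_id]["likes"] += 1

    def record_share(self, image_id):
        # Sharing expresses a positive emotion toward the image.
        self._assoc[image_id]["shares"] += 1
        self._assoc[image_id]["emotions"].append("positive")

    def info(self, image_id):
        return dict(self._assoc[image_id])

lib = AssociationLibrary()
lib.record_emotion("P1", "happy")
lib.record_like("P1")
lib.record_share("P1")
print(lib.info("P1"))  # -> {'emotions': ['happy', 'positive'], 'likes': 1, 'shares': 1}
```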
According to a preferred embodiment, the emotion information of the test subject when viewing each image in the multimedia information base is analyzed by collecting the subject's micro-expressions. If, when a piece of image information appears, micro-expression actions such as squinting, tightening the eyebrows, pulling down the corners of the mouth, and lifting or tightening the chin simultaneously appear on the subject's face, the subject's emotion when viewing the image is worry. The image is then mapped to this negative emotion in the associated information, i.e., the image is associated with the negative emotion.
According to a preferred embodiment, when viewing the individual images in the multimedia information base, the test subject selects, via selectable buttons, his or her mood at the time or a like/dislike attitude toward the image information; the subject feeds back the emotion felt when viewing the image by clicking a button. According to a preferred embodiment, the test subject records his or her impression of the image information in the form of input voice, text, or graphics while viewing the individual images.
The invention also analyzes the emotion information fed back by the test subject and predicts the subject's emotion. The emotion coefficient is analyzed from the emotion information recorded daily by the test subject, yielding curves of emotion change and mood change over a period of time.
According to a preferred embodiment, the historical emotions of the test subject decay over time, so the subject's current emotion changes according to changes in the historical emotion information. The current emotion information of the test subject is analyzed probabilistically on the basis of the subject's historical emotion information: the current emotion information is Y = max_{s∈S} p(s | E_{t-1}, E_{t-2}, …, E_{t-n}), where the emotion set is S = {S1, S2, …, Si}, Si denotes an emotion category, E_{t-1} is the emotion information of the previous time, and E_{t-n} denotes the n-th previous piece of historical emotion information with n ≥ 1; the emotion information Y of the test subject is obtained by this analysis. For example, S1 denotes happiness, S2 sadness, S3 fear, S4 surprise, and S5 anger.
Preferably, if the change in the test subject's emotion information satisfies the Markov property, the current emotion information depends on a limited number of previous pieces of historical emotion information, i.e., Y = p(E_t | E_{t-1}, E_{t-2}, …, E_{t-n}). For example, when n = 2, the current emotion information is Y = max_{s∈S} p(s | E_{t-1}, E_{t-2}); when n = 1, Y = max_{s∈S} p(s | E_{t-1}).
The historical emotion information E_{t-n} has a damping property and decays exponentially with time: E_{t-n} = e_{t-n} · exp(−ρt / (24 × 60)), where e_{t-n} denotes the emotion value of the n-th previous time and ρt denotes the time difference between the time t_n at which that emotion occurred and the present time t. The current emotion information depends on the decay of the historical emotion information over time. According to a preferred embodiment, the initial value of e_{t-n} may be measured using a standard emotion scale, or may be given a default initial value of 1. For example, a manually set emotion scale is shown in Table 1.
Table 1: emotion scale
Emotion categories Magnitude of
Joyous 2
Sadness and sorrow 2
Fear of 2
Is surprised 2
Anger and anger 2
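Taking the decay formula literally (ρt in minutes), a scale magnitude decays as sketched below. Note that the worked example in the text reports values of 0.5 and 0.25 after 24 and 48 hours for an initial value of 2, which implies a faster decay constant than exp(−ρt/(24×60)); this sketch follows the formula exactly as written.

```python
import math

def decayed_emotion(e_initial, minutes_elapsed):
    # E_{t-n} = e_{t-n} * exp(-rho_t / (24 * 60)): the recorded magnitude
    # falls to 1/e of its value after 24 hours (1440 minutes).
    return e_initial * math.exp(-minutes_elapsed / (24 * 60))

# A magnitude-2 entry from the emotion scale loses weight as time passes:
for hours in (0, 24, 48, 72):
    print(hours, round(decayed_emotion(2, hours * 60), 3))
```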
For example, the first previous piece of historical emotion information is recorded as happy, n = 1, the difference between the recording time and the current time is 24 hours, and e_{t-1} is 2. Applying the decay formula, the happiness value of the test subject at the current moment is E_{t-1} = 0.5. The current emotion information is then Y = max_{s∈S} p(s | E_{t-1}) = 0.5, i.e., the test subject's current emotion is happy, with a happiness value of 0.5.
The second previous piece of historical emotion information is recorded as sad, n = 2, the difference between the recording time and the current time is 48 hours, and e_{t-2} takes the default value 2. Applying the decay formula, the sadness value of the test subject at the current moment is E_{t-2} = 0.25. The emotion information value of the test subject at the current moment is then Y = max_{s∈S} p(s | E_{t-1}, E_{t-2}) = max_{s∈S} p(s | 0.5, 0.25) = 0.5, and the emotion category is happy.
Step S13: and establishing a memory association library related to the corresponding image.
The emotion type, emotion change curve, mood change curve, degree of fondness or dislike for the image information, number of likes, and number of shares of the test subject when viewing each image in the multimedia information base constitute the associated information mapped to each image. The associated information is stored in the memory association library, classified into positive associated information and negative associated information. The memory association library is arranged on a remote server. When the test subject views each image in the multimedia information base, the image information and the corresponding associated information are displayed simultaneously; alternatively, the associated information is displayed in response to a viewing instruction from the test subject.
When the test object checks each image information in the multimedia information base, the fed back correlation information related to the psychological information changes, and then the memory correlation base is updated according to the latest feedback information of the test object. And updating the emotion change of each image in the multimedia information base based on the test object by memorizing the data of the association base.
Step S14: and establishing a memory network for the corresponding test object.
Each image in the multimedia information base is associated with the emotion information in the memory association library. The display state of each image is therefore adjusted according to the test subject's psychological information, so that the subject's psychological state is kept from turning negative and the subject can strengthen the memory of related information in a positive psychological state. The image information selected according to the subject's psychological information is arranged over time to form a dynamic memory network for the subject. The memory network carries memory information that has a positive psychological effect on the test subject. When the subject's emotion changes, the image information in the memory network is dynamically adjusted: when the subject's emotion is negative, image information corresponding to positive emotions in the memory association library is extracted and displayed according to the mapping relationship, and image information associated with negative emotions is masked.
Specifically, when analysis of the test subject's historical emotion information shows that the subject's mood is negative, image information related to positive emotions in the memory association library is selected according to the mapping relationship between the image information in the multimedia information base and the associated information in the memory association library. The image information associated with positive emotions is displayed to the subject in the order of the occurrence time of its content, so that it forms a memory network carrying positive memory information. Image information associated with negative emotions is not displayed, so the subject does not recall the negative memories and does not develop negative emotions. The subject can thus memorize in a positive emotional state, with a better memory effect.
When the emotion of the test subject is positive, image information corresponding to positive and/or negative emotions in the memory association library is extracted according to the mapping relationship and displayed to the test subject.
Specifically, when the mood of the test object is obtained as a positive mood according to the historical emotional information analysis of the test object, the image information in the multimedia information base does not need to be selected and displayed according to the associated information in the associated database. At least one image information in the multimedia information base is displayed to the tester according to the time sequence of the occurrence of the graphic content, so that the at least one image information forms a unique memory network for the tester to display according to the time sequence. In the process that the tester views the image information, the tester dynamically feeds back the emotion type of each image at the same time, and the associated information in the memory association library is updated according to the emotion information fed back by the test object. And analyzing the mood change of the tester according to the emotion information dynamically fed back by the tester while the tester views the image information. When the mood change of the test person is analyzed to be negative emotion, the image information related to the negative emotion is shielded according to the updated related information in the memory related library, namely the image information related to the negative emotion is not displayed. Until the emotion change of the test person is monitored as a positive emotion, the act of shielding the image information associated with the negative emotion is cancelled. Therefore, the display rule of the image information is adjusted according to the emotion change of the tester, and a memory network which dynamically changes according to the emotion change of the tester is formed.
Even if the test object specifies the conditions of event, time, person, place, etc. in the image, in the case where the emotion of the test object is a negative emotion, image information associated with a positive emotion in the positive memory association library is first extracted in accordance with the mapping relationship, and then image information conforming to the specified conditions of the test person is selected and displayed from the image information associated with the positive emotion.
The image information associated with each of the test subject's emotion categories is arranged in chronological order to form a memory network belonging exclusively to the test subject. The positive-emotion and negative-emotion memory networks change the displayed image information based on the subject's emotional changes, strengthening the subject's memory. The invention thus avoids the resistance that forced memory reinforcement would provoke in the test subject.
It should be noted that the above-described embodiments are exemplary. Those skilled in the art, having the benefit of the present disclosure, may devise various arrangements that, although not explicitly described herein, embody the principles of the invention and fall within its scope. The description and figures are illustrative only and do not limit the claims. The scope of the invention is defined by the claims and their equivalents.

Claims (6)

1. A method of enhancing memory, the method comprising at least the steps of:
establishing an original information database, completing the user's input of the basic original information and of the related information describing the basic original information, and realizing the user's initial review of the original information, wherein the established original information database comprises a basic original information database and a related information database, the basic original information database being used for storing images related to memory information and the related information database being used for storing character information, graphic information and annotation information related to each image;
establishing a trigger information database based on trigger information provided by a user and data information of an associated information database;
the data of the associated information database is different psychological state data when the user views the original information in the original information database, and the psychological state data can be obtained by recording and analyzing the facial expression of the user, wherein the associated information database can be updated according to the latest feedback information of the user;
the trigger information provided by the user comprises time trigger data, place trigger data, content trigger data, person trigger data and emotion trigger data;
the triggering conditions in the triggering information database can establish a mapping relation with psychological state data of the associated information database and/or original information of the original information database, wherein the triggering conditions in the triggering information database comprise time triggering, place triggering, person triggering, content triggering and emotion triggering;
the emotion trigger is used for generating at least one recommendation of the original information by the trigger information database based on the mapping relation between the mood coefficient Y of the user and the psychological state data of the associated information database and the mapping relation between the psychological state data and the original information;
the emotion trigger means that the trigger information database is able to generate a recommendation of at least one piece of original information in the original information database from the predicted value of the mood coefficient Y alone, wherein,
the original information of the original information database comprises basic original information and related information describing the basic original information, wherein the basic original information is at least one of images or sounds, and the related information describing the basic original information is at least additional description of one of time, place, people and content of the basic original information;
the mood coefficient Y is used for predicting the current mood of the user probabilistically based on the user's historical mood information, the current mood information being p(s | E_{t-1}, E_{t-2}, …, E_{t-n}),
wherein s represents the current emotional information of the user, E_{t-1} is the emotional information of the previous time, E_{t-n} is the emotional information of the n-th previous time, and E_{t-i} represents the emotional information of the i-th previous time relative to the current time t.
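The three-database architecture recited in claim 1 (original information, associated psychological-state data, and the trigger mapping between them) might be modeled as in the following sketch. Every class and field name here is hypothetical, chosen only to mirror the claim's wording:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OriginalInfo:
    """Basic original information (an image or sound) plus the related
    information describing it: time, place, person and content annotations."""
    media: str                                            # identifier of the image/sound
    related: Dict[str, str] = field(default_factory=dict)

@dataclass
class MemoryDatabases:
    original: List[OriginalInfo] = field(default_factory=list)
    # Associated information database: psychological-state label -> indices
    # into `original`, updated from the user's latest feedback.
    associated: Dict[str, List[int]] = field(default_factory=dict)

    def recommend(self, mood: str) -> List[OriginalInfo]:
        """Emotion trigger: map a predicted mood (the argmax of the mood
        coefficient Y) to psychological-state data, then through that
        mapping to the original information to recommend."""
        return [self.original[i] for i in self.associated.get(mood, [])]
```

A usage example: after storing an image and mapping the state "happiness" to it, `recommend("happiness")` returns that image, while an unmapped mood returns an empty recommendation.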
2. The method for enhancing memory according to claim 1, wherein the original information database is capable of performing data update on the original information according to different psychological states of a user when viewing the original information in the original information database, and storing the updated data in the associated information database while establishing the trigger information database.
3. The method for enhancing memory according to claim 2, wherein the mood coefficient Y is analyzed based on the historical emotional information of the adjacent n times; t represents the current time, t_i denotes the time point corresponding to the adjacent i-th emotion, and the emotion value at t_i is e_{t-i}; the time difference between t and t_i is ρt = t − t_i, and the value of that emotion decayed to the current time is
E_{t-i} = e_{t-i}(t) = f(e_{t-i}, ρt) = e_{t-i} * exp(−ρt / (24*60*60)),
the formula taking into account the exponential time decay of the emotion value, where e_{t-i} is taken as 1;
according to the historical emotion values of the previous n times, the predicted mood coefficient is calculated according to the following formula:
Y = max_{s∈S} p(s | E_{t-1}, E_{t-2}, …, E_{t-n}), wherein the emotion set is S = {happiness, sadness, fear, surprise, anger, jealousy};
if the change of the mood coefficient satisfies the Markov property, Y = max_{s∈S} p(s | E_{t-1}), i.e. the current mood depends only on the mood at the last time point.
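The decay and prediction formulas of claim 3 can be sketched as follows. The frequency-weighted estimate of p(s | E_{t-1}, …, E_{t-n}) is an assumption standing in for whatever conditional model an implementation would actually learn; only the exponential decay matches the claim's formula directly.

```python
import math
from collections import Counter

SECONDS_PER_DAY = 24 * 60 * 60
EMOTIONS = ["happiness", "sadness", "fear", "surprise", "anger", "jealousy"]

def decayed_emotion_value(e: float, rho_t: float) -> float:
    """E_{t-i} = e_{t-i} * exp(-rho_t / (24*60*60)); the claim takes e_{t-i} = 1."""
    return e * math.exp(-rho_t / SECONDS_PER_DAY)

def predict_mood(history):
    """history: (emotion_label, rho_t_seconds) pairs for the previous n emotions.
    Approximates Y = argmax_s p(s | E_{t-1}, ..., E_{t-n}) by weighting each
    past emotion label with its time-decayed value."""
    weights = Counter()
    for label, rho_t in history:
        weights[label] += decayed_emotion_value(1.0, rho_t)
    return max(EMOTIONS, key=lambda s: weights[s])

def predict_mood_markov(last_emotion: str) -> str:
    """Markov simplification Y = argmax_s p(s | E_{t-1}): with no transition
    statistics available, the best guess is the most recent emotion itself."""
    return last_emotion
```

Note the unit choice: ρt is measured in seconds, so dividing by 24*60*60 makes the emotion value decay by a factor of e per day, as in the claim.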
4. The method for enhancing memory according to claim 3, wherein the mental state data further comprises mental state data actively entered by the user based on the original information, the mental state data comprising positive and negative states and being mapped to the corresponding original information, wherein,
the associated information database can extract the emotion information of the user according to the emotion represented by the graphic characters recorded by the user.
5. A system for enhancing memory, characterized in that the system comprises at least a mobile terminal (10) and a service terminal (20), the mobile terminal (10) comprising at least an original information database module (103) and a trigger information database module (105),
the original information database module (103) establishes an original information database to complete the user's input of the basic original information and of the related information describing the basic original information, and to realize the user's initial review of the original information,
the trigger information database module (105) establishes a trigger information database based on trigger information provided by a user and data information of an associated information database;
the trigger information provided by the user comprises time trigger data, place trigger data, content trigger data, person trigger data and emotion trigger data;
the triggering conditions in the triggering information database can establish a mapping relation with psychological state data of the associated information database and/or original information of the original information database, wherein the triggering conditions in the triggering information database comprise time triggering, place triggering, person triggering, content triggering and emotion triggering;
the data of the associated information database is different psychological state data when the user views the basic original information in the original information database, and the psychological state data can be obtained by recording and analyzing the facial expression of the user, wherein the associated information database can be updated according to the latest feedback information of the user;
the emotion trigger is used for generating at least one recommendation of the original information by the trigger information database based on the mapping relation between the mood coefficient Y of the user and the psychological state data of the associated information database and the mapping relation between the psychological state data and the original information;
the emotion trigger means that the trigger information database is also able to extract and generate a recommendation of at least one piece of original information in the original information database from the predicted value of the mood coefficient Y alone, wherein,
the original information of the original information database comprises basic original information and related information describing the basic original information, wherein the basic original information is at least one of images or sounds, and the related information describing the basic original information is at least additional description of one of time, place, people and content of the basic original information;
the established original information database comprises a basic original information base and a related information base, wherein the basic original information base is used for storing images related to memory information, and the related information base is used for storing character information, graphic information and annotation information related to each image;
the trigger information database extracts and generates recommendation of the original information in at least one original information database based on the trigger information provided by the user and the data information of the associated information database so as to assist the user in establishing a memory network;
the mood coefficient Y is used for predicting the current mood of the user probabilistically based on the user's historical mood information, the current mood information being p(s | E_{t-1}, E_{t-2}, …, E_{t-n}),
wherein s denotes the current emotional information of the user, E_{t-1} is the emotional information of the previous time, E_{t-n} is the emotional information of the n-th previous time, and E_{t-i} denotes the emotional information of the i-th previous time relative to the current time t.
6. The system for enhancing memory according to claim 5, wherein the original information database module (103) is capable of performing data update on the original information according to different psychological states when a user views the original information in the original information database, and storing the updated data in the associated information database.
CN201910221663.9A 2016-05-24 2016-05-24 Method and system for enhancing memory Active CN109977101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910221663.9A CN109977101B (en) 2016-05-24 2016-05-24 Method and system for enhancing memory

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610349270.2A CN105956174B (en) 2016-05-24 2016-05-24 A method for constructing a memory chain and reinforcing memory
CN201910221663.9A CN109977101B (en) 2016-05-24 2016-05-24 Method and system for enhancing memory

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201610349270.2A Division CN105956174B (en) 2016-05-24 2016-05-24 A method for constructing a memory chain and reinforcing memory

Publications (2)

Publication Number Publication Date
CN109977101A CN109977101A (en) 2019-07-05
CN109977101B true CN109977101B (en) 2022-01-25

Family

ID=56909561

Family Applications (4)

Application Number Title Priority Date Filing Date
CN201610349270.2A Active CN105956174B (en) 2016-05-24 2016-05-24 A method for constructing a memory chain and reinforcing memory
CN201910221091.4A Active CN110222026B (en) 2016-05-24 2016-05-24 Method for constructing memory network and predicting emotion by using memory network
CN201910221663.9A Active CN109977101B (en) 2016-05-24 2016-05-24 Method and system for enhancing memory
CN201910221503.4A Active CN109977100B (en) 2016-05-24 2016-05-24 Method for enhancing positive psychological state

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN201610349270.2A Active CN105956174B (en) 2016-05-24 2016-05-24 A method for constructing a memory chain and reinforcing memory
CN201910221091.4A Active CN110222026B (en) 2016-05-24 2016-05-24 Method for constructing memory network and predicting emotion by using memory network

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910221503.4A Active CN109977100B (en) 2016-05-24 2016-05-24 Method for enhancing positive psychological state

Country Status (1)

Country Link
CN (4) CN105956174B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107870999B (en) * 2017-11-07 2020-09-08 Oppo广东移动通信有限公司 Multimedia playing method, device, storage medium and electronic equipment
CN108937967A (en) * 2018-05-29 2018-12-07 智众伟业(天津)科技有限公司南宁分公司 A kind of psychology data memory promotion detection method and system based on VR technology
CN109522927A (en) * 2018-10-09 2019-03-26 北京奔影网络科技有限公司 Sentiment analysis method and device for user message
CN110502481A (en) * 2019-01-29 2019-11-26 上海摩通文化发展有限公司 A kind of file picture management system and method
CN112906399B (en) * 2021-02-20 2023-11-10 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for determining emotional state
CN113572893B (en) * 2021-07-13 2023-03-14 青岛海信移动通信技术股份有限公司 Terminal device, emotion feedback method and storage medium
US20230041497A1 (en) * 2021-08-03 2023-02-09 Sony Interactive Entertainment Inc. Mood oriented workspace
CN115274061B (en) * 2022-09-26 2023-01-06 广州美术学院 Interaction method, device, equipment and storage medium for soothing psychology of patient

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102300163A (en) * 2011-09-22 2011-12-28 宇龙计算机通信科技(深圳)有限公司 Information pushing method, mobile terminal and system
CN103235644A (en) * 2013-04-15 2013-08-07 北京百纳威尔科技有限公司 Information displaying method and device

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
US8154615B2 (en) * 2009-06-30 2012-04-10 Eastman Kodak Company Method and apparatus for image display control according to viewer factors and responses
CN102222422A (en) * 2010-04-16 2011-10-19 英业达股份有限公司 Assisted learning system and method for vocabularies
CN102314775A (en) * 2010-07-07 2012-01-11 辛绍祺 Learning memory device and auxiliary method thereof
US9251184B2 (en) * 2011-01-07 2016-02-02 International Business Machines Corporation Processing of destructive schema changes in database management systems
CN102663445B (en) * 2012-03-29 2015-04-15 中国科学院上海光学精密机械研究所 Image understanding system based on layered temporal memory algorithm and image understanding method thereof
CN102970326B (en) * 2012-10-22 2015-11-25 百度在线网络技术(北京)有限公司 A kind of method and apparatus of the mood indication information for sharing users
CN102946491B (en) * 2012-11-23 2014-11-05 广东欧珀移动通信有限公司 Method and system for automatically adjusting wallpaper according to user mood
WO2014116262A1 (en) * 2013-01-28 2014-07-31 Empire Technology Development Llc Communication using handwritten input
CN103530283A (en) * 2013-10-25 2014-01-22 苏州大学 Method for extracting emotional triggers
KR101584685B1 (en) * 2014-05-23 2016-01-13 서울대학교산학협력단 A memory aid method using audio-visual data
KR101719974B1 (en) * 2014-07-02 2017-03-27 한국전자통신연구원 Emotion Coaching Apparatus and Method Using Network
CN104571500B (en) * 2014-12-11 2019-04-23 惠州Tcl移动通信有限公司 Strengthen the method and wearable intelligent equipment of memory
CN104636425B (en) * 2014-12-18 2018-02-13 北京理工大学 A kind of network individual or colony's Emotion recognition ability prediction and method for visualizing
CN105389548A (en) * 2015-10-23 2016-03-09 南京邮电大学 Love and marriage evaluation system and method based on face recognition


Non-Patent Citations (1)

Title
"Research on an HMM-based Emotion Model and Its Application in a Psychological Counseling Expert System"; Dong Jing; China Master's Theses Full-text Database (Electronic Journal); 2009-10-15; abstract, page 1 to page 29, sixth line from the bottom *

Also Published As

Publication number Publication date
CN105956174B (en) 2019-10-29
CN109977101A (en) 2019-07-05
CN110222026B (en) 2021-03-02
CN105956174A (en) 2016-09-21
CN109977100A (en) 2019-07-05
CN110222026A (en) 2019-09-10
CN109977100B (en) 2021-01-22

Similar Documents

Publication Publication Date Title
CN109977101B (en) Method and system for enhancing memory
US11929156B2 (en) Method and system for providing automated conversations
US11875895B2 (en) Method and system for characterizing and/or treating poor sleep behavior
US20230402147A1 (en) Method and system for improving care determination
Washington et al. A wearable social interaction aid for children with autism
US20210248656A1 (en) Method and system for an interface for personalization or recommendation of products
US20090132275A1 (en) Determining a demographic characteristic of a user based on computational user-health testing
US20090119154A1 (en) Determining a demographic characteristic based on computational user-health testing of a user interaction with advertiser-specified content
US20120164613A1 (en) Determining a demographic characteristic based on computational user-health testing of a user interaction with advertiser-specified content
US20090118593A1 (en) Determining a demographic characteristic based on computational user-health testing of a user interaction with advertiser-specified content
US20140142967A1 (en) Method and system for assessing user engagement during wellness program interaction
US20220280085A1 (en) System and Method for Patient Monitoring
EP3897388B1 System and method for reading and analysing behaviour including verbal, body language and facial expressions in order to determine a person's congruence
CN110033867A (en) Merge the polynary cognitive disorder interfering system and method for intervening path
US20210183509A1 (en) Interactive user system and method
US20230157631A1 (en) Systems and methods for monitoring and control of sleep patterns
KR101836985B1 (en) Smart e-learning management server for searching jobs
Dawood et al. Natural-spontaneous affective-cognitive dataset for adult students with and without asperger syndrome
Parousidou Personalized Machine Learning Benchmarking for Stress Detection
US20240086761A1 (en) Neuroergonomic api service for software applications
EP3846177A1 (en) An interactive user system and method
Cernisov et al. A Doctoral Dissertation submitted to Keio University Graduate School of Media Design

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant