CN110018738B - Emotion conversion system based on real scene emotion expression
- Publication number
- CN110018738B CN110018738B CN201910159043.7A CN201910159043A CN110018738B CN 110018738 B CN110018738 B CN 110018738B CN 201910159043 A CN201910159043 A CN 201910159043A CN 110018738 B CN110018738 B CN 110018738B
- Authority
- CN
- China
- Prior art keywords
- emotion
- scene
- visual
- current
- real
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013: Eye tracking input arrangements
- G06T11/001: Texturing; Colouring; Generation of texture or colour
- G06T5/70: Denoising; Smoothing
- G06T7/13: Edge detection
- G06V20/20: Scenes; Scene-specific elements in augmented reality scenes
- G06F2203/011: Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
Abstract
The invention discloses an emotion conversion system based on real scene emotion expression for use in a VR system, which reduces the stimulation the visual environment exerts on the user. Emotion calculation is performed mainly on the current visual field focal region, while local images of that region are simultaneously examined for objects with potentially strong visual stimulation to the wearer. Such objects are weakened by a visual-area-stimulation-weakening network and then recombined with the real scene. When the emotion of the current scene does not match the scene emotion selected by the user, the scene undergoes emotion conversion so that its emotion meets the requirement, and the converted scene is displayed on the VR device in real time.
Description
Technical Field
The invention relates to the field of information technology, and in particular to an emotion conversion system based on real scene emotion expression.
Background
Human emotion affects human cognition and daily behavior, and emotion calculation enables a computer to quantitatively understand human emotional feeling. At present, the objects of emotion calculation mainly include electroencephalograms, multimedia data (such as audio and images), and text materials. For images, emotion calculation depends mainly on the hue and texture features of the input image. Effective emotion calculation enables a computer to extract the features of an input object's emotional expression and to change that expression according to a specified emotion. Currently, emotion calculation is applied mainly to multimedia data mining, for example on social platforms. However, it has not been well applied to psychotherapy, owing to the lack of targeted methods and effective protocols.
Disclosure of Invention
In order to overcome the defects and shortcomings of the prior art, the invention provides an emotion conversion system based on real scene emotion expression. The real scene is converted to the selected scene emotion and displayed on the VR device, so that the stimulation of the real scene on the user is reduced and accidents are avoided.
The invention adopts the following technical scheme:
an emotion conversion system based on real scene emotion expression, used for a VR system provided with a first camera for collecting the real-time scene video watched by the user and a second camera for collecting the user's eye movements, the emotion conversion system comprising:
The module for acquiring the visual field focal region judges the scene difference of the image data of the real-time scene video stream watched by the user to obtain the visual field focal region of the current frame image;
the scene emotion calculation module is used for performing scene calculation on the image data of the visual field focal region to obtain an emotion distribution vector for scene emotion proofreading, and for judging whether the current scene emotion matches the scene emotion selected by the user; if not, the real-time scene emotion conversion module is entered;
the visual stimulation object detection module is used for detecting a visual stimulation object in the visual field focal region and obtaining the position of a potential visual stimulation object in the visual field focal region;
the weakening visual area stimulation module is used for performing edge detection on the potential visual stimulation object, blurring the object once its contour is obtained, and then rendering a new layer of texture over the blurred object;
and the real-time scene emotion conversion module is used for combining the real scene image with the weakened, re-textured potential visual stimulation object, converting the combined image into the scene emotion expression selected by the user, displaying it on the VR device in real time, and calculating the overall effect of the converted result to judge the conversion effect.
The step of obtaining the visual field focal region by the visual field focal region obtaining module is as follows:
firstly, carrying out scene difference judgment on the transmitted current frame image data and the previous frame image data, specifically comprising the following steps:
let the current frame image data be $I_{cur}$, the previous frame image data be $I_{pre}$, $W$ the scene difference between the two frames, $H_1$ the threshold of the scene difference, and $g$ the result of the convolution operation on the frame data, so that

$$W = \sum (g_{cur} - g_{pre});$$

when $W \geq H_1$, the visual field focal region of the current frame is calculated, specifically:
let the camera shoot either eye of the user, with the eye's left corner at $(x_1, y_1)$, its right corner at $(x_2, y_2)$, and the pupil center at $(x, y)$; the horizontal relative position $S$ of the pupil in the eye is then

$$S = \frac{x - x_1}{x_2 - x_1};$$

let the length of the current frame image be $h$ and its width be $w$; the upper-left corner of the current visual field focal region is at $((S - 0.25) \times w,\ 0.25 \times h)$, the lower-right corner is at $((S + 0.25) \times w,\ 0.75 \times h)$, and the focal region is the rectangle defined by these two coordinates.
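For illustration, a minimal Python sketch of this module follows. It assumes grayscale frames as NumPy arrays; the 3×3 mean filter used as the convolution, the value of the threshold $H_1$, and the function names are assumptions, since the text does not fix them.

```python
import cv2
import numpy as np

H1 = 1e5  # assumed scene-difference threshold, tuned per device in practice

def scene_difference(cur: np.ndarray, pre: np.ndarray) -> float:
    """W = sum(g_cur - g_pre), with g a convolved (here mean-filtered) frame."""
    g_cur = cv2.blur(cur.astype(np.float32), (3, 3))
    g_pre = cv2.blur(pre.astype(np.float32), (3, 3))
    return float(np.sum(g_cur - g_pre))  # signed sum, exactly as in the formula

def focal_region(frame: np.ndarray, x1: float, x2: float, x: float) -> np.ndarray:
    """Crop the visual field focal region given eye-corner and pupil x-coordinates."""
    h, w = frame.shape[:2]
    S = (x - x1) / (x2 - x1)              # horizontal relative pupil position
    left = max(int((S - 0.25) * w), 0)    # upper-left corner ((S-0.25)w, 0.25h)
    right = min(int((S + 0.25) * w), w)   # lower-right corner ((S+0.25)w, 0.75h)
    return frame[int(0.25 * h):int(0.75 * h), left:right]
```

A new focal region would be computed only for frames on which scene_difference returns at least H1; otherwise the system performs no further operation on the frame.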
The visual stimulation object detection module is used for detecting visual stimulation objects in the visual field focal area and obtaining the position of potential visual stimulation objects in the visual field focal area, and specifically comprises the following steps:
The visual field focal region image is divided into local images, and each local image is judged for whether it contains a visual stimulation object; if so, the position of the local image within the focal region and the position of the focal region within the scene are determined.
The scene emotion calculation module transmits the image of the focus area of the visual field to a scene calculation network, calculates the emotion distribution vector of the current scene, and then performs scene emotion proofreading on the emotion distribution vector of the scene, wherein the scene emotion proofreading specifically comprises the following steps:
let $W_2$ be the difference value of the scene emotion distribution, $E$ the emotion distribution vector, $E_{max}$ the maximum value in the emotion distribution vector, $E_i$ the $i$-th element in the emotion distribution vector, and $H_2$ the threshold of the emotion distribution difference value.

When $W_2 \geq H_2$, the emotion label corresponding to $E_{max}$ is the emotion label of the current scene; otherwise, the scene currently watched by the user is judged to have a fuzzy emotion label.
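The exact expression for $W_2$ does not survive in the text; the sketch below therefore assumes one plausible form, the sum of the gaps between the maximum and the other elements, and the threshold value and function name are likewise assumptions.

```python
import numpy as np

LABELS = ["anger", "disgust", "fear", "pleasure", "sadness", "surprise"]
H2 = 2.0  # assumed threshold for the emotion-distribution difference value

def proofread_scene_emotion(E: np.ndarray) -> str:
    """Return the scene emotion label, or 'fuzzy' for an ambiguous scene."""
    W2 = float(np.sum(E.max() - E))  # assumed form: sum of gaps below the maximum
    return LABELS[int(np.argmax(E))] if W2 >= H2 else "fuzzy"

# A sharply peaked distribution yields a definite label:
# proofread_scene_emotion(np.array([0.05, 0.05, 0.7, 0.1, 0.05, 0.05])) -> "fear"
```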
The new texture is specifically a texture whose emotion label is pleasure.
The emotion distribution vector is a vector consisting of 6 elements corresponding to anger, disgust, fear, pleasure, sadness, and surprise, respectively. The maximum value of the emotion distribution vector does not by itself determine the emotion label of the current real scene: only when the maximum value of the scene emotion distribution vector satisfies the condition that the difference value of the scene emotion distribution is greater than the threshold $H_2$ can the emotion label corresponding to the maximum value be determined as the emotion label of the current scene; when the maximum value does not satisfy this condition, the current scene is considered a fuzzy emotion scene.
The real-time scene emotion conversion module comprises a real-time scene emotion conversion network, specifically an adversarial network trained on a real scene picture dataset with emotion labels.
The invention has the following beneficial effects:
(1) Through the module for acquiring the visual field focal region, the system performs no further operation when the difference between consecutive frames is insufficient; only when the difference is large enough is the focal region currently viewed by the wearer extracted and the subsequent operations carried out. Emotion calculation is performed only on the focal region, and potential visual stimulation objects are detected only within it. This effectively reduces the computational cost of the system and improves its fluency on VR devices;
(2) When the current scene emotion is determined in the scene emotion calculation and detection module, a scene emotion proofreading operation is applied: the emotion label corresponding to the maximum of the computed scene emotion distribution vector is not taken directly as the scene emotion label, but is adopted only when the difference between the maximum and the other elements is large enough. This ensures that the determined emotion expression of the current scene is unambiguous for most viewers, improving scene immersion and the recognizability of the scene emotion;
(3) In the visual area weakening module, the visual stimulation an object exerts on the viewer is weakened by blurring potentially stimulating objects in the scene and rendering a new texture over them, rather than performing emotion conversion only on the color and texture of the scene. This reduces the visual stimulation of the environment on the wearer to a greater extent and improves the wearer's pleasure;
(4) The real-time scene emotion conversion module includes an overall-effect calculation after scene emotion conversion, which can detect results that would affect the wearer's experience, such as obvious color blocks or unnatural transitions between colors. The parameters of the real-time scene emotion conversion network are adjusted according to the overall-effect results, so the conversion effect of the system improves continuously during use and its adaptability to different environments increases.
Drawings
FIG. 1 is a flow chart of the operation of the present invention;
fig. 2(a) and 2(b) are schematic views of installation positions of the first camera and the second camera according to the present invention, respectively.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited to these examples.
Examples
An emotion conversion system based on real scene emotion expression is suitable for a VR system. The system converts the emotion of the real scene the user is viewing and mainly comprises a visual field focal region acquisition module, a scene emotion calculation module, a visual stimulation object detection module, a weakening visual area stimulation module, and a real-time scene emotion conversion module.
As shown in fig. 1, 2(a) and 2(b), when a user puts on a VR device equipped with the system, the user enters the system and selects one of the six emotion labels of anger, disgust, fear, pleasure, sadness, and surprise as the scene emotion. After the scene emotion is selected, the system starts work and enters a preparation state: a first camera C1 in front of the VR device begins collecting the real scene images watched by the user, and a second camera C2 inside the VR device begins shooting images of the left eyeball, locating the pupil of the left eyeball in real time and calculating its relative position within the eye. In this embodiment, the left and right eyeballs of the wearer are assumed consistent by default, and the motion of the left eyeball is taken to represent the motion track of both eyes.
The module for acquiring the visual field focal region judges scene differences of image data of the real-time scene video stream watched by the user to obtain the visual field focal region of the current frame image, and specifically comprises the following steps:
firstly, carrying out scene difference judgment on the transmitted current frame image data and the previous frame image data, specifically comprising the following steps:
let the current frame image data be $I_{cur}$, the previous frame image data be $I_{pre}$, $W$ the scene difference between the two frames, $H_1$ the threshold of the scene difference, and $g$ the result of the convolution operation on the frame data, so that

$$W = \sum (g_{cur} - g_{pre});$$

when $W \geq H_1$, the visual field focal region of the current frame is calculated, specifically:
let the camera shoot either eye of the user, with the eye's left corner at $(x_1, y_1)$, its right corner at $(x_2, y_2)$, and the pupil center at $(x, y)$; the horizontal relative position $S$ of the pupil in the eye is then

$$S = \frac{x - x_1}{x_2 - x_1};$$

let the length of the current frame image be $h$ and its width be $w$; the upper-left corner of the current visual field focal region is then at $((S - 0.25) \times w,\ 0.25 \times h)$ and the lower-right corner at $((S + 0.25) \times w,\ 0.75 \times h)$.
By default, once system preparation is complete, the first operation calculates the scene difference between the current frame and a blank frame.
The visual field focal region image is transmitted simultaneously to the scene emotion calculation module and the visual stimulation object detection module, which pass it to the internal scene emotion calculation and visual stimulation object detection networks.
The scene emotion calculation module is used for carrying out scene calculation on the image data of the focus area of the visual field to obtain emotion distribution vectors for carrying out scene emotion correction, judging whether the current scene emotion accords with the user selected scene emotion or not, and entering the scene emotion conversion module if the current scene emotion does not accord with the user selected scene emotion;
the visual stimulation object detection module is used for detecting a visual stimulation object in the visual field focal region and obtaining the position of a potential visual stimulation object in the visual field focal region;
Scene emotion calculation transmits the visual field focal region image to the scene calculation network, computes the current scene emotion distribution vector, and passes the obtained vector to scene emotion proofreading. The scene emotion proofreading is as follows:
let $W_2$ be the difference value of the scene emotion distribution, $E$ the emotion distribution vector, $E_{max}$ the maximum value in the emotion distribution vector, $E_i$ the $i$-th element in the emotion distribution vector, and $H_2$ the threshold of the emotion distribution difference value.

When $W_2 \geq H_2$, the emotion label corresponding to $E_{max}$ is the emotion label of the current scene; otherwise, the scene currently watched by the wearer is judged to have a fuzzy emotion label;
the scene emotion calculation network is configured as a convolutional neural network trained on a large-scale real scene picture dataset annotated with emotion distribution vectors; its last layer is a fully connected layer, its input is a real scene picture, and its output is an emotion distribution vector;
in this embodiment, the visual stimulation object detection network is configured as a convolutional neural network trained on a large-scale training set of visual stimulation object images; its input is image data and its output is a classification result judging whether a potential visual stimulation object is present;
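A minimal PyTorch sketch of such a scene emotion calculation network follows; the layer sizes and depths are assumptions, since the text specifies only a CNN whose last layer is fully connected and whose output is a 6-element emotion distribution (the detection network would look similar, with a classification head instead).

```python
import torch
import torch.nn as nn

class SceneEmotionNet(nn.Module):
    """CNN backbone plus a fully connected last layer producing a
    6-element emotion distribution (softmax output, sums to 1)."""
    def __init__(self, n_emotions: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, n_emotions)  # fully connected last layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.fc(self.features(x).flatten(1)), dim=1)
```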
the weakening visual area visual stimulation module is connected with the visual stimulation object detection, and the visual stimulation object detection transmits the local image position containing the potential stimulation object to the weakening visual area stimulation module. After the approximate position of the potential stimulus object is obtained (i.e., the position of the layout image including the potential stimulus object), the edge detection is performed on the local region to obtain the contour coordinates of the object. Pooling image data of the object according to the contour coordinates of the object to enable the image of the object to be blurred, and rendering a layer of texture with positive emotion labels on the object according to the contour coordinates of the object.
The texture with a positive emotion label in this embodiment refers to a texture whose emotion label is pleasure.
When a potential visual stimulation object exists and has been weakened by the weakening visual area stimulation module, the module's output is an image R. Using the position of the potential stimulation object within the visual field focal region and the position of the focal region within the real scene shot by the C1 camera, the image data of the potential visual stimulation object in the current real scene image I is replaced at the corresponding position by the weakened object image data R bearing the positive-emotion-label texture, yielding $I_{combine}$. $I_{combine}$ is then transmitted to the real-time scene emotion conversion module. When the current visual field focal region contains no potential visual stimulation object, $I_{combine} = I$ is transmitted to the real-time scene emotion conversion module instead. The real-time scene emotion conversion module is connected to the weakening visual area stimulation module, to scene emotion proofreading in the scene emotion calculation and detection module, and to the real-time video stream data. It obtains the weakened real scene $I_{combine}$ from the weakening module and the real-time video stream, and obtains from scene emotion proofreading whether the emotion label of the current real scene meets the selected scene emotion. When it does not, the image $I_{combine}$ is fed into the real-time scene conversion network so that the emotion expression of the output $I_{combine\_new}$ meets the emotion requirement of the selected scene. After the conversion network completes a conversion, the result $I_{combine\_new}$ is displayed on the VR device in real time; the overall effect of the converted result is calculated to judge whether obvious color blocks, unnatural transitions between colors, or similar defects appear, and the calculated result is fed back to the real-time scene conversion network, whose parameters are adjusted to improve its effect;
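A minimal sketch of the combining step follows; the coordinate parameters are assumptions standing in for the positions reported by the detection module.

```python
import numpy as np

def combine_scene(I: np.ndarray, R: np.ndarray,
                  focal_tl: tuple[int, int], local_tl: tuple[int, int]) -> np.ndarray:
    """Paste the weakened object image R into the frame I to form I_combine.
    focal_tl: top-left (x, y) of the focal region within I;
    local_tl: top-left (x, y) of the local image within the focal region."""
    I_combine = I.copy()
    x = focal_tl[0] + local_tl[0]
    y = focal_tl[1] + local_tl[1]
    h, w = R.shape[:2]
    I_combine[y:y + h, x:x + w] = R
    return I_combine
```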
the real-time scene emotion conversion network described in this embodiment is configured as a countermeasure network trained based on a large-scale real scene picture data set with emotion tags, and performs scene emotion conversion on an input real scene image according to a selected scene emotion.
In this embodiment, the emotion distribution vector is a vector of 6 elements, each a probability value corresponding to one of the six Ekman emotion classification labels of anger, disgust, fear, pleasure, sadness, and surprise; the probability values sum to 1. The maximum value of the emotion distribution vector does not by itself determine the emotion label of the current real scene: only when the maximum value of the scene emotion distribution vector satisfies the condition that the difference value of the scene emotion distribution is greater than the threshold $H_2$ can the emotion label corresponding to the maximum value be determined as the emotion label of the current scene. When the maximum value of the scene emotion distribution vector does not satisfy this condition, the current scene is considered a fuzzy emotion scene.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited thereto; any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent replacement and is included within the scope of the present invention.
Claims (7)
1. An emotion conversion system based on real scene emotion expression, used for a VR system provided with a first camera for collecting real-time scene videos watched by a user and a second camera for collecting eye movements of the user, the emotion conversion system comprising:
The module for acquiring the visual field focal region judges the scene difference of the image data of the real-time scene video stream watched by the user to obtain the visual field focal region of the current frame image;
the scene emotion calculation module is used for carrying out scene calculation on the image data of the focus area of the visual field to obtain emotion distribution vectors for scene emotion correction, judging whether the current scene emotion accords with the scene emotion selected by the user or not, and entering the real-time scene emotion conversion module if the current scene emotion does not accord with the scene emotion selected by the user;
the visual stimulation object detection module is used for detecting a visual stimulation object in the visual field focal region and obtaining the position of a potential visual stimulation object in the visual field focal region;
the weakening visual area stimulation module is used for carrying out edge detection on the potential visual stimulation object, blurring the object after its contour is obtained, and then rendering a new layer of texture over the blurred object;
and the real-time scene emotion conversion module is used for combining the real scene image with the weakened, re-textured potential visual stimulation object, converting the combined image into the scene emotion expression selected by the user, displaying it on the VR device in real time, and calculating the overall effect of the converted result to judge the conversion effect.
2. The emotion conversion system of claim 1, wherein the step of obtaining the focal region of the field of view by the module for obtaining the focal region of the field of view is as follows:
firstly, carrying out scene difference judgment on the transmitted current frame image data and the previous frame image data, specifically comprising the following steps:
let the current frame image data be $I_{cur}$, the previous frame image data be $I_{pre}$, $W$ the scene difference between the two frames, $H_1$ the threshold of the scene difference, and $g$ the result of the convolution operation on the frame data, so that

$$W = \sum (g_{cur} - g_{pre});$$

when $W \geq H_1$, the visual field focal region of the current frame is calculated, specifically:
let the camera shoot either eye of the user, with the eye's left corner at $(x_1, y_1)$, its right corner at $(x_2, y_2)$, and the pupil center at $(x, y)$; the horizontal relative position $S$ of the pupil in the eye is then

$$S = \frac{x - x_1}{x_2 - x_1};$$

let the length of the current frame image be $h$ and its width be $w$; the upper-left corner of the current visual field focal region is at $((S - 0.25) \times w,\ 0.25 \times h)$, the lower-right corner of the current visual field focal region is at $((S + 0.25) \times w,\ 0.75 \times h)$, and the visual field focal region is the rectangle defined by these two coordinates.
3. The emotion conversion system according to claim 1, wherein the visual stimulus object detection module is configured to perform visual stimulus object detection on the visual field focal region and obtain the position of the potential visual stimulus object in the visual field focal region, and specifically:
The visual field focal region image is divided into local images, and each local image is judged for whether it contains a visual stimulation object; if so, the position of the local image within the focal region and the position of the focal region within the scene are determined.
4. The emotion conversion system according to claim 1, wherein the scene emotion calculation module transmits the view focal region image to a scene calculation network, calculates a current scene emotion distribution vector, and then performs scene emotion proofreading on the scene emotion distribution vector, and the scene emotion proofreading specifically includes:
let $W_2$ be the difference value of the scene emotion distribution, $E$ the emotion distribution vector, $E_{max}$ the maximum value in the emotion distribution vector, $E_i$ the $i$-th element in the emotion distribution vector, and $H_2$ the threshold of the emotion distribution difference value;

when $W_2 \geq H_2$, the emotion label corresponding to $E_{max}$ is the emotion label of the current scene; otherwise, the scene currently watched by the user is judged to have a fuzzy emotion label.
5. The emotion conversion system of claim 1, wherein the new texture is specifically a texture whose emotion label is pleasure.
6. The emotion conversion system of claim 4, wherein the emotion distribution vector is a vector consisting of 6 elements corresponding to anger, disgust, fear, pleasure, sadness, and surprise, respectively; the maximum value of the emotion distribution vector does not by itself determine the emotion label of the current real scene; only when the maximum value of the scene emotion distribution vector satisfies the condition that the difference value of the scene emotion distribution is greater than the threshold $H_2$ can the emotion label corresponding to the maximum value be determined as the emotion label of the current scene; when the maximum value of the scene emotion distribution vector does not satisfy this condition, the current scene is considered a fuzzy emotion scene.
7. The emotion conversion system of claim 1, wherein the real-time scene emotion conversion module comprises a real-time scene emotion conversion network, specifically an adversarial network trained on a real scene picture dataset with emotion labels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910159043.7A CN110018738B (en) | 2019-03-04 | 2019-03-04 | Emotion conversion system based on real scene emotion expression |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110018738A CN110018738A (en) | 2019-07-16 |
CN110018738B (en) | 2021-09-21
Family
ID=67189251
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910159043.7A Active CN110018738B (en) | 2019-03-04 | 2019-03-04 | Emotion conversion system based on real scene emotion expression |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110018738B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111259674B (en) * | 2020-01-13 | 2023-07-25 | 山东浪潮科学研究院有限公司 | Text proofreading and emotion analysis method, equipment and medium based on GAN network |
CN111339878B (en) * | 2020-02-19 | 2023-06-20 | 华南理工大学 | Correction type real-time emotion recognition method and system based on eye movement data |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102902356A (en) * | 2012-09-18 | 2013-01-30 | 华南理工大学 | Gesture control system and control method thereof |
CN104257380A (en) * | 2014-10-22 | 2015-01-07 | 南京邮电大学 | Electroencephalograph collecting and processing system |
CN104915658A (en) * | 2015-06-30 | 2015-09-16 | 东南大学 | Emotion component analyzing method and system based on emotion distribution learning |
CN106648107A (en) * | 2016-12-30 | 2017-05-10 | 包磊 | VR scene control method and apparatus |
CN106874914A (en) * | 2017-01-12 | 2017-06-20 | 华南理工大学 | A kind of industrial machinery arm visual spatial attention method based on depth convolutional neural networks |
CN107578807A (en) * | 2017-07-17 | 2018-01-12 | 华南理工大学 | A kind of creation method of virtual reality emotion stimulating system |
CN109416842A (en) * | 2016-05-02 | 2019-03-01 | 华纳兄弟娱乐公司 | Geometric match in virtual reality and augmented reality |
CN109887095A (en) * | 2019-01-22 | 2019-06-14 | 华南理工大学 | A kind of emotional distress virtual reality scenario automatic creation system and method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9781392B2 (en) * | 2015-09-16 | 2017-10-03 | Intel Corporation | Facilitating personal assistance for curation of multimedia and generation of stories at computing devices |
- 2019-03-04 (CN): application CN201910159043.7A filed, patent CN110018738B granted, status Active
Non-Patent Citations (2)
Title |
---|
Application of Affective Virtual Human Technology in Human-Computer Interaction; Lin Benjing; China Masters' Theses Full-text Database, Information Science and Technology; 2010-07-31; full text *
Survey of Virtual Reality Augmentation Technology; Zhou Zhong et al.; Scientia Sinica Informationis; 2015-02-20; full text *
Also Published As
Publication number | Publication date |
---|---|
CN110018738A (en) | 2019-07-16 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |