CN115113733A - Information generation method and device - Google Patents

Information generation method and device

Info

Publication number
CN115113733A
Authority
CN
China
Prior art keywords
user
preset
emotion
information
pupil diameter
Prior art date
Legal status
Pending
Application number
CN202210802579.8A
Other languages
Chinese (zh)
Inventor
刘欣
宋浩杰
陈双喜
Current Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN202210802579.8A
Publication of CN115113733A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0016 Operational features thereof
    • A61B3/0025 Operational features thereof characterised by electronic signal processing, e.g. eye models
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/11 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance or diameter of pupils
    • A61B3/112 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance or diameter of pupils for measuring diameter of pupils
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14 Arrangements specially adapted for eye photography
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/163 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6802 Sensor mounted on worn items
    • A61B5/6803 Head-worn items, e.g. helmets, masks, headphones or goggles
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Artificial Intelligence (AREA)
  • Theoretical Computer Science (AREA)
  • Psychiatry (AREA)
  • Pathology (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychology (AREA)
  • Physiology (AREA)
  • Human Computer Interaction (AREA)
  • Fuzzy Systems (AREA)
  • Social Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Educational Technology (AREA)
  • Developmental Disabilities (AREA)
  • Child & Adolescent Psychology (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Optics & Photonics (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)

Abstract

The application discloses an information generation method and device, relating to the technical field of deep learning. One embodiment of the method comprises: acquiring the variation range of the focus area of a user's line of sight on a smart display device within a preset time period; in response to determining that the variation range satisfies a preset condition, acquiring pupil diameter variation data of the user within the preset time period; and determining emotion information of the user toward the focus area based on the pupil diameter variation data. The embodiment effectively improves the accuracy of the emotion information generated for the focus area.

Description

Information generation method and device
Technical Field
The present application relates to the field of computer technologies, in particular to the field of deep learning technologies, and more particularly to an information generation method and apparatus.
Background
With the development of metaverse-related technologies, more and more users use metaverse smart devices. Accurately perceiving users' emotional states toward the people and things in a virtual space, and thereby learning their hobbies, interests, and psychological states, is an important metaverse technology. Studies at home and abroad usually take the measured pupil size of the observed person as an objective index for evaluating emotional state.
In the prior art, the user's emotion is usually determined from the pupil size at a single point in time, but this method is easily interfered with by the environment and lacks an effective way to accurately identify the target object or target area.
Disclosure of Invention
The embodiments of the present application provide an information generation method, apparatus, device, and storage medium.
According to a first aspect, an embodiment of the present application provides an information generation method, including: acquiring the variation range of a focus area of a user's line of sight on a smart display device within a preset time period; in response to determining that the variation range satisfies a preset condition, acquiring pupil diameter variation data of the user within the preset time period; and determining emotion information of the user toward the focus area based on the pupil diameter variation data.
According to a second aspect, an embodiment of the present application provides an information generation apparatus, which comprises a first acquisition module, a second acquisition module, and an emotion determination module. The first acquisition module is configured to acquire the variation range of a focus area of a user's line of sight on a smart display device within a preset time period; the second acquisition module is configured to acquire pupil diameter variation data of the user within the preset time period in response to determining that the variation range satisfies a preset condition; and the emotion determination module is configured to determine emotion information of the user toward the focus area based on the pupil diameter variation data.
According to a third aspect, embodiments of the present application provide smart glasses comprising one or more processors; a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the information generating method as any one of the embodiments of the first aspect.
According to a fourth aspect, embodiments of the present application provide a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the information generating method according to any one of the embodiments of the first aspect.
The method acquires the variation range of the focus area of the user's line of sight on the smart display device within a preset time period; in response to determining that the variation range satisfies a preset condition, acquires pupil diameter variation data of the user within the preset time period; and determines emotion information of the user toward the focus area based on the pupil diameter variation data. This solves the prior-art problems that inferring the user's emotion from the pupil size at a single point in time is easily disturbed by the environment and that the focus area cannot be accurately identified, and effectively improves the accuracy of the emotion information for the focus area.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of an information generation method according to the present application;
FIG. 3 is a schematic diagram of an application scenario of an information generation method according to the present application;
FIG. 4 is a schematic diagram of yet another application scenario of an information generation method according to the present application;
FIG. 5 is a schematic diagram of yet another application scenario of an information generation method according to the present application;
FIG. 6 is a flow diagram of yet another embodiment of an information generation method according to the present application;
FIG. 7 is a schematic diagram of one embodiment of an information generating apparatus according to the present application;
FIG. 8 is a schematic block diagram of a computer system suitable for use in implementing a server according to embodiments of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of those embodiments to aid understanding, and they should be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the information generation methods of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used to provide a medium for communication links between the terminal devices 101, 102, 103 and the server 105, and between the terminal devices. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with other terminal devices or a server 105 over a network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have client application software installed thereon, for example, video playing application software, communication application software, and the like.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, and 103 are hardware, they may be various electronic devices, including but not limited to smart phones, smart bracelets, smart glasses (glasses with positioning devices and image capturing devices, AR glasses, etc.), VR headsets, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above, and may be implemented as a plurality of pieces of software or software modules or as a single piece of software or software module, which is not specifically limited herein.
The server 105 may be a server providing various services, for example, one that acquires the variation range of the focus area of the user's line of sight on the smart display device within a preset time period; acquires, in response to determining that the variation range satisfies a preset condition, pupil diameter variation data of the user within the preset time period; and determines emotion information of the user toward the focus area based on the pupil diameter variation data.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or as a single server. When the server is software, it may be implemented as a plurality of pieces of software or software modules (for example, to provide an information generation service) or as a single piece of software or software module, which is not specifically limited herein.
It should be noted that the information generation method provided by the embodiment of the present disclosure may be executed by the server 105, or may be executed by the terminal devices 101, 102, and 103, or may be executed by the server 105 and the terminal devices 101, 102, and 103 in cooperation with each other. Accordingly, each part (for example, each unit, sub-unit, module, and sub-module) included in the information generating apparatus may be provided entirely in the server 105, entirely in the terminal devices 101, 102, and 103, or may be provided in the server 105 and the terminal devices 101, 102, and 103, respectively.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 shows a flow 200 of an embodiment of an information generation method according to the present application. The information generation method comprises the following steps:
Step 201, acquiring the variation range of the focus area of the user's line of sight on the smart display device within a preset time period.
In this embodiment, the execution subject (e.g., the server 105 or the terminal devices 101, 102, 103 in fig. 1) may determine the focus area of the user's line of sight on the smart display device via a positioning device worn on the user's head, a positioning device disposed on the smart display device, the optical axis, and the visual axis, and acquire the variation range of that focus area within a preset time period.
The positioning device may be any existing or future device for determining position, for example, an Ultra Wide Band (UWB) system, a LANDMARK system, or the like.
Here, UWB (Ultra Wide Band) technology is a wireless carrier communication technology that transmits data using nanosecond-level non-sinusoidal narrow pulses and therefore occupies a wide spectrum range. UWB technology has the advantages of low transmitted-signal power spectral density, insensitivity to channel fading, and high positioning precision, and is particularly suitable for indoor high-precision positioning, with errors at the centimeter level.
Specifically, as shown in fig. 3, the positioning device worn on the user's head is a pair of smart glasses with a built-in UWB chip A, and the positioning device disposed on the smart display device has a built-in UWB chip B. The execution subject may first determine the distance and angle between the smart glasses and the smart display device from the coordinate P (x, y, z) indicated by UWB chip A in a spatial rectangular coordinate system, i.e., the first coordinate, and the coordinate O (0, 0, 0) indicated by UWB chip B, i.e., the second coordinate. The angle θ between OP and OY can be calculated by the following formula:
θ = arccos(y / √(x² + y² + z²))
In addition, the distance between the smart glasses and the smart display device can be determined from the transmission delays of data packets between them. For example: the smart display device sends a data packet at time T1, the smart glasses receive it at time T2, the smart glasses send a response packet at time T3, and the smart display device receives the response at time T4. The distance D between the smart glasses and the smart display device is then:
D = C × ((T4 - T1) - (T3 - T2)) / 2
where C is the wave speed.
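As an illustrative sketch of the two computations above (not part of the original disclosure), the OP-OY angle and the two-way-ranging distance could be computed as follows; the function names, units, and the default wave speed (that of radio waves in air) are assumptions for illustration:

```python
import math

def angle_op_oy(p):
    """Angle (in degrees) between OP and the Y axis, for P = (x, y, z) and O at the origin."""
    x, y, z = p
    norm = math.sqrt(x * x + y * y + z * z)  # |OP|, assumed non-zero
    return math.degrees(math.acos(y / norm))

def two_way_ranging_distance(t1, t2, t3, t4, c=299_792_458.0):
    """Two-way-ranging distance D: the display sends at T1, the glasses receive at T2
    and reply at T3, and the display receives the reply at T4; c is the wave speed."""
    time_of_flight_both_ways = (t4 - t1) - (t3 - t2)  # subtracts the glasses' reply delay
    return c * time_of_flight_both_ways / 2.0
```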
Further, the coordinate difference between the pupil coordinate Q and the coordinate P of UWB chip A is (dx, dy, dz), and the line connecting UWB chip A and UWB chip B makes an angle β with the X axis and is parallel to the optical axis.
Further, the execution subject can calculate the distance MN from the P and Q coordinates, the angles α, β, and θ, and the distance D between UWB chip A and UWB chip B; the finally determined line-of-sight focus area is estimated as a square coverage area centered on point N with a side length of 2 × MN.
Wherein α is the visual-axis angle, i.e., the included angle between the visual axis and the optical axis; its normal value satisfies 4° ≤ α ≤ 8°.
Here, the visual axis is the line connecting the fixation target and the center of the macula lutea through the nodal points of the eye, and the optical axis is the imaginary line connecting the point light source and all of its images.
It should be noted that the visual-axis angle may include only the orthophoria (straight-gaze) axis angle, or both the orthophoria axis angle and the oblique axis angle, which is not limited in this application.
In some alternatives, the focus area is determined by: determining the optical axis based on the first coordinate, the second coordinate, and a preset coordinate difference between the user's pupil coordinate and the first coordinate; determining the center of the focus area as the first intersection point of the optical axis and the smart display device; determining the second intersection point of the visual axis and the smart display device according to the optical axis and a preset visual-axis angle, and taking the distance between the first and second intersection points as one half of the side length of the focus area; and determining the focus area from that side length and the center of the focus area.
In this implementation, the execution subject may first determine the first coordinate from the positioning device worn on the user's eyes and the second coordinate from the positioning device disposed on the smart display device, determine the user's pupil coordinate from the first coordinate and the preset coordinate difference between the pupil coordinate and the first coordinate, take the line passing through the pupil coordinate and parallel to the line connecting the first and second coordinates as the optical axis, and take the first intersection point of the optical axis and the smart display device as the center of the focus area.
Further, the second intersection point of the visual axis and the smart display device is determined according to the optical axis and the preset visual-axis angle, and the distance between the first and second intersection points is taken as one half of the side length of the focus area. The focus area is then determined from that side length and its center point.
The visual-axis angle may include an orthophoria axis angle (the angle between the orthophoria axis and the optical axis) and an oblique axis angle (the angle between the oblique axis and the orthophoria axis).
Here, the human eye's viewing angle, meaning the range of view straight ahead, is generally about 50 degrees, which is why a camera's 50 mm standard lens is closest to the human eye's perspective. Without turning the eyes, the maximum viewing angle of both eyes is 124 degrees; when focusing, it narrows to about one fifth of that, i.e., 25 degrees, similar to the central area of a camera's focusing and close to single-point focusing. To account more accurately for oblique vision or eyeball rotation, the angle γ between the orthophoria axis and the oblique axis can be estimated from the pupil picture by a model trained on pupil images labeled with sample pupil angles. Typically, γ is less than 25 degrees.
Specifically, as shown in fig. 4, with the orthophoria axis angle α and the oblique axis angle γ, the human eye focus region is a region centered on point N with a side length of 2 × NR.
This implementation determines the optical axis based on the first coordinate, the second coordinate, and the preset coordinate difference between the user's pupil coordinate and the first coordinate; determines the center of the focus area as the first intersection point of the optical axis and the smart display device; determines the second intersection point of the visual axis and the smart display device according to the optical axis and the preset visual-axis angle, taking the distance between the two intersection points as one half of the side length of the focus area; and determines the focus area from that side length and center, improving the accuracy of the determined focus area.
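To make the geometry concrete, the following is a minimal sketch (an illustration, not the patent's implementation) of turning the optical-axis intersection N and the preset visual-axis angle into a square focus area; approximating the on-screen offset MN as d·tan(α) for eye-to-screen distance d is an assumption of this sketch:

```python
import math

def focus_area(n_center, eye_to_screen_distance, alpha_deg=6.0):
    """Square focus area from the optical/visual axis geometry described above.

    n_center: (x, y) of N, the intersection of the optical axis with the display.
    alpha_deg: preset visual-axis angle alpha (normally 4 to 8 degrees).
    Returns (center, side_length), with MN taken as half the side length.
    """
    mn = eye_to_screen_distance * math.tan(math.radians(alpha_deg))  # offset of M from N
    return n_center, 2.0 * mn
```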
Step 202, in response to determining that the variation range meets a preset condition, acquiring pupil diameter variation data of the user within the preset time period.
In this embodiment, after obtaining the variation range, the execution subject may further judge whether the variation range satisfies a preset condition, and if it does, obtain data on how the user's pupil diameter changes over time within that time period.
Here, since the focus area of the user's line of sight is generally square, it can be represented by a center point and a side length, and the execution subject can judge whether the variation range of the focus area satisfies the preset condition by judging whether the variations of the center point and of the side length satisfy it.
Wherein, the pupil diameter of the user can be determined based on the image of the pupil of the user collected by the image collecting device worn on the head of the user.
Here, the execution subject may determine the pupil diameter from the image of the user's pupil either according to a preset comparison table of pupil images and pupil diameters, or according to a preset pupil diameter prediction model, which is not limited in this application.
It should be noted that the preset condition may be set according to experience and actual requirements, for example, that the center point moves by less than 1 cm and the side length changes by less than 1 cm, which is not limited in this application.
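A hedged sketch of such a preset condition follows; the thresholds and the (center, side length) representation are the illustrative 1 cm examples from the text, not values fixed by the patent:

```python
def variation_satisfies_condition(centers, side_lengths,
                                  max_center_shift=1.0, max_side_shift=1.0):
    """True if the focus area stayed stable over the sampled period: the center and
    the side length (both in centimeters here) each drifted less than the thresholds."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    center_drift = max(dist(c, centers[0]) for c in centers)
    side_drift = max(abs(s - side_lengths[0]) for s in side_lengths)
    return center_drift < max_center_shift and side_drift < max_side_shift
```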
In some optional manners, acquiring pupil diameter variation data of the user within a preset time period includes: acquiring images of pupils of the user at all times in a preset time period through an image acquisition device worn on eyes of the user; for the image of the user pupil at each moment, inputting the image of the user pupil into a preset pupil diameter prediction model to obtain the pupil diameter; and determining pupil diameter change data of the user within a preset time period according to the pupil diameter.
In this implementation, the execution subject may acquire images of the user's pupils at each time within the preset time period via an image acquisition device worn on the user's eyes, for example, a miniature camera disposed in the smart glasses; for the image at each moment, it inputs the pupil image into the preset pupil diameter prediction model to obtain the pupil diameter, and from those diameters determines the user's pupil diameter variation data, i.e., how the pupil diameter changes over time, within the preset time period.
The preset pupil diameter prediction model is obtained by training based on the image sample of the user pupil marked with the pupil diameter.
Here, the pupil diameter marked in the image sample of the user pupil can be obtained by extracting the size of the pupil diameter in the pupil image through Photoshop software.
This implementation acquires images of the user's pupils at each time within the preset time period via an image acquisition device worn on the user's eyes; for each image, inputs it into the preset pupil diameter prediction model to obtain the pupil diameter; and determines the user's pupil diameter variation data within the preset time period from those diameters, improving the accuracy of the determined pupil diameter variation data.
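The patent does not specify the architecture of the preset pupil diameter prediction model, only that it is trained on pupil-image samples labeled with pupil diameter; a small CNN regressor such as the following PyTorch sketch (layer sizes and the 64×64 grayscale input are illustrative assumptions) fits that description:

```python
import torch
import torch.nn as nn

class PupilDiameterNet(nn.Module):
    """Regresses pupil diameter (e.g., in millimeters) from a grayscale pupil crop."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single scalar: the predicted diameter
        )

    def forward(self, x):  # x: (batch, 1, 64, 64) pupil images
        return self.head(self.features(x))
```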
And step 203, determining emotion information of the user to the focusing area based on the pupil diameter change data.
In this embodiment, after determining how the pupil diameter changes with time within the preset time period from the pupil diameter measurements at each moment, the execution subject may determine the user's emotion information toward the focus area according to the pupil diameter variation data and a preset comparison table of pupil diameter variation data and emotion information, or according to the pupil diameter variation data and a preset emotion prediction model, which is not limited in this application.
Here, the emotional information may include various kinds, for example, pleasure, calm, anxiety, impatience, and the like.
Specifically, within a preset time period, for example 1 to T, the user's pupil diameter over time might be: at time 1, 2.5 mm; at time t1, 2.6 mm; at time t2, 2.7 mm; at time T, 2.8 mm. After acquiring this variation of pupil diameter with time, the execution subject can input it into a preset emotion prediction model to obtain the user's emotion information.
In some optional ways, determining emotional information of the user to the focus area based on the pupil diameter variation data includes: and inputting the pupil diameter change data into a preset emotion prediction model to obtain the emotion information of the user to the focusing area.
In this implementation, the execution subject may input the pupil diameter change data into a preset emotion prediction model, and generate emotion information of the user on the focusing area.
The preset emotion prediction model can be obtained by training based on the pupil diameter change data sample marked with emotion information.
Here, the preset emotion prediction model may be any existing or future deep learning model, for example, an RNN (Recurrent Neural Network) or LSTM (Long Short-Term Memory network), which is not limited in this application.
Specifically, the execution subject may input the pupil diameter variation data within the preset time period into a preset emotion prediction model, such as an LSTM model, to obtain the probabilities of the four emotional states of happiness, calmness, anxiety, and impatience, and determine the user's emotion information toward the focus area based on those probabilities.
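Following the LSTM example just given, a minimal sketch of such an emotion prediction model over the pupil-diameter time series could look like this; the hidden size and the one-scalar-per-step input framing are assumptions, while the four emotional states come from the text:

```python
import torch
import torch.nn as nn

class EmotionLSTM(nn.Module):
    """Classifies a pupil-diameter time series into the four emotional states."""

    EMOTIONS = ["happy", "calm", "anxious", "impatient"]

    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, len(self.EMOTIONS))

    def forward(self, diameters):  # diameters: (batch, T, 1), e.g. mm over time
        _, (h_n, _) = self.lstm(diameters)
        return torch.softmax(self.classifier(h_n[-1]), dim=-1)  # 4 probabilities

# With the example series from the text (2.5 -> 2.8 mm over the period):
# probs = EmotionLSTM()(torch.tensor([[[2.5], [2.6], [2.7], [2.8]]]))
```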
With continued reference to fig. 5, fig. 5 is a schematic diagram of an application scenario of the information generation method according to the present embodiment.
In the application scenario of fig. 5, the execution subject 501 may determine the focus area 505 of the user 502's line of sight on the smart display device 504 via a positioning device worn on the user's head, such as smart glasses 503 provided with a UWB positioning chip, a positioning device disposed on the smart display device 504, the optical axis, the orthophoria axis, and so on, and acquire the variation range of the focus area 505 within a preset time period. In response to determining that the variation range satisfies the preset condition, the execution subject may acquire images of the user's pupils at each time within the preset time period via an image acquisition device, for example, a miniature camera disposed on the smart glasses, determine the user's pupil diameter variation data within the preset time period from those images, and then determine the user's emotion information toward the focus area based on the pupil diameter variation data.
According to the information generation method provided by the embodiments of the present disclosure, the variation range of the focus area of the user's line of sight on the smart display device within a preset time period is acquired; in response to determining that the variation range satisfies a preset condition, pupil diameter variation data of the user within the preset time period is acquired; and emotion information of the user toward the focus area is determined based on the pupil diameter variation data. This solves the prior-art problem that inferring the user's emotion from the pupil size at a single point in time is easily disturbed by the environment, and effectively improves the accuracy of the emotion information generated for the focus area.
With further reference to fig. 6, a flow 600 of yet another embodiment of an information generation method is shown. In this embodiment, the flow 600 of the information generation method may include the following steps:
Step 601, acquiring the variation range of the focus area of the user's line of sight on the smart display device within a preset time period.
In this embodiment, details of implementation and technical effects of step 601 may refer to the description of step 201, and are not described herein again.
Step 602, in response to determining that the variation range meets a preset condition, acquiring pupil diameter variation data of the user within the preset time period.
In this embodiment, details of implementation and technical effects of step 602 may refer to the description of step 202, and are not described herein again.
Step 603, determining emotion information of the user to the focusing area based on the pupil diameter change data.
In this embodiment, reference may be made to the description of step 203 for implementation details and technical effects of step 603, which are not described herein again.
In step 604, processing operations corresponding to the emotion information and the current scene category are performed.
In this embodiment, after obtaining the emotion information, the execution subject may determine and execute the corresponding processing operation according to the emotion information, the current scene category, and a preset comparison table of emotion information, scene categories, and processing operations; or it may determine and execute the corresponding processing operation according to the emotion information, the current scene category, and a preset operation prediction model, which is not limited in this application.
The preset operation prediction model is obtained by training based on emotion information and scene type samples marked with corresponding processing operations.
In some alternative ways, performing the processing operation corresponding to the emotion information and the current scene category includes: in response to determining that the current scene category is a video-watching scene and the emotion information is a first preset emotion, determining the person contained in the focus area; and pushing information related to the person to the user.
In this implementation, the execution subject judges the emotion information and the current scene category; if it determines that the current scene category is a video-watching scene and the emotion information is a first preset emotion, it determines the person contained in the focus area and pushes information related to the person, such as identity information and information about the person's works, to the user.
Wherein the first preset emotion may be any positive emotion, such as pleasure, excitement, being moved, etc.
Specifically, when the user is watching a video program, if it is determined that the variation range of the focus area of the user's line of sight on the smart display device within a preset time period satisfies the preset condition, and the emotion information determined from the user's pupil variation data within the preset time period is a first preset emotion, for example pleasure, the person contained in the focus area, for example person X, can be identified with the help of the cloud server, and classic works starring person X can be recommended to the user.
By determining the person contained in the focus area in response to the current scene category being a video-watching scene and the emotion information being a first preset emotion, and pushing information related to that person to the user, this implementation facilitates accurate recommendation of information of interest to the user based on emotion information.
In some alternative ways, performing the processing operation corresponding to the emotion information and the current scene category includes: in response to determining that the current scene category is a psychological test scene and the emotion information is a second preset emotion, outputting prompt information indicating a psychological abnormality.
In this implementation, the execution subject judges the emotion information and the current scene category, and if it determines that the current scene category is a psychological test scene and the emotion information is a second preset emotion, it outputs prompt information indicating a psychological abnormality.
Wherein the second preset emotion is an emotion different from the emotion preset for the test.
Specifically, in a psychological test scenario, if it is determined that the variation range of the focus area of the user's line of sight on the smart display device within a preset time period satisfies the preset condition, and the emotion information determined from the user's pupil variation data within the preset time period is a second preset emotion, for example anxiety, that differs from the preset emotion, for example pleasure, prompt information indicating a psychological abnormality may be output.
By outputting prompt information indicating a psychological abnormality in response to determining that the current scene category is a psychological test scene and the emotion information is a second preset emotion, this implementation helps detect the user's psychological state based on emotion information.
In some alternative ways, performing the processing operation corresponding to the emotion information and the current scene category includes: in response to determining that the current scene category is a driving scene and the emotion information is a third preset emotion, outputting alarm information.
In this implementation, the execution subject judges the emotion information and the current scene category, and if it determines that the current scene category is a driving scene and the emotion information is a third preset emotion, it outputs alarm information.
Wherein the third preset emotion can be set according to experience and actual requirements, such as anxiety, agitation, etc.
Specifically, in a driving scene, if it is determined that the variation range of the focus area of the user's line of sight on the smart display device within a preset time period satisfies the preset condition, and the emotion information determined from the user's pupil variation data within the preset time period is a third preset emotion, for example anxiety, alarm information may be output to prompt the user to drive carefully.
By outputting alarm information in response to determining that the current scene category is a driving scene and the emotion information is a third preset emotion, this implementation improves the user's driving safety.
In some alternative ways, performing the processing operation corresponding to the emotion information and the current scene category includes: in response to determining that the current scene category is one of a virtual reality scene, an augmented reality scene, and a metaverse scene, and the emotion information is a fourth preset emotion, increasing the rendering quality level of the picture in the focus area by a preset level.
In this implementation, the execution subject judges the emotion information and the current scene category, and if it determines that the current scene category is one of a virtual reality scene, an augmented reality scene, or a metaverse scene, and the emotion information is a fourth preset emotion, it increases the rendering quality level of the picture in the focus area by a preset level.
Wherein, the fourth preset emotion can be set according to experience and actual requirements, such as pleasure, excitement and the like.
Specifically, in the virtual reality scene, if it is determined that the variation range of the focusing area of the user's sight line on the intelligent display device in the preset time period meets the preset condition, and the emotion information determined according to the pupil variation data of the user in the preset time period is a fourth preset emotion, for example, joyful, the rendering quality level of the picture in the focusing area is increased by the preset level, so that the rendering effect is enhanced.
In addition, the execution subject may determine a person included in the picture in the focus area, and determine a fourth preset emotion as an emotion of the user to the person included in the picture.
In response to determining that the current scene category is one of a virtual reality scene, an augmented reality scene, or a metaverse scene, and the emotion information is a fourth preset emotion, this implementation increases the rendering quality level of the picture in the focus area by a preset level, effectively improving the interaction effect in virtual reality environments.
For the above implementations, the first, second, third, and fourth preset emotions may be the same as or different from one another; this is not limited here.
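The scene-dependent behavior described above amounts to a lookup from (scene category, emotion information) to a processing operation; the following sketch uses illustrative scene and emotion labels and placeholder handlers, since the patent leaves the concrete values and operations open:

```python
def push_person_info(ctx): ...          # video-watching scene, positive emotion
def prompt_psych_abnormality(ctx): ...  # psychological test scene
def output_driving_alarm(ctx): ...      # driving scene
def raise_render_quality(ctx): ...      # VR / AR / metaverse scene

OPERATIONS = {
    ("watching_video", "pleasure"):  push_person_info,          # first preset emotion
    ("psych_test",     "anxiety"):   prompt_psych_abnormality,  # second preset emotion
    ("driving",        "agitation"): output_driving_alarm,      # third preset emotion
    ("vr",             "pleasure"):  raise_render_quality,      # fourth preset emotion
    ("ar",             "pleasure"):  raise_render_quality,
    ("metaverse",      "pleasure"):  raise_render_quality,
}

def handle(scene_category, emotion, ctx=None):
    operation = OPERATIONS.get((scene_category, emotion))
    if operation is not None:
        operation(ctx)
```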
Compared with the embodiment corresponding to fig. 2, the flow 600 of the information generation method in this embodiment additionally executes, after determining the user's emotion information toward the focus area based on the pupil diameter variation data, the processing operation corresponding to the emotion information and the current scene category, which facilitates targeted processing according to the emotion information.
With further reference to fig. 7, as an implementation of the method shown in the above figures, the present application provides an embodiment of an information generation apparatus, which corresponds to the method embodiment shown in fig. 2, and the apparatus can be applied in various electronic devices.
As shown in fig. 7, the information generation apparatus 700 of the present embodiment includes: a first acquisition module 701, a second acquisition module 702, and an emotion determination module 703.
The first obtaining module 701 may be configured to obtain a variation range of a focus area of a user's sight line on the smart display device within a preset time period.
The second obtaining module 702 may be configured to obtain pupil diameter variation data of the user within a preset time period in response to determining that the variation range satisfies a preset condition.
An emotion determination module 703, which may be configured to determine emotion information of the user toward the focus area based on the pupil diameter variation data.
In some optional modes of the embodiment, the apparatus further comprises an execution operation module, which can be configured to execute processing operations corresponding to the emotion information and the current scene category.
In some alternatives of this embodiment, the execution operation module is further configured to: in response to determining that the current scene category is a video-watching scene and the emotion information is a first preset emotion, determine the person contained in the focus area; and push information related to the person to the user.
In some alternatives of this embodiment, the execution operation module is further configured to: and outputting prompt information indicating the psychological abnormality in response to the fact that the current scene type is determined to be the psychological test scene and the emotion information is the second preset emotion.
In some alternatives of this embodiment, the execution operation module is further configured to: and outputting alarm information in response to the fact that the current scene type is determined to be the driving scene and the emotion information is the third preset emotion.
In some alternatives of this embodiment, the execution operation module is further configured to: in response to determining that the current scene category is one of a virtual reality scene, an augmented reality scene, and a metaverse scene, and the emotion information is a fourth preset emotion, increase the rendering quality level of the picture in the focus area by a preset level.
In some alternatives of this embodiment, the focus area is determined by: determining the optical axis based on the first coordinate, the second coordinate, and a preset coordinate difference between the user's pupil coordinate and the first coordinate; determining the center of the focus area as the first intersection point of the optical axis and the smart display device; determining the second intersection point of the visual axis and the smart display device according to the optical axis and a preset visual-axis angle, and taking the distance between the first and second intersection points as one half of the side length of the focus area; and determining the focus area from that side length and the center of the focus area.
In some optional aspects of this embodiment, the second obtaining module is further configured to: acquiring images of pupils of the user at all times in a preset time period through an image acquisition device worn on eyes of the user; for the image of the user pupil at each moment, inputting the image of the user pupil into a preset pupil diameter prediction model to obtain the pupil diameter; and determining pupil diameter change data of the user within a preset time period according to the pupil diameter.
In some alternatives of this embodiment, the emotion determination module is further configured to: input the pupil diameter variation data into a preset emotion prediction model to obtain emotion information of the user toward the focus area.
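A compact sketch of how the three configured modules could be wired together follows; the class name, call signatures, and the (range, flag) return convention are illustrative assumptions, not from the patent:

```python
class InformationGenerationApparatus:
    """Wires the first acquisition, second acquisition, and emotion determination
    modules together; each callable wraps one step of the method of fig. 2."""

    def __init__(self, get_variation_range, get_pupil_diameters, predict_emotion):
        self.get_variation_range = get_variation_range  # module 701
        self.get_pupil_diameters = get_pupil_diameters  # module 702
        self.predict_emotion = predict_emotion          # module 703

    def generate(self, period):
        variation_range, condition_satisfied = self.get_variation_range(period)
        if not condition_satisfied:  # preset condition on the variation range
            return None
        diameters = self.get_pupil_diameters(period)
        return self.predict_emotion(diameters)  # emotion info for the focus area
```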
According to an embodiment of the present application, the present application also provides smart glasses and a readable storage medium.
Fig. 8 is a block diagram 800 of smart glasses according to the information generation method of an embodiment of the present application. As shown in fig. 8, the smart glasses include: one or more processors 801, a memory 802, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The components are interconnected by different buses and may be mounted on a common motherboard or in other ways as desired. The processor may process instructions executed within the smart glasses, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, as desired. Likewise, multiple smart glasses may be connected, with each device providing some of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 8 takes one processor 801 as an example.
The memory 802 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the information generating method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the information generation method provided by the present application.
The memory 802, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the information generation method in the embodiment of the present application (for example, the first obtaining module 701, the second obtaining module 702, and the emotion determination module 703 shown in fig. 7). The processor 801 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 802, that is, implements the information generation method in the above-described method embodiments.
The memory 802 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by use of the smart glasses for information generation, and the like. Further, the memory 802 may include high speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 802 optionally includes memory located remotely from the processor 801, which may be connected to the information generating smart glasses over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The smart glasses of the information generating method may further include: an input device 803 and an output device 804. The processor 801, the memory 802, the input device 803, and the output device 804 may be connected by a bus or other means, and are exemplified by a bus in fig. 8.
The input device 803 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the information-generating smart glasses, and may be, for example, a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, trackball, joystick, or similar input device. The output device 804 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The technical solutions of the embodiments of the present application effectively improve the accuracy of the emotion information determined for the focus area.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in a different order; no limitation is imposed herein, as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. An information generation method, the method comprising:
acquiring a variation range of a focus area of a user's gaze on a smart display device within a preset time period;
in response to determining that the variation range satisfies a preset condition, acquiring pupil diameter change data of the user within the preset time period; and
determining emotion information of the user for the focus area based on the pupil diameter change data.
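Viewed as a pipeline, claim 1 has three steps: measure how much the focus area moves over the preset period, gate on a stability condition, then classify emotion from pupil-diameter data. The Python sketch below shows one possible shape of that pipeline; every name in it (FocusSample, variation_range, generate_emotion_info, and the two callables) is a hypothetical placeholder rather than an identifier from this application, and the bounding-box test merely stands in for the unspecified preset condition.

# Illustrative sketch of the claim-1 pipeline; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class FocusSample:
    t: float   # timestamp within the preset time period
    x: float   # focus-area center on the display, screen coordinates
    y: float

def variation_range(samples):
    # Spread of the focus-area center over the preset time period.
    xs = [s.x for s in samples]
    ys = [s.y for s in samples]
    return max(xs) - min(xs), max(ys) - min(ys)

def generate_emotion_info(samples, get_pupil_diameters, classify_emotion,
                          max_dx=50.0, max_dy=50.0):
    dx, dy = variation_range(samples)         # step 1: variation range
    if dx <= max_dx and dy <= max_dy:         # step 2: preset condition
        diameters = get_pupil_diameters()     # per-moment pupil diameters
        return classify_emotion(diameters)    # step 3: emotion information
    return None  # gaze did not stay on one focus area; nothing to report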
2. The method of claim 1, further comprising:
executing a processing operation corresponding to the emotion information and a current scene category.
3. The method of claim 2, wherein the executing a processing operation corresponding to the emotion information and the current scene category comprises:
in response to determining that the current scene category is a video program scene and the emotion information is a first preset emotion, determining a person contained in the focus area; and
pushing related information of the person to the user.
4. The method of claim 2, wherein the executing a processing operation corresponding to the emotion information and the current scene category comprises:
in response to determining that the current scene category is a psychological test scene and the emotion information is a second preset emotion, outputting prompt information indicating a psychological abnormality, wherein the second preset emotion is different from the first preset emotion.
5. The method of claim 2, wherein the executing a processing operation corresponding to the emotion information and the current scene category comprises:
in response to determining that the current scene category is a driving scene and the emotion information is a third preset emotion, outputting alarm information.
6. The method of claim 2, wherein the executing a processing operation corresponding to the emotion information and the current scene category comprises:
in response to determining that the current scene category is one of a set of preset scene categories and the emotion information is a fourth preset emotion, increasing the rendering quality level of the picture in the focus area by a preset level.
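Claims 2-6 amount to a dispatch on the pair (current scene category, emotion information). The sketch below shows that dispatch; the scene labels, emotion labels, and helper functions are all invented for illustration (the application names the emotions only as "first" through "fourth preset emotion", and claim 6's scene list is elided in the published text, so it stays abstract here).

# Hypothetical dispatch of processing operations per claims 2-6.
RENDER_BOOST_SCENES = {"unspecified_scene"}  # claim 6's scene list is elided

def identify_person(focus_area):                 # stub helpers, sketch only
    return "person-in-focus"

def push_related_info(person):
    print("pushing related info about", person)

def output_prompt(message):
    print("prompt:", message)

def output_alarm():
    print("alarm: abnormal emotion while driving")

def raise_render_quality(focus_area, levels):
    print("raising render quality of", focus_area, "by", levels, "level(s)")

def process(scene, emotion, focus_area):
    if scene == "video_program" and emotion == "first_preset":        # claim 3
        push_related_info(identify_person(focus_area))
    elif scene == "psych_test" and emotion == "second_preset":        # claim 4
        output_prompt("psychological abnormality indicated")
    elif scene == "driving" and emotion == "third_preset":            # claim 5
        output_alarm()
    elif scene in RENDER_BOOST_SCENES and emotion == "fourth_preset":  # claim 6
        raise_render_quality(focus_area, levels=1)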
7. The method according to any one of claims 1-6, wherein the focus area is determined by:
determining an optical axis based on a first coordinate, a second coordinate, and a coordinate difference between a preset pupil coordinate of the user and the first coordinate, wherein the first coordinate indicates a coordinate determined according to a positioning device worn on the user's eyes, and the second coordinate indicates a coordinate determined according to a positioning device arranged on the smart display device;
determining the center of the focus area according to a first intersection point of the optical axis and the smart display device;
determining a second intersection point of a visual axis and the smart display device according to the optical axis and a preset visual-axis angle, and determining the distance between the first intersection point and the second intersection point as half the side length of the focus area; and
determining the focus area according to the side length and the center of the focus area.
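Geometrically, claim 7 reduces to: intersect the optical axis with the display to get the area's center, rotate by the preset visual-axis angle to get a second intersection, and take the distance between the two intersections as half the side of a square. A sketch under simplifying assumptions (display in the z = 0 plane, rotation about the y-axis); the function and its parameters are illustrative, not taken from the application.

import math

def focus_area(eye_pos, optical_dir, visual_axis_angle_deg):
    # eye_pos: position of the eye-worn positioning device (x, y, z);
    # the display is assumed to lie in the z = 0 plane.
    ex, ey, ez = eye_pos
    dx, dy, dz = optical_dir
    t1 = -ez / dz                        # optical axis meets the display
    p1 = (ex + t1 * dx, ey + t1 * dy)    # first intersection = area center
    a = math.radians(visual_axis_angle_deg)
    # Approximate the visual axis by rotating the optical axis about the
    # y-axis by the preset visual-axis angle (a simplifying assumption).
    vdx = dx * math.cos(a) + dz * math.sin(a)
    vdz = -dx * math.sin(a) + dz * math.cos(a)
    t2 = -ez / vdz
    p2 = (ex + t2 * vdx, ey + t2 * dy)   # second intersection
    half_side = math.dist(p1, p2)        # per the claim: half the side length
    cx, cy = p1
    return (cx - half_side, cy - half_side, 2 * half_side, 2 * half_side)

# Example: eye 0.6 m in front of the display, looking straight at it.
print(focus_area((0.0, 0.0, 0.6), (0.0, 0.0, -1.0), 5.0))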
8. The method of claim 1, wherein the acquiring pupil diameter change data of the user within the preset time period comprises:
collecting an image of the user's pupil at each moment of the preset time period through an image collection device worn on the user's eyes;
for the image of the user's pupil at each moment, inputting the image into a preset pupil diameter prediction model to obtain a pupil diameter, wherein the preset pupil diameter prediction model is trained based on image samples of the user's pupil annotated with pupil diameters; and
determining the pupil diameter change data of the user within the preset time period according to the obtained pupil diameters.
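Claim 8's loop is a per-frame regression: capture a pupil image at each moment, regress a diameter from it, then assemble the series into change data. A framework-agnostic sketch; capture_image and predict_diameter are assumed callables (e.g., a camera read and a trained regression model), and consecutive differences are just one plausible reading of "change data", which the claim does not pin down.

def pupil_diameter_series(capture_image, predict_diameter, timestamps):
    # capture_image(t): pupil image at moment t from the eye-worn camera.
    # predict_diameter(img): diameter from a model trained on pupil images
    # annotated with diameters (per claim 8). Both callables are assumptions.
    diameters = [predict_diameter(capture_image(t)) for t in timestamps]
    # One plausible "change data" representation: consecutive differences.
    deltas = [b - a for a, b in zip(diameters, diameters[1:])]
    return diameters, deltas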
9. The method of claim 1, wherein the determining emotion information of the user for the focus area based on the pupil diameter change data comprises:
inputting the pupil diameter change data into a preset emotion prediction model to obtain the emotion information of the user for the focus area, wherein the preset emotion prediction model is trained based on pupil diameter change data samples annotated with emotion information.
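Claim 9 is a sequence-classification step: train on pupil-diameter-change samples annotated with emotion labels, then predict. A toy scikit-learn sketch with invented features, data, and labels, purely to show the train-then-predict shape; it is not the application's model.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(series):
    # Crude summary features over a diameter-change series (an assumption).
    s = np.asarray(series, dtype=float)
    return [s.mean(), s.std(), s.max() - s.min()]

# Toy annotated samples: diameter-change series -> emotion label.
train_series = [[0.0, 0.1, 0.2], [0.5, 0.9, 1.1], [0.0, -0.1, 0.0]]
train_labels = ["calm", "excited", "calm"]

model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit([features(s) for s in train_series], train_labels)

def emotion_for_focus_area(diameter_changes):
    return model.predict([features(diameter_changes)])[0]

print(emotion_for_focus_area([0.6, 1.0, 1.2]))  # -> "excited" on the toy data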
10. An information generation apparatus, the apparatus comprising:
a first acquisition module configured to acquire a variation range of a focus area of a user's gaze on a smart display device within a preset time period;
a second acquisition module configured to acquire pupil diameter change data of the user within the preset time period in response to determining that the variation range satisfies a preset condition; and
an emotion determination module configured to determine emotion information of the user for the focus area based on the pupil diameter change data.
11. Smart glasses, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor,
wherein the memory stores instructions executable by the at least one processor, the instructions enabling the at least one processor to perform the method of any one of claims 1-9.
12. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of any one of claims 1-9.
CN202210802579.8A 2022-07-07 2022-07-07 Information generation method and device Pending CN115113733A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210802579.8A CN115113733A (en) 2022-07-07 2022-07-07 Information generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210802579.8A CN115113733A (en) 2022-07-07 2022-07-07 Information generation method and device

Publications (1)

Publication Number Publication Date
CN115113733A true CN115113733A (en) 2022-09-27

Family

ID=83332522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210802579.8A Pending CN115113733A (en) 2022-07-07 2022-07-07 Information generation method and device

Country Status (1)

Country Link
CN (1) CN115113733A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115904075A (en) * 2022-11-28 2023-04-04 中国汽车技术研究中心有限公司 Vehicle configuration improvement method, system, device and storage medium
CN115904075B (en) * 2022-11-28 2024-01-02 中国汽车技术研究中心有限公司 Vehicle configuration improvement method, system, device and storage medium
CN117237786A (en) * 2023-11-14 2023-12-15 中国科学院空天信息创新研究院 Evaluation data acquisition method, device, system, electronic equipment and storage medium
CN117237786B (en) * 2023-11-14 2024-01-30 中国科学院空天信息创新研究院 Evaluation data acquisition method, device, system, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN115113733A (en) Information generation method and device
CN110018736B (en) Object augmentation via near-eye display interface in artificial reality
US9696798B2 (en) Eye gaze direction indicator
TWI639931B (en) Eye tracking based selective accentuation of portions of a display
US9153195B2 (en) Providing contextual personal information by a mixed reality device
US11809213B2 (en) Controlling duty cycle in wearable extended reality appliances
CN109923462A (en) Sensing spectacles
JP2021152916A (en) Eye protection mode presentation method, device, electronic facility, storage medium, and program
CN111709362B (en) Method, device, equipment and storage medium for determining important learning content
CN111538862B (en) Method and device for explaining video
US11277358B2 (en) Chatbot enhanced augmented reality device guidance
US20170311861A1 (en) Mood-conscious interaction device and method
JP2018509693A (en) Method, system, and computer program for device interaction via a head-up display
CN111916203A (en) Health detection method and device, electronic equipment and storage medium
Koh et al. Preliminary investigation of augmented intelligence for remote assistance using a wearable display
CN111695516A (en) Thermodynamic diagram generation method, device and equipment
CN112561059B (en) Method and apparatus for model distillation
JP2018092416A (en) Information processing method, device, and program for causing computers to execute the information processing method
CN113591515B (en) Concentration degree processing method, device and storage medium
CN112270303A (en) Image recognition method and device and electronic equipment
CN112307323A (en) Information pushing method and device
Bărbuceanu et al. Evaluation of the average selection speed ratio between an eye tracking and a head tracking interaction interface
CN111524123A (en) Method and apparatus for processing image
CN111767988A (en) Neural network fusion method and device
CN113495976A (en) Content display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination