CN112433664A - Man-machine interaction method and device used in book reading process and electronic equipment - Google Patents
- Publication number
- CN112433664A (application CN202011289359.7A)
- Authority
- CN
- China
- Prior art keywords
- user
- display screen
- content
- image
- book
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Ophthalmology & Optometry (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a human-computer interaction method for use while reading a book, comprising the following steps: capturing a content image of the book in real time with an electronic device and displaying the book content on a display screen; capturing a user image in real time and determining the position of the user's gaze point; and performing a corresponding operation on the content at the gaze point, including but not limited to performing an extended search on the display screen for that content. When the user encounters a problem while reading, for example an unfamiliar word or unknown background knowledge, the user can shift his or her line of sight from the book to the corresponding position on the display screen, and the electronic device then performs an extended search on the content at that gaze point. The invention achieves accurate eye tracking and thereby enables effective interaction between the user and the book content. The invention further discloses a human-computer interaction device for use while reading a book, an electronic device, and a computer-readable storage medium.
Description
Technical Field
The invention relates to the technical field of human-computer interaction, in particular to a human-computer interaction method and device used in a book reading process, electronic equipment and a computer readable storage medium.
Background
Human-computer interaction technology increasingly shapes the way people live, and with technological progress, interaction is gradually shifting toward imprecise styles. In daily life people generally communicate with one another imprecisely, so imprecise human-computer interaction matches everyday communication habits. Imprecise interaction takes many forms, typical representatives being speech recognition, gesture recognition, and eye tracking.
The eyes are said to be the windows of the soul; compared with the other sense organs, the eyes receive 80%-90% of all the information the human body takes in. Eye tracking opens up rich new experiences in visual human-computer interaction, and its advantages are most evident in applications requiring fast response, such as games, VR, and AR. Eye tracking has thus become a third interaction modality alongside touch and voice.
In recent years many eye-tracking systems that take human-eye images as input have been developed at home and abroad. They are widely applied in industries such as medicine, education, industrial control, and artificial intelligence, and two main branches have emerged: the desktop eye tracker and the head-mounted portable eye tracker. Eye trackers take part in many human-computer interaction scenarios: a desktop eye tracker can be used in automobiles to monitor the driver's state, and a pilot may wear a portable eye tracker for aircraft-piloting studies. Eye trackers also provide powerful support for industries such as graphic design, advertising, and user-experience research.
Previously, when readers needed background knowledge or tutoring beyond the book content itself, they often had to search for related background material, teaching videos, and so on manually online. For example, when reading an English passage in a textbook, a reader who encounters an unfamiliar word must look up its meaning over the network by hand, which lowers learning efficiency and degrades the learning experience. Nowadays this problem is addressed by providing a desktop eye tracker, which reduces tedious manual operations, improves learning efficiency, gives the user a good experience, and avoids forgetting something because it was not looked up in time.
However, when a user reads, the book is usually placed below the desktop eye tracker. Because the upper eyelid then occludes the eye, the desktop eye tracker cannot acquire complete eye-feature information; with that information missing, it cannot track eye movement accurately and may even fail to track at all. Users therefore cannot interact well with the book content using a desktop eye tracker.
Therefore, it is desirable to provide a human-computer interaction method, device, electronic device and computer-readable storage medium for use in reading a book, which can solve the above problems.
Disclosure of Invention
The invention aims to provide a man-machine interaction method and device used in a book reading process, electronic equipment and a computer readable storage medium, so as to better realize interaction between a user and book contents.
To achieve the above object, the present invention provides a human-computer interaction method for use while reading a book, comprising: a book-content acquisition and display step: capturing a content image of a book and displaying the content image on a display screen; a user-image acquisition step: capturing user images in real time; a gaze-point acquisition step: obtaining the user's gaze point on the display screen from the user image; and an operation step: performing a corresponding operation on the content at the gaze point, the corresponding operation including performing an extended search on the display screen for that content.
Preferably, the gaze point is acquired when the user, having encountered a problem while reading the book, looks at the corresponding position on the display screen, and the extended search performed on the display screen for the content at the gaze point includes at least one of idiom paraphrasing, background-information lookup, English translation, answer lookup for a question, and image retrieval.
Preferably, obtaining the user's gaze point on the display screen from the user image specifically comprises: determining the user's head direction from the user image and judging from it whether the user is gazing at the display screen; and, when the user is judged to be gazing at the display screen, analyzing the user image to obtain the user's gaze point on the display screen.
Preferably, when the displayed size of the content at the gaze point is smaller than the tracking accuracy, the region around the gaze point is enlarged, the user's gaze point within the enlarged region is acquired, and that point is taken as the user's gaze point.
To achieve the above object, the present invention further provides a human-computer interaction device for use while reading a book, comprising a first acquisition module, a display module, a second acquisition module, an image processing module, an eye tracking module, and an operation module. The first acquisition module captures content images of a book; the display module displays the content image; the second acquisition module captures user images in real time; the image processing module processes the user image; the eye tracking module computes the user's gaze point on the display screen from the output of the image processing module; and the operation module performs a corresponding operation on the content at the gaze point, including an extended search on the display screen for that content.
Preferably, the gaze point is acquired when the user, having encountered a problem while reading the book, looks at the corresponding position on the display screen, and the extended search performed on the display screen for the content at the gaze point includes at least one of idiom paraphrasing, background-information lookup, English translation, answer lookup for a question, and image retrieval.
Preferably, the image processing module determines the user's head direction from the user image and judges from it whether the user is gazing at the display screen; when the user is judged to be gazing at the display screen, the image processing module and the eye tracking module analyze the user image to obtain the user's gaze point on the display screen.
Preferably, the human-computer interaction device for use while reading a book further comprises a magnifying module: when the displayed size of the content at the gaze point is smaller than the tracking accuracy, the magnifying module enlarges the region around the gaze point; the image processing module and the eye tracking module then acquire the user's gaze point within the enlarged region, and that point is taken as the user's gaze point.
To achieve the above object, the present invention further provides an electronic device comprising a first camera, a display screen, a light source module, a second camera, a processor, and a memory. The first camera captures a content image of a book; the display screen displays the content image; the light source module emits light to illuminate the user; the second camera captures a user image; the processor is communicatively connected to the first and second cameras; and the memory stores one or more programs which, when executed by the processor, cause the processor to implement the human-computer interaction method for use while reading a book described above.
Preferably, the second camera is an infrared camera, and the light source module includes two infrared emitters disposed opposite each other on either side of the second camera.
Preferably, the first camera device is arranged on the upper side of the display screen, and the light source module and the second camera device are arranged on the lower side of the display screen.
Preferably, the electronic device further includes a housing; the first camera is disposed at the top of the housing and can be tilted by a preset angle relative to the housing to adjust its shooting area.
To achieve the above object, the present invention further provides a computer-readable storage medium storing a computer program, which can be executed by a processor to implement the human-computer interaction method for use in a book reading process as described above.
Compared with the prior art, the invention captures the content image of the book and displays it on the display screen. When the user encounters a problem while reading (for example an unfamiliar word or unknown background knowledge), the user can look at the corresponding position on the display screen; the user's gaze point on the display screen is then obtained, and an extended search is performed on the display screen for the content at that point. This avoids the incomplete eye-feature information caused by the upper eyelid occluding the eye, achieves accurate eye tracking, and thereby enables effective interaction between the user and the book content. In addition, content displayed at a small size is enlarged before tracking and recognition, so the content the user is gazing at can be located accurately and the user can interact with the book content in a finer-grained way.
Drawings
Fig. 1 is a flowchart of a man-machine interaction method used in a book reading process.
Fig. 2 is a block diagram of a human-computer interaction device used in a book reading process.
Fig. 3 is a schematic structural diagram of an electronic device.
Fig. 4 is a side view of the electronic device shown in fig. 3.
Fig. 5 is a block diagram of the electronic device.
Detailed Description
To explain the technical content and structural features of the present invention in detail, the following description refers to the accompanying drawings. It should be understood that the described embodiments are only some, not all, of the embodiments of the invention, and that the invention is not limited to the example embodiments described herein. All other embodiments that a person skilled in the art can derive from the described embodiments without inventive effort fall within the scope of protection of the invention.
The invention relates to a human-computer interaction method and device for use while reading a book, an electronic device, and a computer-readable storage medium, which enable effective interaction between a user and a book, improve the user's learning efficiency, and provide a better learning experience. For example, an idiom the user gazes at while reading a Chinese text can be located so that its meaning is explained automatically over the network; a word gazed at while reading an English passage can be located so that its meaning is looked up automatically; and a mathematics problem the user gazes at can be located so that a solution method, the answer, and so on are retrieved automatically over the network. The invention is described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of an embodiment of the human-computer interaction method for use while reading a book. As shown in fig. 1, the method includes:
S101, capturing a content image of the book and displaying it on the display screen 320.
S102, capturing a user image in real time and obtaining the user's gaze point on the display screen 320 from it. Specifically: the user's head direction is determined from the user image, and from it the method judges whether the user is gazing at the display screen 320. For example, when the head is judged to be raised, the user is considered to be looking at the display screen 320; when it is lowered, the user is considered to be looking at the book. When the user is judged to be gazing at the display screen 320, the user image is analyzed to obtain the gaze point on it. Analyzing the eye region only after this judgment avoids unnecessary computation and reduces power consumption. Alternatively, the head direction need not be determined, and eye-region analysis may be performed on every user image regardless of whether the user is gazing at the display screen 320. In this embodiment the user image is a face image, but this should not be construed as limiting.
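The head-direction gate in S102 can be sketched as follows. This is a minimal illustration under assumed inputs: the pitch threshold, the normalized gaze coordinates, and all function names are inventions for this sketch, and the gaze estimator is a stub standing in for the pupil/glint analysis described later.

```python
def head_is_raised(pitch_deg: float, threshold_deg: float = -10.0) -> bool:
    """Treat head pitch above the (assumed) threshold as 'raised',
    i.e. looking at the display screen rather than down at the book."""
    return pitch_deg > threshold_deg


def estimate_gaze(user_image):
    """Stub for the pupil/glint gaze estimation; returns normalized
    screen coordinates in [0, 1] x [0, 1]."""
    return (0.5, 0.5)


def process_frame(user_image, pitch_deg: float):
    """Run eye-region analysis only when the user faces the screen,
    skipping the cost of gaze estimation while the user reads the book."""
    if not head_is_raised(pitch_deg):
        return None  # head lowered: user is reading the book
    return estimate_gaze(user_image)
```

Gating on head pose before the (more expensive) eye-region analysis is what the passage credits with reducing computation and power consumption.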
S103, performing a corresponding operation on the content at the gaze point, the corresponding operation including performing an extended search on the display screen 320 for that content. The extended search includes, but is not limited to, idiom paraphrasing, background-information lookup, English translation, answer lookup for questions, image retrieval, and the like. How the extended search itself is implemented is a matter of prior art and is not repeated here. Further, in some embodiments the user's gaze point may also be shown on the display screen 320; when applied in the learning-machine field, this helps parents follow a student's learning state.
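Step S103 amounts to dispatching the gazed-at content to one of several search operations. A hedged sketch, with an intentionally naive content classifier that is purely an assumption (the patent defers the actual search implementation to prior art):

```python
def classify_content(text: str) -> str:
    """Naive stand-in classifier: English word, math problem, or idiom."""
    if text.isascii() and text.isalpha():
        return "english_word"
    if any(ch.isdigit() for ch in text):
        return "math_problem"
    return "idiom"


HANDLERS = {
    "english_word": lambda t: f"translate:{t}",   # English translation
    "math_problem": lambda t: f"solve:{t}",       # answer lookup
    "idiom":        lambda t: f"paraphrase:{t}",  # idiom paraphrasing
}


def extended_search(text: str) -> str:
    """Dispatch the gazed-at content to the matching search operation."""
    return HANDLERS[classify_content(text)](text)
```

A real system would replace the classifier and handlers with OCR output and networked lookup services; the dispatch-table shape is the point here.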
In an embodiment, when the displayed size of the content at the gaze point is smaller than the tracking accuracy, the region around the gaze point is also enlarged, the user's gaze point within the enlarged region is acquired, and that point is taken as the user's gaze point. For example, if the text displayed on the display screen 320 is too small, the limited tracking accuracy prevents the corresponding content from being located exactly; enlarging the content before tracking and recognition allows it to be located accurately, so the user can interact with the book content in a finer-grained way.
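The enlargement fallback can be sketched as a zoom test plus an inverse coordinate mapping: a gaze point measured in the enlarged view has to be mapped back to original screen coordinates. The threshold test and the uniform-scale mapping are assumptions; the patent does not specify how the enlarged-region gaze point is converted.

```python
def needs_zoom(content_px: float, accuracy_px: float) -> bool:
    """True when the content is rendered smaller than the tracker's
    accuracy, so a direct gaze reading cannot pin down the content."""
    return content_px < accuracy_px


def map_zoomed_gaze(gaze_in_zoom, zoom_center, scale):
    """Map a gaze point measured in the enlarged view back to original
    screen coordinates: zooming multiplies distances from the zoom
    center by `scale`, so the inverse divides them."""
    zx, zy = zoom_center
    gx, gy = gaze_in_zoom
    return (zx + (gx - zx) / scale, zy + (gy - zy) / scale)
```

Because the zoom multiplies the on-screen size of the content by `scale`, the *effective* tracking resolution improves by the same factor, which is why enlargement lets small text be located accurately.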
FIG. 2 is a block diagram of one embodiment of a human-computer interaction device 200 for use while reading a book. As shown in fig. 2, the device 200 includes a first acquisition module 210, a display module 220, a second acquisition module 230, an image processing module 240, an eye tracking module 250, a data storage module 260, and an operation module (not shown). The first acquisition module 210 captures content images of the book. The display module 220 displays the content image. The second acquisition module 230 captures user images in real time. The image processing module 240 processes the user image. The eye tracking module 250 computes the user's gaze point on the display screen 320 from the output of the image processing module 240. The data storage module 260 stores the processed position of the user's gaze point on the display screen 320, the content shown on the display screen 320, and so on, or transmits this information to the network for storage. The operation module performs a corresponding operation on the content at the gaze point, including an extended search on the display screen 320 for that content. The extended search includes, but is not limited to, idiom paraphrasing, background-information lookup, English translation, answer lookup for questions, image retrieval, and the like; its implementation is a matter of prior art and is not repeated here.
In one embodiment, the image processing module 240 determines the user's head direction from the user image and judges from it whether the user is gazing at the display screen 320; for example, when the head is judged to be raised, the user is considered to be looking at the display screen 320, and when it is lowered, the user is considered to be looking at the book. When the image processing module 240 judges that the user is gazing at the display screen 320, it further analyzes the eye region of the user image: the image is segmented with an adaptive-threshold method, the region closest to a circle is selected to locate the pupil center, the centers of the two bright-spot regions closest to the pupil center are taken as the centers of the corneal reflection points, and the distance between the eye and the second camera 340 is then computed from the distance between the two reflection points. The eye tracking module 250 tracks the user's gaze point on the display screen 320 in the real scene from the pupil center, the reflection-point centers, and the eye-to-camera distance extracted by the image processing module 240. Running the eye-region analysis only after the gaze judgment avoids unnecessary computation and reduces power consumption. Alternatively, the head direction need not be determined, and eye-region analysis may be performed on every user image regardless of whether the user is gazing at the display screen 320. In this embodiment the user image is a face image, but this should not be construed as limiting.
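The pupil/glint pipeline in this paragraph — threshold segmentation, pupil-center localization, two glint centers, and an eye-to-camera distance derived from the glint separation — can be sketched on a synthetic infrared eye image. Fixed thresholds stand in for the adaptive-threshold method, the left/right split stands in for "the two bright spots closest to the pupil center", and the calibration constant `k` is invented for illustration.

```python
import numpy as np


def pupil_center(gray: np.ndarray, thresh: int = 50):
    """Centroid of pixels darker than the threshold (under infrared
    illumination the pupil is the darkest region)."""
    ys, xs = np.nonzero(gray < thresh)
    return float(xs.mean()), float(ys.mean())


def glint_centers(gray: np.ndarray, thresh: int = 200):
    """Centroids of the two bright corneal reflections, split left and
    right of their overall mean x coordinate."""
    ys, xs = np.nonzero(gray > thresh)
    mid = xs.mean()
    left = (float(xs[xs < mid].mean()), float(ys[xs < mid].mean()))
    right = (float(xs[xs >= mid].mean()), float(ys[xs >= mid].mean()))
    return left, right


def eye_distance(glints, k: float = 12000.0):
    """Eye-to-camera distance from the glint separation: the farther the
    eye, the smaller the separation, modeled here as d = k / separation."""
    (x1, y1), (x2, y2) = glints
    sep = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return k / sep
```

A production pipeline would use adaptive thresholding and circularity fitting (e.g. via OpenCV) rather than fixed thresholds, but the data flow — dark blob for the pupil, bright spots for the glints, separation for depth — matches the paragraph.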
In an embodiment, the human-computer interaction device 200 further includes a magnifying module (not shown). When the displayed size of the content at the gaze point is smaller than the tracking accuracy, the magnifying module enlarges the region around the gaze point; the image processing module 240 and the eye tracking module 250 then acquire the user's gaze point within the enlarged region, and that point is taken as the user's gaze point. For example, if the text displayed on the display screen 320 is too small, the limited tracking accuracy prevents the corresponding content from being located exactly; enlarging the content before tracking and recognition allows it to be located accurately, so the user can interact with the book content in a finer-grained way.
Fig. 3 to 5 show schematic views of an electronic apparatus 300, and as shown in fig. 3 to 5, the electronic apparatus 300 includes a first camera 310, a display screen 320, a light source module 330, a second camera 340, a processor 350, and a memory 360. The first camera 310 is used to capture an image of the content of the book. The display screen 320 is coupled to the processor 350 for displaying content images. The light source module 330 is used to emit a light source to illuminate a user. The second camera 340 is used for capturing the user image. The processor 350 is communicatively coupled to the first camera 310 and the second camera 340 to obtain the content image and the user image. The memory 360 is used for storing one or more programs, such as human-computer interaction programs for use in the book reading process, which when executed by the processor 350, cause the processor 350 to implement the above-described human-computer interaction method for use in the book reading process.
The first camera 310 is, without limitation, a visible-light camera. The memory 360 may be any type of random-access memory, read-only memory, flash memory, or the like integrated into the electronic device 300. The processor 350 may be a central processing unit or another programmable general-purpose or special-purpose microprocessor, digital signal processor, programmable controller, application-specific integrated circuit, or similar device, or a combination of such devices.
In the embodiment shown in fig. 3, the second camera 340 is an infrared camera, and the light source module 330 includes two infrared emitters 331, 332 disposed on either side of the second camera 340, which makes it possible to compute the distance between the eye and the second camera 340. Alternatively, the light source module 330 may include only one infrared emitter, in which case the second camera 340 may comprise two infrared cameras located on either side of the emitter; or the light source module 330 may include one infrared emitter with a single infrared camera as the second camera 340, in which case the user may wear a reference marker for comparison. In this embodiment, the processor 350 segments the image with an adaptive-threshold method, selects the region closest to a circle to locate the pupil center, takes the centers of the two bright-spot regions closest to the pupil center as the centers of the reflection points, computes the distance between the eye and the second camera 340 from the distance between the two reflection points, and finally tracks the user's gaze point on the display screen 320 in the real scene from the extracted pupil center, reflection-point centers, and eye-to-camera distance.
In visible light, the pupil in images of human eyes, particularly Asian eyes, is difficult to distinguish from the iris by color. Under infrared light the iris and pupil have different absorptivity and reflectivity: the pupil appears darker and the iris brighter, making the contrast obvious. In this embodiment, the light source module 330 emits infrared light to illuminate the user; the light reflected from the surface of the eyeball enters the second camera 340 for imaging, yielding a near-infrared image of the eye from which the eye data can be computed and gaze-point tracking achieved.
As shown in fig. 3, the first camera 310 is disposed on the upper side of the display screen 320, and the light source module 330 and the second camera 340 are disposed on the lower side. Placing the first camera 310 on the upper side lets it better capture the content of the book, and placing the second camera 340 on the lower side lets it better capture the user image. The electronic device 300 further includes a housing 370; the first camera 310 is disposed at the top of the housing 370 and can be tilted by a preset angle relative to the housing 370 (by prior-art means) to adjust its shooting area so that it covers the content of the book being read.
Accordingly, the present invention also relates to a computer-readable storage medium storing a computer program which, when executed by the processor 350, carries out the human-computer interaction method for use while reading a book of the above embodiments. The computer program comprises computer program code, which may be in source-code, object-code, executable-file, or some intermediate form. The computer-readable storage medium may include any entity or device capable of carrying computer program code: a recording medium, USB flash drive, removable hard disk, magnetic disk, optical disk, computer memory, read-only memory (ROM), random-access memory (RAM), and the like.
In summary, the content image of the book is captured and shown on the display screen 320, so that when the user encounters a problem while reading (for example, an unfamiliar word or unknown background knowledge), the user can look at the corresponding position on the display screen 320 (i.e., where the unfamiliar word or background knowledge appears there); the user's gaze point on the display screen 320 is then obtained, and an extended search is performed on the display screen 320 for the content at that point. In addition, content displayed at a small size is enlarged before tracking and recognition, so the content the user is gazing at can be located accurately and the user can interact with the book content in a finer-grained way.
The present invention has been described in connection with the preferred embodiments, but the present invention is not limited to the embodiments disclosed above, and is intended to cover various modifications, equivalent combinations, which are made in accordance with the spirit of the present invention.
Claims (10)
1. A man-machine interaction method used in the process of reading books is characterized by comprising the following steps:
acquiring a content image of a book and displaying the content image through a display screen;
acquiring a user image in real time, and acquiring a fixation point of a user on the display screen by the user image;
and performing corresponding operation on the content of the gazing point, wherein the corresponding operation comprises performing expansion search on the content of the gazing point on the display screen.
2. The human-computer interaction method for book reading of claim 1, wherein the gaze point is acquired when the user, having encountered a problem while reading the book, looks at the corresponding position on the display screen, and the extended search performed on the display screen for the content at the gaze point includes at least one of idiom paraphrasing, background-information lookup, English translation, answer lookup for a question, and image retrieval.
3. The human-computer interaction method for use in a book reading process as claimed in claim 1, wherein "obtaining the user's gaze point on the display screen from the user image" specifically comprises:
determining the user's head direction from the user image, and judging from the head direction whether the user is gazing at the display screen;
and when it is determined that the user is gazing at the display screen, analyzing the user image to obtain the user's gaze point on the display screen.
4. The human-computer interaction method for use in a book reading process as claimed in claim 1, wherein
when the displayed size of the content at the gaze point is smaller than the tracking precision, the area around the gaze point is magnified, the user's gaze point within the magnified area is then acquired, and that gaze point is taken as the user's gaze point.
5. A human-computer interaction device for use in a book reading process, characterized by comprising:
a first acquisition module for capturing content images of a book;
a display module for displaying the content image;
a second acquisition module for capturing a user image in real time;
an image processing module for processing the user image; and
an eye tracking module for computing the user's gaze point on the display screen from the output of the image processing module;
and an operation module for performing a corresponding operation on the content at the gaze point, the corresponding operation comprising performing an expanded search on the display screen for the content at the gaze point.
6. An electronic device, characterized by comprising:
a first camera for capturing a content image of a book;
a display screen for displaying the content image;
a light source module for emitting light to illuminate a user;
a second camera for capturing a user image;
a processor communicatively connected to the first camera and the second camera; and
a memory for storing one or more programs which, when executed by the processor, cause the processor to implement the human-computer interaction method for use in a book reading process as claimed in any one of claims 1 to 4.
7. The electronic device of claim 6, wherein the second camera is an infrared camera, and the light source module comprises two infrared emitters disposed on opposite sides of the second camera.
8. The electronic device of claim 6, wherein the first camera is disposed on the upper side of the display screen, and the light source module and the second camera are disposed on the lower side of the display screen.
9. The electronic device of claim 6, further comprising a housing, wherein the first camera is disposed at the top of the housing and can be flipped relative to the housing through a preset angle to adjust its shooting area.
10. A computer-readable storage medium, characterized in that it stores a computer program executable by a processor to perform the human-computer interaction method for use in a book reading process according to any one of claims 1 to 4.
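Claim 3's two-stage check (judge head direction first, and run gaze estimation only when the user is actually facing the display screen) can be sketched with a coarse landmark heuristic. This is an illustrative assumption, not the patent's disclosed algorithm: it approximates head yaw from the horizontal offset of the nose relative to the midpoint of the eyes in the user image, and the function name and threshold are hypothetical.

```python
def is_facing_screen(left_eye, right_eye, nose, max_yaw_ratio=0.35):
    """Coarse head-direction check from three 2D face landmarks
    (pixel coordinates in the user image). When the head turns, the
    nose projects toward one eye; the nose offset normalized by the
    inter-eye span approximates yaw. The (more expensive) gaze-point
    estimation is run only when this gate returns True."""
    eye_mid_x = (left_eye[0] + right_eye[0]) / 2.0
    eye_span = abs(right_eye[0] - left_eye[0])
    if eye_span == 0:
        # Degenerate landmarks (profile view or detection failure):
        # treat as not facing the screen.
        return False
    yaw_ratio = abs(nose[0] - eye_mid_x) / eye_span
    return yaw_ratio <= max_yaw_ratio
```

Gating gaze estimation this way avoids computing a gaze point from frames in which the user is looking down at the physical book rather than at the screen.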
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011289359.7A CN112433664A (en) | 2020-11-17 | 2020-11-17 | Man-machine interaction method and device used in book reading process and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112433664A true CN112433664A (en) | 2021-03-02 |
Family
ID=74692685
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112433664A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113217839A (*) | 2021-06-15 | 2021-08-06 | Readboy Education Technology Co., Ltd. | Intelligent desk lamp capable of relieving cervical vertebra and eye fatigue and method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160017463A (*) | 2014-08-06 | 2016-02-16 | LG Electronics Inc. | Mobile terminal and method for controlling the same |
CN105892685A (*) | 2016-04-29 | 2016-08-24 | Guangdong Genius Technology Co., Ltd. | Question searching method and device of intelligent equipment |
CN108647354A (*) | 2018-05-16 | 2018-10-12 | Guangdong Genius Technology Co., Ltd. | Tutoring learning method and lighting equipment |
CN109343707A (*) | 2018-11-07 | 2019-02-15 | Shengcai E-Book (Wuhan) Co., Ltd. | Control method and device for e-book reading |
CN111026901A (*) | 2019-02-19 | 2020-04-17 | Guangdong Genius Technology Co., Ltd. | Learning content searching method and learning equipment |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210302 |