CN111078096A - Man-machine interaction method and electronic equipment - Google Patents

Man-machine interaction method and electronic equipment

Info

Publication number
CN111078096A
CN111078096A
Authority
CN
China
Prior art keywords
user
electronic device
content
display screen
page
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910494081.8A
Other languages
Chinese (zh)
Other versions
CN111078096B (en)
Inventor
郑洲 (Zheng Zhou)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL China Star Optoelectronics Technology Co Ltd
Original Assignee
Shenzhen China Star Optoelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen China Star Optoelectronics Technology Co Ltd
Priority to CN201910494081.8A
Publication of CN111078096A
Application granted
Publication of CN111078096B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 Querying
    • G06F16/432 Query formulation
    • G06F16/433 Query formulation using audio data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Abstract

Embodiments of the invention relate to the technical field of human-computer interaction and disclose a human-computer interaction method and an electronic device. The method comprises the following steps: identifying, from among a plurality of fingers touching a learning page, a target finger wearing a fingerstall marked with a designated graphic; and identifying the content touched by the target finger on the learning page as the content the user is pointing to. Embodiments of the invention help improve the recognition rate and accuracy for the content the user points to and improve the user's operating experience.

Description

Man-machine interaction method and electronic equipment
Technical Field
The invention relates to the technical field of electronic devices, and in particular to a human-computer interaction method and an electronic device.
Background
Currently, many student users use electronic devices (e.g., home tutoring machines) to assist their learning. In a common usage scenario, a student user taps a piece of content on a learning page (e.g., a paper page) with a finger (e.g., the index finger), such as an unfamiliar word the student cannot read, to trigger the electronic device to recognize the content being pointed to and read it aloud.
In practice, it has been found that when a student user taps content on a learning page with one finger, the other fingers must be bent and tucked away, or they degrade the electronic device's recognition rate and accuracy for the pointed-to content. This is a demanding requirement that worsens the student user's operating experience; younger student users in particular may stretch out several fingers or rest a palm flat on the learning page during actual use, which easily degrades the electronic device's recognition rate and accuracy for the content the student is pointing to.
Disclosure of Invention
Embodiments of the invention disclose a human-computer interaction method and an electronic device, which help improve the recognition rate and accuracy for the content a user points to and improve the user's operating experience.
A first aspect of the embodiments of the invention discloses a human-computer interaction method, comprising:
identifying, from among a plurality of fingers touching a learning page, a target finger wearing a fingerstall marked with a designated graphic;
and identifying the content touched by the target finger on the learning page as the content the user is pointing to.
As an optional implementation, in the first aspect of the embodiments of the invention, the meaning of the designated graphic indicates that content is to be read aloud, and the method further comprises:
acquiring the preset voiceprint feature bound to the designated graphic;
and reading the content the user is pointing to aloud with the preset voiceprint feature.
As another optional implementation, in the first aspect of the embodiments of the invention, the meaning of the designated graphic further indicates that the search result corresponding to the content is to be queried, and the method further comprises:
querying the search result corresponding to the content the user is pointing to;
and controlling a display screen to output the search result corresponding to the content the user is pointing to.
As another optional implementation, in the first aspect of the embodiments of the invention, after controlling the display screen to output the search result corresponding to the content the user is pointing to, the method further comprises:
detecting whether the user's gaze falls on the display screen;
if the user's gaze does not fall on the display screen, controlling the display screen to enter a standby state;
and reading the search result corresponding to the content the user is pointing to aloud with the preset voiceprint feature.
As another optional implementation, in the first aspect of the embodiments of the invention, the method further comprises:
if the user's gaze falls on the display screen, identifying whether the user is a blind user, and if so, executing the step of controlling the display screen to enter a standby state.
As another optional implementation, in the first aspect of the embodiments of the invention, the method is applied to an electronic device deployed in a library, wherein the electronic device establishes a communication connection with an interactive device deployed at the entrance of the library. Fingerstalls marked with the designated graphic are placed on the interactive device; when the interactive device detects that a user has picked up such a fingerstall, it outputs a set of cartoon characters, binds the designated graphic to the preset voiceprint feature corresponding to the cartoon character the user selects from the set, and sends the binding to the electronic device.
A second aspect of the embodiments of the invention discloses an electronic device, comprising:
a first identification unit, configured to identify, from among a plurality of fingers touching a learning page, a target finger wearing a fingerstall marked with a designated graphic;
and a second identification unit, configured to identify the content touched by the target finger on the learning page as the content the user is pointing to.
As an optional implementation, in the second aspect of the embodiments of the invention, the meaning of the designated graphic indicates that content is to be read aloud, and the electronic device further comprises:
an acquisition unit, configured to acquire the preset voiceprint feature bound to the designated graphic;
and a reading unit, configured to read the content the user is pointing to aloud with the preset voiceprint feature.
As another optional implementation, in the second aspect of the embodiments of the invention, the meaning of the designated graphic further indicates that the search result corresponding to the content is to be queried, and the electronic device further comprises:
a query unit, configured to query the search result corresponding to the content the user is pointing to;
and a first control unit, configured to control the display screen to output the search result corresponding to the content the user is pointing to.
As another optional implementation, in the second aspect of the embodiments of the invention, the electronic device further comprises:
a detection unit, configured to detect whether the user's gaze falls on the display screen after the first control unit controls the display screen to output the search result corresponding to the content the user is pointing to;
a second control unit, configured to control the display screen to enter a standby state when the detection unit detects that the user's gaze does not fall on the display screen;
wherein the reading unit is further configured to read the search result corresponding to the content the user is pointing to aloud with the preset voiceprint feature.
As another optional implementation, in the second aspect of the embodiments of the invention, the electronic device further comprises:
a third identification unit, configured to identify whether the user is a blind user when the detection unit detects that the user's gaze falls on the display screen;
wherein the second control unit is further configured to control the display screen to enter a standby state when the third identification unit identifies the user as a blind user.
As another optional implementation, in the second aspect of the embodiments of the invention, the electronic device establishes a communication connection with an interactive device deployed at the entrance of a library. Fingerstalls marked with the designated graphic are placed on the interactive device; when the interactive device detects that a user has picked up such a fingerstall, it outputs a set of cartoon characters, binds the designated graphic to the preset voiceprint feature corresponding to the cartoon character the user selects from the set, and sends the binding to the electronic device.
A third aspect of the embodiments of the present invention discloses another electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor invokes the executable program code stored in the memory to perform all or part of the steps of any method disclosed in the first aspect of the embodiments of the invention.
A fourth aspect of the embodiments of the invention discloses a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform all or part of the steps of any method disclosed in the first aspect of the embodiments of the invention.
A fifth aspect of the embodiments of the invention discloses a computer program product which, when run on a computer, causes the computer to perform all or part of the steps of any method of the first aspect of the embodiments of the invention.
Compared with the prior art, embodiments of the invention have the following beneficial effects:
In embodiments of the invention, the electronic device can identify, from among a plurality of fingers touching a learning page, a target finger wearing a fingerstall marked with a designated graphic, and can then identify the content touched by that target finger on the learning page as the content the user is pointing to. Thus, even if a younger student user stretches out several fingers or rests a palm on the learning page during actual use, the electronic device's recognition rate and accuracy for the content the student points to are unaffected, which helps improve the recognition rate and accuracy for the pointed-to content. In addition, the user can wear the fingerstall marked with the designated graphic on whichever finger is habitual, so the pointed-to content is recognized accurately whether the user's fingers are spread out, bent, or held in any other natural, comfortable posture. The usage constraints are therefore minimal: the user can complete learning while following his or her own habits in a natural, comfortable state, which improves the user's operating experience.
Drawings
To illustrate the technical solutions in the embodiments of the invention more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a human-computer interaction method disclosed in an embodiment of the invention;
FIG. 2 is a schematic flow chart of another human-computer interaction method disclosed in an embodiment of the invention;
FIG. 3 is a schematic flow chart of another human-computer interaction method disclosed in an embodiment of the invention;
FIG. 4 is a schematic structural diagram of an electronic device disclosed in an embodiment of the invention;
FIG. 5 is a schematic structural diagram of another electronic device disclosed in an embodiment of the invention;
FIG. 6 is a schematic structural diagram of another electronic device disclosed in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, of embodiments of the present invention are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Embodiments of the invention disclose a human-computer interaction method and an electronic device, which help improve the recognition rate and accuracy for the content a user points to and improve the user's operating experience. A detailed description is given below with reference to the drawings.
Referring to FIG. 1, FIG. 1 is a schematic flow chart of a human-computer interaction method according to an embodiment of the invention. As shown in FIG. 1, the method may include the following steps.
101. The electronic device identifies, from among a plurality of fingers touching a learning page, a target finger wearing a fingerstall marked with a designated graphic.
In one embodiment, a user (e.g., a student user) may touch a learning page (e.g., a paper learning page) with several fingers, one of which, the target finger, wears a fingerstall marked with a designated graphic (e.g., a designated LOGO). The electronic device can then identify, from among the several fingers touching the learning page, the target finger wearing that fingerstall, for example by means of its own camera module or an external camera module.
In one embodiment, the electronic device may detect through a sensor (e.g., a pressure sensor) whether a learning page (e.g., a paper learning page) is being touched. If so, it may start its own camera module or an external camera module to photograph the touched learning page. The captured image includes the several fingers, one of which, the target finger, wears the fingerstall marked with the designated graphic (e.g., the designated LOGO); the image also includes the learning content on the page that is not occluded by those fingers. The electronic device can then identify, from among the fingers in the captured image, the target finger wearing the fingerstall.
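A minimal sketch of this touch-triggered capture step, assuming an OpenCV template match is used to locate the designated graphic; the camera handle, the template asset, and the threshold value are illustrative assumptions, not part of the patent:

```python
import cv2

LOGO_TEMPLATE = cv2.imread("designated_logo.png", cv2.IMREAD_GRAYSCALE)  # assumed asset
MATCH_THRESHOLD = 0.8  # assumed tuning value

def on_page_touched(camera):
    """Called when the pressure sensor reports a touch; photographs the page
    and locates the fingerstall's designated graphic in the image."""
    frame = camera.capture()  # hypothetical camera-module API
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    scores = cv2.matchTemplate(gray, LOGO_TEMPLATE, cv2.TM_CCOEFF_NORMED)
    _, best, _, top_left = cv2.minMaxLoc(scores)
    if best < MATCH_THRESHOLD:
        return None  # no marked fingerstall visible; ignore this touch
    h, w = LOGO_TEMPLATE.shape
    # Bounding box of the designated graphic, taken as the target fingertip.
    return (top_left[0], top_left[1], w, h)
```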
In another embodiment, after the electronic device identifies the target finger wearing the fingerstall marked with the designated graphic from among the several fingers touching the learning page, the following steps may also be performed:
the electronic device may query whether the fingerstall marked with the designated graphic is bound to an allowed-use time range. If not, it proceeds directly to step 102; if so, it may check whether the current system time falls within the allowed-use time range, performing step 102 if it does and otherwise ending the flow.
The allowed-use time range may be the same as the electronic device's working time range; alternatively it may differ, for example being a sub-range of the working time range within which the device's remaining battery capacity exceeds a specified capacity.
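A sketch of this gate, assuming the bound range is a pair of datetime.time values and that the battery condition from the previous paragraph is checked at call time; both assumptions are illustrative:

```python
from datetime import datetime

def may_recognize(allowed_range, battery_pct, min_battery_pct=20):
    """allowed_range: (start, end) datetime.time pair bound to the fingerstall,
    or None when no range is bound. Returns True when step 102 may proceed."""
    if allowed_range is None:
        return True  # no binding: recognize the pointed-to content directly
    start, end = allowed_range
    now = datetime.now().time()
    # Illustrative refinement: treat the permitted window as the part of the
    # working hours where remaining battery exceeds a specified level.
    return start <= now <= end and battery_pct > min_battery_pct
```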
102. The electronic device identifies the content touched by the target finger on the learning page as the content the user is pointing to.
In one embodiment, the electronic device may use the learning content in the captured image that is not occluded by the fingers to retrieve the electronic page corresponding to the learning page (e.g., the paper learning page). By comparing the content of the electronic page with the unoccluded learning content in the image, it can determine which learning content on the page is occluded by the fingers; then, using the target finger's position on the page, it determines which of that occluded content is occluded by (i.e., touched by) the target finger, and takes it as the content the user is pointing to.
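A sketch of the comparison step, assuming the electronic page is available as (word, bounding-box) pairs and the photo has been OCR'd; all data structures here are illustrative:

```python
def pointed_content(page_tokens, visible_words, finger_box):
    """page_tokens: (word, (x, y, w, h)) pairs from the matched electronic page;
    visible_words: words OCR'd from the photo (i.e., not occluded by fingers);
    finger_box: (x, y, w, h) of the target fingertip in page coordinates.
    Words on the electronic page missing from the photo are the occluded ones;
    the occluded word nearest the target fingertip is the pointed-to content."""
    visible = set(visible_words)
    occluded = [(word, box) for word, box in page_tokens if word not in visible]
    fx, fy = finger_box[0] + finger_box[2] / 2, finger_box[1] + finger_box[3] / 2

    def dist_sq(box):
        cx, cy = box[0] + box[2] / 2, box[1] + box[3] / 2
        return (cx - fx) ** 2 + (cy - fy) ** 2

    word, _ = min(occluded, key=lambda t: dist_sq(t[1]), default=(None, None))
    return word
```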
Thus, with the human-computer interaction method of FIG. 1, even if a younger student user stretches out several fingers or rests a palm on the learning page during actual use, the electronic device's recognition rate and accuracy for the content the student points to are unaffected, which helps improve the recognition rate and accuracy for the pointed-to content. In addition, the user can wear the fingerstall marked with the designated graphic on whichever finger is habitual, so the pointed-to content is recognized accurately whether the user's fingers are spread out, bent, or held in any other natural, comfortable posture; the usage constraints are minimal, the user can complete learning while following his or her own habits in a natural, comfortable state, and the user's operating experience is improved.
Referring to FIG. 2, FIG. 2 is a schematic flow chart of another human-computer interaction method according to an embodiment of the invention. In the method of FIG. 2, an electronic device is deployed in a library (e.g., a school library or a municipal public library) and establishes a communication connection with an interactive device (e.g., an interactive robot) deployed at the library entrance. Fingerstalls marked with a designated graphic are placed on the interactive device; when the interactive device detects that a user has picked one up, it outputs a set of cartoon characters, binds the designated graphic to the preset voiceprint feature corresponding to the cartoon character the user selects from the set, and sends the binding to the electronic device for pre-storage. The electronic device may also pre-store the meaning of the designated graphic: the meaning may indicate only that content is to be read aloud, or only that the search result corresponding to the content is to be queried, or both. As shown in FIG. 2, the method may include the following steps.
201. The electronic device identifies, from among a plurality of fingers touching a learning page, a target finger wearing a fingerstall marked with a designated graphic, where the meaning of the designated graphic indicates both that the content is to be read aloud and that the search result corresponding to the content is to be queried.
202. The electronic device identifies the content touched by the target finger on the learning page as the content the user is pointing to.
203. The electronic device acquires the preset voiceprint feature bound to the designated graphic.
As described above, the preset voiceprint feature bound to the designated graphic may correspond to a cartoon character the user selected from the set output by the interactive device. When the interactive device detects that a user has picked up a fingerstall marked with the designated graphic, it can collect the user's attribute information, which may include facial image information; by analyzing that facial image information it can obtain the user's gender and age. The interactive device can then find, from the network, a set of popular cartoon characters matching both the user's gender and age, output the set so the user can select a character from it, bind the designated graphic to the preset voiceprint feature corresponding to the selected character, and send the binding to the electronic device for pre-storage. This improves the experience of using the fingerstall marked with the designated graphic, helps attract users to it, and increases user stickiness.
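A sketch of this binding flow on the interactive device; the device handles and every method on them are hypothetical interfaces standing in for hardware the patent does not specify:

```python
from dataclasses import dataclass

@dataclass
class Cartoon:
    name: str
    preset_voiceprint: bytes  # serialized voiceprint feature for TTS


def bind_voiceprint(interactive_dev, learning_dev, graphic_id):
    """Runs on the interactive device when a fingerstall pickup is detected.
    All methods on interactive_dev / learning_dev are assumed interfaces."""
    face = interactive_dev.capture_face()             # camera at the entrance
    gender, age = interactive_dev.analyze_face(face)  # face-attribute model
    # Popular cartoon characters matching both gender and age, from the network.
    cartoons = interactive_dev.fetch_popular_cartoons(gender, age)
    choice = interactive_dev.let_user_pick(cartoons)  # on-screen selection
    # Bind the designated graphic to the chosen character's voiceprint and
    # push the pair to the in-library electronic device for pre-storage.
    learning_dev.store_binding(graphic_id, choice.preset_voiceprint)
```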
204. The electronic device reads the content the user is pointing to aloud with the preset voiceprint feature.
For example, the electronic device may read the content aloud through a loudspeaker or connected earphones, using the preset voiceprint feature.
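A hedged sketch of the read-aloud step, assuming a TTS engine that accepts a custom-voice parameter; real engines differ in how a preset voiceprint is selected, so the handle and keyword below are illustrative:

```python
def read_aloud(tts_engine, text, voiceprint):
    """Synthesize `text` in the timbre bound to the designated graphic and
    play it through the speaker or earphones. tts_engine is a hypothetical
    handle; `voice=` stands in for whatever voice-selection mechanism the
    actual engine exposes."""
    audio = tts_engine.synthesize(text, voice=voiceprint)
    tts_engine.play(audio)
```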
205. The electronic device queries the search result corresponding to the content the user is pointing to.
For example, the search result corresponding to the content the user is pointing to may include a test question or a definition corresponding to that content; this is not limited in the embodiments of the invention.
206. The electronic device controls the display screen to output the search result corresponding to the content the user is pointing to.
Thus, with the human-computer interaction method of FIG. 2, even if a younger student user stretches out several fingers or rests a palm on the learning page, the electronic device's recognition rate and accuracy for the content the student points to are unaffected, which helps improve the recognition rate and accuracy for the pointed-to content. In addition, the user can wear the fingerstall marked with the designated graphic on whichever finger is habitual, so the pointed-to content is recognized accurately whether the user's fingers are spread out, bent, or held in any other natural, comfortable posture; the usage constraints are minimal, the user can complete learning while following his or her own habits in a natural, comfortable state, and the user's operating experience is improved.
In addition, the method of FIG. 2 improves the experience of using the fingerstall marked with the designated graphic, helps attract users to it, and increases its user stickiness.
Referring to FIG. 3, FIG. 3 is a schematic flow chart of another human-computer interaction method according to an embodiment of the invention. In the method of FIG. 3, an electronic device is deployed in a library (e.g., a school library or a municipal public library) and establishes a communication connection with an interactive device (e.g., an interactive robot) deployed at the library entrance. Fingerstalls marked with a designated graphic are placed on the interactive device; when the interactive device detects that a user has picked one up, it outputs a set of cartoon characters, binds the designated graphic to the preset voiceprint feature corresponding to the cartoon character the user selects from the set, and sends the binding to the electronic device for pre-storage. The electronic device may also pre-store the meaning of the designated graphic: the meaning may indicate only that content is to be read aloud, or only that the search result corresponding to the content is to be queried, or both. As shown in FIG. 3, the method may include the following steps.
301. The electronic device identifies, from among a plurality of fingers touching a learning page, a target finger wearing a fingerstall marked with a designated graphic, where the meaning of the designated graphic indicates both that the content is to be read aloud and that the search result corresponding to the content is to be queried.
302. The electronic device identifies the content touched by the target finger on the learning page as the content the user is pointing to.
303. The electronic device acquires the preset voiceprint feature bound to the designated graphic.
304. The electronic device reads the content the user is pointing to aloud with the preset voiceprint feature.
305. The electronic device queries the search result corresponding to the content the user is pointing to.
306. The electronic device controls the display screen to output the search result corresponding to the content the user is pointing to.
307. The electronic device detects whether the user's gaze falls on the display screen; if not, steps 308 to 309 are performed; if so, step 310 is performed.
308. The electronic device controls the display screen to enter a standby state.
309. The electronic device reads the search result corresponding to the content the user is pointing to aloud with the preset voiceprint feature, and the flow ends.
310. The electronic device identifies whether the user is a blind user; if so, steps 308 to 309 are performed; if not, the flow returns to step 306.
In one embodiment, after performing step 302, the electronic device may further perform the following steps:
the electronic device collects the user's facial image information and reports it, together with the electronic page corresponding to the learning page, to a service device; the content of the learning page is the same as that of its corresponding electronic page;
the service device identifies the user's identity information from the facial image information and establishes a mapping relation between the electronic page and the user's identity information, so that the user's supervisor (e.g., a parent or teacher) can follow the user's learning activity through the mapping relation.
Further, after establishing the mapping relation between the electronic page and the user's identity information, the service device may also perform the following steps:
using the user's identity information, the service device looks up the learning task the supervisor assigned to the user, the task including at least a designated point-read page. It judges whether the content of the electronic page is the same as that of the designated point-read page: if so, it attaches a first mark to the established mapping relation, the first mark indicating that the electronic page belongs to the designated point-read page; if not, it attaches a second mark indicating that the electronic page does not belong to the designated point-read page. The supervisor can thereby not only follow the user's learning activity but also learn whether the user has completed the point-reading of the designated point-read page included in the learning task.
Further, the learning task may also include a point-read time range for the designated point-read page. Accordingly, after attaching the first mark to the established mapping relation, the service device may further perform the following steps:
the service device judges whether the moment the learning page was first touched falls within the point-read time range that the learning task specifies for the designated point-read page. If so, it attaches a third mark to the established mapping relation, indicating that the user point-read the designated page within its point-read time range; if not, it attaches a fourth mark, indicating that the user did not point-read the designated page within that range. The supervisor can thereby learn whether the user took the initiative to point-read the designated page within the time range the learning task specifies, and so gauge the user's point-reading enthusiasm.
In one embodiment, the electronic device detects whether the user's gaze falls on the display screen as follows:
the electronic device collects the user's facial image information, determines the direction of the user's gaze from it, and judges from that direction whether the gaze falls on the display screen, concluding that the gaze falls on the screen if it does and that it does not otherwise.
Thus, with the human-computer interaction method of FIG. 3, even if a younger student user stretches out several fingers or rests a palm on the learning page, the electronic device's recognition rate and accuracy for the content the student points to are unaffected, which helps improve the recognition rate and accuracy for the pointed-to content. In addition, the user can wear the fingerstall marked with the designated graphic on whichever finger is habitual, so the pointed-to content is recognized accurately whether the user's fingers are spread out, bent, or held in any other natural, comfortable posture; the usage constraints are minimal, the user can complete learning while following his or her own habits in a natural, comfortable state, and the user's operating experience is improved.
In addition, the method of FIG. 3 improves the experience of using the fingerstall marked with the designated graphic, helps attract users to it, and increases its user stickiness.
In addition, with the method of FIG. 3, the supervisor can not only follow the user's learning activity but also learn whether the user has completed the point-reading of the designated point-read page included in the learning task, and gauge the user's point-reading enthusiasm.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the invention. As shown in FIG. 4, the electronic device may include:
a first identification unit 401, configured to identify, from among a plurality of fingers touching a learning page, a target finger wearing a fingerstall marked with a designated graphic;
and a second identification unit 402, configured to identify the content touched by the target finger on the learning page as the content the user is pointing to.
For example, the first identification unit 401 may identify the target finger wearing the fingerstall marked with the designated graphic (e.g., a designated LOGO) from among the fingers touching the learning page by means of the electronic device's own camera module or an external camera module.
In one embodiment, the first identification unit 401 may detect through a sensor (e.g., a pressure sensor) whether a learning page (e.g., a paper learning page) is being touched. If so, it may start the electronic device's own camera module or an external camera module to photograph the touched learning page. The captured image includes the several fingers, one of which, the target finger, wears the fingerstall marked with the designated graphic; the image also includes the learning content on the page that is not occluded by those fingers. The first identification unit 401 can then identify, from among the fingers in the captured image, the target finger wearing the fingerstall.
In another embodiment, after the first identification unit 401 identifies the target finger wearing the fingerstall marked with the designated graphic from among the fingers touching the learning page, the following operations may also be performed:
querying whether the fingerstall marked with the designated graphic is bound to an allowed-use time range. If not, the second identification unit 402 is triggered to identify the content touched by the target finger on the learning page as the content the user is pointing to; if so, it is checked whether the current system time falls within the allowed-use time range, the second identification unit 402 being triggered if it does and not being triggered otherwise.
The allowed-use time range may be the same as the electronic device's working time range; alternatively it may differ, for example being a sub-range of the working time range within which the device's remaining battery capacity exceeds a specified capacity.
In one embodiment, the second identification unit 402 may use the learning content in the captured image that is not occluded by the fingers to retrieve the electronic page corresponding to the learning page, and compare the content of the electronic page with the unoccluded learning content in the image so as to determine which learning content on the page is occluded by the fingers; then, using the target finger's position on the page, it determines which of that occluded content is occluded by (i.e., touched by) the target finger, and takes it as the content the user is pointing to.
Thus, with the electronic device of FIG. 4, even if a younger student user stretches out several fingers or rests a palm on the learning page, the device's recognition rate and accuracy for the content the student points to are unaffected, which helps improve the recognition rate and accuracy for the pointed-to content. In addition, the user can wear the fingerstall marked with the designated graphic on whichever finger is habitual, so the pointed-to content is recognized accurately whether the user's fingers are spread out, bent, or held in any other natural, comfortable posture; the usage constraints are minimal, the user can complete learning while following his or her own habits in a natural, comfortable state, and the user's operating experience is improved.
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of another electronic device according to an embodiment of the invention, obtained by further optimizing the electronic device of FIG. 4. The electronic device of FIG. 5 may be deployed in a library (e.g., a school library or a municipal public library) and establishes a communication connection with an interactive device (e.g., an interactive robot) deployed at the library entrance. Fingerstalls marked with a designated graphic are placed on the interactive device; when the interactive device detects that a user has picked one up, it outputs a set of cartoon characters, binds the designated graphic to the preset voiceprint feature corresponding to the cartoon character the user selects from the set, and sends the binding to the electronic device for pre-storage. The electronic device may also pre-store the meaning of the designated graphic: the meaning may indicate only that content is to be read aloud, or only that the search result corresponding to the content is to be queried, or both. Compared with the electronic device of FIG. 4, the electronic device of FIG. 5 may further include an acquisition unit 403 and a reading unit 404;
the acquisition unit 403 is configured to acquire the preset voiceprint feature bound to the designated graphic when the meaning of the designated graphic indicates that content is to be read aloud; the reading unit 404 is configured to read the content the user is pointing to aloud with the preset voiceprint feature.
As an optional implementation, the electronic device of FIG. 5 may further include a query unit 405 and a first control unit 406. When the meaning of the designated graphic further indicates that the search result corresponding to the content is to be queried, the query unit 405 is configured to query the search result corresponding to the content the user is pointing to, and the first control unit 406 is configured to control the display screen to output that search result.
As an optional implementation, the electronic device of FIG. 5 may further include:
a detection unit 407, configured to detect whether the user's gaze falls on the display screen after the first control unit 406 controls the display screen to output the search result corresponding to the content the user is pointing to;
a second control unit 408, configured to control the display screen to enter a standby state when the detection unit 407 detects that the user's gaze does not fall on the display screen;
correspondingly, the reading unit 404 is further configured to read the search result corresponding to the content the user is pointing to aloud with the preset voiceprint feature.
As an optional implementation, the electronic device of FIG. 5 may further include:
a third identification unit 409, configured to identify whether the user is a blind user when the detection unit 407 detects that the user's gaze falls on the display screen;
the second control unit 408 is further configured to control the display screen to enter a standby state when the third identification unit 409 identifies the user as a blind user.
In the embodiments of the invention, the preset voiceprint feature bound to the designated graphic may correspond to a cartoon character the user selected from the set output by the interactive device. When the interactive device detects that a user has picked up a fingerstall marked with the designated graphic, it can collect the user's attribute information, which may include facial image information; by analyzing that facial image information it can obtain the user's gender and age. The interactive device can then find, from the network, a set of popular cartoon characters matching both the user's gender and age, output the set so the user can select a character from it, and send the preset voiceprint feature corresponding to the selected character, together with the designated graphic, to the electronic device for pre-storage and later acquisition by the acquisition unit 403. This improves the experience of using the fingerstall marked with the designated graphic, helps attract users to it, and increases user stickiness.
In one embodiment, after the second identification unit 402 identifies the content touched by the target finger on the learning page as the content the user is pointing to, the electronic device may further perform the following operations:
the electronic device collects the user's facial image information and reports it, together with the electronic page corresponding to the learning page, to a service device; the content of the learning page is the same as that of its corresponding electronic page;
the service device identifies the user's identity information from the facial image information and establishes a mapping relation between the electronic page and the user's identity information, so that the user's supervisor (e.g., a parent or teacher) can follow the user's learning activity through the mapping relation.
Further, after establishing the mapping relation between the electronic page and the user's identity information, the service device may also perform the following steps:
using the user's identity information, the service device looks up the learning task the supervisor assigned to the user, the task including at least a designated point-read page. It judges whether the content of the electronic page is the same as that of the designated point-read page: if so, it attaches a first mark to the established mapping relation, the first mark indicating that the electronic page belongs to the designated point-read page; if not, it attaches a second mark indicating that the electronic page does not belong to the designated point-read page. The supervisor can thereby not only follow the user's learning activity but also learn whether the user has completed the point-reading of the designated point-read page included in the learning task.
Further, the learning task may also include a point-read time range for the designated point-read page. Accordingly, after attaching the first mark to the established mapping relation, the service device may further perform the following steps:
the service device judges whether the moment the learning page was first touched falls within the point-read time range that the learning task specifies for the designated point-read page. If so, it attaches a third mark to the established mapping relation, indicating that the user point-read the designated page within its point-read time range; if not, it attaches a fourth mark, indicating that the user did not point-read the designated page within that range. The supervisor can thereby learn whether the user took the initiative to point-read the designated page within the time range the learning task specifies, and so gauge the user's point-reading enthusiasm.
In one embodiment, the electronic device detects whether the user's gaze falls on the display screen as follows:
the electronic device collects the user's facial image information, determines the direction of the user's gaze from it, and judges from that direction whether the gaze falls on the display screen, concluding that the gaze falls on the screen if it does and that it does not otherwise.
Thus, with the electronic device of FIG. 5, even if a younger student user stretches out several fingers or rests a palm on the learning page, the device's recognition rate and accuracy for the content the student points to are unaffected, which helps improve the recognition rate and accuracy for the pointed-to content. In addition, the user can wear the fingerstall marked with the designated graphic on whichever finger is habitual, so the pointed-to content is recognized accurately whether the user's fingers are spread out, bent, or held in any other natural, comfortable posture; the usage constraints are minimal, the user can complete learning while following his or her own habits in a natural, comfortable state, and the user's operating experience is improved.
In addition, the electronic device of FIG. 5 improves the experience of using the fingerstall marked with the designated graphic, helps attract users to it, and increases its user stickiness.
In addition, with the electronic device of FIG. 5, the supervisor can not only follow the user's learning activity but also learn whether the user has completed the point-reading of the designated point-read page included in the learning task, and gauge the user's point-reading enthusiasm.
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of another electronic device according to an embodiment of the invention. As shown in FIG. 6, the electronic device may include:
a memory 601 storing executable program code;
a processor 602 coupled to the memory 601;
the processor 502 calls the executable program code stored in the memory 501 to execute all or part of the steps in any one of the human-computer interaction methods in fig. 1 to 3.
In addition, an embodiment of the invention further discloses a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform all or part of the steps of any of the human-computer interaction methods of FIGS. 1 to 3.
In addition, an embodiment of the invention further discloses a computer program product which, when run on a computer, causes the computer to perform all or part of the steps of any of the human-computer interaction methods of FIGS. 1 to 3.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
The human-computer interaction method and electronic device disclosed in the embodiments of the invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the invention, and the description of the embodiments is intended only to aid understanding of the method and its core idea. Meanwhile, for those of ordinary skill in the art, the specific implementations and the scope of application may vary in accordance with the idea of the invention. In summary, the content of this specification should not be construed as limiting the invention.

Claims (12)

1. A human-computer interaction method, comprising:
identifying, from among a plurality of fingers touching a learning page, a target finger wearing a finger stall marked with a designated graphic;
and identifying the content touched by the target finger on the learning page as the content pointed to by the user.
2. The human-computer interaction method of claim 1, wherein the meaning of the designated graphic is used to indicate that content is to be read aloud, and the method further comprises:
acquiring the preset voiceprint feature bound to the designated graphic;
and reading aloud the content pointed to by the user using the preset voiceprint feature.
3. The human-computer interaction method according to claim 2, wherein the meaning of the designated graphic is further used to indicate querying the search result corresponding to content, and the method further comprises:
querying the search result corresponding to the content pointed to by the user;
and controlling a display screen to output the search result corresponding to the content pointed to by the user.
4. The human-computer interaction method according to claim 3, wherein after controlling the display screen to output the search result corresponding to the content pointed to by the user, the method further comprises:
detecting whether the user's gaze falls on the display screen;
if the user's gaze does not fall on the display screen, controlling the display screen to enter a standby state;
and reading aloud the search result corresponding to the content pointed to by the user using the preset voiceprint feature.
5. The human-computer interaction method according to claim 4, wherein the method further comprises:
if the user's gaze falls on the display screen, identifying whether the user is a blind user, and if so, performing the step of controlling the display screen to enter a standby state.
6. The human-computer interaction method according to any one of claims 2 to 4, applied to an electronic device arranged in a library, wherein the electronic device is in communication connection with an interactive device arranged at an entrance of the library, and the interactive device is provided with finger stalls marked with the designated graphic; when the interactive device detects that a user picks up a finger stall marked with the designated graphic, the interactive device outputs a cartoon object set, binds the designated graphic with the preset voiceprint feature corresponding to the cartoon object selected by the user from the cartoon object set, and sends the binding to the electronic device.
7. An electronic device, comprising:
the first identification unit is used for identifying, from among a plurality of fingers touching a learning page, a target finger wearing a finger stall marked with a designated graphic;
and the second identification unit is used for identifying the content touched by the target finger on the learning page as the content pointed to by the user.
8. The electronic device of claim 7, wherein the meaning of the designated graphic is used to indicate that content is to be read aloud, and the electronic device further comprises:
the acquisition unit is used for acquiring the preset voiceprint feature bound to the designated graphic;
and the reading unit is used for reading aloud the content pointed to by the user using the preset voiceprint feature.
9. The electronic device of claim 8, wherein the meaning of the designated graphic is further used to indicate querying the search result corresponding to content, and the electronic device further comprises:
the query unit is used for querying the search result corresponding to the content pointed to by the user;
and the first control unit is used for controlling the display screen to output the search result corresponding to the content pointed to by the user.
10. The electronic device of claim 9, further comprising:
the detection unit is used for detecting whether the user's gaze falls on the display screen after the first control unit controls the display screen to output the search result corresponding to the content pointed to by the user;
the second control unit is used for controlling the display screen to enter a standby state when the detection unit detects that the user's gaze does not fall on the display screen;
and the reading unit is further used for reading aloud the search result corresponding to the content pointed to by the user using the preset voiceprint feature.
11. The electronic device of claim 10, further comprising:
the third identification unit is used for identifying whether the user is a blind user when the detection unit detects that the user's gaze falls on the display screen;
and the second control unit is further used for controlling the display screen to enter a standby state when the third identification unit identifies that the user is a blind user.
12. The electronic device according to any one of claims 7 to 11, wherein the electronic device is arranged in a library and is in communication connection with an interactive device arranged at an entrance of the library, and the interactive device is provided with finger stalls marked with the designated graphic; when the interactive device detects that a user picks up a finger stall marked with the designated graphic, the interactive device outputs a cartoon object set, binds the designated graphic with the preset voiceprint feature corresponding to the cartoon object selected by the user from the cartoon object set, and sends the binding to the electronic device.
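As a worked illustration of the interactive-device flow in claims 6 and 12 above, the sketch below shows one way the device at the library entrance could bind the designated graphic to the preset voiceprint feature of the cartoon object the user selects and send that binding to the electronic device. The voiceprint identifiers, the JSON message format, and the socket transport are assumptions made for the sketch, not details taken from the patent.

# Hypothetical binding flow for the interactive device (claims 6 and 12);
# all identifiers and the transport are assumptions, not from the patent.
import json
import socket

CARTOON_VOICEPRINTS = {
    "panda": "vp_panda_01",   # assumed preset voiceprint identifiers
    "robot": "vp_robot_01",
}

def on_finger_stall_picked_up(designated_graphic_id, choose_cartoon, device_addr):
    # choose_cartoon: callback that displays the cartoon object set and
    # returns the user's selection; device_addr: (host, port) of the
    # electronic device in the library.
    cartoon = choose_cartoon(sorted(CARTOON_VOICEPRINTS))
    binding = {
        "graphic": designated_graphic_id,
        "voiceprint": CARTOON_VOICEPRINTS[cartoon],
    }
    # send the graphic-to-voiceprint binding to the electronic device
    with socket.create_connection(device_addr) as conn:
        conn.sendall(json.dumps(binding).encode("utf-8"))

Note that only the binding is transmitted, not the cartoon object set itself, which matches the wording of claim 6: the electronic device needs just the mapping from the designated graphic to a preset voiceprint feature in order to read content aloud in the selected voice.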
CN201910494081.8A 2019-06-09 2019-06-09 Man-machine interaction method and electronic equipment Active CN111078096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910494081.8A CN111078096B (en) 2019-06-09 2019-06-09 Man-machine interaction method and electronic equipment


Publications (2)

Publication Number Publication Date
CN111078096A (en) 2020-04-28
CN111078096B CN111078096B (en) 2021-07-23

Family

ID=70310064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910494081.8A Active CN111078096B (en) 2019-06-09 2019-06-09 Man-machine interaction method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111078096B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140154659A1 (en) * 2012-11-21 2014-06-05 Laureate Education, Inc. Facial expression recognition in educational learning systems
CN105183357A (en) * 2015-09-09 2015-12-23 魅族科技(中国)有限公司 Terminal device and page control method
CN106227335A (en) * 2016-07-14 2016-12-14 广东小天才科技有限公司 Preview teaching materials and the interactive learning method of video classes and Applied Learning client
CN106228339A (en) * 2016-07-15 2016-12-14 广东小天才科技有限公司 The control method of a kind of mobile device learning model and device, mobile device
CN106713896A (en) * 2016-11-30 2017-05-24 世优(北京)科技有限公司 Static image multimedia presentation method, device and system
CN107749201A (en) * 2017-11-07 2018-03-02 广东欧珀移动通信有限公司 Point reads object processing method, device, storage medium and electronic equipment
US20190164447A1 (en) * 2017-11-30 2019-05-30 Beijing Xiaomi Mobile Software Co., Ltd. Story machine, control method and control device therefor, storage medium and story machine player system
CN109068378A (en) * 2018-07-13 2018-12-21 奇酷互联网络科技(深圳)有限公司 The method, apparatus of mobile terminal and control mobile terminal breath screen
CN109634552A (en) * 2018-12-17 2019-04-16 广东小天才科技有限公司 It is a kind of to enter for control method and terminal device applied to dictation
CN109669661A (en) * 2018-12-20 2019-04-23 广东小天才科技有限公司 A kind of control method and electronic equipment of dictation progress
CN109784289A (en) * 2019-01-23 2019-05-21 广东小天才科技有限公司 A kind of user's reminding method and private tutor's equipment applied to private tutor's equipment
CN109656465A (en) * 2019-02-26 2019-04-19 广东小天才科技有限公司 A kind of content acquisition method and private tutor's equipment applied to private tutor's equipment

Also Published As

Publication number Publication date
CN111078096B (en) 2021-07-23

Similar Documents

Publication Publication Date Title
US20200202226A1 (en) System and method for context based deep knowledge tracing
US20170193992A1 (en) Voice control method and apparatus
RU2010152819A (en) VISUALIZATION OF TRAINING ANIMATIONS ON THE USER INTERFACE DISPLAY
CN109299399B (en) Learning content recommendation method and terminal equipment
EP2472393A1 (en) Enablement of culture-based gestures
CN108805035A (en) Interactive teaching and learning method based on gesture identification and device
CN108881979B (en) Information processing method and device, mobile terminal and storage medium
CN107239222A (en) The control method and terminal device of a kind of touch-screen
CN109240495A (en) A kind of method and apparatus controlling automatic page turning
CN111078102B (en) Method for determining point reading area through projection and terminal equipment
CN114299546A (en) Method and device for identifying pet identity, storage medium and electronic equipment
CN111078096B (en) Man-machine interaction method and electronic equipment
CN111078983B (en) Method for determining page to be identified and learning equipment
US9092083B2 (en) Contact detecting device, record display device, non-transitory computer readable medium, and contact detecting method
CN111077997A (en) Point reading control method in point reading mode and electronic equipment
CN110569906A (en) Data processing method, data processing apparatus, and computer-readable storage medium
CN115981542A (en) Intelligent interactive touch control method, system, equipment and medium for touch screen
CN111090382B (en) Character content input method and terminal equipment
CN111081104B (en) Dictation content selection method based on classroom performance and learning equipment
EP3951619A1 (en) Information processing device, program, and information provision system
CN111079493B (en) Man-machine interaction method based on electronic equipment and electronic equipment
CN111090791B (en) Content query method based on double screens and electronic equipment
CN111160097A (en) Content identification method and device
CN113138662A (en) Method and device for preventing mistaken touch of touch equipment, electronic equipment and readable storage medium
CN111078100B (en) Point reading method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant