CN114489331A - Method, apparatus, device and medium for air gesture interaction distinguished from button clicks


Info

Publication number
CN114489331A
CN114489331A
Authority
CN
China
Prior art keywords: gesture, interaction, user, user interaction, preset
Legal status (an assumption, not a legal conclusion): Pending
Application number
CN202111673615.7A
Other languages
Chinese (zh)
Inventor
刘晓俊
陈家祺
Current Assignee (the listing may be inaccurate)
Shanghai Mishue Artificial Intelligence Information Technology Co ltd
Original Assignee
Shanghai Mishue Artificial Intelligence Information Technology Co ltd
Application filed by Shanghai Mishue Artificial Intelligence Information Technology Co ltd
Priority to CN202111673615.7A
Publication of CN114489331A

Classifications

    All within G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06F ELECTRIC DIGITAL DATA PROCESSING → G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit:
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures (under G06F3/01, input arrangements or combined input and output arrangements for interaction between user and computer)
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus (under G06F3/048 and G06F3/0481, GUI interaction techniques based on properties of the displayed interaction object or a metaphor-based environment)
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path (under G06F3/16, sound input; sound output)

Abstract

The invention discloses a method, an apparatus, a device and a medium for air gesture interaction distinguished from button clicks, applied to the field of education. The method comprises the following steps: when an air gesture wake-up instruction is received, displaying a gesture menu corresponding to the current interface; acquiring and displaying a user interaction gesture in a gesture interaction box; and executing a corresponding operation according to the matching degree between the user interaction gesture and a preset gesture and/or the attribute information of the user interaction gesture. By acquiring the user interaction gesture in a pre-configured gesture interaction box and, when it matches a preset gesture in the gesture menu of the current interface, executing the corresponding operation, the method achieves contact-free operation of an intelligent learning device and improves the user's learning efficiency and interest in learning. Moreover, when test questions are played back through the voice playback function, the air gesture interaction mode reduces the time the user spends reading at close range and protects the user's eyesight.

Description

Method, apparatus, device and medium for air gesture interaction distinguished from button clicks
Technical Field
The embodiments of the invention relate to the field of artificial intelligence, and in particular to a method, an apparatus, a device and a medium for air gesture interaction distinguished from button clicks.
Background
As education receives ever more attention, more parents choose intelligent learning devices so that their children can study online. In the actual learning process, however, the user has to click the screen by hand to perform learning operations; for example, to operate the screen the user must first put down the pen in hand, which harms the user experience and learning efficiency. In addition, reading test questions for long periods affects the user's eyesight.
Disclosure of Invention
In view of the above, the invention provides a method, an apparatus, a device and a medium for air gesture interaction distinguished from button clicks, which improve the user's learning efficiency and interest in learning; furthermore, when test questions are played back through the voice playback function, the air gesture interaction mode reduces the time the user spends reading at close range and protects the user's eyesight.
In a first aspect, an embodiment of the present invention provides an air gesture interaction method distinguished from button clicks, applied to an intelligent learning device, comprising:
when an air gesture wake-up instruction is received, displaying a gesture menu corresponding to the current interface, wherein the gesture menu comprises all preset gestures supported by the current interface;
acquiring and displaying a user interaction gesture in a pre-configured gesture interaction box;
and executing the operation corresponding to the user interaction gesture when the user interaction gesture matches the preset gesture.
In a second aspect, an embodiment of the present invention further provides an air gesture interaction apparatus distinguished from button clicks, applied to an intelligent learning device, comprising:
a gesture display module, configured to display a gesture menu corresponding to the current interface when an air gesture wake-up instruction is received, wherein the gesture menu comprises all preset gestures supported by the current interface;
an acquisition module, configured to acquire and display a user interaction gesture in a pre-configured gesture interaction box;
and an execution module, configured to execute the operation corresponding to the user interaction gesture when the user interaction gesture matches the preset gesture.
In a third aspect, an embodiment of the present invention further provides an air gesture interaction device distinguished from button clicks, the device comprising: a memory, and one or more processors;
the memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the air gesture interaction method distinguished from button clicks of any of the embodiments described above.
In a fourth aspect, a computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the air gesture interaction method distinguished from button clicks described in any of the above embodiments.
According to the embodiments of the invention, the user interaction gesture in the pre-configured gesture interaction box is acquired while the user is learning, and when it matches a preset gesture in the gesture menu of the current interface, the corresponding operation is executed. This spares the user from operating the intelligent learning device by hand, achieves contact-free operation of the device, and improves the user's learning efficiency and interest in learning. Moreover, when test questions are played back through the voice playback function, the air gesture interaction mode reduces the time the user spends reading at close range and protects the user's eyesight.
Drawings
FIG. 1 is a flowchart of an air gesture interaction method distinguished from button clicks according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating the display of a gesture menu according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating the display of a gesture interaction box according to an embodiment of the present invention;
FIG. 4 is a flowchart of another air gesture interaction method distinguished from button clicks according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating the highlighting of a preset gesture according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating the display of the meaning corresponding to a user interaction gesture according to an embodiment of the present invention;
FIG. 7 is a schematic configuration diagram of preset gestures according to an embodiment of the present invention;
FIG. 8 is a flowchart of yet another air gesture interaction method distinguished from button clicks according to an embodiment of the present invention;
FIG. 9 is a block diagram of an air gesture interaction apparatus distinguished from button clicks according to an embodiment of the present invention;
FIG. 10 is a schematic hardware structure diagram of an air gesture interaction device distinguished from button clicks according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like. In addition, the embodiments and features of the embodiments in the present invention may be combined with each other without conflict.
The term "include" and variations thereof as used herein are intended to be open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment".
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not construed as indicating or implying relative importance.
Air gestures in the prior art do not standardize the action the user makes. For example, when the user is expected to show a number-3 gesture, some users habitually raise three fingers (index, middle and ring finger), while others naturally make an "OK"-style gesture instead. Neither action is right or wrong; it is merely a matter of habit. In addition, finger positions combine differently from gesture to gesture, and the exposure of the fingers and wrist varies, so widening the range of usable users while testing accurately places real demands on the technology.
In addition, dynamic gestures are more complex than static gestures. Take the left-right page-turn action: continuous page turning inevitably contains a reset motion between strokes, so after the hand slides to the right the system cannot tell whether the user is still turning pages to the left or to the right, which hurts recognition accuracy.
Ideally, the user makes a gesture, the system recognizes it, matches it against the gesture set, tells the user which learned action the gesture corresponds to and whether it was recognized, and finally triggers the corresponding operation automatically.
In view of this, an embodiment of the present invention provides an air gesture interaction method distinguished from button clicks, so that learning operations can be performed on an intelligent learning device without the traditional screen click. Examples include turning pages left and right at a distance, answering single-choice questions, starting a lesson, moving to the next step, and so on, all of which are deeply combined with the learning scene. Students working on scratch paper no longer need to put down the pen and click the screen: a free hand can issue command actions, and the system's gesture interaction box receives the information immediately, feeds the recognized signal back to the user and carries out the operation automatically. The embodiment of the invention redefines the habits of tablet use and, through air gestures, genuinely makes learning more interesting and answering questions more efficient. Because static and dynamic gestures are combined, the application scenes are broader and recognition is finer and more accurate than in previous gesture attempts.
In an embodiment, fig. 1 is a flowchart of an air gesture interaction method distinguished from button clicks according to an embodiment of the present invention. This embodiment is applicable to operating an intelligent learning device at a distance and may be performed by an air gesture interaction device distinguished from button clicks. Such a device may be an intelligent learning device provided with a front camera, such as an intelligent learning machine, a television supporting a screen-projection function, a smart speaker, a smart robot, a smart watch, a smart desk lamp, a smart pen, a smart mirror, and the like, without limitation. It should be noted that, in a home scene, the intelligent learning machine may be connected to a television supporting the screen-projection function, so that the content of the current display interface of the learning machine is projected directly onto the television; the operation can then be performed on the large television screen, which protects the user's eyesight. As shown in fig. 1, the present embodiment may include the following steps:
and S110, displaying a gesture menu corresponding to the current interface when receiving the air gesture awakening instruction.
The gesture menu comprises all preset gestures supported by the current interface. In an embodiment, the air gesture wake-up instruction is a trigger instruction that wakes up the air gesture interaction function of the intelligent learning device. It can be issued in different ways, for example through an air gesture or through the triggering operation of a button. Illustratively, in the air gesture case, a preset gesture for starting the air gesture interaction function is configured in the intelligent learning device in advance, and when that gesture is received, the air gesture interaction function of the device is started; similarly, in the button case, the user may directly click the start button of the air gesture interaction function on the display interface of the intelligent learning device to start the function.
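As a minimal sketch of the two wake-up paths just described, the following Python fragment shows how a device loop might dispatch both triggers. The patent discloses no code, so every name here (WAKE_GESTURE, detect_gesture, AirGestureService) is an illustrative assumption:

```python
# Hypothetical sketch of the two wake-up paths; all names are assumptions.
WAKE_GESTURE = "open_palm"   # assumed preset gesture that starts the function


def detect_gesture(frame):
    """Stub for the camera-side gesture recognizer (assumed interface)."""
    return frame.get("gesture")


class AirGestureService:
    def __init__(self):
        self.active = False

    def wake(self):
        self.active = True
        print("S110: display the gesture menu for the current interface")

    def on_camera_frame(self, frame):
        # Path 1: wake via a preset air gesture captured by the front camera.
        if not self.active and detect_gesture(frame) == WAKE_GESTURE:
            self.wake()

    def on_start_button_clicked(self):
        # Path 2: wake via the start button on the display interface.
        if not self.active:
            self.wake()


service = AirGestureService()
service.on_camera_frame({"gesture": "open_palm"})   # wakes via air gesture
```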
The gesture menu comprises all preset gestures supported by the current interface of the intelligent learning device. Taking an option selection interface as the current interface, the preset gestures included in the gesture menu are illustrated below. Fig. 2 is a schematic diagram illustrating the display of a gesture menu according to an embodiment of the present invention. As shown in fig. 2, the gesture menu includes an option selection gesture and a gesture for switching to the main interface. The option selection gestures comprise the gestures corresponding to options A, B, C and D respectively. Illustratively, option A corresponds to a one-finger gesture, option B to a two-finger gesture, option C to a three-finger gesture, and option D to a four-finger gesture.
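The finger-count mapping above can be expressed as a simple lookup table. The following sketch assumes the recognizer reports the number of raised fingers; the names are illustrative, not from the patent:

```python
# Illustrative mapping from the recognizer's finger count to answer options,
# following the example above (1 finger = A, ..., 4 fingers = D).
OPTION_GESTURES = {1: "A", 2: "B", 3: "C", 4: "D"}


def option_for(finger_count):
    """Return the answer option for a raised-finger count, or None."""
    return OPTION_GESTURES.get(finger_count)


assert option_for(2) == "B"     # a two-finger gesture selects option B
assert option_for(5) is None    # unsupported gesture: no option selected
```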
S120, acquiring and displaying the user interaction gesture in the pre-configured gesture interaction box.
The gesture interaction box is the camera view region used to frame the user's gesture. In actual operation, the gesture made by the user is framed by the gesture interaction box, which normalizes the user's gesture behaviour and avoids the situation where the action falls outside the box, or cannot be captured by the camera because the gesture is incomplete or non-standard.
Fig. 3 is a schematic diagram illustrating the display of a gesture interaction box according to an embodiment of the present invention. As shown in fig. 3, the current interface includes a question display box 201 and a gesture interaction box 202, and both the user and the user interaction gesture can be displayed in the gesture interaction box 202. Of course, when the user has not made an interaction gesture, only the user appears in the gesture interaction box 202; when the user makes an interaction gesture, the gesture is overlaid as a floating layer on top of the user displayed in the gesture interaction box 202.
The user interaction gesture is the user gesture collected by the camera. In an embodiment, user interaction gestures are collected through the front camera of the intelligent learning device, and may be collected in real time to ensure they are captured effectively. It should be noted that, to ensure the user's learning efficiency and to let parents later review the learning process, the front camera of the intelligent learning device remains active for as long as the air gesture interaction function is active.
S130, executing a corresponding operation according to the matching degree between the user interaction gesture and a preset gesture and/or the attribute information of the user interaction gesture.
The matching degree is the similarity between the user interaction gesture and the preset gesture; the attribute information of the user interaction gesture characterizes the user finger making the gesture. In one embodiment, the attribute information includes one of: the relative distance between the user finger corresponding to the user interaction gesture and the intelligent learning device; the strength of that finger; and the relative angle between that finger and the intelligent learning device. The relative distance is the straight-line distance between the finger making the gesture and the device; the strength is the force of the finger making the gesture, which in actual operation can be estimated from the finger's curvature; and the relative angle characterizes the direction of the finger relative to the intelligent learning device.
In actual operation, the corresponding operation may be executed directly according to the matching degree between the user interaction gesture and the preset gesture; or according to both the matching degree and the attribute information of the user interaction gesture; or according to the attribute information of the user interaction gesture alone.
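The three dispatch modes above can be sketched as follows. The data layout (GestureAttributes) and the threshold value are assumptions for illustration; the patent does not disclose an implementation:

```python
# Sketch of the three dispatch modes: matching degree only, matching degree
# plus attribute information, and attribute information only. All names and
# values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class GestureAttributes:
    distance_mm: float   # relative distance between finger and device
    strength: float      # finger strength, estimated from finger curvature
    angle_deg: float     # relative angle between finger and device


def dispatch(match_degree, attributes, threshold=0.8):
    if match_degree is not None and match_degree < threshold:
        return None                       # below threshold: do nothing
    if match_degree is not None and attributes is None:
        return "run_matched_operation"    # mode 1: matching degree only
    if match_degree is not None and attributes is not None:
        return "run_attribute_operation"  # mode 2: degree + attributes
    return "run_attribute_only"           # mode 3: attributes only


attrs = GestureAttributes(distance_mm=250.0, strength=0.6, angle_deg=15.0)
print(dispatch(0.92, attrs))              # -> run_attribute_operation
```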
In the technical scheme above, the user interaction gesture in the pre-configured gesture interaction box is acquired while the user is learning, and when it matches a preset gesture in the gesture menu of the current interface, the corresponding operation is executed. This spares the user from operating the intelligent learning device by hand, achieves contact-free operation of the device, and improves the user's learning efficiency and interest in learning; moreover, when test questions are played back through the voice playback function, the air gesture interaction mode reduces the time the user spends reading at close range and protects the user's eyesight.
In one embodiment, fig. 4 is a flowchart of another air gesture interaction method distinguished from button clicks according to an embodiment of the present invention. This embodiment further explains, on the basis of the above embodiment, the air gesture interaction process distinguished from button clicking. As shown in fig. 4, the air gesture interaction method in this embodiment includes the following steps:
and S410, starting an air-separating gesture interaction function of the intelligent learning equipment when receiving an air-separating gesture awakening instruction.
In an embodiment, when an air-separating gesture wake-up instruction is received, an air-separating gesture interaction function of the intelligent learning device is started, so that a user can control the intelligent learning device through the air-separating gesture.
S420, displaying the gesture menu corresponding to the current interface.
The gesture menu comprises all preset gestures supported by the current interface.
S430, acquiring the user interaction gesture in the pre-configured gesture interaction box.
S440, highlighting the preset gesture in the gesture menu that matches the user interaction gesture.
When the user interaction gesture matches one of the preset gestures in the gesture menu, that preset gesture is highlighted so that the user can intuitively see which gesture they have made. Illustratively, taking the highlighting of a preset gesture in the gesture menu of the option selection interface shown in fig. 2 as an example, fig. 5 is a schematic diagram of preset gesture highlighting provided by an embodiment of the present invention. As shown in fig. 5, a user interaction gesture corresponding to option B appears in the gesture interaction box; since it matches the preset gesture corresponding to option B in the gesture menu, that preset gesture is highlighted.
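A minimal sketch of this S440 highlighting step follows; MenuItem and its fields are illustrative assumptions, not the patent's implementation:

```python
# Highlight exactly the preset gesture in the menu that matches the user's
# recognized gesture; the data model is an assumption for illustration.
from dataclasses import dataclass


@dataclass
class MenuItem:
    gesture: str        # preset gesture identifier, e.g. "two_fingers"
    label: str          # what the gesture does, e.g. "option B"
    highlighted: bool = False


def highlight_matched(menu, recognized_gesture):
    for item in menu:
        item.highlighted = (item.gesture == recognized_gesture)


menu = [MenuItem("one_finger", "option A"), MenuItem("two_fingers", "option B")]
highlight_matched(menu, "two_fingers")
print([i.label for i in menu if i.highlighted])   # -> ['option B']
```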
S450, executing a corresponding operation according to the matching degree between the user interaction gesture and the preset gesture and/or the attribute information of the user interaction gesture.
In the technical scheme above, on the basis of the previous embodiment, the preset gesture in the gesture menu that matches the user interaction gesture is highlighted, so that the user can check the gesture they have made more intuitively, improving the user experience.
In one embodiment, the air gesture interaction method distinguished from button clicks further comprises: after the user interaction gesture in the pre-configured gesture interaction box is acquired, displaying the meaning corresponding to the user interaction gesture in the gesture interaction box.
The meaning corresponding to the user interaction gesture is the result of the operation that the gesture triggers. Illustratively, assuming the user interaction gesture is the option selection gesture for option A, option A is displayed in the gesture interaction box, overlaid as a floating layer on top of the user interaction gesture displayed there. Fig. 6 is a schematic display diagram of the meaning corresponding to a user interaction gesture according to an embodiment of the present invention. As shown in fig. 6, when the user interaction gesture is the option selection gesture for option A, option A is displayed directly in the gesture interaction box, so that the user can see more intuitively the result of the operation corresponding to the gesture they performed.
In one embodiment, the air gesture interaction method distinguished from button clicks further comprises: when an air gesture close instruction is received, closing the air gesture interaction function of the intelligent learning device.
The air gesture close instruction is a trigger instruction that closes the air gesture interaction function of the intelligent learning device. Like the wake-up instruction, it can be issued in different ways, for example through an air gesture or through the triggering operation of a button. Illustratively, in the air gesture case, a preset gesture for closing the air gesture interaction function is configured in the intelligent learning device in advance, and when the user issues that gesture the function is closed; similarly, in the button case, the user may directly click the close button of the air gesture interaction function on the display interface of the intelligent learning device to close the function.
During learning, if the remaining battery level of the intelligent learning device is low, the air gesture interaction function can be closed to guarantee the user's normal learning.
In an embodiment, executing the corresponding operation according to the matching degree between the user interaction gesture and the preset gesture includes: executing the operation corresponding to the user interaction gesture when the matching degree between them reaches a preset matching degree threshold. The preset matching degree threshold characterizes whether the operation corresponding to the user interaction gesture may be executed. In an embodiment, when the matching degree reaches the threshold, the operation corresponding to the user interaction gesture is executed; otherwise, it is not. It can be understood that determining the matching degree both ensures that the user interaction gesture lies within the gesture interaction box and standardizes the user's gesture behaviour.
In an embodiment, executing the corresponding operation according to the matching degree between the user interaction gesture and the preset gesture together with the attribute information of the user interaction gesture includes: acquiring the attribute information corresponding to the user interaction gesture; and executing the operation matched with the attribute information when the matching degree between the user interaction gesture and the preset gesture reaches the preset matching degree threshold. In other words, once the threshold is reached, the attribute information of the user interaction gesture is acquired and the corresponding operation is executed according to it. Illustratively, assuming the user makes gesture 1 and the relative distance between the corresponding finger and the intelligent learning device gradually decreases, the operation corresponding to gesture 1 together with that attribute information is performed; a sketch of such a trend check follows.
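For the "relative distance gradually decreases" example above, one way to detect the trend is to test that sampled distances fall monotonically. The sampling interface and thresholds below are assumptions, not the disclosed algorithm:

```python
# Illustrative check for the decreasing-distance example; the sampling
# interface, drop size and matching threshold are assumptions.
def distance_is_decreasing(samples, min_drop_mm=30.0):
    """True if successive distance samples fall monotonically by min_drop_mm."""
    if len(samples) < 2:
        return False
    monotonic = all(a > b for a, b in zip(samples, samples[1:]))
    return monotonic and (samples[0] - samples[-1]) >= min_drop_mm


def execute_if_matched(match_degree, samples, threshold=0.8):
    # Only consult the attribute information once the matching degree
    # threshold has been reached, as described above.
    if match_degree >= threshold and distance_is_decreasing(samples):
        print("execute the operation for gesture 1 with an approaching finger")


execute_if_matched(0.9, [320.0, 280.0, 240.0])  # fires: distance drops 80 mm
```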
In one embodiment, the user interaction gesture is determined as follows: acquiring the original interaction gesture of every frame within a preset duration in the pre-configured gesture interaction box; determining the interaction gesture type from those original interaction gestures; when the type is a static gesture, taking as the user interaction gesture the original interaction gesture that occurs most often within the preset duration and whose occurrence count reaches a preset count threshold; and when the type is a dynamic gesture, taking as the user interaction gesture the original interaction gesture that occurs most often within the preset duration. A static gesture corresponds to a static action; a dynamic gesture corresponds to a continuous motion.
In an embodiment, the static gesture includes, but is not limited to, one of: a start-learning gesture, an option selection gesture, an answer submission gesture, an answer withdrawal gesture, a pause-playing gesture, a continue-playing gesture, and a screenshot gesture;
the dynamic gesture includes, but is not limited to, one of: a subject selection gesture, a next-question gesture, a previous-question gesture, a page-flip gesture, an interface zoom-in gesture, an interface zoom-out gesture, a fast-forward gesture, a fast-rewind gesture, a volume adjustment gesture, a switch-to-main-interface gesture, a brightness adjustment gesture, and a skip gesture. The interface zoom-in gesture can also be understood as a gesture that enters full screen, and the interface zoom-out gesture as a gesture that exits full screen; the skip gesture skips the current test question. The volume adjustment gesture comprises two gestures, one that turns the volume up and one that turns it down, and the two are opposite operations: for example, drawing a circle counter-clockwise turns the volume down, while drawing a circle clockwise turns it up. Likewise, the brightness adjustment gesture comprises a brightness-up gesture and a brightness-down gesture, which are opposite operations: for example, sliding upwards increases brightness and sliding downwards decreases it.
In this embodiment, the front camera of the intelligent learning device collects the user's gesture data, recognizes the gesture in each frame, and takes the gesture that occurs most often among all original interaction gestures within the preset duration as the user interaction gesture. An original interaction gesture is the unprocessed gesture the user makes within the preset duration, with one frame of image corresponding to one original interaction gesture. It should be noted that after the data for the original interaction gestures of all frames within the preset duration are acquired, they are processed by a preset algorithm and the result is returned. The algorithm determines the interaction gesture type: for a static gesture, the original interaction gesture that occurs most often within the preset duration and whose occurrence count reaches the preset count threshold is taken as the user interaction gesture; for a dynamic gesture, the original interaction gesture that occurs most often within the preset duration is taken. Of course, since a dynamic gesture is a continuous motion that may contain repeated movements, an interval time is set for dynamic gestures to guarantee recognition accuracy when the interaction gesture type is determined to be dynamic. This interval can be understood as a cooling time that masks the reset motion of the dynamic gesture. For example, to make a right-page-turn gesture the user swings the hand from right to left, but bringing the hand back would register as a left-right repetition; configuring a cooling time, during which no images of the return motion are processed, ensures the accuracy of gesture acquisition.
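The per-frame decision described above can be sketched as follows: collect the original interaction gesture of every frame in the window, prefer a dynamic gesture if one is present, otherwise take the most frequent static gesture that reaches the count threshold. The gesture names, threshold and cooling time are illustrative assumptions; the patent discloses no concrete values or code:

```python
# Majority-vote sketch with a dynamic-gesture cooling time; all values are
# assumptions made for illustration.
import time
from collections import Counter

DYNAMIC_GESTURES = {"swipe_left", "swipe_right", "circle_cw", "circle_ccw"}


class GestureDecider:
    def __init__(self, count_threshold=8, cooldown_s=0.5):
        self.count_threshold = count_threshold
        self.cooldown_s = cooldown_s          # masks the reset (homing) motion
        self._last_dynamic_at = float("-inf")

    def decide(self, frame_gestures):
        """frame_gestures: one original interaction gesture per frame."""
        counts = Counter(g for g in frame_gestures if g is not None)
        if not counts:
            return None

        dynamic = [g for g in counts if g in DYNAMIC_GESTURES]
        if dynamic:
            # Dynamic gesture: drop results arriving during the cooling time,
            # so the hand's return stroke is not read as a second swipe.
            now = time.monotonic()
            if now - self._last_dynamic_at < self.cooldown_s:
                return None
            self._last_dynamic_at = now
            return max(dynamic, key=counts.__getitem__)

        # Static gesture: most frequent, and only if it reaches the threshold.
        gesture, n = counts.most_common(1)[0]
        return gesture if n >= self.count_threshold else None


decider = GestureDecider()
print(decider.decide(["two_fingers"] * 15 + [None] * 5))       # -> two_fingers
print(decider.decide(["swipe_left"] * 12 + ["static_x"] * 8))  # -> swipe_left
print(decider.decide(["swipe_left"] * 12))   # -> None (within cooling time)
```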
Illustratively, fig. 7 is a schematic configuration diagram of preset gestures according to an embodiment of the present invention. As shown in fig. 7, different preset gestures are used to perform different operations. Of course, to improve the user experience, one or more preset gestures may be configured for the same operation. For option selection, each test question generally has four options, and one gesture can be configured per option (for example, option A corresponds to extending the index finger; option B the index and middle fingers; option C the index, middle and ring fingers; and option D the index, middle, ring and little fingers). As another example, three different preset gestures may be configured for the single answer-submission operation: a finger-gun gesture, a bottle-opening gesture, and an OK gesture. Naturally this is not limiting, and the configuration may follow actual circumstances; a sketch of such a configuration table follows.
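The many-gestures-to-one-operation configuration maps naturally onto a gesture-to-action table; the gesture identifiers below are assumptions made for illustration:

```python
# Illustrative configuration where several preset gestures trigger the same
# operation, as in the answer-submission example above.
GESTURE_ACTIONS = {
    "index_finger": "select_option_A",
    "index_middle": "select_option_B",
    "index_middle_ring": "select_option_C",
    "four_fingers": "select_option_D",
    # Three alternative gestures, one operation:
    "finger_gun": "submit_answer",
    "bottle_open": "submit_answer",
    "ok_sign": "submit_answer",
    # Opposite dynamic gestures pair up, e.g. for volume adjustment:
    "circle_cw": "volume_up",
    "circle_ccw": "volume_down",
}

assert GESTURE_ACTIONS["ok_sign"] == GESTURE_ACTIONS["finger_gun"]
```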
It should be noted that fig. 7 only shows part of the preset gestures by way of example, not all of them.
In one embodiment, fig. 8 is a flowchart of yet another air gesture interaction method distinguished from button clicks according to an embodiment of the present invention. As a preferred version of the above embodiments, this embodiment describes the air gesture interaction process distinguished from button clicking, taking as an example a preset duration of 1 s containing 20 frames of image data. As shown in fig. 8, the process in this embodiment includes the following steps:
and S810, receiving a user interaction gesture.
In an embodiment, the user makes a gesture action according to the gesture menu supported by the current interface. The gesture interaction box frames the camera view in which the user makes the gesture; without such a frame, the user's action might not be captured by the camera at all, that is, it might fall outside the box, or be incomplete or non-standard.
In this embodiment, the camera collects the gesture actions made by the user, recognizes the gesture in each frame, and takes the gesture that occurs most often among all original interaction gestures within 1 s as the user interaction gesture. After the gesture data are collected, processing proceeds as follows: first, the 20 frames of the 1 s window are acquired and each frame is evaluated by a preset algorithm, returning the 20 per-frame results, from which the dynamic and static gestures are obtained. If the returned gesture is a dynamic one, the dynamic gesture strategy (for example sliding) is triggered and the system is notified to give dynamic gesture feedback; if all gestures are static, the most frequent static gesture is selected for recognition and the selected result is fed back.
The strategies are prioritized, each gesture has a recognition rate, and the final gesture is obtained according to that rate. For a static gesture, the original interaction gesture that occurs most often and whose count exceeds the preset count threshold is taken as the user interaction gesture; for a dynamic gesture, the most frequent original interaction gesture is taken. In addition, the recognition strategy for dynamic gestures sets an interval and cooling time (which masks the homing motion of the dynamic gesture) before judging and outputting the final gesture result.
S820, displaying the meaning corresponding to the user interaction gesture in the gesture interaction box.
In this embodiment, based on the user interaction gesture recognized by the camera of the intelligent learning device, the front end of the system performs matching, synchronously maps the user interaction gesture into the gesture interaction box corresponding to the camera, and prompts the user with the meaning of the gesture they made.
S830, executing the operation corresponding to the user interaction gesture.
In this embodiment, the static and dynamic gestures are trained through a training model so as to adapt to each user's gesturing habits; the preset gestures match users' intuitions, reducing the cost of understanding while keeping the gestures appealing; and the application scene is combined with the learning scene, making the intelligent learning device simpler, faster, more efficient and more fun to use. Moreover, the user does not need to stop writing and click the screen by hand while working through a problem on scratch paper; gestures can be made at a distance instead, which improves answering efficiency.
Of course, during learning, if the intelligent learning device plays test questions through the voice playback function, the user can answer directly through user interaction gestures, which reduces the long stretches of close-range reading and thereby protects the eyesight of the device's user.
In an embodiment, fig. 9 is a block diagram of an air gesture interaction apparatus distinguished from button clicks according to an embodiment of the present invention, suitable for operating an intelligent learning device at a distance; the apparatus may be implemented in hardware and/or software. As shown in fig. 9, the apparatus includes: a gesture display module 910, an acquisition module 920, and an execution module 930.
The gesture display module 910 is configured to display a gesture menu corresponding to the current interface when an air gesture wake-up instruction is received, the gesture menu comprising all preset gestures supported by the current interface;
the acquisition module 920 is configured to acquire and display the user interaction gesture in a pre-configured gesture interaction box;
and the execution module 930 is configured to execute the operation corresponding to the user interaction gesture when the user interaction gesture matches the preset gesture.
In the technical scheme above, the user interaction gesture in the pre-configured gesture interaction box is acquired while the user is learning, and when it matches a preset gesture in the gesture menu of the current interface, the corresponding operation is executed. This spares the user from operating the intelligent learning device by hand, achieves contact-free operation of the device, and improves the user's learning efficiency and interest in learning; moreover, when test questions are played back through the voice playback function, the air gesture interaction mode reduces the time the user spends reading at close range and protects the user's eyesight.
In one embodiment, the air gesture interaction apparatus distinguished from button clicks further includes:
a starting module, configured to start the air gesture interaction function of the intelligent learning device before the gesture menu corresponding to the current interface is displayed.
In one embodiment, the air gesture interaction apparatus distinguished from button clicks further includes:
a highlighting module, configured to highlight the preset gesture in the gesture menu that matches the user interaction gesture after the user interaction gesture in the pre-configured gesture interaction box is acquired.
In one embodiment, the air gesture interaction apparatus distinguished from button clicks further includes:
a meaning display module, configured to display the meaning corresponding to the user interaction gesture in the gesture interaction box after the user interaction gesture in the pre-configured gesture interaction box is acquired.
In one embodiment, the air gesture interaction apparatus distinguished from button clicks further includes:
a closing module, configured to close the air gesture interaction function of the intelligent learning device when an air gesture close instruction is received.
In an embodiment, executing the corresponding operation according to the matching degree between the user interaction gesture and the preset gesture is specifically configured to: execute the operation corresponding to the user interaction gesture when the matching degree between the user interaction gesture and the preset gesture reaches the preset matching degree threshold.
In an embodiment, executing the corresponding operation according to the matching degree between the user interaction gesture and the preset gesture together with the attribute information of the user interaction gesture is specifically configured to:
acquire the attribute information corresponding to the user interaction gesture;
and execute the operation matched with the attribute information when the matching degree between the user interaction gesture and the preset gesture reaches the preset matching degree threshold.
In one embodiment, the attribute information includes one of: the relative distance between the user finger corresponding to the user interaction gesture and the intelligent learning device; the strength of the user finger corresponding to the user interaction gesture; and the relative angle between the finger of the user corresponding to the user interaction gesture and the intelligent learning equipment.
In one embodiment, the user interaction gesture is determined by:
acquiring the original interaction gesture of every frame within a preset duration in the pre-configured gesture interaction box;
determining the interaction gesture type from the original interaction gestures of all the frames;
when the interaction gesture type is a static gesture, taking as the user interaction gesture the original interaction gesture that occurs most often within the preset duration and whose occurrence count reaches a preset count threshold;
and when the interaction gesture type is a dynamic gesture, taking as the user interaction gesture the original interaction gesture that occurs most often within the preset duration.
In an embodiment, the static gesture includes, but is not limited to, one of: a start-learning gesture, an option selection gesture, an answer submission gesture, an answer withdrawal gesture, a pause-playing gesture, and a screenshot gesture;
the dynamic gesture includes, but is not limited to, one of: a subject selection gesture, a next-question gesture, a previous-question gesture, a page-flip gesture, an interface zoom-in gesture, an interface zoom-out gesture, a fast-forward gesture, a fast-rewind gesture, a volume adjustment gesture, a switch-to-main-interface gesture, a brightness adjustment gesture, a skip gesture, and a "bailey" gesture.
In one embodiment, the intelligent learning device is a terminal configured with a front camera;
the intelligent learning device includes, but is not limited to, one of: an intelligent learning machine, a television supporting the screen-projection function, a smart speaker, a smart robot, a smart watch, a smart desk lamp, a smart pen, and a smart mirror.
The air gesture interaction apparatus distinguished from button clicks described above can execute the air gesture interaction method distinguished from button clicks provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to that method.
In an embodiment, fig. 10 is a schematic hardware structure diagram of an air gesture interaction device distinguished from button clicks according to an embodiment of the present invention. The device in this embodiment is described taking a learning machine as an example. As shown in fig. 10, the air gesture interaction device distinguished from button clicks provided by the embodiment of the present invention includes: a processor 1010, a memory 1020, an input device 1030, and an output device 1040. The device may have one or more processors 1010 (one processor 1010 is taken as an example in fig. 10), and the processor 1010, the memory 1020, the input device 1030 and the output device 1040 of the device may be connected by a bus or in other ways, connection by a bus being taken as the example in fig. 10.
The memory 1020 of the air gesture interaction device distinguished from button clicks, as a computer-readable storage medium, may be used to store one or more programs, which may be software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the air gesture interaction method distinguished from button clicks provided by the embodiments of the present invention (for example, the modules of the apparatus shown in fig. 9: the gesture display module, the acquisition module, and the execution module). The processor 1010 executes the software programs, instructions and modules stored in the memory 1020 so as to perform the various functional applications and data processing of the device, that is, to implement the air gesture interaction method distinguished from button clicks of the above method embodiments.
The memory 1020 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the device, and the like. Further, the memory 1020 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 1020 may further include memory located remotely from the processor 1010, which may be connected to devices over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 1030 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. The output device 1040 may include a display device such as a display screen.
Moreover, when the one or more programs included in the above air gesture interaction device distinguished from button clicks are executed by the one or more processors 1010, the programs perform the following operations: when an air gesture wake-up instruction is received, displaying a gesture menu corresponding to the current interface, the gesture menu comprising all preset gestures supported by the current interface; acquiring and displaying the user interaction gesture in a pre-configured gesture interaction box; and executing a corresponding operation according to the matching degree between the user interaction gesture and the preset gesture and/or the attribute information of the user interaction gesture.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements an air gesture interaction method distinguished from button clicks, the method comprising: when an air gesture wake-up instruction is received, displaying a gesture menu corresponding to the current interface, the gesture menu comprising all preset gestures supported by the current interface; acquiring and displaying the user interaction gesture in a pre-configured gesture interaction box; and executing a corresponding operation according to the matching degree between the user interaction gesture and the preset gesture and/or the attribute information of the user interaction gesture.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash Memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is merely a preferred embodiment of the present invention and an illustration of the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from its spirit, the scope of the invention being determined by the appended claims.

Claims (14)

1. An air gesture interaction method distinguished from button clicks, applied to an intelligent learning device, the method comprising:
when an air gesture wake-up instruction is received, displaying a gesture menu corresponding to a current interface, wherein the gesture menu comprises all preset gestures supported by the current interface;
acquiring and displaying a user interaction gesture in a pre-configured gesture interaction box;
and executing a corresponding operation according to a matching degree between the user interaction gesture and the preset gestures and/or attribute information of the user interaction gesture.
2. The method according to claim 1, further comprising, before displaying the gesture menu corresponding to the current interface:
enabling the air gesture interaction function of the intelligent learning device.
3. The method according to claim 1, further comprising, after acquiring the user interaction gesture in the pre-configured gesture interaction box:
highlighting, in the gesture menu, the preset gesture matched with the user interaction gesture.
4. The method according to claim 1, further comprising, after acquiring the user interaction gesture in the pre-configured gesture interaction box:
displaying, in the gesture interaction box, the meaning corresponding to the user interaction gesture.
5. The method according to claim 1, further comprising:
when an air gesture close instruction is received, disabling the air gesture interaction function of the intelligent learning device.
6. The method according to any one of claims 1-5, wherein executing the corresponding operation according to the matching degree between the user interaction gesture and the preset gesture comprises:
executing the operation corresponding to the user interaction gesture when the matching degree between the user interaction gesture and the preset gesture reaches a preset matching degree threshold.
7. The method according to any one of claims 1-5, wherein executing the corresponding operation according to the matching degree between the user interaction gesture and the preset gesture and the attribute information of the user interaction gesture comprises:
acquiring the attribute information corresponding to the user interaction gesture;
and executing the operation matched with the attribute information when the matching degree between the user interaction gesture and the preset gesture reaches a preset matching degree threshold.
8. The method according to any one of claims 1-5, wherein the attribute information comprises one of: a relative distance between the user's finger corresponding to the user interaction gesture and the intelligent learning device; a strength of the user's finger corresponding to the user interaction gesture; and a relative angle between the user's finger corresponding to the user interaction gesture and the intelligent learning device.
9. The method according to any one of claims 1-5, wherein determining the user interaction gesture comprises:
acquiring the original interaction gesture of each frame within a preset time length in the pre-configured gesture interaction box;
determining an interaction gesture type according to the original interaction gestures of all the frames;
when the interaction gesture type is a static gesture, taking as the user interaction gesture the original interaction gesture that occurs most frequently within the preset time length and whose occurrence count reaches a preset count threshold;
and when the interaction gesture type is a dynamic gesture, taking as the user interaction gesture the original interaction gesture that occurs most frequently within the preset time length (a sketch of this voting logic appears after the claims).
10. The method according to claim 9, wherein
the static gesture comprises one of: a start-learning gesture, a choice-selection gesture, an answer-submission gesture, an answer-withdrawal gesture, a pause-playing gesture, a continue-playing gesture, and a screenshot gesture;
and the dynamic gesture comprises one of: a subject-selection gesture, a next-item gesture, a previous-item gesture, a page-flip gesture, an interface zoom-in gesture, an interface zoom-out gesture, a fast-forward gesture, a fast-rewind gesture, a volume-adjustment gesture, a switch-to-main-interface gesture, a brightness-adjustment gesture, a skip gesture, and a bailey gesture.
11. The method according to any one of claims 1-5, wherein the intelligent learning device is a terminal equipped with a front camera;
and the intelligent learning device comprises one of: an intelligent learning machine, a television set supporting a screen-casting function, a smart speaker, an intelligent robot, a smart watch, a smart desk lamp, a smart pen, and a smart mirror.
12. An air gesture interaction apparatus distinguished from button clicks, applied to an intelligent learning device, comprising:
a gesture display module, configured to display, when an air gesture wake-up instruction is received, a gesture menu corresponding to a current interface, wherein the gesture menu comprises all preset gestures supported by the current interface;
an acquisition module, configured to acquire and display a user interaction gesture in a pre-configured gesture interaction box;
and an execution module, configured to execute the operation corresponding to the user interaction gesture when the user interaction gesture matches a preset gesture.
13. An air gesture interaction device distinguished from button clicks, the device comprising: a memory and one or more processors;
the memory being configured to store one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the air gesture interaction method distinguished from button clicks according to any one of claims 1-11.
14. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the air gesture interaction method distinguished from button clicks according to any one of claims 1-11.
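To illustrate claims 6 through 9 in executable form, the following Python sketch combines the per-frame voting of claim 9 with the matching-degree gate of claims 6-7 and an attribute-based operation variant in the spirit of claim 8. The window contents, the occurrence threshold of 10, the 0.8 matching-degree threshold and the distance-based coarse/fine split are all assumptions made for exposition; the claims fix none of these values.

from collections import Counter
from typing import Dict, List, Optional

def determine_user_gesture(frame_gestures: List[str],
                           is_static: bool,
                           count_threshold: int = 10) -> Optional[str]:
    # Claim 9: vote over the raw gesture recognized in each video frame
    # captured in the gesture interaction box during the preset time length.
    if not frame_gestures:
        return None
    gesture, count = Counter(frame_gestures).most_common(1)[0]
    if is_static:
        # Static gestures must also clear a minimum occurrence count.
        return gesture if count >= count_threshold else None
    return gesture  # dynamic gestures: the most frequent raw gesture wins

def execute_matched_operation(gesture: str,
                              match_degree: float,
                              attributes: Dict[str, float],
                              threshold: float = 0.8) -> str:
    # Claims 6-7: gate execution on the matching degree; claim 8 (spirit):
    # let an attribute, here an assumed finger-to-device distance in cm,
    # select which variant of the operation runs.
    if match_degree < threshold:
        return "no-op"
    step = "coarse" if attributes.get("distance_cm", 0.0) > 30.0 else "fine"
    return f"{gesture}:{step}"

frames = ["ok"] * 12 + ["fist"] * 3                 # e.g. 15 frames in one window
g = determine_user_gesture(frames, is_static=True)  # 12 >= 10, so g == "ok"
print(execute_matched_operation(g, match_degree=0.9,
                                attributes={"distance_cm": 42.0}))  # ok:coarse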
CN202111673615.7A 2021-12-31 2021-12-31 Method, apparatus, device and medium for interaction of separated gestures distinguished from button clicks Pending CN114489331A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111673615.7A CN114489331A (en) 2021-12-31 2021-12-31 Method, apparatus, device and medium for interaction of separated gestures distinguished from button clicks

Publications (1)

Publication Number Publication Date
CN114489331A (en) 2022-05-13

Family

ID=81497168

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111673615.7A Pending CN114489331A (en) 2021-12-31 2021-12-31 Method, apparatus, device and medium for interaction of separated gestures distinguished from button clicks

Country Status (1)

Country Link
CN (1) CN114489331A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104007844A (en) * 2014-06-18 2014-08-27 原硕朋 Electronic instrument and wearable type input device for same
CN107272890A (en) * 2017-05-26 2017-10-20 歌尔科技有限公司 A kind of man-machine interaction method and device based on gesture identification
CN109960980A (en) * 2017-12-22 2019-07-02 北京市商汤科技开发有限公司 Dynamic gesture identification method and device
CN110850982A (en) * 2019-11-11 2020-02-28 南方科技大学 AR-based human-computer interaction learning method, system, device and storage medium
CN111580661A (en) * 2020-05-09 2020-08-25 维沃移动通信有限公司 Interaction method and augmented reality device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115202530A (en) * 2022-05-26 2022-10-18 当趣网络科技(杭州)有限公司 Gesture interaction method and system of user interface
CN115202530B (en) * 2022-05-26 2024-04-09 当趣网络科技(杭州)有限公司 Gesture interaction method and system of user interface
CN116466828A (en) * 2023-06-19 2023-07-21 无锡车联天下信息技术有限公司 Intelligent cabin driving environment gesture intelligent detection method
CN116466828B (en) * 2023-06-19 2023-08-18 无锡车联天下信息技术有限公司 Intelligent cabin driving environment gesture intelligent detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination