CN112346566B - Interactive learning method and device, intelligent learning equipment and storage medium - Google Patents


Info

Publication number
CN112346566B
Authority
CN
China
Prior art keywords
control instruction
input data
data
control
execution
Legal status
Active
Application number
CN202011193201.XA
Other languages
Chinese (zh)
Other versions
CN112346566A (en)
Inventor
陈世锐
钟永
立嘉
曾斌
Current Assignee
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Application filed by Ubtech Robotics Corp
Priority to CN202011193201.XA
Publication of CN112346566A
Application granted
Publication of CN112346566B
Status: Active


Classifications

    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/03545 Pens or stylus
    • G06F 3/038 Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to the field of computer technology and provides an interactive learning method and apparatus, an intelligent learning device, and a storage medium. The method includes: acquiring input data, where the input data includes input data of an environmental object and/or a user gesture; determining a corresponding control instruction according to the input data, where the control instruction includes an execution body and a control action; and sending the control instruction to the execution body, where the control instruction instructs the execution body to execute the control action. The interactive learning method provided by the embodiments of the application enables environmental objects and/or user gestures to control a variety of execution bodies, which increases the diversity of interaction modes: instead of the conventional touch-screen operation of a mobile terminal, the user interacts with environmental objects and/or through gestures, making the interaction more engaging.

Description

Interactive learning method and device, intelligent learning equipment and storage medium
Technical Field
The application belongs to the technical field of computers, and particularly relates to an interactive learning method, an interactive learning device, intelligent learning equipment and a storage medium.
Background
With the development of science and technology, students use various electronic devices, such as mobile phones, tablet computers and PCs, in the learning process. At present, learning takes place on the device's built-in screen, and the input and output of interactive content are completed through touch, keyboard and similar means.
This interaction mode depends heavily on the device screen and requires looking directly at it for long periods, which causes non-negligible harm to students' eyesight (especially children's eyesight). The interaction mode is also monotonous: interaction occurs only through touch or keyboard, students' interest in learning depends entirely on how engaging and rich the preset content library is, and their initiative gradually declines over time.
Disclosure of Invention
In view of the above, the embodiments of the present application provide an interactive learning method, apparatus, intelligent learning device, and storage medium, to solve the technical problem in the prior art that interactive learning based on electronic devices offers only a single interaction mode.
In a first aspect, an embodiment of the present application provides an interactive learning method, which is applicable to an intelligent learning device, where the intelligent learning device is configured to respond to a triggering operation of an environmental object and/or a gesture of a user; the environment object comprises at least one of a first object and a second object, wherein the first object is provided with a sensor for acquiring a motion track, and the second object is a pre-trained environment object which can be identified by the intelligent learning equipment;
the method comprises the following steps:
acquiring input data; the input data includes input data of an environmental object and/or a user gesture;
Determining a corresponding control instruction according to the input data, wherein the control instruction comprises an execution main body and a control action;
and sending a control instruction to the execution body, wherein the control instruction is used for instructing the execution body to execute the control action.
In a possible implementation manner of the first aspect, acquiring the input data includes:
under the condition that the first mode is started, acquiring a motion track of a first object through a sensor; the first mode is triggered by a switch disposed on the first object;
and generating first data according to the motion trail, wherein the first data comprises the identification of the first object.
In a possible implementation manner of the first aspect, the intelligent learning device includes a camera unit;
the obtaining input data includes:
acquiring an image of a preset area through a camera unit, wherein the image contains a second object and/or a user gesture;
inputting the image into a pre-trained image recognition model, and determining the identification of the object contained in the image;
generating second data, wherein the second data comprises the identification of the object.
In a possible implementation manner of the first aspect, the second object includes any one of the following:
a card with a mark and a learning tool with a volume smaller than the space size of the preset area.
In a possible implementation manner of the first aspect, determining the corresponding control instruction according to the input data includes:
searching a control instruction corresponding to the first data and a control instruction corresponding to the second data from the configuration file respectively; the configuration file comprises a plurality of groups of one-to-one corresponding identifications and control instructions, and each input data comprises an identification;
distributing a target identifier corresponding to the execution main body of each control instruction for each control instruction according to the execution main body of each control instruction;
and sequentially placing all the control instructions containing the target identifier into a message queue according to a preset priority, wherein the priority of the control instruction corresponding to the first data is higher than that of the control instruction corresponding to the second data.
In a possible implementation manner of the first aspect, sending a control instruction to the execution body includes:
simultaneously sending control instructions corresponding to the execution subjects corresponding to different target identifiers;
and/or,
and sequentially sending corresponding control instructions to execution bodies corresponding to the same target identifiers according to the arrangement sequence of the plurality of control instructions in the message queue.
In a possible implementation manner of the first aspect, the execution body includes a projection unit; sending the control instruction to the execution body, including:
sending the control instruction to the projection unit, where the control instruction is used to instruct the projection unit to project and display the image corresponding to the identifier contained in the control instruction.
In a possible implementation manner of the first aspect, the execution body includes a speaker; sending the control instruction to the execution body, including:
and sending a control instruction to the loudspeaker, wherein the control instruction is used for indicating the loudspeaker to play the audio corresponding to the identifier contained in the control instruction.
In a possible implementation manner of the first aspect, before acquiring the input data, the method further includes:
obtaining a plurality of training samples, wherein each training sample comprises a training image and an identifier of an object contained in the training image; the object contained in the training image is an environmental object to be trained and/or a user gesture to be trained;
obtaining a pre-trained image recognition model according to a plurality of training samples;
distributing corresponding control instructions for the identifications of the objects contained in the training images;
and adding the identification of the object contained in the training image and the corresponding control instruction to the configuration file.
In a second aspect, an embodiment of the present application provides an interactive learning apparatus, which is suitable for an intelligent learning device, where the intelligent learning device is configured to respond to a triggering operation of an environmental object and/or a gesture of a user; the environment object comprises at least one of a first object and a second object, wherein the first object is provided with a sensor for acquiring a motion track, and the second object is a pre-trained environment object which can be identified by the intelligent learning equipment;
the device comprises:
the acquisition module is used for acquiring input data; the input data includes input data of an environmental object and/or a user gesture;
the determining module is used for determining corresponding control instructions according to the input data, wherein the control instructions comprise an execution main body and control actions;
and the sending module is used for sending a control instruction to the execution main body, wherein the control instruction is used for instructing the execution main body to execute the control action.
In a third aspect, an embodiment of the present application provides an intelligent learning device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of any one of the methods of the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which when executed by a processor performs the steps of any of the methods of the first aspect described above.
In a fifth aspect, an embodiment of the application provides a computer program product which, when run on a terminal device, causes the terminal device to perform the method of any one of the first aspects described above.
According to the interactive learning method provided by the embodiments of the application, input data is first acquired, a corresponding execution body and control action are then determined according to the input data, and finally a control instruction is sent to the execution body, where the control instruction instructs the execution body to execute the control action. The input data includes input data of an environmental object and/or a user gesture, and the environmental object may be at least one of a first object provided with a sensor for acquiring a motion trail and a pre-trained second object that the intelligent learning device can recognize. In other words, the interactive learning approach provided by the embodiments of the application enables environmental objects and/or user gestures to control a variety of execution bodies, which increases the diversity of interaction modes: instead of the conventional touch-screen operation of a mobile terminal, the user interacts with environmental objects and/or through gestures, making the interaction more engaging.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of hardware components of an intelligent learning system according to an embodiment of the present application;
FIG. 2 is a flow chart of an interactive learning method according to an embodiment of the application;
FIG. 3 is a flow chart of acquiring input data according to an embodiment of the present application;
FIG. 4 is a flowchart of acquiring input data according to another embodiment of the present application;
FIG. 5 is a flow chart of determining control commands according to an embodiment of the present application;
FIG. 6 is a flowchart of an interactive learning method according to another embodiment of the present application;
fig. 7 is a schematic structural diagram of an interactive learning device according to an embodiment of the present application;
fig. 8 is a schematic diagram of an intelligent learning device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Fig. 1 is a schematic diagram of the hardware components of an intelligent learning system according to an embodiment of the present application. The intelligent learning system includes an intelligent learning device 10 and at least one environmental object 20. Fig. 1 shows environmental object 1, environmental object 2, ..., environmental object n, where n is an integer greater than or equal to 1.
In this embodiment, the intelligent learning device 10 is configured to respond to a triggering operation of at least one environmental object 20, and generate a control instruction corresponding to the triggering operation, so as to implement interaction applicable to multiple environmental objects 20.
The intelligent learning device 10 may also be configured to respond to a trigger operation of a user gesture, and generate a control instruction corresponding to the trigger operation.
Wherein the intelligent learning device 10 can respond to the triggering operation of the environmental object and the gesture of the user at the same time.
In this embodiment, the environmental object 20 includes at least one of a first object and a second object. The first object is provided with a sensor for acquiring a motion track, and the second object is a pre-trained environment object which can be identified by the intelligent learning equipment.
The sensor for acquiring the motion trajectory may be a motion sensor, for example. The first object may be a pen provided with a motion sensor. The pre-trained environmental object that can be recognized by the intelligent learning device can be a learning tool, such as a card or the like.
In particular, the environmental object may also be a mobile terminal or the like provided with a user interaction interface.
In the present embodiment, the intelligent learning apparatus 10 uses an everyday learning article as its carrier, and a storage unit 11, a processing unit 12 and a communication unit 13 are provided on that article.
For example, the intelligent learning device 10 may be a luminaire provided with the storage unit 11, the processing unit 12, and the communication unit 13.
Wherein the storage unit 11 may store content data to be interacted with in advance. The content data to be interacted with includes, but is not limited to, picture data, text data, audio-visual data.
Wherein the communication unit 13 is configured to communicate with a plurality of execution devices in the environment (for example, the execution device 1, the execution device 2, ..., the execution device m shown in fig. 1). The execution devices include, but are not limited to, a stereo, an air conditioner, and the like.
Wherein the communication unit 13 may be a many-to-many wireless communication module. For example, the communication unit 13 may transmit messages to a plurality of execution devices at the same time to realize cooperative control.
In the present embodiment, the intelligent learning apparatus 10 may further include an image capturing unit 14. The image capturing unit 14 is used for capturing an image of a preset area. The processing unit 12 of the intelligent learning device receives the image transmitted from the image capturing unit 14, recognizes the object included in the image, and, if a target object is recognized, executes the control operation corresponding to the target object.
The control action corresponding to the target object may be preset.
Illustratively, the target object is an "OK" gesture among the user gestures, and the corresponding control operation is to turn on the stereo. When the user makes the "OK" gesture within the preset area, the image capturing unit 14 obtains an image containing the "OK" gesture and sends it to the processing unit 12; if the processing unit 12 recognizes that the object contained in the image is the "OK" gesture, a control instruction is sent through the communication unit 13 to the stereo, or to the control device of the stereo, and the stereo is turned on.
In this embodiment, in order to reduce the damage of the electronic device to the eyesight of the student (especially the eyesight of the child), the intelligent learning device 10 may further include a projection unit 15. The projection unit 15 is used for projecting the interactive content to be displayed to the target area.
For example, the carrier of the intelligent learning device 10 may be a light fixture. The lamp is placed on the operating platform, on which the camera unit 14 and the projection unit 15 are mounted. The operation platform can be divided into an image acquisition area and a projection display area.
By adjusting the mounting positions of the image capturing unit 14 and the projection unit 15, the field of view of the image capturing unit 14 on the operation platform is matched with the image acquisition area, and the projection area of the projection unit 15 on the operation platform is matched with the projection display area.
In one scenario, the second object is a magic cube whose corresponding control action is to display the pictures in a first folder. The camera unit 14 acquires an image containing the magic cube and sends it to the processing unit 12; the processing unit 12 identifies the object contained in the image as the magic cube and generates a control instruction, where the control instruction instructs the projection unit 15 to display the pictures in the first folder in the projection display area.
In this embodiment, the intelligent learning device 10 may further include a speaker 16 for playing audio data.
The technical scheme of the present application and how the technical scheme of the present application solves the above technical problems are exemplarily described below with specific embodiments. It is noted that the specific embodiments listed below may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 is a schematic flow chart of an interactive learning method according to an embodiment of the present application, which is suitable for the intelligent learning device shown in fig. 1. As shown in fig. 2, the interactive learning method includes:
s10, acquiring input data; the input data includes input data of environmental objects and/or user gestures.
In this embodiment, the environmental object may include at least one of a first object and a second object.
The first object is provided with a sensor for acquiring a motion track, and the second object is a pre-trained environment object which can be identified by the intelligent learning equipment.
Accordingly, the input data may include at least one of input data of the first object, input data of the second object, and a user gesture.
Wherein the user gesture is a pre-trained user gesture that can be recognized by the intelligent learning device.
Illustratively, the sensor is a motion sensor and the first object is a pen provided with the motion sensor. The second object is a magic cube that can be recognized by the intelligent learning device. The user gestures may include gestures that are directly posed, or may include gestures that result from continuous motion. For example, the user gesture may be a "V" word gesture; or a circle drawn by the user through the index finger.
In this embodiment, each input data may include a uniquely identifiable identifier.
The acquiring of the input data may refer to acquiring one or more of a motion track of a pen, an image including a magic cube, and an image including a V-shaped gesture, and allocating preset identifiers to the acquired one or more of the motion track and the image, respectively.
In this embodiment, the input data may also be an interaction request instruction sent by the mobile terminal configured with the interaction interface.
The mobile terminal may be plural. Accordingly, the input data may be obtained by receiving one or more interaction request instructions sent by a plurality of mobile terminals, and allocating a preset identifier to each interaction request instruction.
S20, determining a corresponding control instruction according to the input data, wherein the control instruction comprises an execution main body and a control action.
In this embodiment, the execution body may be the intelligent learning device itself, or may be a home device or other intelligent devices that are communicatively connected to the intelligent learning device.
Accordingly, the control action may be a processing action of the intelligent learning device, or may be a processing action of an execution device (for example, a home device or other intelligent devices) in the environment, or may include a processing action of the intelligent learning device and a processing action of the execution device in the environment at the same time.
In one example, the input data is input data of a second object, the second object is a card with "company a" characters, and the corresponding control instruction may be to perform screening processing on the target data table according to the keyword "company a". At this time, the execution subject is an intelligent learning device, and the control action is screening based on the acquired keywords.
In another example, the input data is input data of the first object, the first object being a pen provided with a motion sensor. When the motion trail of the pen is "M", the corresponding control instruction is to turn on the smart speaker; in this case, the execution body of the control instruction is the control unit of the smart speaker, and the control action is turning on the smart speaker.
In this embodiment, the storage unit of the intelligent learning device stores predefined input data and corresponding control instructions thereof in advance. For example, it may be stored via a configuration file.
The predefined input data and the corresponding control instructions thereof can be defined by the user, so that customized interaction methods of different users are realized.
The configuration file contains an identifier of each input data, and an execution main body and a control action corresponding to each input data.
For example, after the intelligent learning device obtains the input data, it queries the execution body and the control action corresponding to the input data from the configuration file, and packages the identification of the execution body, the control action and the input data to generate the control instruction.
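As a rough illustration of the lookup-and-package step described above, the sketch below uses an in-memory dictionary in place of the configuration file; the identifiers, field names and the ControlInstruction structure are assumptions made for illustration, not part of the patent.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical configuration: maps the identifier carried by the input data
# to an execution body and a control action (all names are illustrative).
CONFIG = {
    "pen_track_M": {"body": "smart_speaker", "action": "power_on"},
    "card_company_a": {"body": "learning_device", "action": "filter_table_by_keyword"},
    "gesture_ok": {"body": "stereo", "action": "power_on"},
}

@dataclass
class ControlInstruction:
    body: str      # execution body that should carry out the action
    action: str    # control action the body is instructed to execute
    input_id: str  # identifier of the input data that triggered the instruction

def build_instruction(input_id: str) -> Optional[ControlInstruction]:
    """Query the configuration for the input identifier and package the
    execution body, control action and identifier into a control instruction."""
    entry = CONFIG.get(input_id)
    if entry is None:
        return None  # unrecognized input data: no instruction is generated
    return ControlInstruction(entry["body"], entry["action"], input_id)
```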
S30, sending a control instruction to the execution main body, wherein the control instruction is used for instructing the execution main body to execute the control action.
In this embodiment, if a plurality of input data are acquired at the same time, a plurality of corresponding control instructions are determined. The plurality of control instructions may be written to the message queue before they are sent. When a control instruction needs to be sent to an execution body, it can be read from the message queue and sent.
The method includes reading control instructions from the message queue and sending the control instructions, and may include simultaneously reading a plurality of control instructions having different execution bodies from the message queue and simultaneously sending respective corresponding control instructions to the different execution bodies.
In this embodiment, the execution body may include, but is not limited to, a projection unit, a speaker, and the like. Accordingly, sending a control instruction to the execution body may mean that the control instruction is sent to the projection unit, where the control instruction is used to instruct the projection unit to project and display the image corresponding to the identifier contained in the control instruction.
Alternatively, a control instruction may be sent to the speaker, where the control instruction is used to instruct the speaker to play the audio corresponding to the identifier contained in the control instruction.
Wherein, the images or the audios corresponding to different identifications are preset.
Accordingly, the respective corresponding control instructions may be simultaneously sent to the projection device and the speaker, and the projection device displays the corresponding image and the speaker plays the corresponding audio.
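On the receiving side, the behavior might look like the following minimal sketch; the identifier-to-asset tables and the handler names are assumptions used only to illustrate the projection and playback behavior described above.

```python
# Preset, illustrative mappings from identifiers to display and playback assets.
IMAGE_FOR_ID = {"card_bird": "assets/bird.png"}
AUDIO_FOR_ID = {"card_bird": "assets/bird_song.wav"}

def projection_unit_handle(identifier: str) -> None:
    """Projection unit: project and display the image preset for this identifier."""
    image = IMAGE_FOR_ID.get(identifier)
    if image is not None:
        print(f"[projection unit] displaying {image}")

def speaker_handle(identifier: str) -> None:
    """Speaker: play the audio preset for this identifier."""
    audio = AUDIO_FOR_ID.get(identifier)
    if audio is not None:
        print(f"[speaker] playing {audio}")

# Driving both bodies with the same identifier presents the image and audio together.
projection_unit_handle("card_bird")
speaker_handle("card_bird")
```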
According to the interactive learning method provided by the embodiments of the application, input data is first acquired, a corresponding execution body and control action are then determined according to the input data, and finally a control instruction is sent to the execution body, where the control instruction instructs the execution body to execute the control action. The input data includes input data of an environmental object and/or a user gesture, and the environmental object may be at least one of a first object provided with a sensor for acquiring a motion trail and a pre-trained second object that the intelligent learning device can recognize. In other words, the interactive learning approach provided by the embodiments of the application enables environmental objects and/or user gestures to control a variety of execution bodies, which increases the diversity of interaction modes: instead of the conventional touch-screen operation of a mobile terminal, the user interacts with environmental objects and/or through gestures, making the interaction more engaging.
Since the environmental object may comprise the first object or the second object, the manner of acquiring the input data may be different accordingly, and the possible embodiments of acquiring the input data will be described below by way of examples of fig. 3 and 4, respectively.
Fig. 3 is a flowchart illustrating a process of acquiring input data according to an embodiment of the application. This embodiment describes one possible implementation of obtaining input data in the embodiment of fig. 2, where the input data is input data of the first object. As shown in fig. 3, acquiring input data includes:
s101, under the condition that a first mode is started, acquiring a motion track of a first object through a sensor; the first mode is triggered by a switch disposed on the first object.
In this embodiment, the sensor may be a motion sensor. Since the motion sensor is disposed on the first object, the motion trajectory of the first object may be characterized by the motion trajectory of the motion sensor.
This embodiment takes a pen provided with a motion sensor as an example of the first object. Since a student constantly holds and moves the pen during learning, the raw motion trail picked up by the sensor on the pen has no reference meaning by itself and cannot be used directly as a trigger operation. Therefore, the first mode is turned on or off by a switch: when the first mode is on, the motion trail of the pen can be used as a trigger operation and is acquired through the sensor.
In this embodiment, the first mode may be turned on or off by a switch provided on the first object.
For example, if the switch is in an on state, the first mode is on, and if the switch is in an off state, the first mode is off.
S102, generating first data according to the motion trail, wherein the first data comprises the identification of the first object.
In this embodiment, generating the first data according to the motion trail may mean that the intelligent learning device determines whether a trail curve matching the motion trail exists, if so, determines an identifier of the matching trail curve, and distributes the identifier to the acquired motion trail to generate the first data.
In some embodiments, if there is no trajectory curve matching the motion trajectory, a message may be generated that characterizes the motion trajectory as unrecognizable. Accordingly, a reminder tone may be generated from the message. For example, the alert tone may be: "moving track cannot be recognized".
In this embodiment, determining whether there is a trajectory curve matching the motion trail may include the following steps (a code sketch of this matching logic is given after the steps):
step 1: the method comprises the steps of performing shape matching on the motion trail and each trail curve of the allocated mark, and calculating the matching degree between the motion trail and each trail curve of each allocated mark.
Step 2: and judging whether the maximum matching degree in the matching degrees is larger than a preset value.
Step 3: and if the maximum matching degree in the plurality of matching degrees is larger than the preset value, determining the track curve corresponding to the maximum matching value as the track curve matched with the motion track.
Step 4: and if the maximum matching degree in the plurality of matching degrees is smaller than the preset value, indicating that a track curve matched with the motion track does not exist.
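The sketch below reads steps 1-4 as a best-match search with a rejection threshold; the similarity function is only an assumed stand-in, since the patent does not specify how the shape matching is computed.

```python
import math
from typing import Dict, List, Optional, Tuple

Point = Tuple[float, float]

def shape_similarity(track: List[Point], template: List[Point]) -> float:
    """Very rough similarity in [0, 1]: compare the two curves point by point
    after resampling to a common length (the real matcher is unspecified)."""
    n = min(len(track), len(template))
    if n == 0:
        return 0.0
    a = [track[i * len(track) // n] for i in range(n)]
    b = [template[i * len(template) // n] for i in range(n)]
    mean_dist = sum(math.dist(p, q) for p, q in zip(a, b)) / n
    return 1.0 / (1.0 + mean_dist)

def match_trajectory(track: List[Point],
                     templates: Dict[str, List[Point]],
                     threshold: float = 0.8) -> Optional[str]:
    """Steps 1-4: score every identified template, keep the best match only if
    its score exceeds the preset value, otherwise report that no match exists."""
    if not templates:
        return None
    best_id, best_score = max(
        ((tid, shape_similarity(track, tpl)) for tid, tpl in templates.items()),
        key=lambda item: item[1],
    )
    return best_id if best_score > threshold else None
```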
In the above method, the first mode is turned on or off through the switch provided on the first object, and the motion trail of the first object is acquired through the sensor only when the first mode is on, which avoids collecting invalid trails. Moreover, by providing a sensor, any environmental object equipped with such a sensor can serve as a trigger device for interactive learning. This increases the diversity of interaction modes: instead of the conventional touch-screen operation of a mobile terminal, the user interacts with environmental objects, making the interaction more engaging.
Fig. 4 is a flowchart of acquiring input data according to another embodiment of the present application. This embodiment describes another possible implementation of obtaining input data in the embodiment of fig. 2, where the input data is input data of the second object and/or a gesture of the user. In this embodiment, the intelligent learning device includes a camera unit, as shown in fig. 4, and acquires input data, including:
S103, acquiring an image of a preset area through a camera unit, wherein the image contains a second object and/or a gesture of a user.
In this embodiment, the intelligent learning device is represented by a lamp for example. In order to improve the clarity of the image captured by the camera unit, the camera unit may be mounted towards the illuminated area of the luminaire light source.
In this embodiment, the preset area is a field of view of the imaging unit.
For example, an operation platform for placing a lamp is divided into an image acquisition area and a projection display area. The installation angle and the focal length of the image pickup unit are adjusted so that the visual field area of the image pickup unit on the operation platform is matched with the image acquisition area.
Wherein, the matching may mean that the field of view area of the image capturing unit on the operation platform includes the image capturing area. For example, the area of the field of view of the imaging unit is N times the image acquisition area, N being a value slightly greater than 1. Illustratively, N is 1.1.
In this embodiment, capturing the image of the preset area by the image capturing unit may include capturing an image including the second object, capturing an image including the user gesture, and simultaneously capturing any one of the scenes of the image including the second object and the user gesture.
In this embodiment, the second object includes any one of the following: a card bearing a mark, or a learning tool whose volume is smaller than the space of the preset area.
Wherein the indicia include mathematical symbols, words, scientific symbols, and the like. The indicia may be handwritten or hand drawn by the user on the card.
In one example, the card with the indicia may be a card marked with a "V" symbol, or a card painted with a "bird". Correspondingly, when the card with the mark is a card painted with a 'bird', the intelligent learning equipment projects an image or video of the bird and plays the sound of the bird correspondingly.
In yet another example, the learning tool having a volume less than the predetermined area space size may be a magic cube. Wherein different states of the cube may correspond to different control actions.
For example, the magic cube has 6 faces. When all 6 faces of the magic cube are restored, all 6 acquired face images are solid-color images; the cube state corresponding to the 6 solid-color images is state A, and the control instruction corresponding to state A may be to open the projection unit.
When the magic cube has an unrestored face, the acquired images include a face image that is not a single solid color; the cube state can then be denoted as state B, and the control instruction corresponding to state B may be to close the projection unit.
It should be appreciated that the physical dimensions of the learning tool, which are smaller than the predetermined area space, should remain unchanged.
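As an illustration of the two cube states described above, a face can be treated as restored when its image is a single solid color; the variance check below is an assumed stand-in for whatever recognition is actually used.

```python
import numpy as np

def face_is_solid_color(face_image: np.ndarray, tolerance: float = 10.0) -> bool:
    """Treat a face image (H x W x 3 array) as solid-colored when the
    per-channel standard deviation stays below a small tolerance."""
    return bool(np.all(face_image.std(axis=(0, 1)) < tolerance))

def cube_state(face_images) -> str:
    """State A: all 6 faces restored (solid color), mapped to opening the
    projection unit; state B: at least one unrestored face, mapped to closing it."""
    return "A" if all(face_is_solid_color(f) for f in face_images) else "B"

# Illustrative mapping from cube state to the corresponding control action.
CUBE_ACTION = {"A": "open_projection_unit", "B": "close_projection_unit"}
```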
S104, inputting the image into a pre-trained image recognition model, and determining the identification of the object contained in the image.
In this embodiment, the input of the pre-trained image recognition model is an image to be recognized, and the output is an identifier of an object contained in the image. The image to be recognized comprises a second object and/or a user gesture.
In this embodiment, if the object included in the image to be identified is successfully identified, an identifier is allocated to the object. If the object contained in the image to be identified cannot be identified, generating an identification representing that the object contained in the image cannot be identified.
S105, generating second data, wherein the second data comprises the identification of the object.
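Putting S103 to S105 together, a compact sketch could look as follows, with the camera and the pre-trained recognition model treated as black boxes; the interfaces and the fallback identifier are assumptions.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

UNRECOGNIZED = "unrecognized_object"  # assumed identifier used when recognition fails

@dataclass
class SecondData:
    object_id: str  # identifier of the object or gesture contained in the image

def acquire_second_data(capture_image: Callable[[], Any],
                        recognize: Callable[[Any], Optional[str]]) -> SecondData:
    """S103: capture an image of the preset area through the camera unit;
    S104: feed it to the pre-trained image recognition model to get an identifier;
    S105: wrap that identifier into the second data."""
    image = capture_image()
    object_id = recognize(image)  # returns an identifier, or None if unrecognizable
    return SecondData(object_id if object_id is not None else UNRECOGNIZED)
```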
In the embodiment of the application, through the camera unit and the pre-trained image recognition model, any pre-trained environmental object that the image recognition model can recognize, and any pre-trained user gesture that it can recognize, can trigger the intelligent learning device to perform interactive learning, which increases the diversity of interaction modes.
As can be seen from the embodiments of fig. 3 and fig. 4, the intelligent learning device may acquire input data from different types of environmental objects at the same time, or may receive multiple pieces of input data from the same environmental object over a period of time. In these cases the intelligent learning device needs to generate multiple control instructions, either simultaneously or in sequence. How the multiple control instructions are determined is described below by way of example.
Fig. 5 is a schematic flow chart of determining a control instruction according to an embodiment of the present application, and on the basis of the foregoing embodiment, a possible implementation manner of S20 is described. As shown in fig. 5, determining a corresponding control instruction according to input data includes:
s201, searching control instructions corresponding to the first data and control instructions corresponding to the second data from the configuration files respectively; the configuration file comprises a plurality of groups of one-to-one identifiers and control instructions, and each input data comprises one identifier.
In this embodiment, when the first mode is turned on, the intelligent learning device may acquire input data of the first object, may acquire input data of the second object, or may acquire a gesture of the user.
Correspondingly, under the condition that the first mode is started, if a plurality of input data are acquired at the same time, the intelligent learning equipment searches the control instruction corresponding to the first data and the control instruction corresponding to the second data from the configuration file respectively.
The control instruction corresponding to the first data may refer to a control instruction corresponding to an identifier included in the first data. The control instruction corresponding to the second data may refer to a control instruction corresponding to an identification contained in the second data.
In this embodiment, the configuration file is a file predefined by a user, where the configuration file includes a plurality of groups of identifiers and control instructions that are in one-to-one correspondence, and each input data includes one identifier.
It should be understood that, in the case where the first mode is turned off, the input data acquired at this time does not include the first data, and then the corresponding control instructions may be sequentially searched according to the acquisition time according to the second data.
S202, according to the execution main body of each control instruction, a target identifier corresponding to the execution main body of each control instruction is allocated to each control instruction.
In this embodiment, the target identifier corresponding to each execution body is preset.
And S203, sequentially placing all control instructions containing the target identifier into a message queue according to a preset priority, wherein the priority of the control instruction corresponding to the first data is higher than that of the control instruction corresponding to the second data.
The purpose of this step is to place the control command corresponding to the first data preferentially in the message queue when the control command corresponding to the first data and the control command corresponding to the second data are present at the same time.
In this embodiment, the communication unit of the intelligent learning device may support many-to-many wireless communication. When the execution subjects of the plurality of control instructions contained in the message queue are different, the respective control instructions corresponding to the execution subjects can be simultaneously sent to the execution subjects, so that cooperative control among the execution subjects is realized.
For example: the intelligent learning device sending control instructions to the execution body may include at least one of: simultaneously sending control instructions corresponding to the execution subjects corresponding to different target identifiers; and sequentially sending corresponding control instructions to execution bodies corresponding to the same target identifiers according to the arrangement sequence of the plurality of control instructions in the message queue.
In some examples, the execution bodies of the plurality of control instructions in the message queue are an electronic organ and an intelligent music player respectively; the control action of the electronic organ is to play a preset piece, and the control action of the intelligent music player is to play a preset program B. The corresponding control instructions are then sent to the electronic organ and the intelligent music player at the same time, and both execute their control actions simultaneously to achieve an ensemble.
In still other examples, if the message queue includes a plurality of control instructions having the same execution body, the control instructions are sequentially sent to the execution body according to the time of entering the message queue.
In this embodiment, the control instructions are written into the message queue, and the respective control instructions are sent simultaneously to their execution bodies through the communication unit that supports many-to-many communication. That is, the plurality of execution bodies can cooperate through message-based communication, so that cooperative control of the plurality of execution bodies is achieved.
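The queueing and dispatch behavior of S202-S203 and the sending rules above might be sketched as follows; the instruction fields, the grouping key and the round-based notion of "simultaneous" sending are assumptions.

```python
from collections import defaultdict, deque

def enqueue(first_data_instructions, second_data_instructions):
    """S203: instructions derived from the first data are queued ahead of those
    derived from the second data, reflecting their higher priority."""
    return deque(list(first_data_instructions) + list(second_data_instructions))

def dispatch(queue, send):
    """Group queued instructions by the target identifier of their execution body;
    different bodies are served in the same round (cooperative control), while
    instructions for the same body keep their queue order and are sent one by one."""
    per_body = defaultdict(deque)
    for instruction in queue:
        per_body[instruction["target_id"]].append(instruction)
    while any(per_body.values()):
        for target_id, pending in per_body.items():
            if pending:  # at most one instruction per body in each round
                send(target_id, pending.popleft())

# Usage: a first-data instruction for the speaker and a second-data instruction
# for the projection unit are dispatched to their respective execution bodies.
queue = enqueue(
    [{"target_id": "speaker", "action": "power_on"}],
    [{"target_id": "projection_unit", "action": "show_image"}],
)
dispatch(queue, lambda target_id, instr: print(target_id, instr["action"]))
```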
Fig. 6 is a flowchart of an interactive learning method according to another embodiment of the present application. As shown in fig. 6, before the input data is acquired, the interactive learning method further includes:
s41, obtaining a plurality of training samples, wherein each training sample comprises a training image and an identifier of an object contained in the training image; the object contained in the training image is an environmental object to be trained and/or a user gesture to be trained.
In this embodiment, if the object in the training image includes both the environmental object to be trained and the user gesture to be trained, only one identifier corresponding to the training image is provided, and the one identifier characterizes that the object in the training image includes both the environmental object to be trained and the user gesture to be trained.
S42, obtaining a pre-trained image recognition model according to the plurality of training samples.
S43, distributing corresponding control instructions for the identifications of the objects contained in the training images.
In this embodiment, the intelligent learning device responds to the input of the user and allocates a corresponding control instruction to the identifier of the object included in the training image.
S44, adding the identification of the object contained in the training image and the corresponding control instruction to the configuration file.
The configuration file in this embodiment is the same as that in the above embodiment.
It should be appreciated that after the processing of the present embodiment, the environmental object to be trained is converted into a second object that can be identified by the pre-trained image recognition model.
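A schematic of the registration flow S41 to S44, with model training abstracted away; the sample layout, function names and configuration format are assumptions.

```python
def register_objects(training_samples, train_model, assign_action, config: dict):
    """S41: each sample pairs a training image with the identifier of the object
    (or gesture) it contains. S42: train the image recognition model on all samples.
    S43/S44: assign a control instruction to every identifier and record the
    identifier-to-instruction mapping in the configuration file."""
    model = train_model(training_samples)            # S42
    for sample in training_samples:                  # S43 and S44
        object_id = sample["object_id"]
        config[object_id] = assign_action(object_id)
    return model, config
```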
The method provided by the embodiment of the application can realize the training of various environmental objects or various user gestures, so that the intelligent learning equipment can respond to the triggering of different environmental objects or various user gestures, and the diversity of interaction modes is increased. Moreover, the environment object can be customized and developed according to different users, and the customization requirement of the users can be met.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the order in which the processes are executed should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Based on the interactive learning method provided by the embodiment, the embodiment of the application further provides an embodiment of a device for realizing the embodiment of the method.
Fig. 7 is a schematic structural diagram of an interactive learning device according to an embodiment of the application. The interactive learning apparatus 50 is adapted to an intelligent learning device for responding to a triggering operation of an environmental object and/or a gesture of a user; the environment object comprises at least one of a first object and a second object, wherein the first object is provided with a sensor for acquiring a motion track, and the second object is a pre-trained environment object which can be identified by the intelligent learning equipment; as shown in fig. 7, the interactive learning apparatus 50 includes an acquisition module 501, a determination module 502, and a transmission module 503; wherein:
An acquisition module 501, configured to acquire input data; the input data includes input data of environmental objects and/or user gestures.
The determining module 502 is configured to determine a corresponding control instruction according to the input data, where the control instruction includes an execution body and a control action.
The sending module 503 is configured to send a control instruction to the execution subject, where the control instruction is used to instruct the execution subject to execute a control action.
Optionally, the acquiring module 501 acquires input data specifically includes: under the condition that the first mode is started, acquiring a motion track of a first object through a sensor; the first mode is triggered by a switch disposed on the first object; and generating first data according to the motion trail, wherein the first data comprises the identification of the first object.
Optionally, the intelligent learning device includes a camera unit; accordingly, the acquiring module 501 specifically includes:
acquiring an image of a preset area through a camera unit, wherein the image contains a second object and/or a user gesture;
inputting the image into a pre-trained image recognition model, and determining the identification of the object contained in the image;
generating second data, wherein the second data comprises the identification of the object.
Optionally, the second object comprises any one of: a card with a mark and a learning tool with a volume smaller than the space size of the preset area.
Optionally, the determining module 502 determines a corresponding control instruction according to the input data, which specifically includes:
searching a control instruction corresponding to the first data and a control instruction corresponding to the second data from the configuration file respectively; the configuration file comprises a plurality of groups of one-to-one corresponding identifications and control instructions, and each input data comprises an identification;
distributing a target identifier corresponding to the execution main body of each control instruction for each control instruction according to the execution main body of each control instruction;
and sequentially placing all the control instructions containing the target identifier into a message queue according to a preset priority, wherein the priority of the control instruction corresponding to the first data is higher than that of the control instruction corresponding to the second data.
Optionally, the sending module 503 sends a control instruction to the execution body, including:
simultaneously sending control instructions corresponding to the execution subjects corresponding to different target identifiers;
and/or,
and sequentially sending corresponding control instructions to execution bodies corresponding to the same target identifiers according to the arrangement sequence of the plurality of control instructions in the message queue.
Optionally, the execution body includes at least one of a projection unit, a speaker; the sending module 503 sends a control instruction to the execution body, including:
sending the control instruction to the projection unit, where the control instruction is used to instruct the projection unit to project and display the image corresponding to the identifier contained in the control instruction;
optionally, the sending module 503 sends a control instruction to the execution body, and may further include: and sending a control instruction to the loudspeaker, wherein the control instruction is used for indicating the loudspeaker to play the audio corresponding to the identifier contained in the control instruction.
Optionally, the interactive learning apparatus 50 further includes a training module, where the training module is configured to, before acquiring the input data:
obtaining a plurality of training samples, wherein each training sample comprises a training image and an identifier of an object contained in the training image; the object contained in the training image is an environmental object to be trained and/or a user gesture to be trained;
obtaining a pre-trained image recognition model according to a plurality of training samples;
distributing corresponding control instructions for the identifications of the objects contained in the training images;
and adding the identification of the object contained in the training image and the corresponding control instruction to the configuration file.
The interactive learning device provided in the embodiment shown in fig. 7 may be used to implement the technical solution in the above method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be repeated here.
Fig. 8 is a schematic diagram of an intelligent learning device according to an embodiment of the present application. As shown in fig. 8, the intelligent learning apparatus of this embodiment includes: at least one processor 601, a memory 602 and a computer program stored in the memory 602 and executable on the processor 601. The intelligent learning device further comprises a communication part 603, wherein the processor 601, the memory 602 and the communication part 603 are connected by a bus 604.
The steps in the above-described embodiments of the interactive learning method, such as steps S10 to S30 in the embodiment shown in fig. 2, are implemented when the processor 601 executes the computer program. Alternatively, the processor 601 may perform the functions of the modules/units in the above-described apparatus embodiments, such as the functions of the modules 501 to 503 shown in fig. 7, when executing a computer program.
By way of example, a computer program may be partitioned into one or more modules/units that are stored in the memory 602 and executed by the processor 601 to perform the present application. One or more of the modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program in the intelligent learning device.
It will be appreciated by those skilled in the art that fig. 8 is merely an example of an intelligent learning device and is not limiting; the device may include more or fewer components than illustrated, may combine certain components, or may have different components, such as a camera unit, a projection unit, and the like.
The processor 601 may be a central processing unit (Central Processing Unit, CPU), but may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 602 may be an internal storage unit of the intelligent learning device, or may be an external storage device of the intelligent learning device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, or the like. The memory 602 is used to store the computer program as well as other programs and data required by the intelligent learning device. The memory 602 may also be used to temporarily store data that has been output or is to be output.
The bus may be an industry standard architecture (Industry Standard Architecture, ISA) bus, a peripheral component interconnect (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. Buses may be divided into address buses, data buses, control buses, and so on. For ease of illustration, the buses in the drawings of the present application are not limited to only one bus or one type of bus.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the various method embodiments described above.
Embodiments of the present application further provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform the steps of the method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, the computer program implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing device/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of modules or elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (12)

1. An interactive learning method is characterized by being suitable for intelligent learning equipment, wherein the intelligent learning equipment is used for responding to triggering operation of environmental objects and/or gestures of a user; the environment object comprises at least one of a first object and a second object, a sensor used for acquiring a motion track is arranged on the first object, and the second object is a pre-trained environment object which can be identified by the intelligent learning equipment;
The method comprises the following steps:
acquiring input data; the input data comprises input data of the environmental object and/or the user gesture;
determining a corresponding control instruction according to the input data, wherein the control instruction comprises an execution body and a control action;
sending the control instruction to the execution body, wherein the control instruction is used for instructing the execution body to execute the control action;
the determining a corresponding control instruction according to the input data comprises the following steps:
assigning, to each control instruction, a target identifier corresponding to the execution body of the control instruction according to the execution body of each control instruction;
sequentially placing the control instructions containing the target identifiers into a message queue according to a preset priority, wherein the priority of the control instruction corresponding to the first data is higher than that of the control instruction corresponding to the second data; the first data is input data of the first object, and the second data is input data of the second object and/or the user gesture.
2. The interactive learning method of claim 1, wherein the acquiring input data comprises:
under the condition that a first mode is started, acquiring a motion track of the first object through the sensor; the first mode is triggered by a switch disposed on the first object;
And generating first data according to the motion trail, wherein the first data comprises the identification of the first object.
3. The interactive learning method of claim 1, wherein the intelligent learning device comprises a camera unit;
the acquiring input data includes:
acquiring an image of a preset area through the camera unit, wherein the image comprises the second object and/or the user gesture;
inputting the image into a pre-trained image recognition model, and determining the identification of an object contained in the image;
generating second data, the second data comprising an identification of the object.
4. A method of interactive learning as claimed in claim 3, wherein the second object comprises any one of:
a card provided with an identifier; and a learning tool whose volume is smaller than the space size of the preset area.
5. The interactive learning method of claim 1, wherein the determining the corresponding control command from the input data comprises:
searching a control instruction corresponding to the first data and a control instruction corresponding to the second data from the configuration file respectively; the configuration file comprises a plurality of groups of identifiers and control instructions which are in one-to-one correspondence, and each input data comprises an identifier.
6. The interactive learning method of claim 5, wherein the sending the control instruction to the execution body comprises:
simultaneously sending control instructions to the execution bodies corresponding to different target identifiers;
and/or,
sequentially sending corresponding control instructions to the execution bodies corresponding to the same target identifier according to the arrangement order of the plurality of control instructions in the message queue.
7. The interactive learning method of claim 1, wherein the execution body comprises a projection unit;
the sending the control instruction to the execution body includes:
sending the control instruction to the projection unit; the control instruction is used for indicating the projection unit to project and display the image corresponding to the identifier contained in the control instruction.
8. The interactive learning method of claim 1, wherein the execution body comprises a speaker;
the sending the control instruction to the execution body includes:
and sending the control instruction to the loudspeaker, wherein the control instruction is used for instructing the loudspeaker to play the audio corresponding to the identifier contained in the control instruction.
9. The interactive learning method of any of claims 1-8, wherein prior to obtaining the input data, the method further comprises:
obtaining a plurality of training samples, wherein each training sample comprises a training image and an identifier of an object contained in the training image; the object contained in the training image is an environmental object to be trained and/or a user gesture to be trained;
obtaining a pre-trained image recognition model according to the training samples;
assigning corresponding control instructions to the identifiers of the objects contained in the training images;
and adding the identification of the object contained in the training image and the corresponding control instruction to a configuration file.
10. An interactive learning device is characterized by being suitable for intelligent learning equipment, wherein the intelligent learning equipment is used for responding to triggering operation of environmental objects and/or gestures of a user; the environment object comprises at least one of a first object and a second object, a sensor used for acquiring a motion track is arranged on the first object, and the second object is a pre-trained environment object which can be identified by the intelligent learning equipment;
the device comprises:
The acquisition module is used for acquiring input data; the input data comprises input data of the environmental object and/or the user gesture;
The determining module is used for determining a corresponding control instruction according to the input data, wherein the control instruction comprises an execution body and a control action;
The sending module is used for sending the control instruction to the execution body, wherein the control instruction is used for instructing the execution body to execute the control action;
The determining module is further configured to:
assigning, to each control instruction, a target identifier corresponding to the execution body of the control instruction according to the execution body of each control instruction;
sequentially placing the control instructions containing the target identifiers into a message queue according to a preset priority, wherein the priority of the control instruction corresponding to the first data is higher than that of the control instruction corresponding to the second data; the first data is input data of the first object, and the second data is input data of the second object and/or the user gesture.
11. An intelligent learning device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 9 when executing the computer program.
12. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 9.
CN202011193201.XA 2020-10-30 2020-10-30 Interactive learning method and device, intelligent learning equipment and storage medium Active CN112346566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011193201.XA CN112346566B (en) 2020-10-30 2020-10-30 Interactive learning method and device, intelligent learning equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112346566A CN112346566A (en) 2021-02-09
CN112346566B true CN112346566B (en) 2023-12-15

Family

ID=74356295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011193201.XA Active CN112346566B (en) 2020-10-30 2020-10-30 Interactive learning method and device, intelligent learning equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112346566B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105912116A (en) * 2016-04-08 2016-08-31 北京鹏泰互动广告有限公司 Intelligent projection method and projector
CN110187771A (en) * 2019-05-31 2019-08-30 努比亚技术有限公司 Gesture interaction method, device, wearable device and computer storage medium high up in the air
CN110874131A (en) * 2018-08-29 2020-03-10 杭州海康威视数字技术股份有限公司 Building intercom indoor unit and control method and storage medium thereof
CN111061371A (en) * 2019-12-18 2020-04-24 京东方科技集团股份有限公司 Control method and device of electronic painted screen, mobile terminal and storage medium
CN111580653A (en) * 2020-05-07 2020-08-25 讯飞幻境(北京)科技有限公司 Intelligent interaction method and intelligent interactive desk


Also Published As

Publication number Publication date
CN112346566A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
US11450353B2 (en) Video tagging by correlating visual features to sound tags
CN109151593B (en) Anchor recommendation method, device and storage medium
CN109862293B (en) Control method and device for terminal loudspeaker and computer readable storage medium
CN107463700B (en) Method, device and equipment for acquiring information
CN110162604B (en) Statement generation method, device, equipment and storage medium
CN108922531B (en) Slot position identification method and device, electronic equipment and storage medium
CN110674349B (en) Video POI (Point of interest) identification method and device and electronic equipment
CN111950570B (en) Target image extraction method, neural network training method and device
CN114049892A (en) Voice control method and device and electronic equipment
KR20210001412A (en) System and method for providing learning service
CN108304434B (en) Information feedback method and terminal equipment
CN108847066A (en) A kind of content of courses reminding method, device, server and storage medium
CN112786032A (en) Display content control method, device, computer device and readable storage medium
CN110750659A (en) Dynamic display method, device and storage medium for media resources
CN112346566B (en) Interactive learning method and device, intelligent learning equipment and storage medium
CN112749550B (en) Data storage method and device, computer equipment and storage medium
CN111539217B (en) Method, equipment and system for disambiguation of natural language content titles
CN112306603A (en) Information prompting method and device, electronic equipment and storage medium
CN113784045B (en) Focusing interaction method, device, medium and electronic equipment
KR20230085333A (en) Apparatus for ai based children education solution
CN114816087A (en) Information processing method, device, equipment and storage medium
CN111291539B (en) File editing control method, device, computer device and storage medium
CN114328815A (en) Text mapping model processing method and device, computer equipment and storage medium
JP6944920B2 (en) Smart interactive processing methods, equipment, equipment and computer storage media
CN113641902A (en) Music information pushing method and device, computer equipment and storage medium thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant