CN111724638B - AR interactive learning method and electronic equipment

AR interactive learning method and electronic equipment

Info

Publication number
CN111724638B
CN111724638B (granted publication of application CN202010484984.0A)
Authority
CN
China
Prior art keywords
page
scene
electronic device
content
spoken language
Prior art date
Legal status
Active
Application number
CN202010484984.0A
Other languages
Chinese (zh)
Other versions
CN111724638A (en)
Inventor
崔颖
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN202010484984.0A
Publication of CN111724638A
Application granted
Publication of CN111724638B

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 - Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 - Indexing scheme relating to G06F3/01
    • G06F2203/011 - Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Human Computer Interaction (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose an AR interactive learning method and an electronic device. The method includes: acquiring a page identifier of a current book page; searching for AR material corresponding to the page identifier; constructing an AR scene according to the AR material and a preview picture of the current book page; and loading and displaying the AR scene. Implementing the embodiments of the present application helps young children focus their attention.

Description

AR interactive learning method and electronic equipment
Technical Field
The application relates to the technical field of computers, in particular to an AR interactive learning method and electronic equipment.
Background
Research shows that early childhood is the best period for learning languages, so many parents require their children to learn English from an early age. In practice, however, young children generally have poor self-control, are easily distracted by the external environment, and find it difficult to stay focused on learning for a long time. How to help young children concentrate on learning for extended periods has therefore become an urgent problem in early-childhood education.
Disclosure of Invention
The embodiments of the present application disclose an AR interactive learning method and an electronic device, which can help young children focus their attention.
The first aspect of the embodiments of the present application discloses an AR interactive learning method, including:
acquiring a page identifier of a current book page;
searching for AR materials corresponding to the page identification;
constructing an AR scene according to the AR material and the preview picture of the current book page;
and loading and displaying the AR scene.
As an optional implementation manner, in the first aspect of the embodiment of the present application, after the finding of the AR material corresponding to the page identifier, the method further includes:
acquiring user information;
searching for a first AR material matched with the user information in the AR materials;
the constructing an AR scene according to the AR material and the preview picture of the current book page includes:
and constructing an AR scene according to the first AR material and the preview picture of the current book page.
As an optional implementation manner, in the first aspect of the embodiment of the present application, the constructing an AR scene according to the first AR material and the preview picture of the current book page includes:
when the click operation aiming at the current book page is detected, determining target content corresponding to the click operation;
determining a second AR material corresponding to the target content from the first AR material;
and constructing an AR scene according to the second AR material and the preview picture of the current book page.
As an optional implementation manner, in the first aspect of the embodiment of the present application, when the current mode of the electronic device is a multi-user mode, the loading and displaying the AR scene includes:
loading and displaying the AR scene on a display screen of the electronic equipment;
and sending the AR scene to a first terminal device in communication connection with the electronic device, so that the first terminal device displays the AR scene on a display screen of the first terminal device.
As an optional implementation manner, in the first aspect of the embodiments of the present application, the method is applied to an electronic device of a spoken language practice user, and when the current book page is spoken language practice content, after the obtaining of the page identifier of the current book page, the method further includes:
sending the page identification to a second terminal device of an assessment user of the spoken language practice user, so that the second terminal device searches for an AR material corresponding to the page identification and a standard audio frequency of the spoken language practice content;
the searching for the AR material corresponding to the page identifier includes:
collecting practice audio of the spoken language practice user for the spoken language practice content;
sending the practice audio to the second terminal device, so that the second terminal device analyzes the practice audio according to the standard audio of the spoken language practice content, determines a third AR material from the AR materials corresponding to the page identifier according to an analysis result, and sends the third AR material to the electronic device;
receiving the third AR material;
the constructing an AR scene according to the AR material and the preview picture of the current book page comprises the following steps:
and constructing an AR scene according to the third AR material and the preview picture of the current book page.
As an optional implementation manner, in the first aspect of this embodiment of the present application, the analysis result includes error content; after the loading displays the AR scene, the method further comprises:
sending an error correction instruction to the second terminal device, so that the second terminal device determines a standard audio corresponding to the error content from standard audio of the spoken language practice content, and sends the standard audio corresponding to the error content to the electronic device;
and receiving and playing the standard audio corresponding to the error content.
A second aspect of an embodiment of the present application discloses an electronic device, including:
the acquisition unit is used for acquiring a page identifier of a current book page;
the searching unit is used for searching the AR material corresponding to the page identifier;
the processing unit is used for constructing an AR scene according to the AR materials and the preview picture of the current book page;
and the display unit is used for loading and displaying the AR scene.
As an optional implementation manner, in the second aspect of the embodiment of the present application, the obtaining unit is further configured to obtain user information after the searching unit searches for the AR material corresponding to the page identifier; searching a first AR material matched with the user information in the AR materials;
the processing unit is specifically configured to construct an AR scene according to the first AR material and the preview picture of the current book page.
As an optional implementation manner, in the second aspect of the embodiment of the present application, a manner that the processing unit is configured to construct an AR scene according to the first AR material and the preview picture of the current book page is specifically:
the processing unit is used for determining target content corresponding to the click operation when the click operation aiming at the current book page is detected; determining a second AR material corresponding to the target content from the first AR material; and constructing an AR scene according to the second AR material and the preview picture of the current book page.
As an optional implementation manner, in the second aspect of the embodiment of the present application, when the current mode of the electronic device is a multi-user mode, the display unit is specifically configured to load and display the AR scene on a display screen of the electronic device; and sending the AR scene to a first terminal device in communication connection with the electronic device, so that the first terminal device displays the AR scene on a display screen of the first terminal device.
As an optional implementation manner, in the second aspect of this embodiment of the present application, when the current book page is spoken language practice content, the electronic device further includes:
the sending unit is used for sending the page identification to second terminal equipment of an assessment user of the spoken language practice user after the acquiring unit acquires the page identification of the current book page, so that the second terminal equipment searches for an AR material corresponding to the page identification and a standard audio frequency of the spoken language practice content;
the searching unit is specifically configured to collect practice audio of the spoken language practice user for the spoken language practice content; sending the practice audio to the second terminal device, so that the second terminal device analyzes the practice audio according to the standard audio of the spoken language practice content, determines a third AR material from the AR materials corresponding to the page identifier according to an analysis result, and sends the third AR material to the electronic device; and receiving the third AR material;
and the processing unit is specifically configured to construct an AR scene according to the third AR material and the preview picture of the current book page.
As an optional implementation manner, in the second aspect of the embodiment of the present application, the analysis result includes error content; the sending unit is further configured to send an error correction instruction to the second terminal device after the display unit loads and displays the AR scene, so that the second terminal device determines a standard audio corresponding to the error content from standard audios of the spoken language practice content, and sends the standard audio corresponding to the error content to the electronic device;
the electronic device further includes:
and the playing unit is used for receiving and playing the standard audio corresponding to the error content.
A third aspect of an embodiment of the present application discloses an electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to perform part or all of the steps of any one of the methods of the first aspect of the present application.
A fourth aspect of embodiments of the present application discloses a computer-readable storage medium storing a computer program comprising a program code for performing some or all of the steps of any one of the methods of the first aspect of the present application.
A fifth aspect of embodiments of the present application discloses a computer program product, which, when run on a computer, causes the computer to perform part or all of the steps of any one of the methods of the first aspect.
A sixth aspect of the embodiments of the present application discloses an application publishing system, configured to publish a computer program product, where the computer program product, when run on a computer, causes the computer to perform part or all of the steps of any one of the methods of the first aspect.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
By implementing the embodiments of the present application, the page identifier of the current book page is acquired; the AR material corresponding to the page identifier is searched for; an AR scene is constructed according to the AR material and a preview picture of the current book page; and the AR scene is loaded and displayed. In this way, the sense of substitution and immersion in a young child's learning process is stronger, which helps the child focus attention.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without making a creative effort.
Fig. 1 is a schematic flowchart of an AR interactive learning method disclosed in an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a page identifier collection disclosed in an embodiment of the present application;
FIG. 3 is a schematic view of a loaded display of an AR scene disclosed in an embodiment of the present application;
fig. 4 is a schematic flowchart of another AR interactive learning method disclosed in the embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application;
fig. 6 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "comprises," "comprising," and any variations thereof in the embodiments and drawings of the present application are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The AR interactive learning method disclosed in the embodiments of the present application may be applied to an electronic device. The electronic device may be a family education machine, and the operating system of the family education machine may include, but is not limited to, an Android operating system, an iOS operating system, a Symbian operating system, a BlackBerry operating system, a Windows Phone 8 operating system, and the like.
The electronic device may be a terminal device or another electronic device. The terminal device may be referred to as a User Equipment (UE), a Mobile Station (MS), a mobile terminal, an intelligent terminal, and the like, and may communicate with one or more core networks through a Radio Access Network (RAN). For example, the terminal device may be a mobile phone (or a "cellular" phone) or a computer with a mobile terminal; the terminal device may also be a portable, pocket-sized, hand-held, computer-built-in or vehicle-mounted mobile device, or a terminal device in a future NR network, which exchanges voice or data with the radio access network.
The embodiment of the application discloses an AR interactive learning method and electronic equipment, which can help infants to focus attention. The details will be described below.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of an AR interactive learning method disclosed in an embodiment of the present application. The AR interactive learning method shown in fig. 1 may specifically include the following steps:
101. and acquiring a page identifier of the current book page.
In this embodiment of the present application, the page identifier of the current page of the book may be a page number or a two-dimensional code, and the obtaining of the page identifier of the current page of the book includes, but is not limited to, the following implementation manners:
Mode 1: please refer to fig. 2, which is a schematic diagram of page identifier acquisition, where 01 in fig. 2 is a front camera of the electronic device, 02 is a display screen of the electronic device, and 03 is the current book page. Specifically, the front camera of the electronic device is used to shoot the current book page to obtain a page image corresponding to the current book page, and OCR recognition is performed on the page image to obtain the page identifier of the page image.
For example, before the front camera of the electronic device is used to shoot the current book page to obtain the page image corresponding to the current book page, a laser projector of the electronic device may project an indication frame on the desktop to indicate where the book should be placed; the indication frame may also include a direction indication line, and the user places the current book page in the indication frame as indicated by the direction indication line. It should be noted that the indication frame may disappear after the page image is obtained.
Mode 2: outputting a page selection interface on a display screen of the electronic equipment; and detecting the page identification input on the page selection interface by the user.
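Exemplarily, the following Python sketch is illustrative only and is not part of the disclosure; it assumes OpenCV and pytesseract are available, and every name other than the library calls is an assumption. It shows, at a high level, how mode 1 (shoot and recognize) and mode 2 (manual selection) of obtaining the page identifier might look:

import re
from typing import Optional

import cv2
import pytesseract


def get_page_identifier(camera_index: int = 0) -> Optional[str]:
    # Mode 1: shoot the current book page with the front camera and recognize its identifier.
    cap = cv2.VideoCapture(camera_index)
    ok, page_image = cap.read()
    cap.release()
    if not ok:
        return None

    # A two-dimensional code printed on the page takes priority over OCR.
    qr_text, _, _ = cv2.QRCodeDetector().detectAndDecode(page_image)
    if qr_text:
        return qr_text

    # Otherwise run OCR on the page image and look for a page number in the text.
    text = pytesseract.image_to_string(page_image)
    match = re.search(r"\b\d{1,3}\b", text)
    return match.group(0) if match else None


def get_page_identifier_manually() -> str:
    # Mode 2: a page selection interface, here reduced to console input.
    return input("Enter the page number of the current book page: ").strip()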
102. And searching the AR material corresponding to the page identification of the current book page.
In the embodiments of the present application, an AR material library may be preset on the electronic device or on a server, and the AR material library records the correspondence between page identifiers and AR materials. When the AR material library is on the electronic device, no other device is needed to obtain the AR material corresponding to the page identifier, so the acquisition efficiency is high; when the AR material library is on the server, the electronic device requests the AR material corresponding to the page identifier from the server, which can relieve the storage pressure of the electronic device. Specifically, when the AR material library is on the server, finding the AR material corresponding to the page identifier of the current book page may include: sending a material acquisition request carrying the page identifier to the server, so that the server searches the AR material library for the AR material corresponding to the page identifier and sends the AR material corresponding to the page identifier to the electronic device; and receiving the AR material corresponding to the page identifier.
For example, before the material acquisition request carrying the page identifier is sent to the server, a connection request carrying a device identifier of the electronic device may also be sent to the server, so that the server determines whether the electronic device is a legal device according to the device identifier and establishes a connection with the electronic device when it is legal. In this case, sending the material acquisition request carrying the page identifier to the server includes: when it is detected that the connection with the server is successfully established, sending the material acquisition request carrying the page identifier to the server. The legal device may be an electronic device of a specific brand or a device for which a fee has been paid. Further, if the electronic device is a paid device, the server may also determine a device level of the electronic device according to the payment amount of the electronic device, determine a communication link matching the device level, and send the AR material corresponding to the page identifier to the electronic device through the communication link. The payment amount is proportional to the device level of the electronic device: the higher the payment amount, the higher the device level of the electronic device, and the higher the data transmission efficiency of the corresponding communication link.
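Exemplarily, the following Python sketch (illustrative only; the server address, endpoint paths, data keys, and function names are assumptions, not part of the disclosure) shows the two cases of looking up AR material, first in a preset local library and otherwise from the server after a connection request:

from typing import List

import requests

# Illustrative preset AR material library (page identifier -> material file names).
LOCAL_AR_LIBRARY = {
    "12": ["apple.glb", "orange.glb", "banana.glb"],
}

SERVER_URL = "https://ar-materials.example.com"  # placeholder address, an assumption


def find_ar_materials(page_id: str, device_id: str) -> List[str]:
    # Case 1: the AR material library is preset on the electronic device itself.
    if page_id in LOCAL_AR_LIBRARY:
        return LOCAL_AR_LIBRARY[page_id]

    # Case 2: the library is on the server. First send a connection request carrying
    # the device identifier so the server can check that the device is legal, then
    # send the material acquisition request carrying the page identifier.
    requests.post(f"{SERVER_URL}/connect", json={"device_id": device_id}, timeout=5).raise_for_status()
    resp = requests.get(f"{SERVER_URL}/materials", params={"page_id": page_id}, timeout=5)
    resp.raise_for_status()
    return resp.json()["materials"]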
103. And constructing an AR scene according to the AR material and the preview picture of the current book page.
Optionally, after step 102, user information may also be obtained, and a first AR material matching the user information is searched for in the AR material; the constructing an AR scene according to the AR material and the preview picture of the current book page may then include: constructing an AR scene according to the first AR material and the preview picture of the current book page. The user information may include the user's gender and preferences, and because the first AR material is found according to the user information, the constructed AR scene can differ from person to person, so that the user experience is better.
Further, the constructed AR scene may cover all the content on the current book page, or only part of the content on the current book page. The user may autonomously select a construction mode of the AR scene, and the construction modes may include whole construction and partial construction: whole construction uses all the materials in the first AR material, while partial construction uses only the materials related to part of the content in the first AR material. The case of partial construction is described below by way of example. Constructing the AR scene from the first AR material and the preview picture of the current book page may include: when a click operation on the current book page is detected, determining target content corresponding to the click operation; determining a second AR material corresponding to the target content from the first AR material; and constructing an AR scene according to the second AR material and the preview picture of the current book page. In this way, flexible construction of the AR scene can be realized.
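Exemplarily, the following Python sketch (illustrative only; the material fields "tags" and "content" and all function names are assumptions) shows how the first AR material could be filtered by user information, and how partial construction could react to a click target:

from typing import Dict, List, Optional


def match_first_materials(materials: List[Dict], user_info: Dict) -> List[Dict]:
    # Keep the materials whose tags match the user's gender or preferences (first AR material).
    wanted = {user_info.get("gender", "")} | set(user_info.get("preferences", []))
    matched = [m for m in materials if wanted & set(m.get("tags", []))]
    return matched or materials  # fall back to all materials if nothing matches


def build_scene(materials: List[Dict], preview_picture: str,
                clicked_content: Optional[str] = None) -> Dict:
    # Whole construction by default; partial construction when a click target is given.
    if clicked_content is not None:
        materials = [m for m in materials if m.get("content") == clicked_content]  # second AR material
    return {"preview": preview_picture, "materials": materials}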
Alternatively,
after the step 102, the emotion of the user can be identified, and a fourth AR material matched with the emotion of the user is searched in the AR materials; the constructing an AR scene according to the AR material and the preview image of the current book page may include: and constructing an AR scene according to the fourth AR material and the preview picture of the current book page. Wherein, recognizing the emotion of the user can be realized by collecting and analyzing facial expressions, physiological signals or heart rate of the user. Further, constructing the AR scene according to the fourth AR material and the preview picture of the current book page may include: when the click operation aiming at the current book page is detected, determining the target content corresponding to the click operation; determining a second AR material corresponding to the target content from the fourth AR material; and constructing an AR scene according to the second AR material and the preview picture of the current book page.
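Exemplarily, the following Python sketch (illustrative only; the "emotions" field and the function name are assumptions) shows how a fourth AR material matching the recognized emotion could be selected:

from typing import Dict, List


def match_emotion_materials(materials: List[Dict], emotion: str) -> List[Dict]:
    # Keep the materials tagged for the recognized emotion (fourth AR material),
    # e.g. calmer materials for "tired", livelier ones for "happy".
    matched = [m for m in materials if emotion in m.get("emotions", [])]
    return matched or materials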
104. And loading and displaying the AR scene.
In the embodiments of the present application, the display modes of the AR scene can be divided into multi-device display and single-device display. In multi-device display, both the electronic device and a terminal device communicatively connected to the electronic device display the AR scene, so that multiple users can view the AR scene at the same time; in single-device display, the AR scene is displayed only on the display screen of the electronic device. Please refer to fig. 3, which is a loading display diagram of the AR scene: 01 in fig. 3 is the display screen of the electronic device, 02 is the current book page, and fig. 3 further includes an enlarged view of 01, in which 001, 002, and 003 are AR materials in the AR scene and 004 is the preview picture of the current book page.
The following describes a multi-device display: when the current mode of the electronic device is the multi-user mode, the loading and displaying the AR scene may include: loading and displaying the AR scene on a display screen of the electronic equipment; and sending the AR scene to a first terminal device in communication connection with the electronic device, so that the first terminal device displays the AR scene on a display screen of the first terminal device.
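Exemplarily, the following Python sketch (illustrative only; the local rendering stub and the use of a raw socket are assumptions about the transport, not part of the disclosure) shows single-device display plus sending the AR scene to the first terminal device in the multi-user mode:

import json
import socket
from typing import Dict, Optional, Tuple


def render_on_local_screen(scene: Dict) -> None:
    # Placeholder for loading and displaying the AR scene on the device's own display screen.
    print(f"Displaying AR scene with {len(scene['materials'])} AR materials")


def display_scene(scene: Dict, multi_user: bool,
                  first_terminal_addr: Optional[Tuple[str, int]] = None) -> None:
    render_on_local_screen(scene)
    if multi_user and first_terminal_addr:
        # Send the scene to the first terminal device so it can display it on its own screen.
        payload = json.dumps(scene).encode("utf-8")
        with socket.create_connection(first_terminal_addr, timeout=5) as conn:
            conn.sendall(payload)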
After step 104, an interaction instruction for the AR material in the AR scene may also be detected, and the AR material in the AR scene is controlled to execute a preset action indicated by the interaction instruction. Specifically, the interaction instruction may be input by the user in a contact or non-contact manner, which is not limited in the embodiments of the present application. The contact input manner may be pressing a virtual button or touch-sliding on the display screen of the electronic device, and the non-contact input manner may be gesture detection, voice input, or cooperation with a wearable device.
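Exemplarily, the following Python sketch (illustrative only; the instruction names and action names are assumptions) shows how an interaction instruction could be mapped to the preset action that the AR material is controlled to execute:

from typing import Dict, Optional

# Illustrative mapping from interaction instructions to preset actions of the AR material.
PRESET_ACTIONS = {
    "tap": "highlight",
    "swipe_left": "rotate",
    "voice:jump": "jump",
}


def handle_interaction(instruction: str, material: Dict) -> Dict:
    action: Optional[str] = PRESET_ACTIONS.get(instruction)
    if action is not None:
        material = dict(material, current_action=action)  # control the material to execute the action
    return material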
By implementing the method, the sense of substitution and immersion in the learning process can be made stronger based on the AR scene, which helps focus attention; the storage pressure of the electronic device can be relieved; the constructed AR scene can differ from person to person, so that the user experience is better; flexible construction of the AR scene can be realized; and it is convenient for multiple users to watch the AR scene at the same time.
Example two
Referring to fig. 4, fig. 4 is a flowchart illustrating another AR interactive learning method disclosed in an embodiment of the present application, where the AR interactive learning method shown in fig. 4 may be applied to an electronic device of a spoken language practice user, and the AR interactive learning method shown in fig. 4 includes the following steps:
401. acquiring a page identifier of a current book page; wherein, the current book page is the spoken language practice content.
For a detailed description of step 401, please refer to the introduction of step 101 in the first embodiment, which is not described again in this embodiment.
402. And sending the page identification to a second terminal device of the assessment user of the spoken language practice user so that the second terminal device searches for the AR material corresponding to the page identification and the standard audio of the spoken language practice content.
The connection between the electronic device and the second terminal device may be Bluetooth, WLAN, or Wi-Fi, which is not limited in the embodiments of the present application. The way in which the second terminal device searches for the AR material corresponding to the page identifier may be the same as the way in which the electronic device obtains it in the first embodiment; for details, please refer to the description in the first embodiment, which is not limited in the embodiments of the present application.
403. And collecting the practice audio of the spoken language practice user for the spoken language practice content.
404. And sending the practice audio to the second terminal equipment so that the second terminal equipment analyzes the practice audio according to the standard audio of the spoken practice content, determines a third AR material from the AR materials corresponding to the page identification according to an analysis result, and sends the third AR material to the electronic equipment.
The third AR material is explained below by way of example:
(1) The spoken language practice content is "apple, orange, banana", and the AR material corresponding to the page identifier may include an apple image, an orange image, and a banana image. If comparison with the standard audio of the spoken language practice content determines that only the pronunciations of "apple" and "orange" in the user's practice audio are correct, the third AR material is the apple image and the orange image.
(2) The spoken language practice content is "apple, orange, banana", and the AR material corresponding to the page identifier may include a colored apple image, an uncolored apple image, a colored orange image, an uncolored orange image, a colored banana image, and an uncolored banana image. If comparison with the standard audio of the spoken language practice content determines that only the pronunciations of "apple" and "orange" in the user's practice audio are correct, the third AR material is the colored apple image, the colored orange image, and the uncolored banana image.
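Exemplarily, the following Python sketch (illustrative only; the image file names and the function name are assumptions) reproduces example (2): the colored image is chosen for correctly pronounced words and the uncolored image otherwise:

from typing import Dict, List, Set


def select_third_materials(materials: Dict[str, Dict[str, str]],
                           correct_words: Set[str]) -> List[str]:
    # For each practiced word pick the colored image if its pronunciation was judged
    # correct against the standard audio, otherwise the uncolored one.
    return [imgs["colored"] if word in correct_words else imgs["uncolored"]
            for word, imgs in materials.items()]


# Example from the text: only "apple" and "orange" are pronounced correctly.
third_ar_material = select_third_materials(
    {"apple": {"colored": "apple_colored.png", "uncolored": "apple_plain.png"},
     "orange": {"colored": "orange_colored.png", "uncolored": "orange_plain.png"},
     "banana": {"colored": "banana_colored.png", "uncolored": "banana_plain.png"}},
    correct_words={"apple", "orange"},
)
# -> ["apple_colored.png", "orange_colored.png", "banana_plain.png"]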
In the embodiments of the present application, the spoken language practice content may include first spoken language practice content and second spoken language practice content, and the practice audio of the spoken language practice content may include first practice audio and second practice audio, where the first practice audio is generated earlier than the second practice audio. Based on this, sending the practice audio to the second terminal device may include, but is not limited to, the following implementations:
Mode 1: when it is detected that the collection of the first practice audio is completed, sending the first practice audio to the second terminal device; and when it is detected that the collection of the second practice audio is completed, sending the second practice audio to the second terminal device. In this way, the practice audio is sent in near real time, which can relieve the data processing pressure of the second terminal device and improve the acquisition efficiency of the third AR material.
Mode 2: when it is detected that the collection of the second practice audio is completed, sending the first practice audio and the second practice audio to the second terminal device together. In this way, frequent interaction between the electronic device and the second terminal device is not needed, which helps reduce the power consumption of the electronic device.
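Exemplarily, the following Python sketch (illustrative only; the "send" callback stands in for the unspecified transport to the second terminal device) contrasts mode 1 and mode 2 of sending the practice audio:

from typing import Callable, List


def send_audio_mode1(segments: List[bytes], send: Callable[[bytes], None]) -> None:
    # Mode 1: send each practice-audio segment as soon as its collection is completed,
    # so the second terminal device can analyze it while later segments are still recorded.
    for segment in segments:
        send(segment)


def send_audio_mode2(segments: List[bytes], send: Callable[[bytes], None]) -> None:
    # Mode 2: wait until the last segment is collected, then send everything in one shot,
    # avoiding frequent interaction between the two devices.
    send(b"".join(segments))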
405. A third AR material is received.
406. And constructing an AR scene according to the third AR material and the preview picture of the current book page.
407. And loading and displaying the AR scene.
For a detailed description of step 407, please refer to the introduction of step 104 in the first embodiment, which is not described again in this embodiment. By executing steps 401 to 407, the constructed AR scene can match the practice audio of the spoken language practice user, which makes spoken language practice more interesting.
Optionally, in this embodiment of the application, the analysis result obtained by the second terminal device includes error content; after the AR scene is loaded and displayed, an error correction instruction can be sent to the second terminal device, so that the second terminal device determines a standard audio corresponding to the error content from the standard audio of the spoken language practice content, and sends the standard audio corresponding to the error content to the electronic device; and receiving and playing the standard audio corresponding to the error content. By implementing the method, the aim of guiding the spoken language practice user to carry out the spoken language practice can be achieved.
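Exemplarily, the following Python sketch (illustrative only; the two callbacks stand in for the unspecified request and playback interfaces) shows the error-correction flow after the AR scene is displayed:

from typing import Callable, Dict, List


def correct_errors(analysis_result: Dict,
                   request_standard_audio: Callable[[str], bytes],
                   play: Callable[[bytes], None]) -> None:
    # For every piece of error content in the analysis result, ask the second terminal
    # device for the matching standard audio and play it back to the practicing user.
    errors: List[str] = analysis_result.get("error_content", [])
    for wrong in errors:
        play(request_standard_audio(wrong))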
The second terminal device may be connected to a plurality of electronic devices at the same time. For example, the second terminal device is connected to a first electronic device and a second electronic device simultaneously, the user of the first electronic device being a first spoken language practice user and the user of the second electronic device being a second spoken language practice user. When the page identifiers sent by the first electronic device and the second electronic device are the same, the second terminal device analyzes the practice audio of the first spoken language practice user to obtain a first analysis result, analyzes the practice audio of the second spoken language practice user to obtain a second analysis result, determines a target electronic device from the first electronic device and the second electronic device according to the first analysis result and the second analysis result, and sends a virtual resource package to the target electronic device.
The first analysis result may be a first score and the second analysis result may be a second score, and determining the target electronic device from the first electronic device and the second electronic device according to the first analysis result and the second analysis result includes: when the first score is greater than the second score, determining the first electronic device as the target electronic device; and when the second score is greater than the first score, determining the second electronic device as the target electronic device. In this way, competitive spoken language practice can stimulate the users' interest in practicing. It should be noted that when the first score is equal to the second score, the second terminal device may send the virtual resource package to both the first electronic device and the second electronic device, or may send it to neither.
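Exemplarily, the following Python sketch (illustrative only; the return values are just labels) shows how the target electronic device could be determined from the first score and the second score:

from typing import List


def pick_target_devices(first_score: float, second_score: float) -> List[str]:
    # Decide which electronic device(s) receive the virtual resource package.
    if first_score > second_score:
        return ["first"]
    if second_score > first_score:
        return ["second"]
    return ["first", "second"]  # on a tie the package may go to both (or to neither)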
Further, the first analysis result may include the pronunciation problems of the first spoken language practice user, and the second analysis result may include the pronunciation problems of the second spoken language practice user. The second terminal device may further obtain first exercise content according to the first analysis result and send the first exercise content to the first electronic device, and obtain second exercise content according to the second analysis result and send the second exercise content to the second electronic device. The first exercise content is used to correct the pronunciation problems of the first spoken language practice user, and the second exercise content is used to correct the pronunciation problems of the second spoken language practice user. If the user of the second terminal device is a teacher and the users of the electronic devices connected to the second terminal device are students, by implementing the method the teacher can promptly learn the spoken language level of each student and can also make differentiated recommendations of exercise content according to each student's spoken language level.
By implementing the method, the sense of substitution and immersion in the learning process can be made stronger based on the AR scene, which helps focus attention; the storage pressure of the electronic device can be relieved; the constructed AR scene can differ from person to person, so that the user experience is better; flexible construction of the AR scene can be realized; multiple users can conveniently watch the AR scene at the same time; the acquisition efficiency of the third AR material can be improved; and spoken language practice becomes more interesting.
EXAMPLE III
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 5, the electronic device may include:
an obtaining unit 501 is configured to obtain a page identifier of a current book page.
Mode 1: an obtaining unit 501, configured to capture a current book page by using a front-facing camera of an electronic device, to obtain a page image corresponding to the current book page; and performing OCR recognition on the page image to obtain a page identifier of the page image.
Exemplarily, the obtaining unit is further configured to, before the front camera of the electronic device is used to shoot the current book page to obtain the page image corresponding to the current book page, project an indication frame on the desktop by using a laser projector of the electronic device to indicate where the book should be placed; the indication frame may also include a direction indication line, and the user places the current book page in the indication frame as indicated by the direction indication line. It should be noted that the indication frame may disappear after the page image is obtained.
Mode 2: an obtaining unit 501, configured to output a page selection interface on a display screen of an electronic device; and detecting the page identification input on the page selection interface by the user.
The searching unit 502 is configured to search for the AR material corresponding to the page identifier.
The searching unit 502 is configured to send a material obtaining request carrying the page identifier to a server, so that the server searches for an AR material corresponding to the page identifier in an AR material library, and sends the AR material corresponding to the page identifier to an electronic device; and receiving the AR material corresponding to the page identifier.
Illustratively, before sending the material acquisition request carrying the page identifier to the server, the searching unit 502 is further configured to send a connection request carrying an equipment identifier of the electronic equipment to the server, so that the server determines whether the electronic equipment is a legal device according to the equipment identifier, and establishes a connection with the electronic equipment when the electronic equipment is a legal device; and when detecting that the connection with the server is successful, sending a material acquisition request carrying the page identifier to the server.
The processing unit 503 is configured to construct an AR scene according to the AR material corresponding to the page identifier and the preview image of the current book page.
Optionally, in this embodiment of the application, the obtaining unit 501 is further configured to obtain the user information after the searching unit 502 searches for the AR material corresponding to the page identifier; searching a first AR material matched with the user information in the AR materials corresponding to the page identification; the processing unit 503 is specifically configured to construct an AR scene according to the first AR material and the preview image of the current book page.
Further, the way that the processing unit 503 is configured to construct the AR scene according to the first AR material and the preview picture of the current book page may specifically be: the processing unit 503 is configured to determine, when a click operation for a current book page is detected, target content corresponding to the click operation; determining a second AR material corresponding to the target content from the first AR material; and constructing an AR scene according to the second AR material and the preview picture of the current book page.
Alternatively,
the obtaining unit 501 is further configured to identify a user emotion after the searching unit 502 searches for the AR material corresponding to the page identifier, and search for a fourth AR material matching the user emotion in the AR material; the processing unit 503 is specifically configured to construct an AR scene according to the fourth AR material and the preview image of the current book page. Wherein, recognizing the emotion of the user can be realized by collecting and analyzing facial expressions, physiological signals or heart rate of the user.
The way for the processing unit 503 to construct the AR scene according to the fourth AR material and the preview picture of the current book page may specifically be: the processing unit 503 is configured to determine, when a click operation for a current book page is detected, target content corresponding to the click operation; determining a second AR material corresponding to the target content from the fourth AR material; and constructing an AR scene according to the second AR material and the preview picture of the current book page.
And a display unit 504, configured to load and display the AR scene.
In this embodiment of the application, when the current mode of the electronic device is the multi-user mode, the display unit 504 is specifically configured to load and display the AR scene on a display screen of the electronic device; and sending the AR scene to a first terminal device in communication connection with the electronic device, so that the first terminal device displays the AR scene on a display screen of the first terminal device.
The display unit 504 is further configured to detect an interaction instruction for the AR material in the AR scene, and control the AR material in the AR scene to execute a preset action indicated by the interaction instruction.
Example four
Referring to fig. 6, fig. 6 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application. The electronic device shown in fig. 6 is optimized from the electronic device shown in fig. 5, and as shown in fig. 6, when the current book page is spoken language practice content, the electronic device may further include:
a sending unit 505, configured to send the page identifier to the second terminal device of the assessment user of the spoken language practice user after the obtaining unit 501 obtains the page identifier of the current book page, so that the second terminal device searches for the AR material corresponding to the page identifier and the standard audio of the spoken language practice content.
The searching unit 502 is specifically configured to collect practice audio of a spoken language practice user for spoken language practice content; sending the practice audio to the second terminal equipment so that the second terminal equipment analyzes the practice audio according to the standard audio of the spoken practice content, determines a third AR material from the AR materials corresponding to the page identifier according to the analysis result, and sends the third AR material to the electronic equipment; and receiving third AR material.
The manner in which the searching unit 502 is configured to send the practice audio to the second terminal device may include, but is not limited to, the following implementations:
the searching unit 502 is configured to send the first practice audio to the second terminal device when it is detected that the collection of the first practice audio is completed, and send the second practice audio to the second terminal device when it is detected that the collection of the second practice audio is completed;
alternatively,
the searching unit 502 is configured to send the first practice audio and the second practice audio to the second terminal device when it is detected that the collection of the second practice audio is completed.
The processing unit 503 is specifically configured to construct an AR scene according to the third AR material and the preview image of the current book page.
As an optional implementation manner, in the second aspect of the embodiment of the present application, the analysis result may include error content; a sending unit 505, further configured to send an error correction instruction to the second terminal device after the display unit 504 loads and displays the AR scene, so that the second terminal device determines a standard audio corresponding to the error content from the standard audio of the spoken language practice content, and sends the standard audio corresponding to the error content to the electronic device;
the electronic device further includes:
and a playing unit 506, configured to receive and play the standard audio corresponding to the error content.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 7, the electronic device may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
wherein, the processor 702 calls the executable program code stored in the memory 701 to execute part or all of the steps of the method in the above embodiments.
The embodiment of the application discloses a computer readable storage medium which stores a computer program, wherein the computer program enables a computer to execute part or all of the steps of the method in the embodiment.
The embodiment of the application discloses a computer program product, which causes a computer to execute part or all of the steps of the method in the above embodiment when the computer program product runs on the computer.
The embodiments of the present application disclose an application publishing system, configured to publish a computer program product, where the computer program product, when run on a computer, causes the computer to perform part or all of the steps of the method in the above embodiments.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
The AR interactive learning method and the electronic device disclosed in the embodiments of the present application are described in detail above, and specific examples are used herein to explain the principles and implementations of the present application. The sequence numbers of the steps in the specific examples do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application. The units described as separate parts may or may not be physically separate, and some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
The character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship. In the embodiments provided herein, it should be understood that "B corresponding to a" means that B is associated with a from which B can be determined. It should also be understood, however, that determining B from a does not mean determining B from a alone, but may also be determined from a and/or other information. In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit. If the integrated unit is implemented as a software functional unit and sold or used as a stand-alone product, it may be stored in a memory accessible to a computer. Based on such understanding, the technical solution of the present application, which is a part of or contributes to the prior art in essence, or all or part of the technical solution, may be embodied in the form of a software product, stored in a memory, including several requests for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the above-described method of the embodiments of the present application.
The above description of the embodiments is only for the purpose of helping to understand the method of the present application and its core idea; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (9)

1. An AR interactive learning method, which is applied to an electronic device of a spoken language practice user, the method comprising:
acquiring a page identifier of a current book page; the page identification comprises a page number or a two-dimensional code, and the current book page is spoken language practice content;
sending the page identification to a second terminal device of an assessment user of the spoken language practice user, so that the second terminal device searches for an AR material corresponding to the page identification and a standard audio frequency of the spoken language practice content;
acquiring practice audio of the spoken language practice user for the spoken language practice content;
sending the practice audio to the second terminal device, so that the second terminal device analyzes the practice audio according to the standard audio of the spoken language practice content, determines a third AR material from the AR materials corresponding to the page identifier according to an analysis result, and sends the third AR material to the electronic device;
receiving the third AR material;
determining target AR materials from the third AR materials;
constructing an AR scene according to the target AR material and the preview picture of the current book page;
and loading and displaying the AR scene.
2. The method of claim 1, wherein determining target AR material from the third AR material comprises:
acquiring user information;
searching for a first AR material matched with the user information in the third AR material;
the constructing an AR scene according to the target AR material and the preview picture of the current book page comprises:
and constructing an AR scene according to the first AR material and the preview picture of the current book page.
3. The method of claim 2, wherein said constructing an AR scene from said first AR material and a preview of said current book page comprises:
when the click operation aiming at the current book page is detected, determining target content corresponding to the click operation;
determining a second AR material corresponding to the target content from the first AR material;
and constructing an AR scene according to the second AR material and the preview picture of the current book page.
4. The method according to any one of claims 1 to 3, wherein when the current mode of the electronic device is a multi-user mode, the loading and displaying the AR scene comprises:
loading and displaying the AR scene on a display screen of the electronic equipment;
and sending the AR scene to a first terminal device in communication connection with the electronic device, so that the first terminal device displays the AR scene on a display screen of the first terminal device.
5. The method of claim 1, wherein the analysis results include error content; after the loading displays the AR scene, the method further comprises:
sending an error correction instruction to the second terminal device, so that the second terminal device determines a standard audio corresponding to the error content from standard audio of the spoken language practice content, and sends the standard audio corresponding to the error content to the electronic device;
and receiving and playing the standard audio corresponding to the error content.
6. An electronic device, comprising:
the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring a page identifier of a current book page, the page identifier comprises a page number or a two-dimensional code, and the current book page is spoken language practice content;
the searching unit is used for sending the page identifier to a second terminal device of an assessment user of the spoken language practice user, so that the second terminal device searches for the AR material corresponding to the page identifier and the standard audio of the spoken language practice content; acquiring practice audio of the spoken language practice user for the spoken language practice content; sending the practice audio to the second terminal device, so that the second terminal device analyzes the practice audio according to the standard audio of the spoken language practice content, determines a third AR material from the AR materials corresponding to the page identifier according to an analysis result, and sends the third AR material to the electronic device; receiving the third AR material; and determining target AR material from the third AR material;
the processing unit is used for constructing an AR scene according to the target AR material and the preview picture of the current book page;
and the display unit is used for loading and displaying the AR scene.
7. The electronic device of claim 6, wherein the manner in which the search unit is configured to determine the target AR material from the third AR material specifically comprises:
The searching unit is used for acquiring user information; searching a first AR material matched with the user information in the third AR material;
the method for constructing the AR scene according to the target AR material and the preview image of the current book page by the processing unit specifically includes:
and the processing unit is used for constructing an AR scene according to the first AR material and the preview picture of the current book page.
8. An electronic device, characterized in that the electronic device comprises:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute part or all of the steps of the method according to any one of claims 1 to 5.
9. A computer-readable storage medium having stored thereon a computer program comprising instructions for carrying out some or all of the steps of the method according to any one of claims 1 to 5.
CN202010484984.0A 2020-06-01 2020-06-01 AR interactive learning method and electronic equipment Active CN111724638B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010484984.0A CN111724638B (en) 2020-06-01 2020-06-01 AR interactive learning method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010484984.0A CN111724638B (en) 2020-06-01 2020-06-01 AR interactive learning method and electronic equipment

Publications (2)

Publication Number Publication Date
CN111724638A CN111724638A (en) 2020-09-29
CN111724638B true CN111724638B (en) 2022-07-29

Family

ID=72565662

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010484984.0A Active CN111724638B (en) 2020-06-01 2020-06-01 AR interactive learning method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111724638B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362662A (en) * 2021-06-30 2021-09-07 重庆五洲世纪文化传媒有限公司 AR-based preschool education system
CN113742500A (en) * 2021-07-15 2021-12-03 北京墨闻教育科技有限公司 Situational scene teaching interaction method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2534793A1 (en) * 2006-01-30 2007-07-30 Sandro Micieli Intelligent medical device - imd
CN103996314A (en) * 2014-05-22 2014-08-20 南京奥格曼提软件科技有限公司 Teaching system based on augmented reality
CN106023692A (en) * 2016-05-13 2016-10-12 广东博士早教科技有限公司 AR interest learning system and method based on entertainment interaction
CN110471530A (en) * 2019-08-12 2019-11-19 苏州悠优互娱文化传媒有限公司 It is a kind of based on children's book equipped AR interactive learning method, apparatus, medium
CN110609833A (en) * 2019-09-19 2019-12-24 广东小天才科技有限公司 Book page number identification method and device, family education machine and storage medium

Also Published As

Publication number Publication date
CN111724638A (en) 2020-09-29

Similar Documents

Publication Publication Date Title
CN108847214B (en) Voice processing method, client, device, terminal, server and storage medium
US20220013026A1 (en) Method for video interaction and electronic device
CN106355429A (en) Image material recommendation method and device
CN111724638B (en) AR interactive learning method and electronic equipment
TW201104644A (en) Interactive information system, interactive information method, and computer readable medium thereof
CN108304762B (en) Human body posture matching method and device, storage medium and terminal
CN109241301A (en) Resource recommendation method and device
CN108877334B (en) Voice question searching method and electronic equipment
CN108833991A (en) Video caption display methods and device
CN111077996B (en) Information recommendation method and learning device based on click-to-read
CN108846030B (en) method, system, electronic device and storage medium for visiting official website
EP4345591A1 (en) Prop processing method and apparatus, and device and medium
CN112306442A (en) Cross-device content screen projection method, device, equipment and storage medium
CN107870904A (en) A kind of interpretation method, device and the device for translation
CN108881979B (en) Information processing method and device, mobile terminal and storage medium
CN111984180B (en) Terminal screen reading method, device, equipment and computer readable storage medium
CN110767005A (en) Data processing method and system based on intelligent equipment special for children
CN111639158B (en) Learning content display method and electronic equipment
CN111400539A (en) Voice questionnaire processing method, device and system
CN111723606A (en) Data processing method and device and data processing device
CN111523343B (en) Reading interaction method, device, equipment, server and storage medium
CN112165627A (en) Information processing method, device, storage medium, terminal and system
CN109542297A (en) The method, apparatus and electronic equipment of operation guiding information are provided
CN108280184B (en) Test question extracting method and system based on intelligent pen and intelligent pen
WO2023103917A1 (en) Speech control method and apparatus, and electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant