CN112001990B - Scene-based data processing method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN112001990B
CN112001990B (application CN202010757988.1A)
Authority
CN
China
Prior art keywords
spoken language
learning
scene
user account
spoken
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010757988.1A
Other languages
Chinese (zh)
Other versions
CN112001990A (en)
Inventor
Zhang Xingyi
Hu Lifeng
Lin Zixuan
Tian Ye
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Hongen Perfect Future Education Technology Co ltd
Original Assignee
Tianjin Hongen Perfect Future Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Hongen Perfect Future Education Technology Co ltd filed Critical Tianjin Hongen Perfect Future Education Technology Co ltd
Priority to CN202010757988.1A priority Critical patent/CN112001990B/en
Publication of CN112001990A publication Critical patent/CN112001990A/en
Application granted granted Critical
Publication of CN112001990B publication Critical patent/CN112001990B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00Teaching not covered by other main groups of this subclass
    • G09B19/06Foreign languages
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Multimedia (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention provides a scene-based data processing method and device, a storage medium, and an electronic device. The method comprises the following steps: assigning a spoken language learning task according to the spoken language grade of a user account; selecting a first learning scene from a scene map, or matching the first learning scene according to the spoken language learning task; and controlling non-player characters (NPCs) in the first learning scene to demonstrate the spoken language learning task, wherein the first learning scene contains a plurality of NPCs, each associated with the learning corpus of at least one spoken language learning task. The invention solves the technical problem in the related art that a spoken language learning task cannot be demonstrated within a learning scene, provides an online mode of spoken language learning that teaches through engaging interaction, and improves the user's motivation to learn.

Description

Scene-based data processing method and device, storage medium and electronic device
Technical Field
The present invention relates to the field of data processing, and in particular, to a scene-based data processing method and apparatus, a storage medium, and an electronic apparatus.
Background
In the related art, to master a language such as English, users rely on learning software, such as dictionary and translation applications, to strengthen language ability through learning tasks.
In the related art, online spoken language learning typically presents a spoken language environment, derives some dialogue corpus for that environment, and has the system read the corpus aloud while the user repeats after it on a mobile phone. Alternatively, learning with a real tutor through remote online spoken dialogue imposes high costs and demanding network requirements, so its adoption rate is low. These online spoken language learning modes lack incentive measures that stimulate the user's desire to learn; user motivation is weak and the course abandonment rate is high.
In view of the above problems in the related art, no effective solution has been found yet.
Disclosure of Invention
The embodiment of the invention provides a scene-based data processing method and device, a storage medium and an electronic device.
According to an embodiment of the present invention, there is provided a scene-based data processing method including: according to the spoken language grade of the user account, a spoken language learning task is distributed; selecting a first learning scene from a scene map, or matching the first learning scene according to the spoken language learning task; and controlling non-player character NPCs in the first learning scene to demonstrate the spoken language learning task, wherein the first learning scene comprises a plurality of NPCs, and each NPC is associated with a learning corpus of at least one spoken language learning task.
Optionally, assigning the spoken language learning task according to the spoken language grade of the user account includes: determining the spoken language grade according to the learning progress of the user account; selecting from a spoken language corpus a spoken language course matching the spoken language grade, wherein the spoken language course includes a plurality of spoken language learning tasks of the same difficulty; and assigning the user account a spoken language learning task in the spoken language course.
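As a rough illustration (not part of the patent), the grade-to-course selection above might be sketched as follows; the function name, the 25%-per-grade rule, and the corpus layout are assumptions invented for this sketch.

```python
# Hypothetical sketch: derive a spoken language grade from learning progress,
# then pick the course whose tasks all share that grade's difficulty.
def select_course(progress_pct: float, corpus: dict) -> dict:
    """Map learning progress (0-100) to a grade, then return that course."""
    # Assumption: one grade per 25% of overall progress, capped at the
    # highest grade present in the corpus.
    grade = min(int(progress_pct // 25) + 1, len(corpus))
    return {"grade": grade, "tasks": corpus[grade]}
```

For example, a user at 30% progress would be placed in the grade-2 course under this assumed rule.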
Optionally, assigning a spoken language learning task to the user account in the spoken language course includes one of: generating a route guidance marker in the first learning scene, wherein the route guidance marker points to a first NPC in the first learning scene, and after the PCC controlled by the user account moves to the first NPC, assigning the user account the spoken language learning task corresponding to the first NPC in the spoken language course; or responding to a selection instruction for a second NPC, and after the PCC controlled by the user account moves to the second NPC, assigning the user account the spoken language learning task corresponding to the second NPC in the spoken language course.
Optionally, controlling the NPC in the first learning scene to demonstrate the spoken language learning task includes: controlling the NPC in the first learning scene to broadcast the corpus audio of the spoken language learning task as guidance; and collecting response audio or follow-along audio corresponding to the corpus audio through an audio interface of the spoken language learning client.
Optionally, assigning the spoken language learning task according to the spoken language grade of the user account includes: determining the role grade of a third NPC encountered by the PCC controlled by the user account in the first learning scene; judging whether the spoken language grade of the user account is greater than or equal to the role grade of the third NPC, wherein the role grade indicates the highest course grade of the spoken language learning tasks corresponding to the third NPC; if the spoken language grade of the user account is greater than or equal to the role grade of the third NPC, assigning the user account a first spoken language learning task corresponding to the spoken language grade; and if the spoken language grade of the user account is less than the role grade of the third NPC, assigning the user account a second spoken language learning task, wherein the third NPC is associated with both the first and the second spoken language learning task, and the difficulty of the second spoken language learning task is lower than that of the first.
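A minimal sketch of the level-gated assignment just described, under stated assumptions: the task table, NPC identifier, and function names are invented for illustration and do not come from the patent.

```python
# Hypothetical data: each NPC is associated with tasks indexed by course grade.
SPOKEN_TASKS = {
    "npc_frog": {1: "greetings_basic", 2: "greetings_dialogue", 3: "greetings_roleplay"},
}

def assign_task(user_grade: int, npc_id: str, npc_role_grade: int) -> str:
    """Assign the grade-matched first task, or an easier second task."""
    tasks = SPOKEN_TASKS[npc_id]
    if user_grade >= npc_role_grade:
        # User meets or exceeds the NPC's highest course grade:
        # assign the task at the NPC's role grade (the "first" task).
        return tasks[npc_role_grade]
    # Otherwise assign an easier task at or below the user's own grade
    # (the "second", lower-difficulty task).
    return tasks[max(g for g in tasks if g <= user_grade)]
```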
Optionally, after controlling the non-player character NPC in the first learning scene to demonstrate the spoken language learning task, the method further includes: judging whether the spoken language learning task is completed; if the spoken language learning task is completed, allocating a first virtual asset to the user account; judging whether the spoken language course to which the spoken language learning task belongs is completed; and if that course is completed, allocating a second virtual asset to the user account.
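The two-tier reward above could be sketched as follows; the coin amounts, wallet shape, and function name are assumptions for illustration only.

```python
# Hedged sketch of the task/course reward settlement described above.
def settle_rewards(task_done: bool, course_done: bool, wallet: dict,
                   task_coins: int = 10, course_coins: int = 50) -> dict:
    """Credit the first virtual asset for a finished task and the
    second virtual asset for a finished course; return the wallet."""
    if task_done:
        wallet["coins"] = wallet.get("coins", 0) + task_coins    # first virtual asset
    if course_done:
        wallet["coins"] = wallet.get("coins", 0) + course_coins  # second virtual asset
    return wallet
```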
Optionally, after the first virtual asset is allocated to the user account, the method further includes one of: redeeming a first virtual product in an online mall using the first virtual asset; redeeming a virtual ornament for the PCC controlled by the user account in an online mall using the first virtual asset; or creating a virtual building in a second learning scene of the scene map using the first virtual asset.
Optionally, creating the virtual building in the second learning scene of the scene map using the first virtual asset includes: selecting a building position for the virtual building to be placed in the second learning scene; redeeming the virtual building using the first virtual asset; and rendering the building animation of the virtual building and loading the virtual building into the second learning scene.
Optionally, selecting the first learning scene in the scene map includes one of: selecting a street learning scene in the scene map; or selecting a recreation-ground learning scene in the scene map.
According to another embodiment of the present invention, there is provided a scene-based data processing apparatus including: a first allocation module, used for assigning spoken language learning tasks according to the spoken language grade of the user account; a selection module, used for selecting a first learning scene in the scene map or matching the first learning scene according to the spoken language learning task; and a control module, used for controlling non-player characters (NPCs) in the first learning scene to demonstrate the spoken language learning task, wherein the first learning scene includes a plurality of NPCs, each associated with the learning corpus of at least one spoken language learning task.
Optionally, the first allocation module includes: the determining unit is used for determining the spoken language grade according to the learning progress of the user account; the selecting unit is used for selecting a spoken language course matched with the spoken language grade in a spoken language corpus, wherein the spoken language course comprises a plurality of spoken language learning tasks with the same difficulty; the first allocation unit is used for allocating a spoken language learning task for the user account in the spoken language course.
Optionally, the first allocation unit includes one of: a first allocation subunit configured to generate a route guidance marker in the first learning scene, wherein the route guidance marker points to a first NPC in the first learning scene, and to assign the user account the spoken language learning task corresponding to the first NPC in the spoken language course after the PCC controlled by the user account moves to the first NPC; or a second allocation subunit configured to respond to a selection instruction for a second NPC, and to assign the user account the spoken language learning task corresponding to the second NPC in the spoken language course after the PCC controlled by the user account moves to the second NPC.
Optionally, the control module includes: a control unit, used for controlling the NPC in the first learning scene to broadcast the corpus audio of the spoken language learning task as guidance; and a collection unit, used for collecting response audio or follow-along audio corresponding to the corpus audio through the audio interface of the spoken language learning client.
Optionally, the first allocation module includes: a determining unit, used for determining the role grade of a third NPC encountered by the PCC controlled by the user account in the first learning scene; a judging unit, used for judging whether the spoken language grade of the user account is greater than or equal to the role grade of the third NPC, wherein the role grade indicates the highest course grade of the spoken language learning tasks corresponding to the third NPC; and a second allocation unit, used for assigning the user account a first spoken language learning task corresponding to the spoken language grade if the spoken language grade of the user account is greater than or equal to the role grade of the third NPC, and for assigning the user account a second spoken language learning task if the spoken language grade of the user account is less than the role grade of the third NPC, wherein the third NPC is associated with both the first and the second spoken language learning task, and the difficulty of the second spoken language learning task is lower than that of the first.
Optionally, the apparatus further includes: a first judging module, used for judging whether the spoken language learning task is completed after the control module controls the non-player character NPC in the first learning scene to demonstrate the spoken language learning task; a second allocation module, used for allocating a first virtual asset to the user account if the spoken language learning task is completed; a second judging module, used for judging whether the spoken language course to which the spoken language learning task belongs is completed; and a third allocation module, used for allocating a second virtual asset to the user account if the spoken language course to which the spoken language learning task belongs is completed.
Optionally, the apparatus further includes: a first redemption module, used for redeeming a first virtual product in an online mall using the first virtual asset after the second allocation module allocates the first virtual asset to the user account; a second redemption module, used for redeeming a virtual ornament for the PCC controlled by the user account in an online mall using the first virtual asset after the second allocation module allocates the first virtual asset to the user account; and a creation module, used for creating a virtual building in a second learning scene of the scene map using the first virtual asset after the second allocation module allocates the first virtual asset to the user account.
Optionally, the creating module includes: a selection unit for selecting a building position of a virtual building to be laid in the second learning scene; a redemption unit for redeeming the virtual building using the first virtual asset; and the paving unit is used for rendering the building animation of the virtual building and loading the virtual building in the second learning scene.
Optionally, the selection module includes one of: a first selection unit for selecting a street learning scene in the scene map; and the second selection unit is used for selecting the recreation ground learning scene in the scene map.
According to a further embodiment of the invention, there is also provided a storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the invention, there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the invention, a first learning scene is selected in the scene map, a spoken language learning task is then assigned according to the spoken language grade of the user account, and finally the non-player character (NPC) in the first learning scene is controlled to demonstrate the spoken language learning task. The spoken language learning task in this embodiment includes learning or testing corpus material, such as words, in the listening and speaking dimensions; for example, the user repeats an English word on the client, or practices word pronunciation on the client. An immersive spoken language course environment is constructed through the NPCs in the learning scene to demonstrate the spoken language learning task, which solves the technical problem in the related art that a spoken language learning task cannot be demonstrated in a learning scene, provides an online mode of spoken language learning that teaches through entertainment, and improves the user's motivation to learn.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a block diagram of a hardware architecture of a scene-based data processing handset according to an embodiment of the invention;
FIG. 2 is a flow chart of a scenario-based data processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a scene map in an embodiment of the invention;
FIG. 4 is a schematic diagram of generating route guidance markers in a learning scenario according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a spoken language learning task demonstrated in a learning scenario, in accordance with an embodiment of the present invention;
FIG. 6 is a schematic diagram of redeeming virtual decorations in an online marketplace according to an embodiment of the present invention;
fig. 7 is a block diagram of a scene-based data processing apparatus according to an embodiment of the present invention.
Detailed Description
To help those skilled in the art better understand the solution of the present application, the technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of the present application. It should be noted that, where no conflict arises, the embodiments and the features in the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
The method embodiment provided in the first embodiment of the present application may be performed in a mobile phone, a tablet, a computer, a wearable device, or a similar electronic terminal. Taking the operation on a mobile phone as an example, fig. 1 is a hardware structure block diagram of a scene-based data processing mobile phone according to an embodiment of the present invention. As shown in fig. 1, the handset 10 may include one or more (only one is shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 104 for storing data, and optionally, a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative, and is not intended to limit the structure of the mobile phone. For example, the handset 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a mobile phone program, for example, a software program of application software and a module, such as a mobile phone program corresponding to a scene-based data processing method in an embodiment of the present invention, and the processor 102 executes the mobile phone program stored in the memory 104, thereby performing various functional applications and data processing, that is, implementing the method described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the handset 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communications provider of the handset 10. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
In this embodiment, a scene-based data processing method is provided, and fig. 2 is a flowchart of a scene-based data processing method according to an embodiment of the present invention, as shown in fig. 2, where the flowchart includes the following steps:
Step S202, assigning a spoken language learning task according to the spoken language grade of a user account;
the user account in this embodiment is an account that a user logs in on an application program of a client, corresponds to a target user, logs in an account when the user learns on the client, and associates a course library, user data and learning progress on different clients through the account.
The spoken language learning task is a learning task executed by the user through interactive information on line, and comprises any form of spoken language learning modes such as system recommendation, user self-selected scene dialogue, exercise dialogue and the like.
Step S204, selecting a first learning scene from the scene map, or matching the first learning scene according to the spoken language learning task;
When matching the first learning scene according to the spoken language learning task, the following two implementations are available. In the first, an NPC is matched according to the spoken language learning task, and the first learning scene is then matched according to the NPC, where a system mapping table records the mapping between spoken language learning tasks and NPCs, and the scene map records the mapping between NPCs and learning scenes. Each learning scene further includes a plurality of different NPCs (for example, virtual characters such as a rabbit, a nurse, a piglet, and a frog) distributed at different positions of the learning scene, such as a fruit shop, a shopping mall, a restaurant, or a sidewalk in a commercial street scene. Clicking an NPC in the learning scene, or clicking the position where the NPC is located (fruit shop, shopping mall, restaurant, sidewalk, and the like), directly controls the PCC controlled by the user account to move to the position of the NPC and triggers the spoken language learning task in the learning scene. Each NPC belongs to only one learning scene in the scene map. The path-finding scheme for reaching an NPC in the scene map includes: converting the scene world of the first learning scene into a two-dimensional data structure diagram; determining a starting position (the position of the PCC) and a target position (the position of the NPC); and calculating the shortest path from the starting position to the target position in the data structure diagram using a heuristic algorithm.
Optionally, when converting the scene world of the first learning scene into the data structure diagram, the scene map is converted into a data structure diagram in which one learning scene includes a plurality of nodes (each NPC is encapsulated as a node in the data structure); any adjacent points are connected to form the edges of a scene area, or the activity range of each NPC in the scene world forms a scene area. An adjacency list stores the data structure diagram in memory: each node holds pointers to its adjacent nodes, and the complete node set of the scene map is stored in a standard data structure container. The first step in implementing path finding in a learning scene is representing the scene world with a data structure diagram: partition the scene world into a plurality of contiguous scene areas (the scene areas may be square grids, waypoints, navigation meshes, and the like), and set a state attribute on each scene area indicating whether an NPC is present in the current scene area. When the shortest path from the starting position to the target position is calculated with a heuristic algorithm, the heuristic is denoted h(x), and the closer the heuristic result is to the true cost, the better. There are two ways of calculating the heuristic: the Manhattan distance and the Euclidean distance. The Manhattan distance is applied in a 2D scene world and is calculated as follows:
h(x) = |start.x - end.x| + |start.y - end.y|;
The second method of computing the heuristic is the Euclidean distance. This heuristic uses the standard distance formula and estimates a straight-line path. Unlike the Manhattan distance, the Euclidean distance can also be used in other path-finding representations, such as waypoints or navigation meshes, to calculate the heuristic. In our 2D grid, the Euclidean distance is:
h(x) = sqrt((start.x - end.x)^2 + (start.y - end.y)^2);
where start(x, y) is the coordinates of the starting position and end(x, y) is the coordinates of the target position.
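The heuristic search above can be sketched as an A*-style search on a 4-connected 2D grid using the Manhattan heuristic. This is a minimal illustration, not the patent's implementation; the grid encoding (0 = walkable, 1 = blocked) is an assumption.

```python
import heapq

def manhattan(a, b):
    # h(x) = |start.x - end.x| + |start.y - end.y|, per the formula above
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def shortest_path(grid, start, goal):
    """Heuristic (A*-style) shortest-path search on a 4-connected 2D grid.
    0 marks a walkable scene area, 1 marks a blocked one."""
    rows, cols = len(grid), len(grid[0])
    best_g = {start: 0}                      # best known cost to each cell
    frontier = [(manhattan(start, goal), start, [start])]
    while frontier:
        _, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dx, pos[1] + dy)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                g = best_g[pos] + 1
                if g < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g
                    # f = g + h orders the frontier by estimated total cost
                    heapq.heappush(frontier, (g + manhattan(nxt, goal),
                                              nxt, path + [nxt]))
    return None  # NPC unreachable from the PCC's position
```

Because the Manhattan heuristic never overestimates the remaining cost on a 4-connected grid, the first time the goal is popped the path found is a shortest one.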
The matching flow of the second way includes: extracting keywords of the learning corpus in the spoken language learning task, and matching the first learning scene based on those keywords, where the system mapping table records the mapping between keywords and learning scenes. In some examples, the keywords may be "My name", "This is name", "you name", "Feeling fine", and the like.
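As a hedged sketch of the keyword-based matching just described, the mapping table below and all scene/keyword names are invented for illustration; the patent does not specify the scoring rule.

```python
# Hypothetical keyword -> scene mapping table (assumed contents).
SCENE_KEYWORDS = {
    "street": ("my name", "this is"),
    "recreation_ground": ("feeling fine",),
}

def match_scene(corpus_sentences):
    """Score each learning scene by keyword hits in the task's corpus
    and return the best-scoring scene."""
    scores = dict.fromkeys(SCENE_KEYWORDS, 0)
    for sentence in corpus_sentences:
        text = sentence.lower()
        for scene, keywords in SCENE_KEYWORDS.items():
            scores[scene] += sum(1 for kw in keywords if kw in text)
    return max(scores, key=scores.get)
```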
Alternatively, selecting the first learning scene in the scene map may be, but is not limited to: selecting a street learning scene in the scene map; or selecting a recreation-ground learning scene, such as a school, a party, or a stadium, in the scene map. Fig. 3 is a schematic diagram of a scene map in an embodiment of the present invention, illustrating three learning scenes: a mall, a recreation ground, and a home. A learning scene includes a scene background, NPCs (Non-Player Characters, i.e., characters not controlled by the player), the PCC (Player-Controlled Character) controlled by the user account, and the like.
Step S206, controlling non-player character NPC in a first learning scene to demonstrate a spoken language learning task, wherein the first learning scene comprises a plurality of NPCs, and each NPC is associated with a learning corpus of at least one spoken language learning task;
optionally, the data such as corpus resources of the spoken language course can be stored in the client and the cloud at the same time, so that data migration and synchronization of different clients can be realized.
In one implementation of this embodiment, after controlling the non-player character NPC in the first learning scene to demonstrate the spoken language learning task, the method further includes: generating a sub-learning scene in the first learning scene; generating a spoken language review task for the spoken language learning task; and demonstrating the spoken language review task in the sub-learning scene.
Optionally, generating the sub-learning scene in the first learning scene includes: selecting the N NPCs (N ≥ 1) with the highest click rates in the first learning scene; acquiring the exclusive virtual decoration assets redeemed by the user account when the spoken language learning task was completed; and creating the sub-learning scene in the first learning scene with the N NPCs and the exclusive virtual decoration assets as scene elements.
The designated NPCs the user likes can be determined by calculating the user's click rate on each NPC in the first learning scene: the higher the click rate, the stronger the preference. The click rate can be calculated by counting clicks during the demonstration of the spoken language learning task. Scene decorations of the first learning scene can be added to the sub-learning scene. After the user account completes the spoken language learning task, the system rewards virtual gold coins or an exclusive virtual decoration asset, and the virtual gold coins can be used to redeem exclusive virtual decoration assets unique to the user account.
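Selecting the N most-clicked NPCs for the sub-scene might look like the sketch below; the click-count bookkeeping itself is assumed to happen elsewhere, and the names are illustrative.

```python
# Hedged sketch: pick the N NPCs with the highest click counts as
# sub-learning-scene elements, most-clicked (best-liked) first.
def pick_sub_scene_npcs(click_counts: dict, n: int) -> list:
    """Return the N NPC ids with the highest click counts."""
    return sorted(click_counts, key=click_counts.get, reverse=True)[:n]
```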
Optionally, generating the spoken language review task includes: after the spoken language learning task is completed, controlling an NPC in the first learning scene to demonstrate a spoken language test task for the spoken language learning task, where the spoken language test task uses the same corpus resources as the spoken language learning task; and obtaining the test result of the spoken language test task, extracting from the test result the designated corpus resources that failed the test, and generating the spoken language review task from the designated corpus resources.
In the spoken language course, after the spoken language learning task is finished, a spoken self-description step is additionally set as a spoken language test (that is, the user performs self-description exercises using several learned corpora). The spoken audio in the spoken language test task is acquired, and its matching degree with the standard audio (which can also cover speech rate, intonation, and the like) is calculated. If the matching degree is greater than a preset threshold, the test passes; otherwise it fails, and the spoken language learning task containing that audio corpus needs to be reviewed again. The M corpus resources (M is greater than or equal to 1) with the lowest matching degrees are selected from the test result, so that the spoken language learning tasks on which the user scored lower in the first learning scene are matched into the sub-scene as the spoken language review task. Corpus assets and spoken language review tasks are thereby matched and generated automatically, improving the utilization rate of corpus resources without developers having to design additional review topics.
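The review-task generation described above could be sketched as follows (the pass threshold, corpus names, and matching scores are illustrative placeholders):

```python
def build_review_task(test_results, threshold, m):
    """Pick the M corpus resources the user matched worst.

    test_results: dict mapping corpus resource id -> matching degree
    in [0, 1] against the standard audio. Resources at or below the
    threshold fail the test and become review candidates.
    """
    failed = {c: d for c, d in test_results.items() if d <= threshold}
    worst_first = sorted(failed, key=failed.get)  # lowest matching degree first
    return worst_first[:m]

results = {"hello": 0.95, "triangle": 0.40, "my_name": 0.55, "goodbye": 0.72}
print(build_review_task(results, threshold=0.6, m=2))  # → ['triangle', 'my_name']
```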
Through the above steps, a spoken language learning task is assigned according to the spoken language grade of the user account, and a first learning scene is selected in the scene map, either matched according to the spoken language learning task or chosen by the user in the scene map; finally, the non-player character NPC in the first learning scene is controlled to demonstrate the spoken language learning task. The spoken language learning task in this embodiment covers learning or testing in the listening and speaking dimensions for corpus material such as words, for example, the user reading certain English words after the client, or practicing word pronunciation on the client. By having the NPCs in the learning scene demonstrate the spoken language learning task, an immersive spoken language course environment is constructed, which solves the technical problem in the related art that spoken language learning tasks cannot be demonstrated in a learning scene, provides an online mode of spoken language learning that combines teaching with scene-based play, and improves the user's enthusiasm for learning.
In one implementation of this embodiment, assigning the spoken language learning task according to the spoken language grade of the user account includes:
S11, determining the spoken language grade according to the learning progress of the user account;
the course level of this embodiment may be user-selected or systematically assigned. In one example, the class level may be divided into: spoken children, spoken infants, spoken primary schools, spoken middle schools, etc., are classified according to the European common language reference standard (CEFR), pre-A1, and the obligatory course standard (PEP), or classified into one, two, three, etc., and the course level may be classified according to the vocabulary of the user account, with higher vocabulary and higher spoken level.
S12, selecting a spoken language course matched with the spoken language grade from a spoken language corpus, wherein the spoken language course comprises a plurality of spoken language learning tasks with the same difficulty;
S13, assigning a spoken language learning task to the user account in the spoken language course.
Each course level in this embodiment includes a plurality of spoken language lessons, and the spoken language learning tasks within a lesson have the same or similar difficulty; around different subject matters of that same difficulty, different scenes, grammars, and vocabularies are trained and learned in different dimensions.
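Step S11's grade determination, when graded by vocabulary size as suggested above, might look like the following sketch (the thresholds and grade names are invented for illustration; the patent only states that a larger vocabulary means a higher grade):

```python
def spoken_grade(vocabulary_size):
    """Map the user's mastered vocabulary count to a spoken language grade.

    The band boundaries below are illustrative placeholders.
    """
    if vocabulary_size < 300:
        return "grade 1"
    if vocabulary_size < 800:
        return "grade 2"
    return "grade 3"

print(spoken_grade(150))   # → grade 1
print(spoken_grade(1000))  # → grade 3
```

The returned grade would then drive step S12's lookup of a matching spoken language course in the corpus.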
In some examples of this embodiment, the same learning progression may be set for all users when assigning spoken language learning tasks. Alternatively, spoken language learning tasks of different difficulties can be assigned to users of different levels (the user's language-ability level can be obtained through a pre-test or user input): the higher the level, the harder the initial spoken language learning task, and the lower the level, the easier it is, with difficulty then increasing gradually as the course progresses. In some examples, if the user selects a spoken language learning task of a fixed difficulty (such as first grade or second grade), the difficulty may also stay unchanged; each difficulty grade is assigned a learning scene, all course units are completed in that scene, and the user account selects spoken language learning tasks to learn in the corresponding scene, such as a recreation ground, a zoo, a hotel, or an airport, so that what is learned can be applied in context.
In this embodiment, the system further includes a reward distribution engine configured to distribute virtual assets to user accounts according to spoken language learning tasks. The reward distribution engine is associated with a set of databases storing a plurality of virtual assets, such as skins and props of virtual characters, and each set of virtual assets corresponds to a learning scene. Taking skins as an example, the process by which the reward distribution engine distributes a skin includes: locating the learning scene where the user account currently is; judging whether the user account has completed the corresponding spoken language learning task in that learning scene; and if so, searching for the skin matched with the learning scene and distributing it to the user account. When a user finishes the spoken language learning task of a learning scene, the skin corresponding to that scene, such as an administrator, breeder, waiter, or captain, is awarded, so that spoken language learning tasks are associated with skins, the user's immersion is improved, and by looking at the skins the user can recall which courses have been learned and which spoken language learning tasks have been done. In other examples, during the learning of a spoken language course, each spoken language learning task corresponds to one plot in a learning scene; for example, the course of a hotel scene includes spoken language learning tasks on topics such as check-in, changing rooms, ordering meals, room cleaning, and check-out, and after all the spoken language learning tasks of the hotel scene have been learned in time order, the course of that learning scene is completed.
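The three-step skin-distribution flow of the reward distribution engine can be sketched as below (the scene and skin names follow the examples in the text; the data layout and function names are assumptions):

```python
SKIN_DB = {  # each learning scene maps to one matched skin (illustrative data)
    "zoo": "breeder",
    "hotel": "waiter",
    "airport": "captain",
}

def distribute_skin(user, skin_db=SKIN_DB):
    """Reward the scene-matched skin once the user finishes that scene's task.

    user: dict with the user's current scene and the set of scenes
    whose spoken language learning task has been completed.
    Returns the awarded skin name, or None if the task is not yet done.
    """
    scene = user["current_scene"]              # 1. locate the current scene
    if scene not in user["completed_scenes"]:  # 2. task finished there?
        return None
    return skin_db.get(scene)                  # 3. look up the matching skin

user = {"current_scene": "hotel", "completed_scenes": {"zoo", "hotel"}}
print(distribute_skin(user))  # → waiter
```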
In one implementation of this embodiment, assigning a spoken language learning task to the user account in the spoken language course includes the following implementation modes:
Implementation mode one: generating a route guidance marker in the first learning scene, where the route guidance marker points to a first NPC in the first learning scene; after the PCC controlled by the user account moves to the first NPC, a spoken language learning task corresponding to the first NPC is assigned to the user account in the spoken language course.
Fig. 4 is a schematic diagram of generating a route guidance marker in a learning scene; the route guidance marker points to a first NPC in the learning scene, such as the rabbit of the fruit shop in the figure, and the PCC controlled by the user moves toward the first NPC along a navigation track in the map.
Implementation mode two: responding to a selection instruction for a second NPC; after the PCC controlled by the user account moves to the second NPC, a spoken language learning task corresponding to the second NPC is assigned to the user account in the spoken language course.
The user selects an NPC (namely the second NPC) on the spoken language learning client with a mouse or a touch medium; the system generates a selection instruction, controls the PCC controlled by the user account to move to the second NPC, and assigns the spoken language learning task corresponding to the second NPC.
In another implementation of this embodiment, assigning the spoken language learning task according to the spoken language grade of the user account includes:
S21, determining the role grade of a third NPC that the PCC controlled by the user account meets in the first learning scene;
S22, judging whether the spoken language grade of the user account is greater than or equal to the role grade of the third NPC, where the role grade indicates the highest course grade of the spoken language learning task corresponding to the third NPC;
S23, if the spoken language grade of the user account is greater than or equal to the role grade of the third NPC, a first spoken language learning task corresponding to the spoken language grade is assigned to the user account; if the spoken language grade of the user account is less than the role grade of the third NPC, a second spoken language learning task is assigned to the user account, where the third NPC is associated with both the first spoken language learning task and the second spoken language learning task, and the difficulty of the second spoken language learning task is lower than that of the first spoken language learning task.
The playground scene includes a plurality of NPCs, each NPC corresponds to one learning task, and the NPCs come in several types, each type corresponding to one difficulty level of learning corpus. The virtual character (PCC) controlled by the user moves through the learning scene, randomly or following the route guidance markers, and the learning task corresponding to an NPC (such as a scene dialogue or spoken practice of a word) is triggered automatically or by prompting the user to click. As the user completes parts of random courses and executes learning tasks, spoken language books are unlocked at random (a spoken language book is a control device for a spoken language learning task; through its associated spoken corpus the user operates the spoken language learning task). Each NPC is associated with at least two sets of spoken corpus materials (corresponding to a first spoken language learning task and a second spoken language learning task, respectively).
Preferably, the present invention can set an indication mark in the form of a spinner in the map. The spinner's pointer points to the spoken language tasks corresponding to different NPCs, and by spinning it the player character randomly selects a dialogue NPC within the range of the task library, that is, randomly selects the corresponding spoken language book, which increases the entertainment value of learning.
In some examples, before the spoken language book is triggered, the grade of the NPC is compared with that of the user account. If the NPC grade is less than or equal to the user grade, the normal learning corpus is triggered; if the NPC grade is greater than the user grade, a corpus set for the other spoken language learning task is triggered, containing only simple dialogue, e.g., "I am so busy", "I lost my bag", "My mom is waiting for me", etc.
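The grade comparison of steps S21 to S23 could be sketched as follows (the task contents and difficulty numbers are illustrative; the simple dialogue lines are the ones quoted above):

```python
SIMPLE_LINES = ["I am so busy.", "I lost my bag.", "My mom is waiting for me."]

def pick_task(user_grade, npc_grade, first_task, second_task):
    """Choose which of the NPC's two associated corpus sets to trigger.

    If the user's spoken grade reaches the NPC's role grade, the normal
    (first) task is assigned; otherwise the simpler second task is.
    """
    if user_grade >= npc_grade:
        return first_task
    return second_task

normal = {"topic": "ordering a meal", "difficulty": 3}
simple = {"topic": "small talk", "lines": SIMPLE_LINES, "difficulty": 1}
chosen = pick_task(user_grade=2, npc_grade=3, first_task=normal, second_task=simple)
print(chosen["topic"])  # → small talk
```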
In one example of this embodiment, controlling the NPC in the first learning scene to demonstrate the spoken language learning task includes: controlling the NPC in the first learning scene to guide and broadcast the corpus audio of the spoken language learning task; and collecting the response audio or follow-up reading audio corresponding to the corpus audio through an audio interface of the spoken language learning client.
Optionally, while the NPC guides the corpus audio of the spoken language learning task, text information or pictures corresponding to the corpus audio can be displayed on the interactive interface of the client, and can also be projected synchronously into the real scene through an AR interface. For example, if the corpus audio is the pronunciation of "triangle", the interactive interface may simultaneously display text information such as "triangle", "n. a triangular object; triangle (a percussion instrument); a triangular relationship", together with a mimic graphic such as "△".
Fig. 5 is a schematic diagram of a spoken language learning task demonstrated in a learning scene according to an embodiment of the present invention, where the content of the corpus audio includes: "Hello" / "Hello" / "What's your name?" / "I'm Mia". The corpus audio guided by the NPC is "Hello" and "What's your name?", and the corpus audio for the user to read after is "Hello" or "I'm Mia". The client acquires the user's follow-up response audio through the audio interface and judges whether the reading is correct; if it is, the spoken language learning task is completed.
Besides follow-up reading, the spoken language lessons of this embodiment can be of other types, such as question-and-answer, describing a picture in English, or reading a word aloud from its written form. When a lesson task of a spoken language course is executed, whether the learning task is completed is determined according to the operation behavior and operation result of the user account. For example, whether the user reads a certain English word after the client is the operation behavior and whether that word is followed accurately is the operation result; whether the user practices word pronunciation on the client is the operation behavior and whether the word is read according to the standard phonetic symbols is the operation result.
The corpus resources of this embodiment can be words, sentences, articles, questions, news, and other resources. For a given corpus word, such as an English word, the user can learn and master its spoken usage through different channels and in different dimensions, and the different mastering channels can be implemented on the client through different operation behaviors.
The spoken language course of this embodiment includes two learning modes: scene learning and basic learning (such as vocabulary learning). From the progress of the basic learning, the user's current level can be determined, including how much vocabulary and how many phonetic symbols have been mastered. Each level triggers a task list, and the tasks in the list can be triggered by time, by scene position, or by the NPCs encountered. After a learning task is triggered, the user studies according to it and practices the spoken corpus, and after passing moves on to the next task in the list. When all tasks in the task list are completed, the user exits the scene, or chooses to leave it, and goes to a new scene to do tasks.
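The task-list progression just described could be sketched as follows (the task names are illustrative placeholders):

```python
def next_task(task_list, completed):
    """Return the first task in the level's list not yet passed,
    or None when the list is exhausted and the user may leave the scene."""
    for task in task_list:
        if task not in completed:
            return task
    return None

tasks = ["check_in", "order_meal", "check_out"]
print(next_task(tasks, {"check_in"}))  # → order_meal
print(next_task(tasks, set(tasks)))   # → None
```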
Optionally, after controlling the non-player character NPC in the first learning scene to demonstrate the spoken language learning task, the method further includes: judging whether the spoken language learning task is completed; if it is completed, distributing a first virtual asset to the user account; judging whether the spoken language course to which the spoken language learning task belongs is completed; and if that course is completed, distributing a second virtual asset to the user account.
Each spoken language course of this embodiment (for example, Lesson 1 - My name) has a plurality of spoken language learning tasks. After all the spoken language learning tasks of a course are completed, the progress bar of the course becomes full, a spoken language book (learning control device) is unlocked, and a medal can be obtained by clicking the lucky bag (second virtual asset) on the course progress bar. The spoken language learning tasks of the same course (such as Lesson 1 - My name) can be completed multiple times, a new medal can be obtained each time, and at most 3 medals can be obtained per course.
The virtual assets of this embodiment may be of various types, such as virtual currency, virtual products, and user rights (e.g., rights for unlocking courses, maps, or software function modules, or VIP experience cards).
Optionally, determining whether the spoken language learning task is completed includes: judging whether the response audio (or follow-up reading audio) of the user account while learning the spoken language learning task is correct; if it is correct, determining that the spoken language learning task is completed; if it is incorrect, determining that the spoken language learning task is not completed. In one example, judging whether the response audio is correct includes: calculating the matching degree between the response audio and the standard audio; if the matching degree is greater than a passing value, determining that the response audio is correct; if the matching degree is less than or equal to the passing value, determining that the response audio is incorrect.
In some embodiments, a "correct" response audio spans multiple dimensions, comparable to passing, good, excellent, and failing in examination results: matching degrees at the passing, good, and excellent levels are all regarded by the system as correct and the current learning task is completed, while failing is regarded as incorrect and the learning task is not completed. To distinguish the levels beyond outputting whether the result is correct, an identifier such as an image, an expression, a piece of text, or audio can be matched to each result level, for example matching an excellent result (matching degree greater than 80%) with one expression identifier and a good result (matching degree between 70% and 80%) with another.
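The banded grading here might be sketched as below; note that only the excellent (above 80%) and good (70% to 80%) bands are stated in the text, so the pass value, the lowest band boundary, and the emoticon identifiers are assumptions:

```python
def grade_response(matching_degree, pass_value=0.6):
    """Map a matching degree in [0, 1] to (grade, completed, identifier).

    Excellent/good bands follow the text; the pass value and the
    emoticon identifiers are illustrative assumptions.
    """
    if matching_degree > 0.8:
        return ("excellent", True, "(^o^)")
    if matching_degree > 0.7:
        return ("good", True, "(o_o)")
    if matching_degree > pass_value:
        return ("passing", True, "(._.)")
    return ("failing", False, "(T_T)")  # task counts as not completed

print(grade_response(0.85))  # → ('excellent', True, '(^o^)')
print(grade_response(0.5))   # → ('failing', False, '(T_T)')
```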
In one implementation of this embodiment, after the first virtual asset is assigned to the user account, the first virtual asset may further be redeemed in the following ways:
Exchange mode one: redeeming a first virtual product in the online mall using the first virtual asset, where the first virtual product may be a course, a learning scene, a software function module, or a VIP experience card.
Exchange mode two: exchanging virtual ornaments for the PCC controlled by the user account in the online mall using the first virtual asset. Fig. 6 is a schematic diagram of redeeming virtual ornaments in an online mall according to an embodiment of the present invention, where a plurality of redeemable virtual ornaments are displayed in the display window of the online mall.
The ornaments of the virtual character include headwear, hairstyles, clothes, and the like. In one example, exchange grades may further be set: the more difficult the spoken language learning task, the more elaborate the exchangeable skin and the more attractive the group-photo effect. When the spoken language grade is below a preset grade and the user account completes a spoken language learning task, an ornament corresponding to the preset grade is allocated; when the spoken language grade is above the preset grade and the user account completes a spoken language learning task, an ornament corresponding to that higher grade is allocated. Alternatively, an asset library is associated with each spoken language grade; the higher the grade, the more kinds and the greater the number of virtual assets in the associated library, and later virtual assets can be superimposed onto the earlier skins, for example first a black-and-white cap, then a colored cap, then black-and-white clothes, colored clothes, and then various pendants. In addition, the user may exchange virtual gold medals awarded in other general learning scenes for skins, which are different from the skins acquired in a specific learning scene.
Exchange mode three: creating a virtual building in a second learning scene of the scene map using the first virtual asset.
In one implementation of this embodiment, creating a virtual building in the second learning scene of the scene map using the first virtual asset includes: selecting a building position where the virtual building is to be paved in the second learning scene; redeeming the virtual building using the first virtual asset; and rendering the building animation of the virtual building, loading the virtual building in the second learning scene, and paving the virtual building at the building position after loading is completed.
In the second learning scene (the home in the scene map), the user can purchase props for constructing and laying out virtual buildings using the gold coins awarded during learning, and data such as the buildings in the home are stored in the attribute information of the user through the user data. Different building positions in the home map fit different props, and different props require different amounts of star coins (a type of virtual currency) to purchase; the user's star coins can buy the corresponding props at different positions, or props the user has already purchased can be built at the corresponding positions. Meanwhile, a rendered animation of the building process can be added and the finished result displayed, making construction more realistic: a building animation matched with the prop, such as an animation of workers constructing, is retrieved from the animation material library, the building animation is loaded first, and then the virtual building is loaded into the second learning scene.
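The purchase-and-lay flow for props could be sketched as follows (the prop names, prices, and data layout are invented for illustration; the client would play the build animation after a successful call):

```python
PROP_PRICES = {"fruit_stand": 30, "fountain": 80}  # cost in star coins

def build(user, prop, position, prices=PROP_PRICES):
    """Deduct star coins, then record the prop at the chosen map position.

    Returns True on success, False if coins are short or the spot is taken.
    """
    cost = prices[prop]
    if user["star_coins"] < cost or position in user["buildings"]:
        return False
    user["star_coins"] -= cost          # pay for the prop
    user["buildings"][position] = prop  # lay it at the chosen position
    return True                         # client then plays the build animation

user = {"star_coins": 100, "buildings": {}}
print(build(user, "fruit_stand", (2, 3)))  # → True
print(user["star_coins"])                  # → 70
```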
In this embodiment, a star coin is awarded after each task is completed, or virtual products are awarded directly. In addition, a bonus system can be configured, covering rewards of skins and ornaments for a specific role, online product rights, and delivery of physical commodities. A second virtual product serving as a role skin or ornament can be worn automatically on the master role of the user account (namely, the player-controlled character PCC).
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is preferred. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) that includes instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods according to the embodiments of the present invention.
Example 2
This embodiment also provides a scene-based data processing apparatus, which is used to implement the above embodiments and preferred implementations; what has already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 7 is a block diagram of a scene-based data processing apparatus according to an embodiment of the present invention; as shown in Fig. 7, it includes a first allocation module 70, a selection module 72, and a control module 74, where
a first allocation module 70, configured to allocate a spoken language learning task according to a spoken language grade of a user account;
a selection module 72, configured to select a first learning scenario in the scenario map, or match the first learning scenario according to the spoken language learning task;
the control module 74 is configured to control the non-player characters (NPCs) in the first learning scene to demonstrate the spoken language learning task, where the first learning scene includes a plurality of NPCs, and each NPC is associated with a learning corpus of at least one spoken language learning task.
Optionally, the first allocation module includes: the determining unit is used for determining the spoken language grade according to the learning progress of the user account; the selecting unit is used for selecting a spoken language course matched with the spoken language grade in a spoken language corpus, wherein the spoken language course comprises a plurality of spoken language learning tasks with the same difficulty; the first allocation unit is used for allocating a spoken language learning task for the user account in the spoken language course.
Optionally, the first allocation unit includes one of: a first allocation subunit configured to generate a route guidance marker in the first learning scene, where the route guidance marker points to a first NPC in the first learning scene, and, after the PCC controlled by the user account moves to the first NPC, to assign a spoken language learning task corresponding to the first NPC to the user account in the spoken language course; and a second allocation subunit configured to respond to a selection instruction for a second NPC and, after the PCC controlled by the user account moves to the second NPC, to assign a spoken language learning task corresponding to the second NPC to the user account in the spoken language course.
Optionally, the control module includes: the control unit is used for controlling the NPC in the first learning scene to guide and broadcast the corpus audio of the spoken language learning task; the collection unit is used for collecting response audio or follow-up audio corresponding to the corpus audio through the audio interface of the spoken language learning client.
Optionally, the first allocation module includes: a determining unit, configured to determine the role grade of a third NPC that the PCC controlled by the user account meets in the first learning scene; a judging unit, configured to judge whether the spoken language grade of the user account is greater than or equal to the role grade of the third NPC, where the role grade indicates the highest course grade of the spoken language learning task corresponding to the third NPC; and a second allocation unit, configured to assign a first spoken language learning task corresponding to the spoken language grade to the user account if the spoken language grade of the user account is greater than or equal to the role grade of the third NPC, and to assign a second spoken language learning task to the user account if the spoken language grade of the user account is less than the role grade of the third NPC, where the third NPC is associated with both the first spoken language learning task and the second spoken language learning task, and the difficulty of the second spoken language learning task is lower than that of the first spoken language learning task.
Optionally, the apparatus further includes: the first judging module is used for judging whether the spoken language learning task is completed or not after the control module controls the non-player character NPC in the first learning scene to demonstrate the spoken language learning task; the second distribution module is used for distributing a first virtual asset to the user account if the spoken language learning task is completed; the second judging module is used for judging whether the spoken language course to which the spoken language learning task belongs is completed or not; and the third distribution module is used for distributing a second virtual asset for the user account if the completion of the spoken language course to which the spoken language learning task belongs is completed.
Optionally, the apparatus further includes: a first exchange module, configured to redeem a first virtual product in an online mall using the first virtual asset after the second distribution module distributes the first virtual asset to the user account; a second exchange module, configured to exchange virtual ornaments for the PCC controlled by the user account in the online mall using the first virtual asset after the second distribution module distributes the first virtual asset to the user account; and a creation module, configured to create a virtual building in a second learning scene of the scene map using the first virtual asset after the second distribution module distributes the first virtual asset to the user account.
Optionally, the creating module includes: a selection unit for selecting a building position of a virtual building to be laid in the second learning scene; a redemption unit for redeeming the virtual building using the first virtual asset; and the paving unit is used for rendering the building animation of the virtual building and loading the virtual building in the second learning scene.
Optionally, the selection module includes one of: a first selection unit for selecting a street learning scene in the scene map; and the second selection unit is used for selecting the recreation ground learning scene in the scene map.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Example 3
An embodiment of the invention also provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store a computer program for performing the steps of:
s1, distributing a spoken language learning task according to a spoken language grade of a user account;
s2, selecting a first learning scene from a scene map, or matching the first learning scene according to the spoken language learning task;
and S3, controlling non-player character NPCs in the first learning scene to demonstrate the spoken language learning task, wherein the first learning scene comprises a plurality of NPCs, and each NPC is associated with a learning corpus of at least one spoken language learning task.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute, by means of a computer program, the following steps:
S1, distributing a spoken language learning task according to the spoken language grade of a user account;
S2, selecting a first learning scene from a scene map, or matching the first learning scene according to the spoken language learning task;
S3, controlling non-player characters (NPCs) in the first learning scene to demonstrate the spoken language learning task, wherein the first learning scene comprises a plurality of NPCs, and each NPC is associated with a learning corpus of at least one spoken language learning task.
Alternatively, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations, which are not repeated here.
The foregoing embodiment numbers of the present application are merely for description and do not represent the relative merits of the embodiments.
In the foregoing embodiments of the present application, each embodiment is described with its own emphasis; for any portion not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or various other media capable of storing program code.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make various modifications and adaptations without departing from the principles of the present application, and such modifications and adaptations shall also fall within the protection scope of the present application.

Claims (11)

1. A scene-based data processing method, comprising:
according to the spoken language grade of the user account, a spoken language learning task is distributed;
selecting a first learning scene from a scene map, or matching the first learning scene according to the spoken language learning task;
controlling non-player characters in the first learning scene to demonstrate the spoken language learning task, wherein the first learning scene comprises a plurality of non-player characters, and each non-player character is associated with a learning corpus of at least one spoken language learning task;
the task of distributing the spoken language learning according to the spoken language grade of the user account comprises the following steps:
determining the spoken language grade according to the learning progress of the user account;
selecting a spoken language course matched with the spoken language grade from a spoken language corpus, wherein the spoken language course comprises a plurality of spoken language learning tasks with the same difficulty;
after the player control character is controlled to move to the non-player character, a spoken language learning task is distributed to the user account in the spoken language lesson.
2. The method of claim 1, wherein distributing a spoken language learning task to the user account in the spoken language lesson comprises one of:
generating a route guidance marker in the first learning scene, wherein the route guidance marker points to a first non-player character in the first learning scene; and after the player control character controlled by the user account moves to the first non-player character, distributing a spoken language learning task corresponding to the first non-player character to the user account in the spoken language lesson;
responding to a path guidance instruction of the user; and after the player control character controlled by the user account moves to a second non-player character, distributing a spoken language learning task corresponding to the second non-player character to the user account in the spoken language lesson.
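The first branch of claim 2 (generating a route guidance marker) can be sketched minimally, assuming a one-dimensional scene and hypothetical NPC coordinates, neither of which the patent specifies:

```python
def route_guidance_marker(npcs, player_pos):
    """Generate a route guidance marker pointing at the nearest NPC.

    `npcs` is a list of (name, position) pairs; both the distance metric
    and the marker format are assumptions made for illustration.
    """
    target_name, _ = min(npcs, key=lambda npc: abs(npc[1] - player_pos))
    return {"type": "route_marker", "target": target_name}

npcs = [("vendor", 10), ("guide", 3)]
marker = route_guidance_marker(npcs, player_pos=0)
print(marker)  # -> {'type': 'route_marker', 'target': 'guide'}
```

Once the player control character reaches the marked NPC, the task corresponding to that NPC would be distributed, as the claim describes.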
3. The method of claim 1, wherein controlling non-player characters in the first learning scene to demonstrate the spoken language learning task comprises:
controlling a non-player character in the first learning scene to guide and broadcast corpus audio of the spoken language learning task;
and acquiring response audio or follow-up audio corresponding to the corpus audio through an audio interface of the spoken language learning client.
4. The method of claim 1, wherein distributing a spoken language learning task according to the spoken language grade of the user account comprises:
determining the character grade of a third non-player character encountered by the player control character controlled by the user account in the first learning scene;
judging whether the spoken language grade of the user account is greater than or equal to the character grade of the third non-player character, wherein the character grade is used for indicating the highest course grade of a spoken language learning task corresponding to the third non-player character;
if the spoken language grade of the user account is greater than or equal to the character grade of the third non-player character, distributing a first spoken language learning task corresponding to the spoken language grade to the user account; and if the spoken language grade of the user account is smaller than the character grade of the third non-player character, distributing a second spoken language learning task to the user account, wherein the third non-player character is associated with both the first spoken language learning task and the second spoken language learning task, and the difficulty level of the second spoken language learning task is lower than that of the first spoken language learning task.
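The grade comparison in claim 4 reduces to a single branch; the dictionary layout below is a hypothetical stand-in for the NPC's associated tasks:

```python
def pick_task(spoken_grade, npc):
    """Distribute the first (grade-matched) task when the user account's spoken
    language grade reaches the NPC's character grade; otherwise fall back to
    the easier second task associated with the same NPC."""
    if spoken_grade >= npc["character_grade"]:
        return npc["first_task"]
    return npc["second_task"]

npc = {"character_grade": 3, "first_task": "debate", "second_task": "greeting"}
print(pick_task(4, npc))  # -> debate
print(pick_task(2, npc))  # -> greeting
```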
5. The method of claim 1, wherein after controlling the non-player characters in the first learning scene to demonstrate the spoken language learning task, the method further comprises:
judging whether the spoken language learning task is completed;
if the spoken language learning task is completed, distributing a first virtual asset to the user account;
judging whether the spoken language lesson to which the spoken language learning task belongs is completed;
and if the spoken language lesson to which the spoken language learning task belongs is completed, distributing a second virtual asset to the user account.
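Claim 5's two-stage settlement can be sketched as follows; the asset names are placeholders, not terms from the disclosure:

```python
def settle_rewards(task_completed, lesson_completed):
    """Distribute the first virtual asset for a finished task and, additionally,
    the second virtual asset once the whole spoken language lesson is finished."""
    assets = []
    if task_completed:
        assets.append("first_virtual_asset")
        if lesson_completed:  # a lesson is complete only after its last task
            assets.append("second_virtual_asset")
    return assets

print(settle_rewards(True, False))  # -> ['first_virtual_asset']
print(settle_rewards(True, True))   # -> ['first_virtual_asset', 'second_virtual_asset']
```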
6. The method of claim 5, wherein after the first virtual asset is distributed to the user account, the method further comprises:
redeeming a first virtual product at an online mall using the first virtual asset;
exchanging virtual ornaments of the player control character controlled by the user account at an online mall using the first virtual asset;
creating a virtual building in a second learning scene of the scene map using the first virtual asset.
7. The method of claim 6, wherein creating a virtual building in a second learning scene of the scene map using the first virtual asset comprises:
selecting building positions of virtual buildings to be paved in the second learning scene;
redeeming the virtual building using the first virtual asset;
rendering the building animation of the virtual building, and loading the virtual building into the second learning scene.
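The three steps of claim 7 (select a position, redeem, load) might look like the following; the scene dictionary and pricing are invented for illustration, and rendering is reduced to a state change:

```python
def create_building(scene, position, balance, price):
    """Select a building position, redeem the virtual building with the
    first virtual asset, then load it into the second learning scene."""
    if position in scene["buildings"]:
        raise ValueError("position already occupied")
    if balance < price:
        raise ValueError("insufficient first virtual asset")
    scene["buildings"][position] = "virtual_building"  # stands in for render + load
    return balance - price  # remaining first virtual asset

scene = {"buildings": {}}
remaining = create_building(scene, position=(2, 3), balance=100, price=40)
print(remaining)  # -> 60
```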
8. The method of claim 1, wherein selecting a first learning scenario in the scenario map comprises one of:
selecting a street learning scene from a scene map;
selecting an amusement park learning scene from the scene map.
9. A scene-based data processing apparatus, comprising:
the first distribution module is used for distributing a spoken language learning task according to the spoken language grade of a user account;
the selection module is used for selecting a first learning scene in the scene map or matching the first learning scene according to the spoken language learning task;
the control module is used for controlling non-player characters in the first learning scene to demonstrate the spoken language learning task, wherein the first learning scene comprises a plurality of non-player characters, and each non-player character is associated with a learning corpus of at least one spoken language learning task;
the first distribution module includes: a determining unit, used for determining the spoken language grade according to the learning progress of the user account; a selecting unit, used for selecting a spoken language course matched with the spoken language grade from a spoken language corpus, wherein the spoken language course comprises a plurality of spoken language learning tasks with the same difficulty; and a first distribution unit, used for distributing a spoken language learning task to the user account in the spoken language lesson after the player control character is controlled to move to the non-player character.
10. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of any of claims 1 to 8 when run.
11. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the method of any of claims 1 to 8.
CN202010757988.1A 2020-07-31 2020-07-31 Scene-based data processing method and device, storage medium and electronic device Active CN112001990B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010757988.1A CN112001990B (en) 2020-07-31 2020-07-31 Scene-based data processing method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010757988.1A CN112001990B (en) 2020-07-31 2020-07-31 Scene-based data processing method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN112001990A CN112001990A (en) 2020-11-27
CN112001990B true CN112001990B (en) 2024-01-09

Family

ID=73463334

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010757988.1A Active CN112001990B (en) 2020-07-31 2020-07-31 Scene-based data processing method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN112001990B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112947809A (en) * 2021-01-29 2021-06-11 北京三快在线科技有限公司 Knowledge learning method and device and electronic equipment
CN112908068A (en) * 2021-02-06 2021-06-04 江苏电子信息职业学院 College spoken English conversation interactive system
CN113094146B (en) * 2021-05-08 2023-04-07 腾讯科技(深圳)有限公司 Interaction method, device and equipment based on live broadcast and computer readable storage medium
CN114339303A (en) * 2021-12-31 2022-04-12 北京有竹居网络技术有限公司 Interactive evaluation method and device, computer equipment and storage medium
CN115052194B (en) * 2022-06-02 2023-05-02 北京新唐思创教育科技有限公司 Learning report generation method, device, electronic equipment and storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050017909A (en) * 2003-08-11 2005-02-23 김재욱 On-line Leaning System For Foreign Language And Method Thereof
KR20060001175A (en) * 2004-06-30 2006-01-06 이재호 Methode for learning a language through a online role playing game
KR20060074740A (en) * 2004-12-28 2006-07-03 윤병원 Language learning system and method using voice recognition
KR20100031877A (en) * 2008-09-16 2010-03-25 주식회사 엔씨소프트 System and method for processing english study using communication network, and method for processing npc operation in virtual space
KR20110059321A (en) * 2009-11-27 2011-06-02 (주)투핸즈미디어 Server for conversation-based game-type foreign language teaching system and method for teaching foreign language using the same
KR20120080399A (en) * 2011-01-07 2012-07-17 주식회사 엔씨소프트 Apparatus and method of providing language learing material in online game
KR101194794B1 (en) * 2011-07-12 2012-10-25 포항공과대학교 산학협력단 Foreign language education system and method, and collecting method of corpus using the same
US8825492B1 (en) * 2013-10-28 2014-09-02 Yousef A. E. S. M. Buhadi Language-based video game
KR20150075345A (en) * 2013-12-24 2015-07-03 박판열 Method For Providing Studies In Computer Games
CN108830764A (en) * 2018-09-04 2018-11-16 乔新霞 English Teaching Method, system and electric terminal
KR20190078294A (en) * 2017-12-26 2019-07-04 주식회사 글로브포인트 Server and method for providing digital study by virtual tutor
CN110476183A (en) * 2017-03-30 2019-11-19 索尼公司 Information processing unit and information processing method
CN110604920A (en) * 2019-09-16 2019-12-24 腾讯科技(深圳)有限公司 Game-based learning method and device, electronic equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040023195A1 (en) * 2002-08-05 2004-02-05 Wen Say Ling Method for learning language through a role-playing game
US7849043B2 (en) * 2007-04-12 2010-12-07 Microsoft Corporation Matching educational game players in a computerized learning environment
US20180151087A1 (en) * 2016-11-25 2018-05-31 Daniel Wise Computer based method for learning a language


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Andhik Ampuh Yunanto et al. English Education Game using Non-Player Character Based on Natural Language Processing. Procedia Computer Science. 2019, pp. 502-508. *
Wu Jianhua et al. Research on Learning Scaffolding in Information Literacy Education Games. Library and Information Service. 2014, pp. 69-75. *

Also Published As

Publication number Publication date
CN112001990A (en) 2020-11-27

Similar Documents

Publication Publication Date Title
CN112001990B (en) Scene-based data processing method and device, storage medium and electronic device
Bernal-Merino Translation and localisation in video games: Making entertainment software global
Laine et al. Designing mobile augmented reality exergames
TWI501208B (en) Immersive and interactive computer-implemented system, media, and method for education development
CN103890815A (en) Method and system for hosting transient virtual worlds that can be created, hosted and terminated remotely and automatically
Downey History of the (virtual) worlds
Bellotti et al. Exploring gaming mechanisms to enhance knowledge acquisition in virtual worlds
Luiro et al. Exploring local history and cultural heritage through a mobile game
US20230020633A1 (en) Information processing device and method for medium drawing in a virtual system
Dagnino et al. Using serious games for Intangible Cultural Heritage (ICH) education: A journey into the Canto a Tenore singing style
Chernbumroong et al. The Effects of Gamified Exhibition in a Physical and Online Digital Interactive Exhibition Promoting Digital Heritage and Tourism.
Kallioniemi et al. Collaborative navigation in virtual worlds: how gender and game experience influence user behavior
KR100969229B1 (en) Method on Providing Electronic Commerce Service of the Valuables Using Ranking Information
CN112001824A (en) Data processing method and device based on augmented reality
Ronyastra et al. Development and usability evaluation of virtual guide using augmented reality for Candi Gunung Gangsir in East Java
Vassilakis et al. Learning by playing: An LBG for the Fortification Gates of the Venetian walls of the city of Heraklion
Tosida et al. Promotion of the motif kujang design by A* algorithm application in the labyrinth education game
Azizah A participatory design approach to designing a playful cultural heritage experience: A case study of the Majapahit sites
Vale Costa et al. Gameful Tale-Telling and Place-Making from Tourists’ Generation to Generation: A Review
KR102180709B1 (en) A Map board game set of learning and computer Map game method thereof
Leichman Depth Match: Performance, History, and Digital Games
Husaini et al. Designing a Mobile Game as Promotion Media for Sambisari Temple
KR20220151280A (en) System for providing plant education service using augmented reality
KR101763536B1 (en) System for Providing Educational Service with Event
Mykland et al. Prototyping and Evaluation of Hover-a Socially Beneficial Alternative Game

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant