CN115658255A - Task processing method, electronic device and readable storage medium - Google Patents


Info

Publication number
CN115658255A
CN115658255A
Authority
CN
China
Prior art keywords
user
subtask
task
result
duration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211157924.3A
Other languages
Chinese (zh)
Other versions
CN115658255B (en)
Inventor
陈然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Petal Cloud Technology Co Ltd
Original Assignee
Petal Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Petal Cloud Technology Co Ltd filed Critical Petal Cloud Technology Co Ltd
Priority to CN202211157924.3A priority Critical patent/CN115658255B/en
Publication of CN115658255A publication Critical patent/CN115658255A/en
Application granted granted Critical
Publication of CN115658255B publication Critical patent/CN115658255B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application provides a task processing method, electronic equipment and a readable storage medium, wherein in the method, user behavior data are acquired in the process that a user executes a first subtask in a task; when the user finishes the first subtask, acquiring a first finishing result of the first subtask according to the user behavior data; and sending the first completion result to the second equipment. In the embodiment of the application, the completion result of each subtask can be obtained according to the user behavior data of each subtask in the task completed by the user, so that the completion result of each subtask can be displayed when the second device displays the task completion result, and the output of the task completion result is enriched.

Description

Task processing method, electronic device and readable storage medium
Technical Field
The embodiment of the application relates to the technical field of data processing, in particular to a task processing method, an electronic device and a readable storage medium.
Background
Educational applications (APPs) on electronic devices, such as examination APPs, as well as learning machines and smart learning tablets, can provide users with different types of tasks. By completing these tasks, users can learn and review knowledge, which brings convenience to users' lives.
At present, after a user completes a task, the electronic device can display the task completion result, but the result is output in only a single form.
Disclosure of Invention
The embodiment of the application provides a task processing method, an electronic device and a readable storage medium, which can output the completion result of each subtask in a task completed by a user, and enrich the output of the task completion result.
In a first aspect, an execution subject for executing the method may be a first device or a cloud server, and the following description takes the cloud server as an example, where the first device is a device for a user to execute a task. In the method, in the process that a user executes a first subtask in a task, a cloud server may obtain user behavior data, and when the user completes the first subtask, a first completion result of the first subtask is obtained according to the user behavior data, and the first completion result is sent to a second device.
In the embodiment of the present application, the second device is a device bound with the first device. Illustratively, the first device is a device used by a child and the second device may be a device used by a parent. The binding of the second device with the first device may also be understood as: the first account logged in on the first device is the same as the second account logged in on the second device, or the first account logged in on the first device and the second account logged in on the second device are pre-bound. It should be understood that the binding relationship (or mapping relationship) between the second device and the first device, or the binding relationship (or mapping relationship) between the first account and the second account may be stored in the cloud server.
In the embodiment of the application, the completion result of a subtask can be obtained according to the user behavior data collected while the user completes that subtask in the task, so that the second device can display the completion result of each subtask when displaying the task completion result, enriching the output of the task completion result. In this way, the user can practice in a more targeted manner according to the completion results of the subtasks, so as to complete tasks better.
In the embodiment of the application, the user completes the first subtask on the first device, and the second device can display the first completion result of the first subtask, so that the first completion result can be obtained remotely in real time, and the user can be conveniently remotely supervised to complete the task.
In one possible implementation, the first device may collect user behavior data when the user begins to execute the first subtask of the task on the first device. The user behavior data includes: screen recording data captured while the user executes the first subtask, and video of the user captured while the user executes the first subtask.
The first device may send the user behavior data to the cloud server, and thus, the cloud server may obtain the user behavior data.
The following describes a method for acquiring a first completion result of a first subtask by a cloud server:
First, the first completion result includes a video synthesis result, and the screen recording data includes a desktop video of the first subtask.
The cloud server can synthesize the desktop video of the first subtask with the video of the user to obtain the video synthesis result, which is a single video combining the desktop video of the first subtask and the video of the user.
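The patent does not specify how the two video streams are combined. As an illustrative sketch only, the snippet below builds an ffmpeg command that overlays the user's camera video as a picture-in-picture on the desktop video; the file names and overlay layout are assumptions, not part of the patent.

```python
# Hypothetical sketch: composite the subtask's desktop video with the user's
# camera video as a picture-in-picture overlay using ffmpeg's overlay filter.
# File names and the corner placement are illustrative assumptions.

def build_overlay_command(desktop_video: str, user_video: str, output: str) -> list:
    """Build an ffmpeg command that overlays the user video, scaled to a
    quarter of its size, in the bottom-right corner of the desktop video."""
    return [
        "ffmpeg",
        "-i", desktop_video,   # main input: screen recording of the subtask
        "-i", user_video,      # secondary input: camera video of the user
        "-filter_complex",
        "[1:v]scale=iw/4:ih/4[pip];[0:v][pip]overlay=W-w-10:H-h-10",
        "-c:a", "copy",        # keep the original audio track
        output,
    ]

cmd = build_overlay_command("desktop_subtask1.mp4", "user_cam.mp4", "result.mp4")
```

The command is only constructed here, not executed; a real service would run it with `subprocess.run(cmd)` on the stored video files.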
Second, the first completion result includes a fatigue detection result, and the screen recording data includes the voice of the user.
The cloud server can input the voice of the user and the video of the user to a fatigue detection model to obtain a fatigue detection result.
In an example, the cloud server may input the voice of the user and the video of the user to a fatigue detection model corresponding to the type of the task according to the type of the task, so as to obtain the fatigue detection result. In this example, the fatigue detection model used by the cloud server is adapted to the type of task, so the cloud server can obtain a fatigue detection result with higher accuracy.
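The selection of a task-type-specific fatigue detection model can be sketched as follows. The model identifiers and the stubbed inference result are hypothetical; the patent does not define the model interface.

```python
# Hypothetical sketch: pick a fatigue detection model adapted to the task
# type, falling back to a generic model for unknown types. The identifiers
# and the stubbed result are illustrative assumptions.

FATIGUE_MODELS = {
    "learning": "fatigue_model_learning",
    "fitness": "fatigue_model_fitness",
}
DEFAULT_MODEL = "fatigue_model_generic"

def detect_fatigue(task_type: str, user_audio: bytes, user_video: bytes) -> dict:
    model = FATIGUE_MODELS.get(task_type, DEFAULT_MODEL)
    # A real implementation would run inference on the audio and video here;
    # this stub only shows which model would receive the inputs.
    return {"model": model, "fatigue_detected": False}

result = detect_fatigue("learning", b"", b"")
```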
Third, the first completion result may further include an effective learning duration evaluation result.
The cloud server can obtain the duration the user took to complete the first subtask from the desktop video of the first subtask, and then obtain the effective learning duration evaluation result according to the predefined duration of the first subtask and the duration the user actually took to complete it.
In one example, the effective learning duration evaluation result may include: abnormal or normal.
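The comparison described above can be sketched as a simple ratio check. The tolerance bounds are illustrative assumptions; the patent only states that the result can be normal or abnormal.

```python
# Hypothetical sketch: evaluate the effective learning duration by comparing
# the actual time spent on the subtask with its expected duration. The
# tolerance factors (0.5x to 2x) are assumptions for illustration.

def evaluate_learning_duration(expected_s: float, actual_s: float,
                               low: float = 0.5, high: float = 2.0) -> str:
    """Return 'normal' if the actual duration falls within
    [low * expected, high * expected]; otherwise 'abnormal'
    (e.g. the user skipped through or idled on the subtask)."""
    if expected_s <= 0:
        raise ValueError("expected duration must be positive")
    ratio = actual_s / expected_s
    return "normal" if low <= ratio <= high else "abnormal"
```

For example, with a predefined 60 s subtask, spending 55 s is evaluated as normal, while spending only 5 s (likely a skipped subtask) is abnormal.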
In the embodiment of the application, the cloud server can obtain a first completion result of the first subtask across multiple dimensions, and the user can address weaknesses in a more targeted manner based on the first completion result, so as to complete the task better.
Before acquiring the user behavior data, the cloud server may acquire information of the task. In one example, the information of the task may be stored in the cloud server. Or, in an example, when the user starts to execute the task on the first device, the first device may acquire the information of the task and send it to the cloud server. The information of the task includes: a duration of the first subtask.
In an example, the task is a task in a target APP, a service APP may be set in the first device, and the service APP may obtain information of the task in the target APP through an APP SDK in the target APP.
In a possible implementation manner, the information of the task may further include: the application identifier, the task identifier, the identifier of the subtask included in the task, the identifier of each subtask, the duration of each subtask, and the like.
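The task information listed above can be sketched as a data structure. The field names below are assumptions derived from the listed contents (application identifier, task identifier, subtask identifiers and durations), not names defined by the patent.

```python
# Hypothetical sketch of the task information as a data structure; field
# names are illustrative assumptions based on the contents listed above.
from dataclasses import dataclass, field

@dataclass
class SubtaskInfo:
    subtask_id: int
    name: str
    duration_s: int      # predefined duration of the subtask

@dataclass
class TaskInfo:
    app_id: str          # application identifier
    task_id: str         # task identifier
    subtasks: list = field(default_factory=list)

task = TaskInfo(
    app_id="pinyin_learning",
    task_id="task_1",
    subtasks=[SubtaskInfo(1, "play", 60), SubtaskInfo(2, "recognize", 60)],
)
```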
In a possible implementation manner, after obtaining the first completion result, the cloud server may further send the first completion result to the first device, and in this implementation manner, the first device may display the first completion result of each first subtask in the task when the user completes the task.
In this implementation, when the user completes the task on the first device, the user can see the completion result of each subtask in the task on the first device, so the user can practice in a more targeted manner based on the first completion results, so as to complete the task better.
In a possible implementation manner, the cloud server may further use the voice of the user and the video of the user as training data, update the fatigue detection model, and obtain a fatigue detection model corresponding to the task, where the fatigue detection model corresponding to the task is used to obtain a fatigue detection result when the user executes a task of the same type as the task.
The cloud server can continuously update the fatigue detection model according to the voice of the user and the video of the user, so that the fatigue detection model is more adaptive to the user, and a fatigue detection result with higher accuracy can be obtained.
In a possible implementation manner, the cloud server may further update the duration of the first subtask according to the duration used by the user to complete the first subtask, where the updated duration of the first subtask is used to obtain an effective learning duration evaluation result. Specifically, the updated duration of the first subtask is used to obtain an effective learning duration evaluation result of the subsequent user when the first subtask is completed.
In this possible implementation manner, the cloud server may update the duration of the first subtask in combination with the actual duration of the user completing the first subtask, so that for the same user, when the first subtask is completed at different times, the durations of the first subtask used for obtaining the effective learning duration evaluation result of the first subtask may be different, so as to more accurately obtain the effective learning duration evaluation result. For different users, the cloud server may update the duration of the first subtask completed by each user, so as to obtain the effective learning duration evaluation result of each user according to the duration of the first subtask of each user. Different users have different detection standards (such as different durations of the first subtask), so that habits of the users can be fitted more, and accuracy of effective learning duration evaluation results can be improved.
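The patent states that the subtask duration used for evaluation is updated from the user's actual completion time, but does not give an update rule. An exponential moving average per user is one plausible sketch; the smoothing factor is an assumption.

```python
# Hypothetical sketch: update the stored subtask duration toward the user's
# actual completion time. An exponential moving average is an illustrative
# choice; the patent does not specify the update rule, and alpha = 0.3 is
# an assumed smoothing factor.

def update_duration(current_s: float, actual_s: float, alpha: float = 0.3) -> float:
    """Blend the stored duration toward the actual duration."""
    return (1 - alpha) * current_s + alpha * actual_s

# e.g. predefined 60 s, user actually took 80 s:
new_duration = update_duration(60.0, 80.0)   # 0.7 * 60 + 0.3 * 80 = 66.0
```

Repeating this after each completion makes the per-user duration converge toward that user's habitual pace, which matches the goal of fitting each user's own detection standard.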
In a second aspect, an embodiment of the present application provides an electronic device, which may include: a processor and a memory. The memory is for storing computer executable program code, the program code comprising instructions; the instructions, when executed by the processor, cause the electronic device to perform the method as in the first aspect.
In a third aspect, embodiments of the present application provide a computer program product containing instructions that, when executed on a computer, cause the computer to perform the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored therein instructions, which, when executed on a computer, cause the computer to perform the method of the first aspect.
For each possible implementation of the second to fourth aspects, reference may be made to the beneficial effects of the first aspect, which are not repeated here.
Drawings
FIG. 1 is a schematic diagram of an interface of a conventional educational APP;
FIG. 2 is a diagram illustrating a system architecture according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of an embodiment of a task processing method according to an embodiment of the present application;
fig. 4 is a schematic interface diagram of a first device according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another embodiment of a task processing method according to an embodiment of the present application;
fig. 6 is a schematic diagram of any video frame in a composite video provided by an embodiment of the present application;
fig. 7 is a schematic interface diagram of a second device provided in an embodiment of the present application;
fig. 8 is a flowchart schematically illustrating another embodiment of a task processing method according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application relate to the terms:
task: may include, but is not limited to, learning tasks, fitness tasks, gaming tasks. The task in the embodiment of the present application may include multiple sub-tasks, that is, a user may be regarded as completing a task by completing the multiple sub-tasks. For example, taking the learning task as an english learning task, the english learning task may include: read, write, exercise, etc. Taking fitness tasks as leg exercise tasks for example, leg exercise tasks may include: squat deeply, straddle, lift legs high, etc. The embodiment of the application does not limit the types of the tasks and the subtasks included in the tasks.
Educational Application (APP): may include, but is not limited to, English learning APPs, Pinyin learning APPs, Chinese learning APPs, and math learning APPs.
An electronic device: may be referred to as User Equipment (UE), terminal (terminal), etc. For example, when the task is a learning task, the electronic device may be a mobile phone, a tablet computer (PAD), a Personal Digital Assistant (PDA), a smart learning tablet, a learning machine, a computing device, a vehicle-mounted device or a wearable device, a Virtual Reality (VR) terminal device, an Augmented Reality (AR) terminal device, a wireless terminal in industrial control (industrial control), a wireless terminal in smart home (smart home), or the like. For example, when the task is a fitness task, the electronic device may be a cell phone, a tablet, a personal digital assistant, a fitness mirror, or the like. It should be understood that tasks are different, and forms of electronic devices for performing the tasks may be different, and the embodiments of the present application do not limit the forms of the electronic devices.
In the following embodiments, a learning task is taken as an example of the task, and a smart learning tablet is taken as an example of the electronic device.
Fig. 1 is a schematic interface diagram of an existing education APP. Referring to fig. 1, the electronic device is a smart learning tablet, and the task is a pinyin learning task, which may include six subtasks: play, recognize, say, spell, practice, and write. Part a of fig. 1 shows the interface 101 of the "play" subtask; for example, a next task control 11 may be included on the interface 101, and when the user clicks the next task control 11, the electronic device may display the interface 102 of the "recognize" subtask, the subtask following "play", as shown in b in fig. 1. It should be understood that the interfaces of the say, spell, practice, and write subtasks, as well as the specific contents of the interface 101 of the "play" subtask and the interface 102 of the "recognize" subtask, are not shown in FIG. 1.
After the user completes the multiple subtasks in the pinyin learning task, the electronic device may display the interface 103 shown in c in fig. 1. The interface 103 may display results such as the number of pinyin the user has learned, the number of learning days, and the number of pinyin the user has mastered. However, the current electronic device outputs a single result: it neither displays the result of the user completing each subtask, nor obtains that result from the actual situation of the user completing each subtask. As a consequence, the user cannot grasp his or her learning situation for each subtask in time, and cannot adjust the learning method promptly and effectively.
For example, suppose that while completing the pinyin learning task, the electronic device is displaying the interface 101 of the "play" subtask and the user simply clicks the next task control 11. The user has not actually completed the "play" subtask, yet the electronic device records it as completed. The result shown by the electronic device therefore cannot accurately reflect the user's learning situation, so the user cannot grasp his or her own learning situation in time, and the user's parents likewise cannot grasp the child's learning situation.
In addition, when a parent checks how the child has completed a task, the parent can only see the result shown on the interface 103 after the child finishes learning; the parent cannot obtain, remotely and in real time, how the child completes each subtask.
The embodiment of the application provides a task processing method. While the user completes a task, the result of the user completing each subtask is obtained by analyzing the user behavior data collected as the user completes that subtask. When the user completes the task, the user's completion of each subtask can then be effectively fed back, making the result output more comprehensive. For subtasks the user performs poorly, exercises can be added in time; for subtasks the user performs well, exercises can be adaptively reduced, which can improve the user experience.
Before introducing the task processing method provided by the embodiment of the present application, a system architecture applicable to the embodiment of the present application is described:
referring to fig. 2, a system architecture applicable to the embodiment of the present application includes: the device comprises a first device, a second device and a cloud server.
The first device is a device that includes an educational APP, a learning APP, or a fitness APP. The user may perform a learning task, a fitness task, etc. via the first device.
In one example, an APP that is used to provide a task for a user, such as an education-type APP, a learning-type APP, or a fitness-type APP, may be referred to as a target APP. For example, fig. 1 illustrates an example that the target APP in the first device includes a pinyin learning type APP and an english learning type APP.
The target APP may include an APP Software Development Kit (SDK), and the APP SDK in the target APP may provide a data access interface for the service APP.
The first device further includes a service APP. The service APP is used to acquire the task information of the target APP through the APP SDK in the target APP. In one example, the task information may include, but is not limited to: the name of the task, the number of subtasks included in the task, and the name of each subtask.
The service APP is also used to collect the user behavior data while the user completes each subtask in the task, and to send the user behavior data to the cloud server.
The second device is: a device of a user associated with a user performing a task using a first device. Illustratively, the first device is a device for a child to perform a learning task and the second device may be a parent's device. In one example, the first device and the second device may be the same device, or the first device and the second device may be different devices.
In one example, the second device includes a service APP management service. The service APP management service may be a service provided by a service APP, or it may be an APP itself; this is not limited in the embodiments of the present application. When the first device and the second device are the same device, the first device may also include the service APP management service. In the following embodiments, the first device and the second device are described as different devices.
The cloud server is used to obtain the completion result of each subtask in the task according to the user behavior data received from the service APP.
In one example, the cloud server may include a service APP server and a processing module. The service APP server is used to receive the user behavior data from the service APP. The processing module is used to obtain the completion result of each subtask in the task according to the user behavior data. It should be understood that this division of modules in the cloud server is illustrative and not limiting.
In one example, the cloud server may send completion results for each subtask of the user in completing the task to the first device. In this example, the first device may save the completion results of the user completing each subtask, and may display the completion results of the user completing each subtask when the user completes the task.
In one example, the cloud server may send a completion result of each subtask of the user in the completion task to the second device, and the second device may display the completion result of each subtask of the user in real time.
In an embodiment, the cloud server may store a mapping relationship between the identifier of the first user and the identifier of the second user, and when the server obtains a completion result of each subtask completed by the first user, the server may determine the identifier of the second user according to the mapping relationship between the identifier of the first user and the identifier of the second user, so as to send the completion result of each subtask completed by the first user to the second device of the second user.
It should be noted that the first user is the user using the first device, and the second user is the user using the second device. The identification of the first user may include, but is not limited to: the account used to log in to the service APP on the first device, or the account used to log in to the target APP on the first device. The identification of the second user may include, but is not limited to: the account used by the service APP management service on the second device, or the account used to log in to the service APP management service on the second device.
Taking the service APP management service as an APP, the parent may register a second account in the service APP management service of the second device, and the second device may report the second account to the cloud server. In addition, the parent may register the first account with the service APP of the first device, and the first device may report the first account to the cloud server. In this way, the cloud server may store the first account and the second account correspondingly, and the mapping relationship between the identifier of the first user and the identifier of the second user stored in the cloud server may be: and the mapping relation between the first account and the second account. In an example, the first account and the second account may be the same, such as mobile phone numbers of users, or the first account and the second account may be different, such as the first account is an account allocated to the service APP, and the second account is an account allocated to the service APP management service.
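The binding (mapping) relationship stored on the cloud server can be sketched as a simple lookup from the first account to the second account. The class and account names below are illustrative assumptions.

```python
# Hypothetical sketch of the cloud server's binding store: it maps the first
# account (service APP on the first device) to the second account (service
# APP management service on the second device), so a completion result can
# be routed to the bound second device. Names are illustrative.

class BindingStore:
    def __init__(self):
        self._first_to_second = {}

    def bind(self, first_account: str, second_account: str) -> None:
        """Record the mapping between the first and second accounts."""
        self._first_to_second[first_account] = second_account

    def second_account_for(self, first_account: str):
        """Look up where a subtask completion result should be forwarded."""
        return self._first_to_second.get(first_account)

store = BindingStore()
store.bind("child_account_1", "parent_account_1")
target = store.second_account_for("child_account_1")
```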
The following describes a task processing method provided in the embodiments of the present application with reference to specific embodiments. The following several embodiments may be combined with each other and may not be described in detail in some embodiments for the same or similar concepts or processes.
Fig. 3 is a flowchart illustrating an embodiment of a task processing method according to an embodiment of the present application. Referring to fig. 3, a task processing method provided in an embodiment of the present application may include:
s301, when a user uses a target APP to execute a task, the first device obtains information of the task.
Performing a task using the target APP can be understood as: the user opens the target APP on the first device and starts executing the task. For example, referring to a in fig. 4, icons of a pinyin learning APP and an English learning APP are displayed on the desktop of the first device; when the user clicks the icon of the pinyin learning APP, this may trigger the user performing a task using the target APP. Alternatively, clicking the icon of the pinyin learning APP may trigger the first device to display the home page of the pinyin learning APP, where the user may select a task, which likewise triggers the user performing the task using the target APP.
For example, when the user clicks the icon of the pinyin learning APP, the first device may display a task interface, and the user may start to execute the task, where the task interface may be as shown in b in fig. 4. It will be appreciated that b in figure 4 is the same as a in figure 1 and reference may be made to the associated description in figure 1.
The information of a task can be understood as: the information of the task contained in the target APP. In one example, the target APP may include at least one task. Illustratively, taking the pinyin learning APP as an example, it may include a single pinyin learning task, or multiple tasks such as a beginner pinyin learning task, an intermediate pinyin learning task, and an advanced pinyin learning task.
For a task, the information of the task may include: the name of the task, the number of subtasks included in the task, and the names of the subtasks. In one example, the information of the task may be represented as a five-tuple comprising: the identification of the user, the name of the target APP, the number of each subtask, the name of each subtask, and the duration of each subtask. The user's identification may be an account, such as user ID1. In an example, the name of the target APP may be replaced by an identifier of the target APP, which may be the target APP's icon, version number, or the like. The number of a subtask and the name of a subtask may both be referred to as identifications of the subtask.
In an example, when the target APP includes a plurality of tasks, information of the tasks may further include names of the tasks to distinguish different tasks in the target APP, in this embodiment, the target APP includes one task as an example for description, and information of the tasks does not include names of the tasks.
For example, taking the pinyin learning APP as an example, the information of the task may include: user ID1, the pinyin learning APP, the subtask numbers 1-6, the subtask names play, recognize, say, spell, practice, and write, and the durations of subtasks 1-6, which are 60s, 30s, 60s, 90s and 30s respectively. For example, the information of the task in the pinyin learning APP can be as shown in Table 1 below:
Table 1
(The original table image, which lists the subtask numbers, names, and durations given above, is not reproduced.)
It should be noted that the "duration of the subtask" in the information of the task is a predefined duration when the user uses the target APP for the first time. Taking the "play" subtask as an example, the duration 60s of the "play" subtask is a predefined duration, and 60s may be an empirical value, for example, the duration 60s of the "play" subtask may be characterized as: the duration for most users to complete the "play" subtask is 60s.
In the embodiment of the application, for different users, when a user uses a target APP for the first time, the time lengths of subtasks in the information of the task are the same, for example, for the user 1 and the user 2, when the user 1 uses the target APP for the first time, and when the user 2 uses the target APP for the first time, in the information of the task, the time lengths of "playing" the subtasks are both 60s, the time lengths of "recognizing" the subtasks are both 60s, … …, and the time lengths of "writing" the subtasks are both 30s. The first device may adaptively adjust the duration of the subtask according to different situations when each user subsequently performs the subtask, which may be referred to in the following description of the embodiments.
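The per-user behavior described above, where every user starts from the same predefined durations and each user's copy is then adjusted independently, can be sketched as follows. The class and the stored values follow the "play", "recognize", and "write" durations quoted above; the remaining details are illustrative assumptions.

```python
# Hypothetical sketch: each user starts from the same predefined subtask
# durations on first use of the target APP; a per-user copy is created so
# later adjustments for one user do not affect another user's durations.

PREDEFINED_DURATIONS = {"play": 60, "recognize": 60, "write": 30}

class UserDurations:
    def __init__(self):
        self._per_user = {}

    def get(self, user_id: str, subtask: str) -> int:
        # Copy the defaults on first access so updates stay per-user.
        table = self._per_user.setdefault(user_id, dict(PREDEFINED_DURATIONS))
        return table[subtask]

    def set(self, user_id: str, subtask: str, duration_s: int) -> None:
        self._per_user.setdefault(user_id, dict(PREDEFINED_DURATIONS))[subtask] = duration_s

d = UserDurations()
d.set("user1", "play", 75)   # user1's duration adapts; user2 keeps the default
```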
Here, with reference to fig. 5, an internal interaction process of the first device is explained:
referring to fig. 5, S301 may include step 1) to step 4):
1) When the user uses the target APP to execute the task, the first device runs the target APP, that is, the target APP is started.
2) The target APP sends a message that the target APP has started to the service APP through the APP SDK of the target APP.
3) The service APP is started in response to the message that the target APP has started.
4) The service APP acquires the information of the task.
After the service APP is started, the APP SDK of the target APP can be called to obtain the information of the task.
In an example, information of a task may be statically configured in a target APP, and a service APP may call an APP SDK of the target APP to obtain the information of the task.
In one example, the server of the target APP is configured with the information of the task, and an account for accessing the server of the target APP is built into the APP SDK of the target APP. When the service APP calls the APP SDK of the target APP, the APP SDK can access the server of the target APP using the built-in account to request the information of the task. After the APP SDK obtains the information of the task from the server of the target APP, it may send the information of the task to the service APP.
In one example, information for a task may be configured in a local system of the first device. For example, the local system may store names of different target APPs and information of tasks corresponding to the different target APPs. In such an example, the business APP may request the local system to obtain information for the task.
For example, the local system may configure information of the task in a software directory of the target APP, such as configuring information of the task in a config file of the target APP. Alternatively, the local system may configure information of the task of the target APP in a local database.
S302, the first device caches the information of the task.
In one embodiment, when a user performs a task using a target APP, a first device may obtain information of the task, and the first device may determine whether the information of the task is already stored in the first device. If the information of the task is stored in the first device, the first device may not store the newly acquired information of the task any more, so as to reduce the occupation of the information of the task on the memory space of the first device. If the information of the task is not stored in the first device, the first device may store the acquired information of the task.
In an embodiment, if the target APP is a pinyin learning type APP, with the update of the version of the pinyin learning type APP, if the subtask included in the task changes, or the number of the subtasks increases or decreases, the information of the task may be changed. Therefore, in the embodiment of the present application, when the first device determines that the information of the task is already stored in the first device, the first device may further detect whether the information of the already stored task is the same as the information of the task newly acquired by the first device. If the information of the task stored in the first device is the same as the information of the task newly acquired by the first device, the first device may not store the information of the newly acquired task any more, so as to reduce the occupation of the information of the task on the memory space of the first device. If the information of the task stored in the first device is different from the information of the task newly acquired by the first device, the first device may update the information of the stored task to the information of the newly acquired task to store accurate information of the task.
As in the above two embodiments, the first device may use the name of the target APP as an input to detect whether the information of the task is already stored in the first device, or to detect whether the stored information of the task is the same as the information of the task newly acquired by the first device. When the stored information of the task is detected to be the same as the newly acquired information of the task, the first device may no longer cache the newly acquired information. It can be understood that, because the first device sends the information of the task to the cloud server each time after acquiring it, when the first device detects that the information of the task is already stored, or that the stored information of the task is the same as the newly acquired information, the first device may also skip sending the newly acquired information of the task to the cloud server. In other words, the cloud server may already store the information of the task of the target APP; in such an embodiment, the first device may not perform S302-S303 but directly perform S305, and the cloud server, which already stores the information of the task of the target APP, may perform S306.
Referring to fig. 5, S302 may include step 5):
5) After the business APP obtains the information of the task, it caches the information of the task. Illustratively, the business APP may cache a five-tuple (user ID, name of the target APP, number of the subtask, name of the subtask, duration of the subtask).
When the business APP caches the information of the task, the name of the target APP may be used as input to detect whether the information of the task of the target APP is cached in the first device, or whether the information of the task of the target APP cached in the first device is the same as the information of the task of the target APP newly acquired by the first device.
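The cache-or-skip decision described above can be sketched as follows. This is an illustrative assumption, not the patent's implementation: a plain dictionary keyed by the target APP's name stands in for the business APP's cache, and the five-tuple fields follow the example above.

```python
# Hypothetical sketch: a dict keyed by the target APP's name stands in for
# the business APP's task-info cache.
task_cache = {}

def cache_task_info(app_name, task_info):
    """Cache the task info only if it is absent or has changed, so identical
    copies do not occupy extra memory. Returns True if the cache was updated."""
    if task_cache.get(app_name) == task_info:
        return False  # same info already cached: skip re-storing (and re-uploading)
    task_cache[app_name] = task_info
    return True  # first acquisition, or the task info changed with an APP update

# Illustrative five-tuples (user ID, APP name, subtask number, name, duration).
info_v1 = [("user1", "pinyin_app", 1, "play", 60),
           ("user1", "pinyin_app", 2, "recognize", 60)]
info_v2 = [("user1", "pinyin_app", 1, "play", 60),
           ("user1", "pinyin_app", 2, "recognize", 45)]
```

The same `True`/`False` outcome could also gate whether the newly acquired information is sent to the cloud server.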
S303, the first device sends the information of the task to the cloud server.
The first device may send the information of the task to the cloud server after acquiring the information of the task.
Referring to fig. 5, S303 may include step 6):
6) The business APP sends the information of the task to the business APP server side in the cloud server.
S304, in the process that the user executes the first subtask in the task, the first device obtains user behavior data.
The first subtask is one of the subtasks in the task. For example, when the target APP is a pinyin learning APP, the first subtask may be the "play" subtask, the "recognize" subtask, the "say" subtask, the "spell" subtask, the "practice" subtask, or the "write" subtask. It should be understood that each subtask in the task may be referred to as a first subtask, and the processing steps of the first device are the same for any first subtask.
During the execution of a first subtask of the task by the user, the first device may obtain user behavior data.
In one example, user behavior data may include, but is not limited to: the video of the first subtask is completed by the user, and the time taken for the user to complete the first subtask. The time length for the user to complete the first subtask may be referred to as: the actual length of time for the user to complete the first subtask.
In one example, the user behavior data may include, but is not limited to: the video and voice of the user completing the first subtask, and the duration taken by the user to complete the first subtask. In another example, the user behavior data may include only the video and voice of the user completing the first subtask; in this example, the cloud server may obtain the duration taken by the user to complete the first subtask according to the video of the user completing the first subtask, with reference to the related description in S304 of how the first device obtains that duration.
In one example, a video of a user completing a first subtask may include: desktop video of the first subtask, and video of the user. The speech of the user completing the first subtask may include: the voice output by the first device in the first subtask, and the voice of the user. The first device can record a screen, and acquire a desktop video of the first subtask, a voice output by the first device in the first subtask, and a voice of a user.
In one example, a camera may be disposed on the first device, and the first device may open the camera when the user performs a task using the target APP to capture a video of the user. In one example, the first device may be connected to a camera in an environment in which the first device is located, the camera being used to capture video of the user. In this example, the first device may turn on a camera when the user performs a task using the target APP to capture a video of the user, and the camera may send the video of the user captured by the camera to the first device.
In the process of recording the screen by the first device, the first device can determine the first subtask according to the desktop video of the first subtask. For example, the first device may identify an identification of the first subtask in a video frame in the desktop video to determine that the current subtask is for the first subtask. The identification of the first subtask may include, but is not limited to: pictures, characters, and the like.
As shown in a of fig. 1, the interface 101, which is a "play" subtask, includes text 12 for identifying a first subtask "play", and the first device may determine that the first subtask performed by the user is the "play" subtask according to a video frame in the desktop video of the "play" subtask. Accordingly, the first device may also obtain the length of time the user takes to complete the "play" subtask. Illustratively, the first device may detect the duration of the video frame containing the "play" text 12 and use the duration of the video frame containing the "play" text 12 as the duration for the user to complete the "play" subtask.
Alternatively, the first device may detect, from the desktop video of the first subtask, the user's operations on the desktop of the first subtask, and determine from them the duration taken by the user to complete the first subtask. For example, the first device may start a timer when it first detects a video frame containing the "play" text 12, and stop the timer when the user clicks the next task control 11 on the interface 101. The first device may use the timed duration as the duration taken by the user to complete the "play" subtask.
Alternatively, the first device may obtain the duration taken by the user to complete the first subtask according to the voice of the user. For example, the first device may start a timer when it first detects a video frame containing the "play" text 12, and stop the timer when the user says "next task", "skip task", or the like. The first device may use the timed duration as the duration taken by the user to complete the "play" subtask.
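The timer logic above (start at the first desktop frame showing the subtask's identifier, stop at a "next task"/"skip task" click or voice command) can be sketched as below. The event representation is a hypothetical simplification for illustration, not the patent's data format.

```python
def measure_subtask_duration(events, subtask_label):
    """Estimate how long the user spent on a subtask from timestamped events.
    Each event is (timestamp_s, kind, value): kind == "frame" for a desktop
    video frame whose recognized identifier text is `value`, or
    kind == "action" for a recognized click or voice command."""
    start = None
    for ts, kind, value in events:
        if start is None:
            if kind == "frame" and value == subtask_label:
                start = ts  # timer starts at the first frame showing the label
        elif kind == "action" and value in ("next task", "skip task"):
            return ts - start  # timer stops when the user moves on
    return None  # the subtask's identifier never appeared

events = [(0.0, "frame", "play"),
          (1.0, "frame", "play"),
          (42.0, "action", "next task")]
```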
It should be understood that the first device may mark corresponding capture time when capturing the desktop video of the first subtask, the voice output by the first device in the first subtask, the voice of the user, and the video of the user, so that the cloud server (or the first device) may align the desktop video of the first subtask, the voice output by the first device in the first subtask, the voice of the user, and the video of the user according to the capture time.
Referring to fig. 5, S304 may include step 7):
7) In the process of the user completing the first subtask in the task, the business APP acquires the user behavior data.
The service APP can start a screen recording component in the first device to acquire a desktop video of the first subtask, voice output by the first device in the first subtask, and voice of a user. In addition, the service APP may also trigger the first device to start a camera of the first device, so as to obtain a video of the user.
S305, when the user completes the first subtask, the first device reports the user behavior data to the cloud server.
As described in S304, the first device may obtain a time length for the user to complete the first subtask, and accordingly, the first device determines a time when the user completes the first subtask, and then the first device may report the user behavior data to the cloud server when the user completes the first subtask.
Referring to fig. 5, S305 may include step 8):
8) When the user finishes the first subtask, the business APP reports the user behavior data to the business APP server side.
In one example, "desktop video of the first subtask, voice output by the first device in the first subtask, and voice of the user" obtained by screen recording of the first device may be referred to as screen recording data, and video of the user captured by a camera of the first device may be referred to as video data. The screen recording data and the video data may be referred to as user behavior data.
When the user completes the first subtask, the service APP may report two paths of data, together referred to as user behavior data, to the server side of the service APP. One path of data includes: the user ID, the name of the target APP, the number of the first subtask, the name of the first subtask, the duration taken to complete the first subtask, and the screen recording data. The other path of data includes: the user ID, the name of the target APP, the number of the first subtask, the name of the first subtask, the duration taken to complete the first subtask, and the video data.
In an example, when the first device has not obtained the duration taken to complete the first subtask from the desktop video of the first subtask, the service APP may report two paths of data, together referred to as user behavior data, to the server side of the service APP. One path of data includes: the user ID, the name of the target APP, the number of the first subtask, the name of the first subtask, and the screen recording data. The other path of data includes: the user ID, the name of the target APP, the number of the first subtask, the name of the first subtask, and the video data.
In one example, the business APP server side may save the user behavior data. For example, the business APP server side may establish a data table that uses the user ID, the name of the target APP, and the number of the first subtask (or the name of the first subtask) as the primary key, and that includes a "screen recording data" item and a "video data" item.
According to the primary key of user ID, name of the target APP, and number of the first subtask, the business APP server side may store the screen recording data in the "screen recording data" item and the video data in the "video data" item. It should be understood that storing the user behavior data in the form of a data table is only an example; the embodiment of the present application does not limit the manner in which the business APP server side stores the user behavior data.
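A minimal sketch of such a table, assuming an in-memory dictionary keyed by the composite primary key (user ID, target APP name, subtask number); a real server side would use a database, and the column names here are taken from the example above.

```python
# Hypothetical in-memory stand-in for the server-side data table.
behavior_table = {}

def save_user_behavior(user_id, app_name, subtask_no, column, payload):
    """Store one reported path of data ("screen recording data" or
    "video data") in the row identified by the composite primary key."""
    row = behavior_table.setdefault((user_id, app_name, subtask_no),
                                    {"screen recording data": None,
                                     "video data": None})
    row[column] = payload

# The two reported paths for one subtask land in the same row.
save_user_behavior("user1", "pinyin_app", 1, "screen recording data", b"screen")
save_user_behavior("user1", "pinyin_app", 1, "video data", b"camera")
```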
S306, the cloud server obtains a first completion result of the user completing the first subtask according to the user behavior data.
In one example, the first completion result may include: video synthesis results, effective learning duration evaluation results, and fatigue detection results.
In one embodiment, the cloud server may obtain the video composition result according to the user behavior data. For example, the cloud server may synthesize the desktop video of the first subtask and the video of the user according to the collection time in the user behavior data to obtain a video synthesis result. The video synthesis result comprises: the video that is the composite of the desktop video of the first subtask and the video of the user may be referred to as a composite video. Referring to fig. 6, the cloud server may synthesize the desktop video frame of the first subtask and the video frame of the user, which are acquired at the same acquisition time, according to the acquisition time, so as to obtain a synthesized video. For example, the cloud server may compose the video frame of the user in the upper right corner of the desktop video frame of the first subtask to obtain a video composition result. It should be understood that the desktop video frame of the first subtask is a video frame in the desktop video of the first subtask, and the video frame of the user is a video frame in the video of the user.
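The frame-by-frame synthesis can be sketched as below, where 2-D lists of pixel values stand in for decoded video frames. This is an assumption for illustration; a production implementation would operate on real decoded frames (e.g. via OpenCV) rather than nested lists.

```python
def composite_frames(desktop_frames, user_frames):
    """Pair frames captured at the same acquisition time and paste the user's
    frame into the top-right corner of the desktop frame. Each frame is a dict
    {"t": capture_time, "img": 2-D list of pixel values}."""
    user_by_time = {f["t"]: f["img"] for f in user_frames}
    composed = []
    for frame in desktop_frames:
        img = [row[:] for row in frame["img"]]  # copy the desktop frame
        overlay = user_by_time.get(frame["t"])  # frame with matching capture time
        if overlay is not None:
            width = len(img[0])
            for r, row in enumerate(overlay):   # paste into the top-right corner
                img[r][width - len(row):] = row
        composed.append({"t": frame["t"], "img": img})
    return composed

desktop = [{"t": 0, "img": [[0, 0, 0, 0], [0, 0, 0, 0]]}]
user = [{"t": 0, "img": [[9, 9]]}]
```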
In an embodiment, the cloud server may obtain the fatigue detection result according to the user behavior data. For example, the cloud server may further detect the fatigue of the user according to the voice of the user who completes the first subtask and the video of the user, so as to obtain a fatigue detection result. For example, the cloud server may input the voice of the user and the video of the user into the fatigue detection model, so that the fatigue detection model may detect the fatigue of the user to obtain a fatigue detection result. It should be understood that the fatigue detection model may adopt a currently existing fatigue detection model, and details of the detection principle of the fatigue detection model in the embodiment of the present application are not repeated. For example, the fatigue detection model may obtain the fatigue of the user according to the expression, mouth shape, and voice characteristics of the user.
Different from the prior art, in the embodiment of the application, the cloud server can set different fatigue degree detection models for different types of tasks, so that the fatigue degrees of users can be detected by adopting different fatigue degree detection models according to different types of tasks, and an accurate fatigue degree detection result can be obtained. Illustratively, for the language learning task and the mathematical learning task, the feature of the fatigue detection model for determining whether the user is tired in performing the language learning task is different from the feature of the fatigue detection model for determining whether the user is tired in performing the mathematical learning task. Therefore, for different tasks, the cloud server can adopt the fatigue detection model corresponding to the task to obtain the fatigue detection result of the user.
In an example, the fatigue detection result may be characterized as "fatigued" or "not fatigued", or by a specific fatigue value, which is not limited in the embodiment of the present application; the following embodiments are described taking "fatigued" and "not fatigued" as examples.
In an embodiment, the cloud server may obtain an effective learning duration evaluation result according to the information of the task and the user behavior data. In an example, the cloud server may obtain an effective learning duration evaluation result of the first subtask according to a duration used by the user to complete the first subtask in the user behavior data and a duration of the first subtask in the task information. It should be noted that, in an example, when the user behavior data does not include the duration taken by the user to complete the first subtask, the cloud server may obtain, according to the screen recording data (such as a desktop video) in the user behavior data, the duration taken by the user to complete the first subtask, and further obtain an effective learning duration evaluation result of the first subtask according to the duration taken by the user to complete the first subtask and the duration taken by the first subtask in the task information.
The cloud server may determine the first subtask according to the desktop video of the first subtask. For example, the cloud server may identify the identifier of the first subtask in the video frames of the desktop video, and then use the duration for which the identifier of the first subtask appears as the duration taken by the user to complete the first subtask. The identifier of the first subtask may include, but is not limited to: pictures, text, and the like. For example, taking the identifier of the first subtask as text, the cloud server may detect the duration of the video frames containing the "play" text 12, and use that duration as the duration taken by the user to complete the "play" subtask.
In an example, the cloud server obtains a duration taken by the user to complete the first subtask, and may further refer to the related description of the duration taken by the first device to complete the first subtask in S304.
If the duration taken by the user to complete the first subtask is less than a duration threshold, it is determined that the user has not effectively completed the first subtask, and the effective learning duration evaluation result of the first subtask is: abnormal. It should be understood that the effective learning duration evaluation result may be abnormal or normal. In one example, the effective learning duration evaluation result may also be characterized as a score or in other forms.
For example, the duration threshold may be 5s, and if the duration taken by the user to complete the first subtask is less than the duration threshold, the user is characterized to just start executing the first subtask, and then skip the first subtask and enter the next subtask of the first subtask. It should be understood that the duration threshold 5s is illustrated as an example, and the duration threshold may be custom set.
For example, if the first subtask is a "play" subtask, and as shown in table one, the duration of the "play" subtask is 60s, and if the duration of the "play" subtask completed by the user is 3s, the cloud server may determine that the duration of the "play" subtask completed by the user is less than a duration threshold, and may determine that the effective learning duration evaluation result of the first subtask is "abnormal".
If the duration taken by the user to complete the first subtask is within the duration range of the first subtask, it is determined that the user has effectively completed the first subtask, and the effective learning duration evaluation result of the first subtask is: normal. For example, if the first subtask is the "play" subtask, the duration of the "play" subtask is 60s, and the duration range of the first subtask may be set to 30s-70s; as long as the duration taken by the user to complete the "play" subtask is within the 30s-70s range, the cloud server may determine that the effective learning duration evaluation result of the first subtask is "normal". If the duration taken by the user to complete the "play" subtask is outside the 30s-70s range, the cloud server may determine that the effective learning duration evaluation result of the first subtask is "abnormal".
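The skip-threshold and duration-range checks above can be combined into one evaluation function, as sketched below; the function name is an assumption, and the 5s default follows the example threshold in the text.

```python
def evaluate_effective_learning(actual_s, range_lo_s, range_hi_s, threshold_s=5):
    """Effective learning duration evaluation: "abnormal" if the user skipped
    the subtask almost immediately (duration below threshold_s) or finished
    outside the subtask's configured duration range, otherwise "normal"."""
    if actual_s < threshold_s:
        return "abnormal"  # e.g. 3s on the 60s "play" subtask: skipped
    return "normal" if range_lo_s <= actual_s <= range_hi_s else "abnormal"
```

For the "play" subtask with a 30s-70s range, 45s evaluates as normal while 3s or 80s evaluates as abnormal.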
In an embodiment, the cloud server may obtain, according to the first completion result, a comprehensive detection result of the user completing the first subtask. For example, when the effective learning duration evaluation result is normal and the fatigue detection result is "not fatigued", the comprehensive detection result may be normal. When the effective learning duration evaluation result is abnormal or the fatigue detection result is "fatigued", the comprehensive detection result may be abnormal. It should be understood that the comprehensive detection result may be characterized as "normal" and "abnormal", or by a specific score, which is not limited in the embodiment of the present application.
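The combination rule just described can be sketched as a small function, assuming the string labels used in the preceding paragraphs:

```python
def comprehensive_result(duration_result, fatigue_result):
    """Comprehensive detection result for one subtask: "normal" only when the
    effective learning duration evaluation is normal AND the user is not
    fatigued; any abnormality or fatigue makes the overall result abnormal."""
    if duration_result == "normal" and fatigue_result == "not fatigued":
        return "normal"
    return "abnormal"
```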
For each subtask in the task, the cloud server may obtain the completion result of that subtask using the same method as for the first subtask, as shown in table two below:
Table two
(The content of table two is provided as an image in the original publication and is not reproduced here.)
As shown in table two above, it can be seen that the user did not learn effectively when performing the "spell" subtask and may have skipped it: the actual duration of the "spell" subtask is 2s, so the comprehensive detection result of the "spell" subtask is abnormal. Through table two, the user can accurately see the completion status of each subtask in the task, and can then learn in a targeted manner.
Referring to fig. 5, S306 may include steps 9) -10):
9) The business APP server side sends the information of the task and the user behavior data to the processing module.
10) The processing module obtains, according to the user behavior data, the first completion result of the user completing the first subtask.
For the manner in which the processing module obtains the first completion result of the user completing the first subtask, reference may be made to the foregoing related description.
S307, the cloud server sends the first completion result to the second device.
The cloud server may determine an identity of the first user, such as user ID1, according to the information of the task. The cloud server may determine the identifier of the second user according to a mapping relationship between the identifier of the first user and the identifier of the second user, and further send the first completion result to the second device of the second user.
Referring to fig. 5, S307 may include steps 11)-12):
11) The processing module sends the first completion result to the business APP server side.
12) The business APP server side sends the first completion result to the business APP management service.
S308, the second device displays the first completion result.
In the embodiment of the application, each time a user completes one first subtask on the first device, the second device may receive a first completion result of the first subtask from the cloud server. In other words, the user can see the completion result of each subtask on the second device in real time. For example, if a child completes a task on a first device (e.g., a smart learning tablet), a parent may view the completion of each sub-task on a second device (e.g., a cell phone) whenever the child completes one of the sub-tasks, and remote supervision may be implemented.
Illustratively, referring to fig. 7 and taking the first subtask as the "play" subtask, when the user completes the "play" subtask on the first device, the second device may receive the completion result of the "play" subtask from the cloud server. The second device may display the completion result of the "play" subtask, as shown in table three. It should be understood that fig. 7 illustrates the second device as a mobile phone; fig. 7 does not show all of the contents of table three, and illustrates the name of the subtask, the actual duration taken by the user to complete the subtask, and the fatigue detection result.
Table three
(The content of table three is provided as an image in the original publication and is not reproduced here.)
Accordingly, referring to fig. 5, S308 may include step 13):
13) The business APP management service displays the first completion result.
S308, the second device plays the composite video in response to an operation of the user.
The first completion result includes the video synthesis result, so the second device may display the video synthesis result while displaying the first completion result. Illustratively, as shown in fig. 7, the second device may display an identifier of the composite video, the identifier indicating the video synthesis result. In fig. 7, the identifier of the composite video is one of the video frames of the composite video, shown as video frame 71; for example, video frame 71 may be the first video frame of the composite video. The user may trigger the second device to play the composite video by operating on the video frame, and the manner in which the user operates on the video frame may include, but is not limited to: a click, a double click, a long press, and the like, which are not described in detail in the embodiment of the present application.
In an embodiment, after obtaining the first completion result, the cloud server may further send the first completion result to the first device. In this way, after the user completes the task on the first device, the first completion result of each first subtask in the task may be displayed, and the first completion result may be as shown in table two.
In one embodiment, some of the steps shown in FIG. 5 are optional steps, and the steps may be combined with each other.
In the embodiment of the application, in the process of completing the task by the user, the user behavior data of the user when completing each subtask can be acquired, and then the cloud server can analyze the result of completing each subtask by the user according to the user behavior data of the user when completing each subtask. On one hand, when the user completes one subtask, the cloud server can send the result of the subtask to the second device, so that the second device can output the result of the subtask in real time, and the granularity of result output is finer and more comprehensive. On the other hand, the user can see the result of each subtask on the second device in real time, which facilitates remote supervision.
In an embodiment, the first device may be provided with a service APP server and a processing module. In this embodiment, the first device may directly interact with the second device to implement the task processing method provided in this embodiment of the application. The first device may perform the operation performed by the cloud server in fig. 3, so that when the first device obtains the first completion result of the first subtask, the first completion result may be sent to the second device.
In summary, the execution subject of the task processing method provided in the embodiment of the present application may be the first device or the cloud server; fig. 8 takes the cloud server as the execution subject as an example. Referring to fig. 8, a task processing method provided in an embodiment of the present application may include:
S801, acquiring user behavior data in the process of a user executing a first subtask in a task.
The first device can collect user behavior data in the process that a user executes a first subtask in the task. When the execution subject is a cloud server, the first device may send the user behavior data to the cloud server, so that the cloud server may obtain the user behavior data in a process in which the user executes a first subtask among the tasks. The first device acquires the user behavior data, which may refer to the relevant description in S304.
S802, when the user finishes the first subtask, a first finishing result of the first subtask is obtained according to the user behavior data.
S803, the first completion result is sent to the second device.
It should be understood that the execution subject for executing S801-S803 may be the first device or the cloud server, and S801, S802, and S803 may refer to the relevant descriptions in S304, S306, and S307, respectively.
It should be understood that the task processing method provided in the embodiment of the present application has the same technical effects as the embodiment shown in fig. 3 and the embodiment shown in fig. 5, and is not described herein again.
In one embodiment, an initial fatigue detection model may be set in the cloud server. After the cloud server receives the user behavior data from the first device, the cloud server can update the initial fatigue degree detection model by taking the user behavior data corresponding to the task as training data according to the type of the task, so as to obtain fatigue degree detection models corresponding to different types of tasks.
For example, when a user executes a language learning task on the first device, the cloud server may receive first user behavior data from the first device; the cloud server may input the first user behavior data as training data to the initial fatigue detection model for training, and update the initial fatigue detection model to obtain a fatigue detection model corresponding to the language learning task. The fatigue detection model corresponding to the language learning task can be used to detect the fatigue of users performing the language learning task. Similarly, when a user executes a math learning task on the first device, the cloud server may receive second user behavior data from the first device; the cloud server may input the second user behavior data as training data to the initial fatigue detection model for training, and update the initial fatigue detection model to obtain a fatigue detection model corresponding to the math learning task. The fatigue detection model corresponding to the math learning task can be used to detect the fatigue of users performing the math learning task.
Therefore, fatigue detection models corresponding to different types of tasks can be stored in the cloud server, so that when a user executes a certain type of task on the first device, the cloud server can detect the fatigue of the user by adopting the fatigue detection model corresponding to the type of task, and the accurate fatigue of the user can be obtained.
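One way to organize per-task-type fatigue detection models is a registry keyed by task type, as sketched below. The class, its method names, and the sample-appending stand-in for real model training are all illustrative assumptions, not the patent's implementation.

```python
class FatigueModelRegistry:
    """Keeps one fatigue detection model per task type, all derived from a
    shared initial model."""

    def __init__(self, initial_model):
        self.initial = initial_model
        self.per_task = {}  # task type -> task-specific model

    def update(self, task_type, behavior_data):
        """On new user behavior data, update the model for that task type,
        starting from a copy of the initial model the first time. Appending
        samples stands in for actual training here."""
        model = self.per_task.setdefault(task_type,
                                         {**self.initial, "samples": []})
        model["samples"].append(behavior_data)
        return model

    def model_for(self, task_type):
        """Detection uses the task-specific model when one exists, otherwise
        falls back to the initial model."""
        return self.per_task.get(task_type, self.initial)

registry = FatigueModelRegistry(initial_model={"name": "initial"})
registry.update("language learning", {"voice": "...", "video": "..."})
```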
Illustratively, referring to fig. 5, the processing module in the cloud server may further perform step 14):
14) The processing module updates the fatigue detection model according to the user behavior data.
In the embodiment of the application, the cloud server can continuously update the fatigue detection model corresponding to a task according to the user behavior data collected while the user executes the task, so as to obtain a more accurate fatigue detection model suited to the task and improve the accuracy of fatigue detection.
In one embodiment, for the same user, the cloud server may continually optimize the durations of the subtasks in the task shown in Table One, rather than always employing the same durations. It should be appreciated that, for the same task, as the user's proficiency in each subtask increases, the time the user takes to complete each subtask decreases accordingly, so the cloud server can update the duration of each subtask in the task. For example, the cloud server may update the subtask durations shown in Table One to those in Table Four:
Table Four
[Table Four: updated subtask durations (figure BDA0003859639610000141, not reproduced)]
As shown in Table Four, as the user's proficiency in each subtask increases, the time taken to complete each subtask in the task of the pinyin-learning APP decreases accordingly, and the cloud server may correspondingly reduce the duration of each subtask. For example, taking the "play" subtask as an example, the cloud server may reduce the duration of the "play" subtask from 60s to 40s. In other words, the cloud server may update the duration criterion used to detect whether the user effectively completed the subtask, so as to perform this detection more accurately.
For example, after the cloud server updates the duration of each subtask in the task, the cloud server may update the duration range of each subtask, where the duration range is used to detect whether the user effectively completed the subtask. Illustratively, when the duration of the "play" subtask is reduced from 60s to 40s, the cloud server may correspondingly update the duration range of the "play" subtask from 30s-70s to 20s-50s. In this way, the cloud server can use the updated duration range of the "play" subtask to accurately obtain the effective learning duration evaluation result for the user's execution of the "play" subtask.
In this embodiment, the cloud server may update the duration of each subtask in the task in light of the actual time the user took to complete each subtask. Thus, for the same user completing the first subtask at different times, the duration of the first subtask used to obtain the effective learning duration evaluation result may differ, yielding a more accurate evaluation result. Likewise, for different users, the cloud server can store different detection standards for the effective learning duration evaluation result, so as to more accurately evaluate each subtask the user completes. In this way, for different users, the embodiment of the application can provide detection standards adapted to each user's habits, better fitting the user and improving detection accuracy.
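The duration-update and range-update logic of the preceding paragraphs can be sketched as follows. The exponential-smoothing update (and its `alpha` parameter) is an assumption; the "half to plus-10s" range rule is merely inferred from the 60s→30s-70s and 40s→20s-50s examples above and may not be the patent's actual rule.

```python
def update_subtask_duration(old_duration, actual_durations, alpha=0.5):
    """Blend the stored standard duration toward the user's recent actual
    completion times (exponential smoothing; alpha is an assumed parameter)."""
    avg = sum(actual_durations) / len(actual_durations)
    return round(old_duration + alpha * (avg - old_duration))


def effective_range(duration):
    """Derive the valid-completion window from the standard duration.
    The (duration/2, duration+10) rule reproduces the 60s -> 30s-70s and
    40s -> 20s-50s examples, but the exact rule is an assumption."""
    return (duration // 2, duration + 10)


def is_effective(actual_seconds, duration):
    # A subtask counts as effectively completed only if the actual time
    # falls inside the duration range.
    lo, hi = effective_range(duration)
    return lo <= actual_seconds <= hi


# The user has been finishing the "play" subtask faster than the 60s standard.
new_duration = update_subtask_duration(60, [45, 40, 35])
print(new_duration, effective_range(new_duration))
```

Because the stored duration is recomputed per user, two users (or the same user at different times) can be judged against different ranges, which is the per-user detection standard described above.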
Since the first device can provide a user with multiple types of tasks, the cloud server can store, for the same user, the completion results of the tasks of each type that the user completes. In one embodiment, the cloud server may integrate the completion results of the various types of tasks performed by the user and output one comprehensive result. The comprehensive result may include: the completion results of each type of task the user completed, and the suggestions provided by the cloud server. The suggestions may be used, for example, to indicate tasks the user completed well, tasks the user completed poorly, and follow-up learning suggestions for the user.
For example, the cloud server may synthesize the completion results of the tasks of the various types completed by the user at intervals of a preset duration. Alternatively, when the number of types of tasks completed by the user exceeds a threshold, the cloud server may synthesize the completion results of the tasks of the plurality of types, which is not limited in the embodiment of the present application.
In the embodiment of the application, the cloud server can synthesize the completion results of each type of task the user completes, summarize the user's performance across the various types of tasks, and give corresponding suggestions, so as to improve the user experience.
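A minimal sketch of synthesizing per-task-type completion results into one comprehensive result with suggestions. The `score` field and the suggestion wording are illustrative assumptions, not the patent's actual result format.

```python
def synthesize_results(completion_results):
    """Combine per-task-type completion results into one comprehensive report,
    flagging the best- and worst-completed task types (hypothetical format)."""
    best = max(completion_results, key=lambda r: r["score"])
    worst = min(completion_results, key=lambda r: r["score"])
    return {
        # Keep the individual completion results alongside the summary.
        "per_task": completion_results,
        "suggestions": [
            f"Well done on the {best['type']} task.",
            f"The {worst['type']} task needs more practice; "
            f"consider a follow-up review.",
        ],
    }


report = synthesize_results([
    {"type": "language learning", "score": 92},
    {"type": "math learning", "score": 61},
])
print(report["suggestions"][0])
```

In practice this synthesis would be triggered either on a preset schedule or once the number of completed task types exceeds a threshold, as the paragraph above describes.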
In an embodiment, the embodiment of the present application further provides an electronic device, which may be the first device, the second device, or the cloud server described in the foregoing embodiments. Referring to fig. 9, the electronic device may include: a processor 901 (e.g., a CPU) and a memory 902. The memory 902 may include a random-access memory (RAM) and a non-volatile memory (NVM), such as at least one disk memory, and the memory 902 may store various instructions for performing various processing functions and implementing the method steps of the present application.
Optionally, the electronic device related to the present application may further include: a power supply 903, a communication bus 904, and a communication port 905. The communication port 905 is used for implementing connection and communication between the electronic device and other peripherals. In an embodiment of the present application, the memory 902 is used for storing computer-executable program code, the program code comprising instructions; when the processor 901 executes the instructions, the instructions cause the processor 901 of the electronic device to perform the actions in the above method embodiments; the implementation principles and technical effects are similar and are not described herein again.
In one embodiment, a display screen 906 may also be included in the electronic device. The display screen 906 is used to display an interface of the electronic device.
It should be noted that the modules or components described in the above embodiments may be one or more integrated circuits configured to implement the above methods, for example: one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs), etc. For another example, when one of the above modules is implemented in the form of a processing element scheduling program code, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can call program code, such as a controller. As another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the present application are all or partially generated when the computer program instructions are loaded and executed on a computer. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that incorporates one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid-state disk (SSD)), among others.
The term "plurality" herein refers to two or more. The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship; in the formula, the character "/" indicates that the preceding and following related objects are in a relationship of "division". In addition, it is to be understood that the terms first, second, etc. in the description of the present application are used for distinguishing between the descriptions and not necessarily for describing a sequential or chronological order.
It is to be understood that the various numerical references referred to in the embodiments of the present application are merely for descriptive convenience and are not intended to limit the scope of the embodiments of the present application.
It should be understood that, in the embodiment of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiment of the present application.

Claims (11)

1. A task processing method, comprising:
acquiring user behavior data in the process that a user executes a first subtask in a task;
when the user finishes the first subtask, acquiring a first completion result of the first subtask according to the user behavior data;
and sending the first completion result to a second device.
2. The method of claim 1, wherein the obtaining user behavior data comprises:
receiving the user behavior data from a first device, the first device being a device on which the user performs the task, the user behavior data comprising: screen recording data of the user in the process of executing the first subtask, and a video of the user in the process of executing the first subtask.
3. The method of claim 2, wherein the first completion result comprises: a video synthesis result, and the screen recording data comprises: a desktop video of the first subtask;
the obtaining a first completion result of the first subtask according to the user behavior data includes:
synthesizing the desktop video of the first subtask and the video of the user to obtain the video synthesis result.
4. The method of claim 2, wherein the first completion result comprises: a fatigue detection result, and the screen recording data comprises: the voice of the user;
the obtaining a first completion result of the first subtask according to the user behavior data includes:
inputting the voice of the user and the video of the user into a fatigue detection model to obtain the fatigue detection result.
5. The method of claim 2, wherein the first completion result further comprises: an effective learning duration evaluation result;
the obtaining a first completion result of the first subtask according to the user behavior data includes:
acquiring the time length for the user to complete the first subtask according to the desktop video of the first subtask;
and obtaining the effective learning duration evaluation result according to the duration of the first subtask and the duration used by the user to complete the first subtask.
6. The method of claim 5, wherein prior to obtaining user behavior data, further comprising:
receiving information of the task from the first device, wherein the information of the task comprises the duration of the first subtask, and the information of the task is sent when the user starts to execute the task on the first device.
7. The method of any of claims 2-6, wherein after obtaining the first completion result for the first subtask, further comprising:
sending the first completion result to the first device to display the first completion result of each first subtask in the task by the first device when the user completes the task on the first device.
8. The method of claim 4, further comprising:
updating the fatigue detection model by taking the voice of the user and the video of the user as training data to obtain a fatigue detection model corresponding to the task, wherein the fatigue detection model corresponding to the task is used for obtaining a fatigue detection result when the user executes a task of the same type as the task.
9. The method of claim 5 or 6, further comprising:
updating the duration of the first subtask according to the duration used by the user to complete the first subtask, wherein the updated duration of the first subtask is used for obtaining an effective learning duration evaluation result.
10. An electronic device, comprising: a processor and a memory;
the memory stores computer instructions;
the processor executing the computer instructions stored by the memory causes the processor to perform the method of any of claims 1-9.
11. A computer-readable storage medium, in which a computer program or instructions are stored which, when executed, implement the method of any one of claims 1-9.
CN202211157924.3A 2022-09-22 2022-09-22 Task processing method, electronic device and readable storage medium Active CN115658255B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211157924.3A CN115658255B (en) 2022-09-22 2022-09-22 Task processing method, electronic device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211157924.3A CN115658255B (en) 2022-09-22 2022-09-22 Task processing method, electronic device and readable storage medium

Publications (2)

Publication Number Publication Date
CN115658255A true CN115658255A (en) 2023-01-31
CN115658255B CN115658255B (en) 2023-06-27

Family

ID=84984968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211157924.3A Active CN115658255B (en) 2022-09-22 2022-09-22 Task processing method, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN115658255B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050187915A1 (en) * 2004-02-06 2005-08-25 Barbara De Lury Systems, methods and apparatus to determine relevance of search results in whole/part search
US20110301433A1 (en) * 2010-06-07 2011-12-08 Richard Scott Sadowsky Mental state analysis using web services
CN108711320A (en) * 2018-08-06 2018-10-26 北京导氮教育科技有限责任公司 A kind of network-based immersion on-line education system and method
CN112131977A (en) * 2020-09-09 2020-12-25 湖南新云网科技有限公司 Learning supervision method and device, intelligent equipment and computer readable storage medium
CN112306832A (en) * 2020-10-27 2021-02-02 北京字节跳动网络技术有限公司 User state response method and device, electronic equipment and storage medium
CN112511818A (en) * 2020-11-24 2021-03-16 上海哔哩哔哩科技有限公司 Video playing quality detection method and device
CN112783330A (en) * 2021-03-16 2021-05-11 展讯通信(上海)有限公司 Electronic equipment operation method and device and electronic equipment
CN113949933A (en) * 2021-09-30 2022-01-18 卓尔智联(武汉)研究院有限公司 Playing data analysis method, device, equipment and storage medium
CN115067945A (en) * 2022-08-22 2022-09-20 深圳市海清视讯科技有限公司 Fatigue detection method, device, equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PI Z et al.: "Learning process and learning outcomes of video podcasts including the instructor and PPT slides: A Chinese case", Innovations in Education and Teaching International *
XIA LING: "Clever Use of Learning APPs in the Context of 'Internet + Education'", Educational Practice and Research *

Also Published As

Publication number Publication date
CN115658255B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
CN108984389B (en) Application program testing method and terminal equipment
JP2008547128A5 (en)
CN110442519B (en) Crash file processing method and device, electronic equipment and storage medium
CN110221959B (en) Application program testing method, device and computer readable medium
US11813538B2 (en) Videogame telemetry data and game asset tracker for session recordings
US20140157147A1 (en) Feedback system, feedback method and recording media thereof
CN111918386B (en) Positioning method, positioning device, storage medium and electronic equipment
US11475387B2 (en) Method and system for determining productivity rate of user in computer-implemented crowd-sourced environment
CN115658255B (en) Task processing method, electronic device and readable storage medium
CN112416751A (en) Processing method and device for interface automation test and storage medium
CN115118687B (en) Message pushing method and device, storage medium and computer equipment
WO2020093613A1 (en) Page data processing method and apparatus, storage medium, and computer device
CN116611401A (en) Document generation method and related device, electronic equipment and storage medium
CN116186400A (en) Front-end page refined region dotting method and system
WO2019227633A1 (en) Methods and apparatuses for establishing user profile and establishing state information analysis model
CN109933260A (en) Know screen method, apparatus, terminal and storage medium
CN109189523A (en) A kind of method, system and the method for closing virtual machine of judgement idle virtual machine
CN112951013B (en) Learning interaction method and device, electronic equipment and storage medium
CN111796846B (en) Information updating method, device, terminal equipment and readable storage medium
WO2021142607A1 (en) Vehicle diagnosis process playback method, apparatus, and readable storage medium
CN114036074A (en) Test method and test device for terminal equipment
US11810022B2 (en) Contact center call volume prediction
CN112948017A (en) Guide information display method, device, terminal and storage medium
CA3119490A1 (en) Contact center call volume prediction
TWI715193B (en) Remote invigilation system and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant