CN111179128A - Information playing method, speaker device and storage medium - Google Patents
- Publication number
- CN111179128A (application number CN201911037433.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- reviewed
- played
- information
- multimedia
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/903—Querying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Abstract
The embodiments of the present application disclose an information playing method, a speaker device, and a storage medium. The method in the embodiments of the present application includes the following steps: receiving an image to be reviewed sent by a wearable device; sending the image to be reviewed to a server, where the image to be reviewed is used by the server to acquire multimedia analysis information of a test question to be played when the server detects that the image to be reviewed includes the test question to be played; receiving the test question to be played and the multimedia analysis information sent by the server; and playing the test question to be played and the multimedia analysis information through a multimedia player. In this way, the wearable device collects the image to be reviewed, and the image collection process is simplified through information interaction between the wearable device and the speaker device, thereby improving review efficiency. In addition, the speaker device plays the multimedia analysis information of the test question to be played in a targeted manner, which effectively reduces device loss and improves user experience.
Description
Technical Field
The present application relates to the field of multimedia technologies, and in particular, to an information playing method, a speaker device, and a storage medium.
Background
At present, in the process of correcting a test paper, the correction can be performed in two ways: one is manual correction, which is inefficient; the other is to manually photograph each test question in the test paper image through a mobile terminal and upload the test questions one by one to a review server so that the questions are corrected by the review server, which still requires manual shooting and remains low in intelligence.
Disclosure of Invention
Embodiments of the present application provide an information playing method, a speaker device, and a storage medium, in which a wearable device collects an image to be reviewed, and the image collection process is simplified through information interaction between the wearable device and the speaker device, thereby improving review efficiency. In addition, the speaker device plays the multimedia analysis information of the test question to be played in a targeted manner, which effectively reduces device loss and improves user experience.
According to a first aspect of embodiments of the present application, an information playing method is provided, where the method is applied to a speaker device, the speaker device is configured with a multimedia player, and the method includes:
receiving an image to be reviewed sent by a wearable device;
sending the image to be reviewed to a server, wherein the image to be reviewed is used for acquiring multimedia analysis information of the test questions to be played when the server detects that the image to be reviewed comprises the test questions to be played;
receiving the test questions to be played and the multimedia analysis information sent by the server;
and playing the test questions to be played and the multimedia analysis information through the multimedia player.
Optionally, the method further comprises:
acquiring a user image of a target user wearing the wearable device;
performing user authentication according to the user image to obtain a user authentication result;
the sending the image to be reviewed to a server includes:
and if the user authentication result comprises user authentication success, sending the image to be reviewed to a server.
Optionally, in a case that the image to be reviewed includes a plurality of test questions, the method further includes:
acquiring the test question completion degree of the image to be reviewed;
the sending the image to be reviewed to a server includes:
and sending the image to be reviewed to a server if the test question completion degree is greater than or equal to a preset completion degree threshold value.
Optionally, the multimedia analysis information includes one of the following types: audio type, video type, and text type.
Optionally, the playing, by the multimedia player, the test question to be played and the multimedia analysis information includes:
determining the information type of the multimedia analysis information;
under the condition that the information type comprises an audio type, playing the test questions to be played and the audio information through an audio player;
or,
under the condition that the information type comprises a video type, playing the test question to be played and the video information through a display and an audio player;
or,
and under the condition that the information type comprises a text type, playing the test questions to be played and the text information through a display.
Optionally, in the case that the information type of the multimedia analysis information includes the audio type or the video type, the method further includes:
receiving a volume adjustment instruction sent by the wearable device;
and adjusting the volume of the test questions to be played and the multimedia analysis information according to the volume adjustment instruction.
According to a second aspect of embodiments of the present application, a speaker device configured with a multimedia player is provided, the speaker device including:
the receiving module is used for receiving an image to be reviewed, which is sent by the wearable device;
the sending module is used for sending the image to be reviewed to a server, and the image to be reviewed is used for acquiring multimedia analysis information of the test questions to be played when the server detects that the image to be reviewed comprises the test questions to be played;
the receiving module is also used for receiving the test questions to be played and the multimedia analysis information sent by the server;
and the playing module is used for playing the test questions to be played and the multimedia analysis information through the multimedia player.
Optionally, the speaker device further includes: a processing module;
the processing module is used for acquiring a user image of a target user wearing the wearable device; and
performing user authentication according to the user image to obtain a user authentication result;
the sending module is further configured to send the image to be reviewed to the server if the user authentication result includes user authentication success.
Optionally, in a case that the image to be reviewed includes a plurality of test questions, the processing module is further configured to obtain a test question completion degree of the image to be reviewed;
the sending module is further configured to send the image to be reviewed to the server when the test question completion degree is greater than or equal to a preset completion degree threshold.
Optionally, the multimedia analysis information includes one of the following types: audio type, video type, and text type.
Optionally, the playing module is further configured to determine an information type of the multimedia analysis information;
under the condition that the information type comprises an audio type, playing the test questions to be played and the audio information through an audio player;
or,
under the condition that the information type comprises a video type, playing the test question to be played and the video information through a display and an audio player;
or,
and under the condition that the information type comprises a text type, playing the test questions to be played and the text information through a display.
Optionally, in the case that the information type of the multimedia analysis information includes the audio type or the video type;
the receiving module is further configured to receive a volume adjustment instruction sent by the wearable device;
and the processing module is further used for adjusting the volume of the test questions to be played and the multimedia analysis information according to the volume adjustment instruction.
According to a third aspect of embodiments of the present application, a speaker device is provided, the speaker device including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the steps of the information playing method according to the first aspect.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the steps of the information playing method according to the first aspect.
According to a fifth aspect of the embodiments of the present application, there is provided a computer program product, which, when running on a computer, causes the computer to execute the steps of the information playing method according to the first aspect of the embodiments of the present application.
According to the technical scheme, the embodiment of the application has the following advantages:
receiving an image to be reviewed sent by a wearable device; sending the image to be reviewed to a server, where the image to be reviewed is used by the server to acquire multimedia analysis information of the test question to be played when the server detects that the image to be reviewed includes the test question to be played; receiving the test question to be played and the multimedia analysis information sent by the server; and playing the test question to be played and the multimedia analysis information through the multimedia player. In this way, the wearable device collects the image to be reviewed, and the image collection process is simplified through information interaction between the wearable device and the speaker device, thereby improving review efficiency. In addition, the speaker device plays the multimedia analysis information of the test question to be played in a targeted manner, which effectively reduces device loss and improves user experience.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be derived from these drawings.
Fig. 1 is a schematic flowchart illustrating a first information playing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart illustrating a second information playing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart illustrating a third information playing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart illustrating a fourth information playing method according to an embodiment of the present application;
fig. 5 is a block diagram illustrating a first speaker device according to an embodiment of the present application;
fig. 6 is a block diagram illustrating a second speaker device according to an embodiment of the present application;
fig. 7 is a block diagram of a third speaker device according to an embodiment of the present application.
Detailed Description
To enable a person skilled in the art to better understand the present application, the technical solutions in the embodiments of the present application are described below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some embodiments of the present application, not all of them; all other embodiments obtained based on the embodiments in the present application shall fall within the protection scope of the present application.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the examples and figures of the present application are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiments of the present application may be applied to the following exemplary scenarios:
scene one: the student tests through the test paper at home, and at this moment, the head of a family is usually required to correct the test paper. Considering that the degree of mastering the questions in the test paper by parents may be low, the questions need to be searched by means of a question searching application installed on the terminal device so as to obtain the search results of the questions. At present, in the title search process, parents are required to manually start a title search application program of terminal equipment, a camera is started on a title search page included in the title search application program, then the camera is aligned to a title to be searched, and finally a shooting control is triggered to search the title.
Scene two: schools typically require students to be evaluated at intervals due to the large number of students in a class. Therefore, the teacher needs to spend a lot of time in the process of evaluating the test paper for correction, so that the correction efficiency is low. Moreover, the teacher needs to explain the correction result in a wrong way. At this time, the teacher is required to perform wrong statistics on the test paper of all students, so that the problem of low statistical efficiency is caused. The above scenarios are merely exemplary and are not intended to be limiting.
In summary, embodiments of the present application provide an information playing method, a speaker device, and a storage medium, where a wearable device is used to collect an image to be reviewed, and the image collection process is simplified through information interaction between the wearable device and the speaker device, thereby improving review efficiency. In addition, the speaker device plays the multimedia analysis information of the test question to be played in a targeted manner, which effectively reduces device loss and improves user experience.
The present application will be described in detail with reference to examples.
Example one
Fig. 1 is a schematic flowchart of an information playing method according to an embodiment of the present application. As shown in Fig. 1, the method is applied to a speaker device configured with a multimedia player, and may include the following steps:
101. an image to be reviewed sent by the wearable device is received.
By way of example, wearable devices may include, but are not limited to, wrist-worn devices (e.g., watches and wristbands), foot-worn devices (e.g., shoes, socks, or other products worn on the legs), head-worn devices (e.g., glasses, helmets, and headbands), and various non-mainstream product forms such as smart clothing, school bags, crutches, and accessories.
In the embodiment of the application, the image collector is configured on the wearable device, so that the image collector of the wearable device can be controlled to perform image collection operation. Optionally, the image collector arranged on the wearable device may be angularly adjustable, so that the image to be reviewed may be collected by adjusting the angle of the image collector.
Further, the image acquisition operation of the image acquirer of the wearable device can be controlled in the following ways:
in the first mode, in the case that a voice receiver (such as a microphone) and a processor are arranged on the wearable device; the processor determines first text information to be recognized corresponding to the first voice information under the condition that the voice receiver receives the first voice information; judging whether the first text information to be recognized comprises preset shooting words or not; and when the first text information to be recognized comprises the preset shooting words, controlling the image collector to carry out image collection operation. Illustratively, the preset photographing words may be "take a picture", and the like.
In the second mode, the wearable device is provided with a fingerprint input device and a processor. The processor controls the image collector to perform the image collection operation when the fingerprint input device receives a preset input operation. Illustratively, the preset input operation includes a click operation, a long-press operation, a slide operation, or the like.
In the third mode, the wearable device is provided with an image collector and a processor, and the following cases are further distinguished:
(1) The processor controls the image collector to perform the image collection operation when the image collector captures a preset shooting gesture. For example, the preset shooting gesture may be an "ok" gesture, a fist gesture, or the like.
(2) The image collector is rotatable. In this case, the image collector initially faces the eyes of the user wearing the wearable device so as to collect an eye image of the user. The present application may determine the rotation direction of at least one pupil included in the eye image, and, when the rotation direction is a preset direction, the processor controls the image collector to rotate so that it faces away from the eyes of the user wearing the wearable device, and then controls the image collector to perform the image collection operation on the image to be reviewed.
(3) The image collector includes two image collection units. One image collection unit faces the eyes of the user wearing the wearable device so as to collect an eye image of the user, and the other faces away from the user's eyes so as to perform the image collection operation on the image to be reviewed. In this way, when the eye image collected by the first image collection unit includes at least one pupil and the rotation direction of the at least one pupil is the preset direction, the processor controls the other image collection unit to perform the image collection operation on the image to be reviewed.
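For illustration only (this code is not part of the original disclosure), the following Python sketch shows one possible reading of the first trigger mode above: the recognized text is checked against an assumed list of preset shooting words before the image collection operation is triggered. The function names, keyword list, and the `image_collector` object are assumptions.

```python
# Minimal sketch of the first trigger mode, under assumed names: look for a
# preset shooting word in the recognized text, then trigger image collection.
PRESET_SHOOTING_WORDS = ("take a picture", "take photo")  # assumed keyword list

def contains_shooting_word(recognized_text: str) -> bool:
    """Return True when the first to-be-recognized text includes a preset shooting word."""
    text = recognized_text.strip().lower()
    return any(word in text for word in PRESET_SHOOTING_WORDS)

def on_first_voice_information(recognized_text: str, image_collector) -> None:
    # image_collector stands in for the camera on the wearable device.
    if contains_shooting_word(recognized_text):
        image_collector.capture()  # perform the image collection operation
```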
In an alternative embodiment, the wearable device has a pre-established binding relationship with the speaker device. The wearable device may be provided with a memory, so that the device identifier of the bound speaker device can be stored in the memory in advance and the image to be reviewed can be sent to the speaker device according to the device identifier.
It should be noted that the wearable device may be bound with a plurality of candidate speaker devices in advance (for example, the wearable device is bound with a candidate speaker device at home, a candidate speaker device at school, and the like). Therefore, in order to play the information accurately, the speaker device may be determined from the plurality of candidate speaker devices in the following manners, which are not limiting:
and acquiring position information corresponding to the candidate loudspeaker box devices respectively, and determining the loudspeaker box devices from the candidate loudspeaker box devices according to the position information. For example, the candidate speaker device closest to the wearable device may be determined to be the speaker device.
Further, at least one positioning interface is configured in the terminal device (i.e. the wearable device or any one of the candidate speaker devices). In this way, at least one positioning interface can be triggered to send a positioning request to a corresponding positioning server, and position information sent by the positioning server corresponding to the at least one positioning interface is acquired.
When the at least one positioning interface includes a plurality of positioning interfaces, the response time of each positioning interface from a first time to a second time may be obtained, where the first time may be a time when each positioning interface sends a positioning request, and the second time may be a time when each positioning interface receives location information; comparing the response time corresponding to each positioning interface with a response threshold; and according to the comparison result, acquiring the candidate positioning interfaces of which the response time does not exceed the response threshold, and acquiring the target positioning interface from the candidate positioning interfaces, so as to acquire the position information of the terminal equipment based on the position information received by the target positioning interface.
Optionally, the target positioning interface may be a candidate positioning interface corresponding to the minimum response time, or the candidate positioning interfaces may be sorted in the order from the smaller response time to the larger response time, and the candidate positioning interface with the sorting number smaller than or equal to the preset sequence number threshold may be used as the target positioning interface. For example, the average value of the position information received by the target positioning interface may be calculated to obtain the position information of the terminal device. Therefore, the accuracy of the geographic information can be improved by arranging different positioning interfaces.
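As a hedged illustration of the positioning-interface selection described above (not part of the original disclosure), the following Python sketch filters candidate positioning interfaces by response time, keeps the top-ranked ones as target interfaces, and averages the position information they return. The data structure, threshold, and ranking limit are assumptions.

```python
# Sketch: select target positioning interfaces by response time and average
# the position information they return; thresholds are assumed values.
from dataclasses import dataclass
from statistics import mean

@dataclass
class PositioningResult:
    interface_id: str
    response_time_s: float   # second time minus first time
    latitude: float
    longitude: float

def estimate_position(results, response_threshold_s=1.0, max_rank=3):
    # Keep candidate interfaces whose response time does not exceed the threshold.
    candidates = [r for r in results if r.response_time_s <= response_threshold_s]
    # Sort by response time; the first max_rank candidates are the target interfaces.
    targets = sorted(candidates, key=lambda r: r.response_time_s)[:max_rank]
    if not targets:
        return None
    # Average the received position information to obtain the device position.
    return (mean(r.latitude for r in targets), mean(r.longitude for r in targets))
```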
Of course, the speaker device may also be determined from the plurality of candidate speaker devices in other manners. Exemplarily, second voice information is received, second to-be-recognized text information corresponding to the second voice information is determined, a target device identifier included in the second to-be-recognized text information is recognized, and the candidate speaker device corresponding to the target device identifier is determined to be the speaker device.
For example, if the wearable device is bound with a candidate speaker device at home and a candidate speaker device at school, the device identifier of the candidate speaker device at home is "AA", and the device identifier of the candidate speaker device at school is "BB", then when the received second voice information includes "send to AA", the candidate speaker device at home is determined to be the speaker device. The above is merely an example and does not limit the present application.
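The two ways of choosing the speaker device from the candidates (by position and by a spoken device identifier) can be sketched as follows. This is an illustrative, assumption-laden sketch rather than the original implementation; the distance metric and identifier matching are simplifications.

```python
# Sketch: determine the speaker device from candidate speaker devices either
# by distance to the wearable device or by a device identifier ("AA", "BB")
# recognized in the second voice information.
import math

def nearest_candidate(wearable_position, candidates):
    """candidates: dict mapping device identifier -> (latitude, longitude)."""
    def distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(candidates, key=lambda dev: distance(wearable_position, candidates[dev]))

def candidate_from_voice(recognized_text, candidates):
    # e.g. "send to AA" selects the candidate whose identifier is "AA".
    for device_id in candidates:
        if device_id in recognized_text:
            return device_id
    return None
```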
102. And sending the image to be reviewed to a server, wherein the image to be reviewed is used for acquiring multimedia analysis information of the test questions to be played when the server detects that the image to be reviewed includes the test questions to be played.
In the embodiment of the present application, the test question to be played may be a question with a wrong answer, or a question with a high answer error rate (i.e., a question with an answer error rate greater than or equal to a preset error rate threshold), and so on. The following describes the process of determining the test questions to be played in different cases:
In the first case, when the test question to be played includes a wrongly answered question, the test question to be played can be obtained through the following steps:
step 1, since the image to be reviewed may include a plurality of test questions, and there is usually a division mark between each test question. Therefore, the server can divide the plurality of test questions in the image to be reviewed through the division identifier to obtain the test question area of each test question. Illustratively, the division identification may be a test question number, or the like.
Step 2 to step 5 are respectively executed for each test question area, and a target test question area is taken as an example for explanation, wherein the test question area comprises the target test question area:
and 2, carrying out region division on the target test question region to obtain a test question subregion and an answer subregion. The characters included in the test question sub-area are usually print characters, and the characters included in the answer sub-area are handwritten characters, so that the character types of the characters included in the target test question area can be determined through a font recognition model, and the target test question area is divided into the test question sub-area and the answer sub-area according to the character types. The character recognition model is a model obtained by training in advance according to the print body sample and the handwriting body sample.
And 3, acquiring a reference answer according to the test question characters in the test question sub-area. If a plurality of candidate reference answers are obtained according to the test question characters, the reference answers can be determined according to evaluation parameters of the candidate reference answers. Illustratively, the evaluation parameter may include, but is not limited to, one of: a utilization rate, an evaluation value, and the like.
Optionally, under the condition that the evaluation parameters are in direct proportion to the correctness of the candidate reference answers, if the test question included in the target test question area is determined to be an objective question according to the test question characters, the candidate reference answer with the highest evaluation parameter can be determined to be the reference answer. Alternatively, if the test question included in the target test question area is determined to be a subjective question according to the test question characters, the candidate reference answers are sorted according to the evaluation parameters to obtain a ranking of the candidate reference answers, where a larger evaluation parameter places a candidate closer to the front and a smaller evaluation parameter places it closer to the back; therefore, the candidate reference answers whose ranking is less than or equal to a preset ranking threshold can be determined to be the reference answers.
And 4, acquiring the matching degree between the reference answer and the answer character included in the answer sub-area.
If the test questions included in the target test question area are determined to be objective questions according to the test question characters, the matching degree is 0 or 100%.
If the test questions included in the target test question area are determined to be subjective questions according to the test question characters, under the condition that the reference answers include a single answer, semantic analysis needs to be performed on the reference answers and the answer characters respectively, and the semantic analysis results of the reference answers are matched with the semantic analysis results of the answer characters to obtain the matching degree; or, under the condition that the reference answer includes multiple answers, performing semantic analysis on each reference answer and each answer character, and matching the semantic analysis result of the target reference answer with the semantic analysis result of the answer character to obtain the matching degree between the target reference answer and the answer character, where the reference answer includes the target reference answer.
And 5, under the condition that the matching degree meets the preset matching condition, determining the target test question included in the target test question area as the test question to be played.
If the test questions included in the target test question area are determined to be objective questions according to the test question characters, the preset matching condition includes: the matching degree is 0 (that is, the answer is wrong).
If the test questions included in the target test question area are determined to be subjective questions according to the test question characters, the preset matching condition includes that under the condition that the reference answer includes a single answer: the matching degree is less than or equal to a preset matching degree threshold value; or, in the case that the reference answer includes a plurality of answers, the preset matching condition includes: the ratio of the number of the target matching degrees to the number of the reference answers is greater than or equal to a preset proportion threshold, and the target matching degrees comprise matching degrees which are less than or equal to a preset matching degree threshold.
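Steps 3 to 5 of the first case can be summarized by the following Python sketch. It is only an illustrative reading, not the original implementation; the matching model, the thresholds, and the treatment of an objective question as wrong when its matching degree is 0 are assumptions.

```python
# Sketch of steps 3-5: compare the answer characters against the reference
# answer(s) and decide whether the target test question becomes a test question
# to be played. match_fn is an assumed semantic-matching model in [0, 1].
def is_question_to_play(question_type, reference_answers, answer_text,
                        match_fn, match_threshold=0.6, ratio_threshold=0.5):
    degrees = [match_fn(ref, answer_text) for ref in reference_answers]
    if question_type == "objective":
        # Objective questions match exactly (1.0) or not at all (0.0);
        # a wrongly answered question is selected for playing.
        return degrees[0] == 0.0
    # Subjective questions: count reference answers with a low matching degree.
    low_matches = sum(1 for d in degrees if d <= match_threshold)
    if len(reference_answers) == 1:
        return low_matches == 1
    return low_matches / len(reference_answers) >= ratio_threshold
```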
In the second case, when the test questions to be played include test questions with a high answer error rate, the test questions to be played are obtained through the following steps:
step 1, the server can divide the multiple test questions in the image to be reviewed through the dividing marks to obtain the test question area of each test question. The detailed process may refer to the description in case one, and is not described herein again.
Steps 2 to 4 are performed for each test question area; a target test question area is taken as an example for the following description, and the test question areas include the target test question area.
And 2, acquiring the number of times of wrong answers of the target test question included in the target test question area.
In the embodiment of the present application, the target test question belongs to a wrong answer test question, wherein the method for determining the target test question as the wrong answer test question may refer to the process for determining the test question to be played described in the first case, and details are not described herein again. The following method for determining the number of times of answering errors is described for different scenes:
in the first scenario, the same object performs multiple tests on the target test question in different time periods, so that the number of times of answer errors of the target test question can be determined according to the test results of the same object in the multiple tests.
For example, a certain student performs the i-th test on the target test question; in this case, the number of answer errors is the number of times the student has answered the target test question incorrectly over the i tests, where i is a positive integer. In this scenario, the wearable device may be worn by the student, or the wearable device may be worn by the student's parent.
It can be understood that the speaker device can store the test results of the previous i-1 tests of the target test question, determine the historical number of errors of the target test question over the i-1 tests according to those test results, and, when the target test question is answered incorrectly in the i-th test, calculate the sum of the historical number of errors and 1 to obtain the number of answer errors of the target test question.
And in the second scenario, different objects test the target test question in the same time period, so that the number of times of wrong answer of the target test question can be determined according to the test results of the different objects on the target test question.
Illustratively, j students in a certain class are tested against the same test paper. At this time, for the target test question, the number of wrong answers may be the number of students who wrongly answer the target test question in the certain class, the same test paper includes the target test question, and j is a positive integer.
In this scenario, the wearable device may be worn by the teacher of the class; the teacher collects each student's test paper in turn through the wearable device, and the wearable device sends the collected image to be reviewed of each test paper to the speaker device. Of course, each of the j students in the class may also wear a wearable device; in that case, each student collects his or her own test paper through the worn wearable device, and the wearable device sends the collected image to be reviewed of the test paper to the speaker device.
It can be understood that, in the second scenario, the test questions to be played may be obtained in the following manners, but not limited to:
one possible implementation is: the sound box equipment can store the test results of j students aiming at the same test paper, determine the number of students with wrong target test questions according to the test results, and determine the number of the students as the number of wrong target test questions.
Another possible implementation is: the speaker device acquires the images to be reviewed of different students at different times, so the historical number of students who answered the target test question incorrectly can be determined in advance from the previously acquired images to be reviewed; it is then determined whether the target test question included in the current image to be reviewed is answered incorrectly, and if so, the sum of the historical number of students and 1 is calculated to obtain the number of answer errors of the target test question. It should be noted that when it is determined from the number of answer errors that the target test question does not belong to the test questions to be played, the number of answer errors of the target test question is recorded as the historical number of students who answered the target test question incorrectly.
And 3, calculating the answer error rate of the target test question according to the answer error times.
Continuing with the example in step 2: in the first scenario, the answer error rate of the target test question is the ratio of the number of answer errors to the number of tests; in the second scenario, the answer error rate of the target test question is the ratio of the number of answer errors to the number of objects.
And 4, determining the target test question as the test question to be played under the condition that the answer error rate of the target test question is greater than or equal to a preset error rate threshold value.
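The error-rate computation in the second case can be illustrated with the following Python sketch; the threshold value and the function boundaries are assumptions rather than part of the original disclosure.

```python
# Sketch of steps 2-4 in the second case: compute the answer error rate of a
# target test question and decide whether it becomes a test question to be played.
def error_rate_same_student(historical_errors, wrong_in_current_test, total_tests):
    # Scenario one: the same student takes i tests in different time periods.
    error_count = historical_errors + (1 if wrong_in_current_test else 0)
    return error_count / total_tests

def error_rate_same_paper(wrong_student_count, student_count):
    # Scenario two: j students answer the same test paper in one time period.
    return wrong_student_count / student_count

def should_play(answer_error_rate, preset_error_rate_threshold=0.3):
    # The preset error rate threshold is an assumed value for illustration.
    return answer_error_rate >= preset_error_rate_threshold
```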
It should be noted that the multimedia analysis information in the present application is the reference answer of the test question to be played. The multimedia analysis information includes one of the following: audio information, video information, and text information.
103. And receiving the test questions to be played and the multimedia analysis information sent by the server.
104. And playing the test questions to be played and the multimedia analysis information through the multimedia player.
In the embodiment of the application, the information type of the multimedia analysis information is determined. Under the condition that the information type includes an audio type, the multimedia player includes an audio player; in this case, the test question to be played and the audio information are played through the audio player in this step.
Or,
in case the information type comprises a video type, the multimedia player comprises a display and an audio player. At this time, the test question to be played and the video information are played through the display and the audio player.
Or,
in case the information type comprises a text type, the multimedia player comprises a display. At this time, the test questions to be played and the text information are played through the display.
It can be seen that the speaker device has functions such as audio playing and image display, so multimedia analysis information of the video type (or the text type) can be displayed visually, which achieves diversity in information playing.
It should be noted that presenting the test question together with the analysis information helps the user to understand. Optionally, the play type of the test question to be played can be determined according to the information type of the multimedia analysis information. Illustratively, if the information type includes the audio type, the play type of the test question to be played is the audio type; if the information type includes the text type, the play type of the test question to be played is the text type; and if the information type includes the video type, the play type of the test question to be played is the text type or the audio type.
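The dispatch of step 104 by information type, together with the choice of play type for the test question, can be sketched as follows. The player objects and method names are assumptions used for illustration only.

```python
# Sketch of step 104: play the test question and the multimedia analysis
# information through the audio player and/or display according to the
# information type. audio_player and display are assumed player objects.
def play(question, analysis_info, info_type, audio_player, display):
    if info_type == "audio":
        audio_player.play(question)        # question is also played as audio
        audio_player.play(analysis_info)
    elif info_type == "video":
        display.show(question)             # question shown as text (or read as audio)
        display.show(analysis_info)        # video frames on the display ...
        audio_player.play(analysis_info)   # ... with the sound track on the audio player
    elif info_type == "text":
        display.show(question)
        display.show(analysis_info)
```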
By implementing this embodiment, the image to be reviewed sent by the wearable device is received; the image to be reviewed is sent to the server, where the image to be reviewed is used by the server to acquire the multimedia analysis information of the test question to be played when the server detects that the image to be reviewed includes the test question to be played; the test question to be played and the multimedia analysis information sent by the server are received; and the test question to be played and the multimedia analysis information are played through the multimedia player. In this way, the wearable device collects the image to be reviewed, and the image collection process is simplified through information interaction between the wearable device and the speaker device, thereby improving review efficiency. In addition, the speaker device plays the multimedia analysis information of the test question to be played in a targeted manner, which effectively reduces device loss and improves user experience.
Example two
Fig. 2 is a schematic flowchart of an information playing method according to an embodiment of the present application. As shown in Fig. 2, the method is applied to a speaker device configured with a multimedia player, and the method includes:
201. an image to be reviewed sent by the wearable device is received.
For details, reference may be made to step 101, which is not described herein again.
Consider that user 1 may, without permission, use the wearable device of user 2 to play information through the speaker device, thereby causing power loss to the speaker device and reducing the privacy of the device's use.
202. Acquiring a user image of a target user wearing the wearable device.
In the embodiment of the application, the wearable device is provided with an image collector, and the specific position of the image collector is not specially limited in the present application. Therefore, in this step, the user image of the target user can be collected through the image collector on the wearable device, and the wearable device then sends the user image to the speaker device.
The user image may include a face image or an iris image, and the like.
203. And carrying out user authentication according to the user image to obtain a user authentication result.
It can be understood that the speaker device may store a preset user image of the wearable device in advance, so that user authentication can be performed according to the user image and the preset user image to obtain a user authentication result.
Wherein, the user authentication result comprises user authentication success or user authentication failure.
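Steps 202 and 203 can be illustrated with the following Python sketch, in which the collected user image is compared against the preset user image stored on the speaker device through an assumed embedding model; the similarity threshold is likewise an assumption.

```python
# Sketch of steps 202-203: authenticate the target user by comparing the user
# image with the preset user image. embed() is an assumed face/iris embedding model.
import numpy as np

def authenticate(user_image, preset_user_image, embed, similarity_threshold=0.8):
    a, b = embed(user_image), embed(preset_user_image)
    similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return "success" if similarity >= similarity_threshold else "failure"
```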
204. And sending the image to be reviewed to a server under the condition that the user authentication result includes user authentication success, wherein the image to be reviewed is used by the server to acquire the multimedia analysis information of the test question to be played when the server detects that the image to be reviewed includes the test question to be played.
The detailed process can refer to step 102, which is not described herein.
205. And receiving the test questions to be played and the multimedia analysis information sent by the server.
For details, reference may be made to step 103, which is not described herein again.
206. And playing the test questions to be played and the multimedia analysis information through the multimedia player.
The detailed process can refer to step 104, which is not described herein.
By implementing this embodiment, the image to be reviewed sent by the wearable device is received; a user image of the target user wearing the wearable device is acquired; user authentication is performed according to the user image to obtain a user authentication result; when the user authentication result includes user authentication success, the image to be reviewed is sent to the server, where the image to be reviewed is used by the server to acquire the multimedia analysis information of the test question to be played when the server detects that the image to be reviewed includes the test question to be played; and the test question to be played and the multimedia analysis information sent by the server are received and played through the multimedia player. In this way, the wearable device collects the image to be reviewed, and the image collection process is simplified through information interaction between the wearable device and the speaker device, thereby improving review efficiency. In addition, the speaker device plays the multimedia analysis information of the test question to be played in a targeted manner, which effectively reduces device loss and improves user experience. Furthermore, to prevent someone from using another person's speaker device without permission, user authentication is performed on the target user wearing the wearable device, which improves the privacy of using the speaker device and enhances security.
EXAMPLE III
Fig. 3 is a schematic flowchart of an information playing method according to an embodiment of the present application. As shown in Fig. 3, the method is applied to a speaker device configured with a multimedia player, and when the image to be reviewed includes a plurality of test questions, the method includes:
301. an image to be reviewed sent by the wearable device is received.
For details, reference may be made to step 101, which is not described herein again.
The user may perform the shooting operation on the test paper when only a few test questions have been completed. In that case, the test questions that the user has not completed are usually regarded as wrong questions, even though the user may actually have mastered them, so the analysis of the test questions would be biased. In summary, the present application can improve the analysis result of the test questions through the following steps.
302. And acquiring the test question completion degree of the image to be reviewed.
Wherein, the test question completeness of the image to be reviewed can be obtained through the following steps:
step 1, since the image to be reviewed may include a plurality of test questions, and there is usually a division mark between each test question. Therefore, the sound box device can divide the plurality of test questions in the image to be reviewed through the division identifier to obtain the test question area of each test question. Illustratively, the division identification may be a test question number, or the like.
Step 2 to step 4 are respectively executed for each test question area, and a target test question area is taken as an example for explanation, wherein the test question area comprises the target test question area:
and 2, carrying out region division on the target test question region to obtain a test question subregion and an answer subregion.
The detailed dividing process may refer to step 102, which is not described herein again. It should be noted that, in the case that the handwritten character is not detected by the method in step 102, further, the application may determine the test question sub-area according to the position of the print characters included in the target test question area, and use the blank area included in the target test question area as the answer sub-area.
And 3, detecting whether the answer sub-area comprises a blank area.
Under the condition that the answer sub-area comprises a blank area, executing the step 4;
in case that the handwritten character string is included in the answer sub-area, step 5 is performed.
And 4, determining the completion degree of the answer sub-area as a first numerical value.
Illustratively, the first value may be 0.
And 5, determining the completion degree of the answer sub-area as a second numerical value.
Illustratively, the second value may be 1.
The above method for determining the completion degree of the answer sub-area is only an exemplary illustration, and the application does not limit this. Of course, the test question type can be determined according to the test question sub-region, so that under the condition that the test question type includes a subjective question, whether the number of the handwritten character strings included in the answer sub-region is larger than or equal to a preset number threshold value or not is judged, if yes, the completion degree of the answer sub-region is determined to be a second numerical value, and if not, the completion degree of the answer sub-region is determined to be a first numerical value.
The completion degree of the answer sub-area in each test question area can be obtained through the steps 1 to 5, and the step 6 is executed.
And 6, under the condition of acquiring the completion degree of the answer sub-area in each test question area, acquiring the completion degree of the test question of the image to be reviewed according to the completion degree of the answer sub-area in each test question area.
Further, the total completion degree can be obtained by calculating the sum of the completion degrees of the answer sub-areas in each test question area, and the ratio of the total completion degree to the total number of the test questions in the image to be reviewed is calculated to obtain the test question completion degree of the image to be reviewed.
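The completion-degree calculation of step 302 can be sketched as follows; the 0/1 completion values follow steps 4 and 5 above, while the preset threshold is an assumed value rather than part of the original disclosure.

```python
# Sketch of step 302: compute the test question completion degree of the image
# to be reviewed from the per-question completion of the answer sub-areas.
def test_question_completion(answer_completions):
    """answer_completions: one value per test question area, 0 for a blank
    answer sub-area and 1 when handwritten characters are present."""
    if not answer_completions:
        return 0.0
    return sum(answer_completions) / len(answer_completions)

def should_send_to_server(answer_completions, preset_completion_threshold=0.8):
    # The preset completion degree threshold is an assumption.
    return test_question_completion(answer_completions) >= preset_completion_threshold
```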
303. And sending the image to be reviewed to a server under the condition that the test question completion degree is greater than or equal to a preset completion degree threshold value, wherein the image to be reviewed is used for acquiring the multimedia analysis information of the test question to be played by the server under the condition that the server detects that the image to be reviewed comprises the test question to be played.
The detailed process can refer to step 102, which is not described herein.
304. And receiving the test questions to be played and the multimedia analysis information sent by the server.
For details, reference may be made to step 103, which is not described herein again.
305. And playing the test questions to be played and the multimedia analysis information through the multimedia player.
The detailed process can refer to step 104, which is not described herein.
By implementing this embodiment, the image to be reviewed sent by the wearable device is received; the test question completion degree of the image to be reviewed is acquired; when the test question completion degree is greater than or equal to the preset completion degree threshold, the image to be reviewed is sent to the server, where the image to be reviewed is used by the server to acquire the multimedia analysis information of the test question to be played when the server detects that the image to be reviewed includes the test question to be played; and the test question to be played and the multimedia analysis information sent by the server are received and played through the multimedia player. In this way, the wearable device collects the image to be reviewed, and the image collection process is simplified through information interaction between the wearable device and the speaker device, thereby improving review efficiency. In addition, the speaker device plays the multimedia analysis information of the test question to be played in a targeted manner, which effectively reduces device loss and improves user experience. Furthermore, to avoid errors in the review result caused by sending the image to be reviewed when the user has completed only a few test questions, the present application acquires the completion degree of the image to be reviewed, so that the test questions are reviewed only when enough of them have been completed, thereby improving the effectiveness of the test question review.
Example four
Fig. 4 is a schematic flowchart of an information playing method according to an embodiment of the present application. As shown in Fig. 4, the method is applied to a speaker device configured with a multimedia player. In this embodiment, the case in which the information type of the multimedia analysis information includes the audio type or the video type is taken as an example for description, and the method includes:
401. an image to be reviewed sent by the wearable device is received.
For details, reference may be made to step 101, which is not described herein again.
402. And sending the image to be reviewed to a server, wherein the image to be reviewed is used for acquiring multimedia analysis information of the test questions to be played when the server detects that the image to be reviewed includes the test questions to be played.
The detailed process can refer to step 102, which is not described herein.
403. And receiving the test questions to be played and the multimedia analysis information sent by the server.
For details, reference may be made to step 103, which is not described herein again.
404. And playing the test questions to be played and the multimedia analysis information through the multimedia player.
The detailed process can refer to step 104, which is not described herein.
405. And receiving a volume adjusting instruction sent by the wearable device.
The volume adjustment instruction includes a volume-up instruction or a volume-down instruction.
Generally, in the process of playing audio or video, the volume needs to be adjusted according to the needs of the user. To simplify the volume adjustment operation, the present application may implement the volume adjustment through the wearable device, which may include but is not limited to the following:
in a first mode, a volume adjusting button is arranged on the wearable device, and the volume adjusting button may include a first button and a second button. In this way, when the user triggers the first button, the volume up instruction is generated; and generating the volume turn-down instruction under the condition that the user triggers the second button.
In the second mode, an image collector is arranged on the wearable device, and a volume adjustment instruction is generated when the image collector captures a preset volume adjustment gesture. For example, if the preset volume adjustment gesture is raising the index finger of the left hand, the volume adjustment instruction includes the volume-up instruction; if the preset volume adjustment gesture is raising the index finger of the right hand, the volume adjustment instruction includes the volume-down instruction.
406. And adjusting the volume of the test questions to be played and the multimedia analysis information according to the volume adjustment instruction.
It should be noted that, in alternative embodiments, step 405 and step 406 may also be implemented in other manners. For example, the current ambient volume of the environment where the sound box device is located is obtained, and the playing volume of the test question to be played and the multimedia analysis information is adjusted according to the current ambient volume; if the ambient volume is lower, the playing volume is lower.
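As a rough illustration of this ambient-volume alternative, the linear mapping below is an assumption chosen for the sketch; the application only states that a quieter environment yields a lower playing volume, and the parameter values are hypothetical.

```python
def playback_volume(ambient_db: float,
                    quiet_db: float = 30.0,
                    loud_db: float = 80.0,
                    min_volume: float = 0.2,
                    max_volume: float = 1.0) -> float:
    """Map ambient loudness (dB) to a playback volume in [min_volume, max_volume]."""
    if ambient_db <= quiet_db:
        return min_volume
    if ambient_db >= loud_db:
        return max_volume
    ratio = (ambient_db - quiet_db) / (loud_db - quiet_db)
    return min_volume + ratio * (max_volume - min_volume)


# Example: a quiet room gives a low playing volume, a noisy one gives a higher volume.
assert playback_volume(25.0) == 0.2
assert playback_volume(85.0) == 1.0
```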
By implementing this embodiment, the image to be reviewed sent by the wearable device is received; the image to be reviewed is sent to the server, so that the server acquires the multimedia analysis information of the test question to be played when it detects that the image to be reviewed includes the test question to be played; the test question to be played and the multimedia analysis information sent by the server are received and played through the multimedia player; a volume adjustment instruction sent by the wearable device is received; and the volume of the test question to be played and the multimedia analysis information is adjusted according to the volume adjustment instruction. Therefore, the wearable device is adopted to collect the image to be reviewed, and the image collection process is simplified through information interaction between the wearable device and the sound box device, so that the review efficiency is improved. In addition, the sound box equipment plays the multimedia analysis information of the test question to be played in a targeted manner, which effectively reduces equipment loss and improves the user experience. In addition, the present application performs volume adjustment through the wearable device, which simplifies the volume adjustment operation.
Example five
Fig. 5 is a block diagram of a sound box device 50 according to an embodiment of the present application, where as shown in fig. 5, the sound box device 50 is configured with a multimedia player, and the sound box device 50 includes:
a receiving module 501, configured to receive an image to be reviewed, which is sent by a wearable device;
a sending module 502, configured to send the image to be reviewed to a server, where the image to be reviewed is used for the server to obtain multimedia analysis information of the test question to be played when the server detects that the image to be reviewed includes the test question to be played;
the receiving module 501 is further configured to receive the test questions to be played and the multimedia analysis information sent by the server;
the playing module 503 is configured to play the test question to be played and the multimedia analysis information through the multimedia player.
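The module structure of the sound box device 50 can be pictured as a simple composition of the receiving, sending, and playing modules. The sketch below is illustrative only; the module interfaces (`send`, `receive_result`, `play`) are assumptions rather than the actual implementation.

```python
class SoundBoxDevice50:
    """Illustrative composition of the modules of the sound box device 50."""

    def __init__(self, receiving_module, sending_module, playing_module):
        self.receiving_module = receiving_module  # module 501
        self.sending_module = sending_module      # module 502
        self.playing_module = playing_module      # module 503

    def handle_image(self, image_to_review: bytes) -> None:
        # the image to be reviewed has already been received by module 501
        self.sending_module.send(image_to_review)
        question, analysis = self.receiving_module.receive_result()
        self.playing_module.play(question, analysis)
```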
Fig. 6 is a block diagram of the structure of a sound box device 50 according to an embodiment of the present application. As shown in fig. 6, the sound box device 50 further includes: a processing module 504;
the processing module 504 is configured to obtain a user image of a target user wearing the wearable device; and the number of the first and second groups,
performing user authentication according to the user image to obtain a user authentication result;
the sending module 502 is further configured to send the image to be reviewed to a server if the user authentication result includes user authentication success.
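A minimal sketch of this authentication gate follows. The `authenticate_user` and `send_to_server` callables are hypothetical stand-ins for whatever user authentication and transmission the processing module 504 and the sending module 502 actually perform.

```python
def send_if_authenticated(user_image: bytes,
                          image_to_review: bytes,
                          authenticate_user,
                          send_to_server) -> bool:
    """Send the image to be reviewed only if the wearer passes user authentication."""
    authenticated = authenticate_user(user_image)  # True on successful authentication
    if authenticated:
        send_to_server(image_to_review)
    return authenticated
```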
Optionally, the processing module 504 is further configured to obtain a test question completion degree of the image to be reviewed;
the sending module 502 is further configured to send the image to be reviewed to a server if the test question completion degree is greater than or equal to a preset completion degree threshold.
Optionally, the multimedia analysis information includes one of the following types: an audio type, a video type, and a text type.
Optionally, the playing module 503 is further configured to determine the information type of the multimedia analysis information;
under the condition that the information type comprises an audio type, playing the test questions to be played and the audio information through an audio player;
or,
under the condition that the information type comprises a video type, playing the test question to be played and the video information through a display and an audio player;
or,
and under the condition that the information type comprises a text type, playing the test questions to be played and the text information through a display.
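The type-dependent playback above amounts to a dispatch on the information type. The sketch below assumes an `audio_player` object with a `play` method and a `display` object with a `show` method; these interfaces are illustrative, not the actual device API.

```python
def play_by_type(question, analysis, info_type: str, audio_player, display) -> None:
    """Play the test question and its analysis on the outputs matching its type."""
    if info_type == "audio":
        audio_player.play(question)
        audio_player.play(analysis)
    elif info_type == "video":
        # video playback uses both the display and the audio player
        display.show(question)
        display.show(analysis)
        audio_player.play(analysis)
    elif info_type == "text":
        display.show(question)
        display.show(analysis)
    else:
        raise ValueError(f"unknown information type: {info_type}")
```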
Optionally, in the case that the information type of the multimedia analysis information includes the audio type or the video type:
the receiving module 501 is further configured to receive a volume adjustment instruction sent by the wearable device;
the processing module 504 is further configured to perform volume adjustment on the test question to be played and the multimedia analysis information according to the volume adjustment instruction.
By adopting the device, the image to be reviewed sent by the wearable device bound with the sound box device is received; the image to be reviewed is sent to the server, so that the server acquires the multimedia analysis information of the test question to be played when it detects that the image to be reviewed includes the test question to be played; and the test question to be played and the multimedia analysis information sent by the server are received and played through the multimedia player. Therefore, the wearable device is adopted to collect the image to be reviewed, and the image collection process is simplified through information interaction between the wearable device and the sound box device, so that the review efficiency is improved. In addition, the sound box equipment plays the multimedia analysis information of the test question to be played in a targeted manner, which effectively reduces equipment loss and improves the user experience.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Referring to fig. 7, fig. 7 is a block diagram of a sound box device according to an embodiment of the present application. As shown in fig. 7, the sound box device may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
wherein, the processor 702 calls the executable program code stored in the memory 701 to execute part or all of the steps of the method in the above method embodiments.
The embodiment of the application also discloses a computer readable storage medium, wherein the computer readable storage medium stores program codes, wherein the program codes comprise instructions for executing part or all of the steps of the method in the above method embodiments.
The embodiments of the present application also disclose a computer program product, wherein, when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method as in the above method embodiments.
The embodiment of the present application also discloses an application publishing platform, wherein the application publishing platform is used for publishing a computer program product, and when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method in the above method embodiments.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing related hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other memory, a magnetic disk, a magnetic tape, or any other medium that can be used to carry or store data and that can be read by a computer.
The information playing method, the sound box device, and the storage medium disclosed in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, a person skilled in the art may, based on the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as a limitation on the present application.
Claims (10)
1. An information playing method is applied to sound box equipment, the sound box equipment is provided with a multimedia player, and the method comprises the following steps:
receiving an image to be reviewed sent by a wearable device;
sending the image to be reviewed to a server, wherein the image to be reviewed is used for acquiring multimedia analysis information of the test questions to be played when the server detects that the image to be reviewed comprises the test questions to be played;
receiving the test questions to be played and the multimedia analysis information sent by the server;
and playing the test questions to be played and the multimedia analysis information through the multimedia player.
2. The method of claim 1, further comprising:
acquiring a user image of a target user wearing the wearable device;
performing user authentication according to the user image to obtain a user authentication result;
the sending the image to be reviewed to a server includes:
and if the user authentication result comprises user authentication success, sending the image to be reviewed to the server.
3. The method according to claim 1 or 2, wherein, in the case where the image to be reviewed includes a plurality of test questions, the method further comprises:
acquiring the test question completion degree of the image to be reviewed;
the sending the image to be reviewed to a server includes:
and sending the image to be reviewed to the server when the test question completion degree is larger than or equal to a preset completion degree threshold value.
4. The method according to claim 1 or 2, wherein the multimedia analysis information comprises one of the following types: an audio type, a video type, and a text type.
5. The method according to claim 4, wherein the playing the test question to be played and the multimedia analysis information through the multimedia player comprises:
determining the information type of the multimedia analysis information;
under the condition that the information type comprises an audio type, playing the test questions to be played and the audio information through an audio player;
or,
under the condition that the information type comprises a video type, playing the test question to be played and the video information through a display and an audio player;
or,
and under the condition that the information type comprises a text type, playing the test questions to be played and the text information through a display.
6. The method according to claim 4 or 5, wherein, in the case that the information type of the multimedia analysis information includes the audio type or the video type, the method further comprises:
receiving a volume adjustment instruction sent by the wearable device;
and adjusting the volume of the test questions to be played and the multimedia analysis information according to the volume adjustment instruction.
7. A sound box device, wherein the sound box device is provided with a multimedia player, the sound box device comprising:
the receiving module is used for receiving an image to be reviewed, which is sent by the wearable device;
the sending module is used for sending the image to be reviewed to a server, and the image to be reviewed is used for acquiring multimedia analysis information of the test questions to be played when the server detects that the image to be reviewed comprises the test questions to be played;
the receiving module is also used for receiving the test questions to be played and the multimedia analysis information sent by the server;
and the playing module is used for playing the test questions to be played and the multimedia analysis information through the multimedia player.
8. The sound box device of claim 7, further comprising: a processing module;
the processing module is used for acquiring a user image of a target user wearing the wearable device; and
performing user authentication according to the user image to obtain a user authentication result;
the sending module is further configured to send the image to be reviewed to the server if the user authentication result includes user authentication success.
9. A sound box device, comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the steps of the information playing method according to any one of claims 1 to 6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, wherein the computer program causes a computer to execute the steps of the information playback method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911037433.3A CN111179128A (en) | 2019-10-29 | 2019-10-29 | Information playing method, sound box equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911037433.3A CN111179128A (en) | 2019-10-29 | 2019-10-29 | Information playing method, sound box equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111179128A true CN111179128A (en) | 2020-05-19 |
Family
ID=70655746
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911037433.3A Pending CN111179128A (en) | 2019-10-29 | 2019-10-29 | Information playing method, sound box equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111179128A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107911744A (en) * | 2017-12-04 | 2018-04-13 | 微鲸科技有限公司 | Play management-control method and device |
CN110299036A (en) * | 2019-06-25 | 2019-10-01 | 百度在线网络技术(北京)有限公司 | Interaction reading method, device, system and storage medium |
CN110334712A (en) * | 2019-06-11 | 2019-10-15 | 广州市小篆科技有限公司 | Intelligence wearing terminal, cloud server and data processing method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106210836B (en) | Interactive learning method and device in video playing process and terminal equipment | |
CN109766412B (en) | Learning content acquisition method based on image recognition and electronic equipment | |
CN109478142B (en) | Methods, systems, and media for presenting a user interface customized for predicted user activity | |
CN108446320A (en) | A kind of data processing method, device and the device for data processing | |
CN112632349B (en) | Exhibition area indication method and device, electronic equipment and storage medium | |
CN113393347B (en) | Method and device for preventing cheating in online examination | |
CN109086431B (en) | Knowledge point consolidation learning method and electronic equipment | |
CN109800301B (en) | Weak knowledge point mining method and learning equipment | |
CN109410984B (en) | Reading scoring method and electronic equipment | |
CN115205764B (en) | Online learning concentration monitoring method, system and medium based on machine vision | |
CN110569347A (en) | Data processing method and device, storage medium and electronic equipment | |
CN111768170A (en) | Method and device for displaying operation correction result | |
CN108647354A (en) | Tutoring learning method and lighting equipment | |
CN111079499B (en) | Writing content identification method and system in learning environment | |
KR102586286B1 (en) | Contextual digital media processing systems and methods | |
JP2020173787A (en) | Information processing apparatus, information processing system, information processing method, and information processing program | |
CN111427990A (en) | Intelligent examination control system and method assisted by intelligent campus teaching | |
CN108388338B (en) | Control method and system based on VR equipment | |
CN111026901A (en) | Learning content searching method and learning equipment | |
CN111724638B (en) | AR interactive learning method and electronic equipment | |
CN109635214A (en) | Learning resource pushing method and electronic equipment | |
CN110443122B (en) | Information processing method and related product | |
CN111179128A (en) | Information playing method, sound box equipment and storage medium | |
CN112057874A (en) | Game auxiliary system and method with privacy protection function | |
CN111027536A (en) | Question searching method based on electronic equipment and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200519 |