CN117095340B - Eye-protection lamp control method and device - Google Patents
- Publication number
- CN117095340B (application number CN202311363375.XA)
- Authority
- CN
- China
- Prior art keywords
- topic
- eye
- question
- video
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05B—ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
- H05B47/00—Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
- H05B47/10—Controlling the light source
- H05B47/105—Controlling the light source in response to determined parameters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
- G06V30/41—Analysis of document content
- G06V30/418—Document matching, e.g. of document images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/113—Recognition of static hand signs
Abstract
Embodiments of the present application disclose a control method and device for an eye-protection lamp. In the method, while an online lesson video is playing, the eye-protection lamp films the answering area with a camera to obtain a monitoring video; when the eye-protection lamp detects entry into the exercise state, it performs a question-acquisition operation to generate target question content, the target question content containing question data from the learning terminal playing the online lesson video; when the eye-protection lamp detects that the user has finished answering, it takes the video frame of the monitoring video at the moment the user finishes as a target frame; and the eye-protection lamp associates the target question content with the target frame to generate homework data, the homework data being used for automatic correction. This scheme enables automatic correction of homework.
Description
Technical Field
The present application relates to the technical field of lighting, and in particular to a control method and device for an eye-protection lamp.
Background
Existing eye-protection lamps can integrate a homework-correction function. For example, in the scheme proposed in Patent Document 1, the camera of the eye-protection lamp captures an image of a test paper, the questions and the student's answers are recognized from the test-paper image, and the homework is then corrected.
In recent years, with the popularity of online lessons, more and more students play online lesson videos on a learning terminal and answer the questions shown in the video on an exercise book. For example, a child plays an online lesson video on an electronic device such as a tablet, learning terminal, or notebook computer; a question to be answered is displayed in the video, the video waits for a period of time after the question appears, and during that time the child answers the question with a pen on the exercise book.
In this online-lesson scenario, the correction scheme of Patent Document 1 is not applicable: in Patent Document 1 the camera captures a single test paper on which both the questions and the student's written answers appear, whereas in the scenario where a child attends an online lesson on an electronic device, the questions are on the electronic device and the child's written answers are on the exercise book, so the desk lamp cannot realize the automatic correction function.
Patent Document 1: "An intelligent desk lamp and an automatic paper-grading method based on the same", publication number CN109978734A, publication date 2020.10.16.
Disclosure of Invention
Embodiments of the present application provide a control method and device for an eye-protection lamp, to solve the following technical problem: in the scenario of attending an online lesson on an electronic device while writing answers on an exercise book, the questions are on the electronic device and only the user's written answers are on the exercise book; because questions and answers are not on the exercise book at the same time, the automatic correction function is difficult to realize.
In a first aspect, an embodiment of the present application provides a method for controlling an eye protection lamp, including:
while an online lesson video is playing, filming the answering area with a camera to obtain a monitoring video;
when entry into the exercise state is detected, performing a question-acquisition operation to generate target question content, the target question content containing question data of the learning terminal playing the online lesson video;
when it is detected that the user has finished answering, taking the video frame of the monitoring video at the moment the user finishes as a target frame;
and associating the target question content with the target frame to generate homework data, the homework data being used for automatic correction.
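The four steps above can be sketched as a minimal event-driven controller. This is an illustrative sketch only: the class and method names (`EyeLampController`, `on_exercise_state_entered`, `on_answer_finished`) are hypothetical and not part of the claims, and real frame data would come from the lamp's camera rather than a byte string.

```python
from dataclasses import dataclass, field


@dataclass
class HomeworkRecord:
    """One piece of homework data: a question paired with the answer frame."""
    question_content: str   # target question content obtained from the learning terminal
    answer_frame: bytes     # monitoring-video frame captured when answering ends


@dataclass
class EyeLampController:
    records: list = field(default_factory=list)
    _pending_question: str = ""

    def on_exercise_state_entered(self, question_content: str) -> None:
        # Step 2: entry into the exercise state triggers question acquisition.
        self._pending_question = question_content

    def on_answer_finished(self, target_frame: bytes) -> None:
        # Steps 3-4: take the frame at completion time and associate it with
        # the pending question, producing homework data for automatic correction.
        self.records.append(HomeworkRecord(self._pending_question, target_frame))
```

In this arrangement the grading backend only ever sees paired (question, answer-frame) records, which is what makes correction possible even though question and answer never appear on the same piece of paper.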
Optionally, detecting entry into the exercise state specifically includes:
the eye-protection lamp recognizing the user's answering state from the monitoring video;
when the eye-protection lamp detects that the answering state indicates the user has started answering, entering the exercise state;
performing the question-acquisition operation specifically includes:
the eye-protection lamp receiving a question screenshot returned by the learning terminal, the question screenshot being obtained by the learning terminal performing a screen-capture operation in response to a received question trigger signal, and the question trigger signal being generated when the eye-protection lamp enters the exercise state;
generating the target question content specifically includes:
the eye-protection lamp generating the target question content based on the question screenshot.
Optionally, before performing the question-acquisition operation to generate the target question content when the eye-protection lamp detects entry into the exercise state, the method further includes:
generating at least one video frame group according to the question head frames, where each video frame group contains the video frames showing the same question and a question head frame is the video frame in which a question first appears;
based on the boundary coordinates of the question region shown in each question's video frames, selecting from the video frame group a video frame in which the question region is not occluded, and extracting the question picture corresponding to each question, where each video frame group corresponds to one question picture;
extracting the text content of each question picture to obtain a question content set, the question content set containing the question content of each question;
the eye-protection lamp generating the target question content based on the question screenshot includes:
the eye-protection lamp extracting the text content of the question screenshot, part of the text in the question region of the screenshot being occluded;
the eye-protection lamp obtaining, from the question content set, the question text content that matches the text content of the question screenshot, and taking that question text content as the target question content.
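The matching step above, finding the stored question whose text best agrees with the partially occluded screenshot text, could plausibly be a simple token-overlap search. `match_question` and its scoring rule are hypothetical stand-ins for whatever text matching the implementation actually uses, and real OCR output would replace the literal strings.

```python
def match_question(screenshot_text: str, question_set: list) -> str:
    """Return the question from the question content set whose text best
    overlaps the (possibly partially occluded) screenshot text."""
    def overlap(fragment: str, question: str) -> int:
        # Count how many whitespace-separated tokens of the OCR'd fragment
        # also occur in the candidate question text.
        return sum(1 for token in fragment.split() if token in question)

    return max(question_set, key=lambda q: overlap(screenshot_text, q))
```

Because only the best-scoring candidate is kept, the match tolerates an occluded region as long as the visible tokens are distinctive enough; a production system would likely use a fuzzier similarity measure.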
Optionally, detecting entry into the exercise state specifically includes:
the eye-protection lamp recognizing the user's answering state from the monitoring video;
when the eye-protection lamp detects that the answering state indicates the user has started answering, entering the exercise state;
performing the question-acquisition operation specifically includes:
the eye-protection lamp receiving a question screenshot returned by the learning terminal, the question screenshot being obtained by the learning terminal performing a screen-capture operation in response to a received question trigger signal, and the question trigger signal being generated when the eye-protection lamp enters the exercise state;
generating the target question content specifically includes:
the eye-protection lamp taking the question screenshot as the target question content.
Optionally, before performing the question-acquisition operation to generate the target question content when the eye-protection lamp detects entry into the exercise state, the method further includes:
generating at least one video frame group according to the question head frames, where each video frame group contains the video frames showing the same question and a question head frame is the video frame in which a question first appears;
based on the boundary coordinates of the question region shown in each question's video frames, selecting from the video frame group a video frame in which the question region is not occluded, and extracting the question picture corresponding to each question, where each video frame group corresponds to one question picture;
and extracting the text content of each question picture to obtain a question content set, the question content set containing the question content of each question.
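As a rough illustration of building the question content set, the sketch below assumes each lesson-video frame has already been annotated with a question identifier, an occlusion flag for the question region, and extracted text (a stand-in for OCR output); `build_question_set` and the dict-based frame records are hypothetical.

```python
def build_question_set(frames):
    """Group frames by question and keep the text of one unoccluded frame
    per group, producing a {question_id: question_text} content set.

    Each frame record carries:
      'question_id' - which question the frame shows (one group per id)
      'occluded'    - whether the question region is blocked, e.g. by an overlay
      'text'        - text extracted from the frame's question picture
    """
    question_set = {}
    for frame in frames:
        qid = frame["question_id"]
        # Take the first frame per question whose question region is clear.
        if qid not in question_set and not frame["occluded"]:
            question_set[qid] = frame["text"]
    return question_set
```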
Optionally, detecting entry into the exercise state specifically includes:
the eye-protection lamp recognizing the user's answering state from the monitoring video;
when the eye-protection lamp detects that the answering state indicates the user has started answering, entering the exercise state;
performing the question-acquisition operation specifically includes:
the learning terminal determining the currently played video frame from the current playback progress of the online lesson video;
and obtaining the target question content from the question content set based on the currently played video frame.
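Looking up the target question content from the playback progress might reduce to an interval search over the video frame groups; `question_for_position` and the `(start, end, text)` tuples below are assumed structures for illustration only.

```python
def question_for_position(position_s, groups):
    """Map the current playback position (in seconds) to the question whose
    video frame group contains it; groups are (start_s, end_s, text) tuples."""
    for start, end, text in groups:
        if start <= position_s <= end:
            return text
    # The current frame belongs to no question's video frame group,
    # so there is no target question content to acquire.
    return None
```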
Optionally, detecting entry into the exercise state specifically includes:
when the learning terminal detects that the online lesson video has played to a question head frame, entering the exercise state, the question head frame being the video frame in which each question first appears;
performing the question-acquisition operation specifically includes:
the learning terminal performing a screen-capture operation to obtain a question screenshot;
generating the target question content specifically includes:
the learning terminal taking the question screenshot as the target question content.
Optionally, detecting entry into the exercise state specifically includes:
when the learning terminal detects that the online lesson video has played to a question head frame, entering the exercise state, the question head frame being the video frame in which each question first appears;
performing the question-acquisition operation specifically includes:
the learning terminal performing a screen-capture operation to obtain a question screenshot;
generating the target question content specifically includes:
the learning terminal generating the target question content from the question screenshot.
Optionally, detecting that the user has finished answering specifically includes:
detecting that the user's gesture is a preset answer-completion gesture.
Optionally, detecting that the user has finished answering includes:
detecting that a preset pattern appears in the answering area.
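Either completion signal, a preset gesture recognized in the monitoring video or a preset pattern detected in the answering area, can gate the same decision; the constant labels and recognizer outputs below are hypothetical placeholders standing in for a real gesture/pattern classifier.

```python
# Assumed labels a recognizer might emit; the patent does not fix these values.
FINISH_GESTURE = "answer_done_gesture"  # preset answer-completion gesture
FINISH_PATTERN = "check_mark"           # preset pattern drawn in the answering area


def answer_finished(detected_gesture=None, detected_pattern=None):
    """The lamp treats either detection as 'user has finished answering'."""
    return detected_gesture == FINISH_GESTURE or detected_pattern == FINISH_PATTERN
```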
Optionally, the control method of the eye-protection lamp further includes:
when entry into the exercise state is detected, pausing playback of the online lesson video.
Optionally, the control method of the eye-protection lamp further includes:
generating at least one video frame group according to the question head frames, where each video frame group contains the video frames showing the same question and a question head frame is the video frame in which a question first appears;
based on the boundary coordinates of the question region shown in each question's video frames, selecting from the video frame group a video frame in which the question region is not occluded, and extracting the question picture corresponding to each question, where each video frame group corresponds to one question picture;
extracting the text content of each question picture to obtain a question content set, the question content set containing the question content of each question;
entering the exercise state specifically includes:
the eye-protection lamp recognizing the user's answering state from the monitoring video, and entering the exercise state when the eye-protection lamp detects that the answering state indicates the user has started answering and the video frame currently played by the learning terminal belongs to any of the video frame groups.
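The combined entry condition above (the user has started answering AND the currently played frame belongs to some question's video frame group) is a simple conjunction; `enter_exercise_state` and the set-of-frame-ids representation of the groups are illustrative assumptions, not the claimed data layout.

```python
def enter_exercise_state(user_started_answering, current_frame_id, frame_groups):
    """Enter the exercise state only when the monitoring video shows the user
    starting to answer AND the currently played frame belongs to some
    question's video frame group (each group given as a set of frame ids)."""
    return user_started_answering and any(
        current_frame_id in group for group in frame_groups
    )
```

The second clause prevents false entry when the user picks up a pen during ordinary lecturing, since frames outside every group carry no question.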
In a second aspect, an embodiment of the present application provides a control device for an eye-protection lamp, including:
a shooting unit, configured to film the answering area with a camera while an online lesson video is playing, to obtain a monitoring video;
an execution unit, configured to perform a question-acquisition operation when entry into the exercise state is detected, to generate target question content containing question data of the learning terminal playing the online lesson video;
a detection unit, configured to take the video frame of the monitoring video at the moment the user finishes answering as a target frame when it is detected that the user has finished answering;
and an association unit, configured to associate the target question content with the target frame to generate homework data, the homework data being used for automatic correction.
In a third aspect, an embodiment of the present application further provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the computer program, performs the steps of the eye-protection-lamp control method provided in any embodiment of the present application.
In a fourth aspect, embodiments of the present application further provide a storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to perform the steps of the eye-protection-lamp control method provided in any embodiment of the present application.
Compared with the prior art, the technical solutions provided by the embodiments of the present application have the following advantages:
In the prior art, schemes that associate answers on paper with their questions rely on outside assistance, such as a teacher numbering the questions or an app designating the question, in other words directing the student to answer a specific question; but a recorded video cannot direct the student to answer a specific question, to answer in a specific area, or to upload a specific question's answer to a designated place. In the present application, when the user starts answering, the question corresponding to the answer is being displayed on the screen at that moment, so a screen capture performed by the learning terminal at that moment obtains the question corresponding to the user's current answer; the answer content and its corresponding question are then associated to generate homework data. The homework data thus realizes the automatic correction function in the scenario of attending an online lesson on an electronic device while writing answers on an exercise book, where the questions are on the electronic device, only the user's written answers are on the exercise book, and questions and answers are never on the exercise book at the same time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for a person skilled in the art, other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic scene diagram of the control method of an eye-protection lamp provided in an embodiment of the present application;
Fig. 2 is a flowchart of the control method of an eye-protection lamp provided in an embodiment of the present application;
Fig. 3 is a schematic diagram of an existing online-lesson video playback in the control method of the eye-protection lamp provided in an embodiment of the present application;
Fig. 4 is a schematic diagram of another existing online-lesson video playback in the control method of the eye-protection lamp provided in an embodiment of the present application;
Fig. 5 is a schematic diagram of online-lesson video playback in the control method of the eye-protection lamp provided in an embodiment of the present application;
Fig. 6 is a schematic diagram of an answer page in the control method of the eye-protection lamp provided in an embodiment of the present application;
Fig. 7 is a schematic diagram of another answer page in the control method of the eye-protection lamp provided in an embodiment of the present application;
Fig. 8 is a schematic diagram of another answer page in the control method of the eye-protection lamp provided in an embodiment of the present application;
Fig. 9 is another flowchart of the control method of an eye-protection lamp provided in an embodiment of the present application;
Fig. 10 is another flowchart of the control method of an eye-protection lamp provided in an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without inventive effort fall within the protection scope of the present application.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
As described in the Background above, users need to participate in answering questions while watching an online lesson video, and hope to receive feedback on their answers.
As is known, an existing app on the market provides online-lesson videos with a built-in question-answering method. In that method, while the user plays an online lesson, a page displaying a question appears in the video, as shown in fig. 3; the question reads "Example 1: 48 consists of 4 tens and how many ones?". After a waiting period, a page containing the question, candidate answers, and an answer-confirmation control is displayed, as shown in fig. 4: the candidate answers shown are 1, 2, 3, 4, 5, 6, 7, 8, 9 and 0, and the answer-confirmation control is "√". The user taps a candidate answer, such as "8", then taps the confirmation control "√" to confirm it as the answer to Example 1; by interacting with the candidate answers and the confirmation control, the user selects the answer considered correct and completes the question. However, with this method only the answer result is recorded, without the solution steps, so it is impossible to know how the user worked out the answer; moreover, other controls are easy to touch by mistake, for example mistakenly touching the control "1", which selects a wrong answer.
In addition, Patent Document 2, application number 202011399979.6, discloses a live-teaching method in which a teacher terminal sends collected live data to student terminals, the student terminal plays the questions in the live data, and students answer with a first dot-matrix pen on first dot-matrix paper; during answering, the students write the question number and sub-number next to the corresponding answer, which avoids the situation where questions and answers cannot be associated automatically. The first handwriting is acquired and sent to the teacher terminal, and the teacher recognizes and corrects it. When the teacher assigns questions, strict question numbers and sub-numbers are required, and the students associate questions with answers while answering, avoiding answers that cannot be matched to their questions. The students answer on blank dot-matrix paper carrying no questions, and when answering they must follow an answering standard that specifies the question number, the sub-number, and the corresponding answer area, so that questions can be associated with answers. Patent Document 2 is only applicable to the live-broadcast scenario, because only in a live broadcast can the teacher number the questions according to a preset standard and interact with the students, reminding them to mark the question numbers for association while answering.
In the prior art, schemes that associate answers on paper with their questions thus rely on outside assistance, for example the teacher numbering the questions, in other words designating which question the user is to answer. The embodiments of the present application address the following scenario: the user watches a pre-recorded lecture video at home through a learning terminal; the pre-recorded video comes from a network channel, the teacher who recorded it may not have strictly followed any question-numbering specification, and the teacher in the video cannot interact with the user, so the user cannot be directed to answer a specific question, to answer in a specific area, or to upload a specific question's answer to a designated place. Therefore, the scheme of Patent Document 2 cannot be applied to the scenario of the present application. In addition, compared with the question-answering app, the method provided by the embodiments of the present application realizes automatic correction of homework, records the user's solution steps so that the user's answering process can be clearly known, and avoids wrong answers caused by accidental touches.
To solve the technical problem that, in the scenario of attending an online lesson on an electronic device while writing answers on an exercise book, the questions are on the electronic device, only the user's written answers are on the exercise book, and questions and answers are not on the exercise book at the same time, so that the automatic correction function is hindered, the embodiments of the present application provide the following solutions.
An embodiment of the present application provides a control system of an eye-protection lamp, including an eye-protection lamp 11, a learning terminal 12, and a camera 13. Referring to fig. 1, the eye-protection lamp 11 may include two lamp heads, one illuminating the screen area of the learning terminal 12 and the other illuminating the answering area 14. The camera 13 is mounted on the lamp-head bracket of the eye-protection lamp 11 and is used to film the monitoring video of the answering area 14. The eye-protection lamp 11 is communicatively connected with the learning terminal 12 in order to obtain the question content on the learning terminal 12; the communication connection may be a wireless network connection or a wired connection, and the learning terminal 12 may be a mobile terminal such as a mobile phone, tablet, or notebook computer.
Embodiment 1
The embodiment of the application provides a control method of an eye-protection lamp, which is described by taking an example of interaction execution of the eye-protection lamp (or a learning terminal) and a background server as an example, as shown in fig. 2, the control method of the eye-protection lamp includes the following specific procedures:
s101, in the process of playing the net lesson video, the monitoring video is obtained by shooting a question area through a camera.
The online lesson video can be a video recorded in advance, and students can watch the recorded online lesson video by logging in a client on the learning terminal when surfing online lessons.
The camera can be installed on the eye-protection lamp for shoot the surveillance video of doing the problem area, refer to fig. 1, the installation of making a video recording can be installed on the lighting fixture of eye-protection lamp, wherein, eye-protection lamp can be double-end eye-protection lamp, and one lamp holder of eye-protection lamp is used for being study terminal screen area illumination, and another lamp holder of eye-protection lamp is used for being doing the problem area illumination, and this do the problem area includes the region that the exercise book placed, and eye-protection lamp also can be single lamp holder, and when the lamp holder of eye-protection lamp is single lamp holder, a lamp holder can throw light on study terminal screen area and do the problem area simultaneously.
According to the embodiment of the application, the camera on the eye-protection lamp shoots the monitoring video of the question-making area, so the answering steps and answering results written on the exercise book can be obtained in time while the student attends the net lesson; automatic correction is then realized after the topic content of the net lesson video played on the learning terminal is obtained.
S102, when entry into the exercise state is detected, executing the operation of acquiring the topic to generate target topic content, wherein the target topic content includes topic data of the net lesson video played by the learning terminal.
Entering the exercise state means that the eye-protection lamp or the learning terminal enters the exercise state; for example, the eye-protection lamp enters the exercise state when it detects that the user starts doing a question, or the learning terminal enters the exercise state when it detects that the net lesson video has played to a topic head frame.
The target topic content can be a screenshot or text content.
In one embodiment, detecting entry into the exercise state specifically includes:
the eye-protection lamp identifies the question making state of the user according to the monitoring video;
when the eye-protection lamp detects that the question making state indicates that the user starts to make questions, the eye-protection lamp enters an exercise state;
the operation of acquiring the topic specifically includes the following steps:
the eye-protection lamp receives a topic screenshot returned by the learning terminal; the topic screenshot is obtained by the learning terminal executing a screen-capturing operation in response to a received topic trigger signal, and the topic trigger signal is generated when the eye-protection lamp enters the exercise state;
generating the target topic content specifically includes the following steps:
the eye-protection lamp generates the target topic content based on the topic screenshot.
The eye-protection lamp can determine the topic content matching the topic screenshot from the topic content set and take the matched topic content as the target topic content, where the target topic content is the topic corresponding to the topic screenshot.
In an example, a user may log in to the website of a net lesson video in an APP on the learning terminal according to usage habit and play the video according to his or her own learning plan. Since such net lesson videos are not recorded by the APP, the topic corresponding to the playing progress cannot be obtained through advance labeling. For a net lesson video not recorded by the APP, this embodiment therefore acquires topic screenshots through the screen-capturing operation, finds among them a screenshot in which the topic is completely unoccluded, extracts the topic text from that screenshot, and generates the target topic content. This method can meet the learning and automatic-correction needs of different users for net lesson videos from different sources.
By detecting the question-making state and taking the moment the user starts doing a question as the trigger condition, the learning terminal is controlled to perform the screen-capturing operation to obtain the topic screenshot, which accurately matches the time at which the topic appears on the learning terminal. The automatic correction function is thus realized even though the topic and the answer are not in the same area.
In one example, it may be determined that the user starts doing a question when the user is detected picking up a pen and beginning to write on the exercise book in the question-making area. Alternatively, when the user utters a start-of-answer voice, the user is confirmed to start answering; the voice content may be "I start doing the question" or "answer starts", or any voice content agreed in advance to announce the start of answering.
In an embodiment, before the eye-protection lamp detects entry into the exercise state and performs the operation of acquiring the topic to generate the target topic content, the method further includes:
generating at least one video frame group according to each topic head frame, wherein each video frame group includes video pictures of the same topic, and the topic head frame is the video frame in which each topic first appears;
selecting, from each video frame group, a video frame in which the topic area is not occluded, based on the boundary coordinates of the topic area shown in the video frames of each topic, and extracting the topic picture corresponding to each topic, wherein the video frames of one video frame group correspond to the same topic picture;
extracting the text content of each topic picture to obtain a topic content set, wherein the topic content set includes the topic content of each topic;
the eye-protection lamp generating the target topic content based on the topic screenshot includes the following steps:
the eye-protection lamp extracts the text content of the topic screenshot, part of the text content of the topic area in the topic screenshot being occluded;
the eye-protection lamp acquires, from the topic content set, the topic text content matching the text content of the topic screenshot, and takes that topic text content as the target topic content.
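As an illustration of the matching step above, the following Python sketch pairs the (possibly occluded) text recognized from a topic screenshot with the best-matching entry of the topic content set. The use of difflib's SequenceMatcher and the 0.5 threshold are assumptions made for illustration only; the embodiment does not prescribe a particular similarity measure.

```python
from difflib import SequenceMatcher

def match_topic(screenshot_text, topic_content_set, threshold=0.5):
    """Return the topic whose full text best matches the (possibly
    partially occluded) text recognized from the topic screenshot."""
    best_topic, best_score = None, 0.0
    for topic in topic_content_set:
        score = SequenceMatcher(None, screenshot_text, topic).ratio()
        if score > best_score:
            best_topic, best_score = topic, score
    return best_topic if best_score >= threshold else None

topics = [
    "1. Solve for x: 2x + 3 = 11",
    "2. A train travels 120 km in 2 hours; find its average speed.",
]
# Part of the screenshot text is occluded by the teacher's picture.
occluded = "1. Solve for x: 2x +"
print(match_topic(occluded, topics))
```

Because the occluded fragment still shares a long common run with the stored topic text, the first topic is recovered in full.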
In an example, in order for a user to quickly locate a topic in the net lesson video and to efficiently process the video later (for example, the user may drag the progress bar to quickly reach a designated location and find a topic for exercise), the provider may label the video segments of the net lesson video according to the progress of the topics. The labeling principle is: at least one video frame group is generated according to each topic head frame, each video frame group includes video pictures of the same topic, and the topic head frame is the video frame in which each topic first appears.
The following effects can be achieved: on the one hand, the topic picture corresponding to each topic is extracted from video frames in which the topic area is not occluded, and the topic content set is then obtained from the text content of those pictures, which eliminates the influence of the teacher's movement in the net lesson video on text recognition; on the other hand, since the topic comes from the topic content set, the complete topic text can be obtained by matching against the set even if part of the text content in the topic screenshot is occluded.
In an example, while the learning terminal plays the net lesson video, a playing progress bar is arranged below the screen of the learning terminal, as shown in fig. 5, where each topic head frame is indicated by an arrow. The section between two adjacent head frames is a video frame group; each video frame group includes the video frames of the same topic, and the topic head frame is the video frame in which each topic first appears. For example, the video frames between the first topic head frame and the second topic head frame form video frame group 1, which includes the video frames of the first topic; the video frames between the second topic head frame and the third topic head frame form video frame group 2, which includes the video frames of the second topic; and so on.
According to the method and the device of this embodiment, recorded net lesson videos can be processed in advance: the topic content of each topic in the video is extracted to generate a topic content set. When a student later attends the net lesson and the learning terminal performs the screen-capturing operation to obtain a topic screenshot, the corresponding topic content can be obtained from the topic content set according to the screenshot and taken as the target topic content. If the topic content were extracted from the topic screenshot alone, the screenshot might contain the teacher's video picture and the recognition might be inaccurate. By processing the recorded video in advance to obtain the topic content set and matching the screenshot directly against it, the interference of the teacher and other objects is eliminated, the acquisition of topic content is accelerated, and the efficiency of automatic correction of the job data is improved.
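The advance processing described above — splitting frames into video frame groups at each topic head frame, picking an unoccluded frame per group, and extracting its text — can be sketched as follows. Frames are represented as plain dicts and `ocr` is a stub standing in for a real OCR engine; all names and the frame representation are illustrative assumptions, not part of the original method.

```python
def ocr(frame):
    # Stand-in for a real OCR call on the topic region of the frame.
    return frame["text"]

def build_topic_content_set(frames, head_frame_ids):
    """Split frames into video frame groups at each topic head frame,
    pick one frame per group whose topic area is not occluded, and
    OCR it to collect the topic content set."""
    groups, current = [], []
    for frame in frames:
        if frame["id"] in head_frame_ids and current:
            groups.append(current)   # a new topic starts: close the group
            current = []
        current.append(frame)
    if current:
        groups.append(current)

    topic_content_set = []
    for group in groups:
        # Any frame whose topic area is clear of the teacher will do.
        clear = next(f for f in group if not f["occluded"])
        topic_content_set.append(ocr(clear))
    return topic_content_set

frames = [
    {"id": 0, "occluded": True,  "text": "topic 1 (blocked)"},
    {"id": 1, "occluded": False, "text": "topic 1"},
    {"id": 2, "occluded": False, "text": "topic 2"},
    {"id": 3, "occluded": True,  "text": "topic 2 (blocked)"},
]
print(build_topic_content_set(frames, head_frame_ids={0, 2}))
```

In a real pipeline the occlusion flag would come from checking the boundary coordinates of the topic area against the detected teacher region, as the embodiment describes.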
In one embodiment, detecting entry into the exercise state specifically includes:
the eye-protection lamp identifies the question making state of the user according to the monitoring video;
when the eye-protection lamp detects that the question making state indicates that the user starts to make questions, the eye-protection lamp enters an exercise state;
the operation of acquiring the topic specifically includes the following steps:
the eye-protection lamp receives a topic screenshot returned by the learning terminal; the topic screenshot is obtained by the learning terminal executing a screen-capturing operation in response to a received topic trigger signal, and the topic trigger signal is generated when the eye-protection lamp enters the exercise state;
generating the target topic content specifically includes the following steps:
the eye-protection lamp takes the topic screenshot as the target topic content.
In an example, the eye-protection lamp sends the topic screenshot and the target frame containing the answer data written by the user to the background server, and the background server performs image recognition to obtain the texts of the topic and the answer, which reduces the computing-resource cost of the eye-protection lamp's controller.
In an embodiment, before the eye-protection lamp detects entry into the exercise state and performs the operation of acquiring the topic to generate the target topic content, the method further includes:
generating at least one video frame group according to each topic head frame, wherein each video frame group includes video pictures of the same topic, and the topic head frame is the video frame in which each topic first appears;
selecting, from each video frame group, a video frame in which the topic area is not occluded, based on the boundary coordinates of the topic area shown in the video frames of each topic, and extracting the topic picture corresponding to each topic, wherein the video frames of one video frame group correspond to the same topic picture;
extracting the text content of each topic picture to obtain a topic content set, wherein the topic content set includes the topic content of each topic.
In an example, since the topic picture corresponding to each topic is extracted directly from a video frame in which the topic area is not occluded, and the topic content set is then obtained from the text content of those pictures, the topic data can be obtained after the topic screenshot is acquired by matching against the text content in the topic content set. This method eliminates the influence of the teacher walking around in the net lesson video on text recognition.
In one embodiment, detecting entry into the exercise state specifically includes:
the eye-protection lamp identifies the question making state of the user according to the monitoring video;
when the eye-protection lamp detects that the question making state indicates that the user starts to make questions, the eye-protection lamp enters an exercise state;
The operation of acquiring the topic specifically includes the following steps:
the learning terminal determines the currently played video frame according to the current playing progress of the net lesson video;
and acquiring target topic content from the topic content set based on the currently played video frame.
In an example, when at least one video frame group has been generated according to each topic head frame and each video frame group includes video pictures of the same topic, the topic corresponding to each video frame of each group in the net lesson video is already determined. The learning terminal therefore does not need to take a screenshot: it directly obtains the currently played video frame from the current playing progress, compares that frame with the video frame groups, and obtains the corresponding topic. This method reduces the storage cost of the learning terminal, requires no image recognition, and obtains the topic quickly and accurately.
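Looking up the topic from the current playing progress can be sketched as a binary search over the head-frame timestamps: the video frame group containing the current progress determines the topic. The variable names and the example timestamps are illustrative assumptions.

```python
import bisect

def topic_for_progress(progress_s, head_frame_times, topics):
    """Map the current playing progress (seconds) to the topic whose
    video frame group contains it. head_frame_times must be sorted and
    hold the time at which each topic head frame appears."""
    i = bisect.bisect_right(head_frame_times, progress_s) - 1
    return topics[i] if i >= 0 else None

head_frame_times = [10.0, 95.0, 210.0]   # when topics 1..3 first appear
topics = ["topic 1", "topic 2", "topic 3"]
print(topic_for_progress(120.0, head_frame_times, topics))  # topic 2
```

A progress before the first head frame falls outside every video frame group, so no topic is returned, which matches the entry condition discussed later in this embodiment.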
In one embodiment, detecting entry into the exercise state specifically includes:
when the learning terminal detects that the net lesson video is played to the first frame of the theme, the learning terminal enters an exercise state; the title head frame is a video frame in which each title appears for the first time;
the operation of acquiring the topic specifically includes the following steps:
the learning terminal executes a screen-capturing operation to obtain a topic screenshot;
generating the target topic content specifically includes the following steps:
the learning terminal takes the topic screenshot as the target topic content.
In an example, the learning terminal actively starts the screen capture when the topic begins to play, so the eye-protection lamp does not need to send a screenshot command; since the learning terminal itself executes the screenshot operation, the timing of the screenshot is more accurate.
In an example, the screen-capturing operation of the learning terminal may also be triggered after the learning terminal is detected to start playing the net lesson video. For example, after playback starts, the learning terminal automatically executes the screen-capturing operation when it receives the user's screenshot voice; the voice content may be "I want to screenshot" or "start screenshot", or any voice content agreed in advance to tell the learning terminal to start the screenshot, so as to obtain the topic screenshot.
In one embodiment, detecting entry into the exercise state specifically includes:
when the learning terminal detects that the net lesson video is played to the first frame of the theme, the learning terminal enters an exercise state; the title head frame is a video frame in which each title appears for the first time;
The operation of acquiring the topic specifically includes the following steps:
the learning terminal executes a screen-capturing operation to obtain a topic screenshot;
generating the target topic content specifically includes the following steps:
the learning terminal generates the target topic content according to the topic screenshot.
The learning terminal can determine the topic content matching the topic screenshot from the topic content set and take the matched topic content as the target topic content, where the target topic content is the topic corresponding to the topic screenshot.
In an example, the operation of obtaining the topic may yield a topic screenshot in which part of the text content is occluded; the generated topic screenshot can then be matched with the topic content set, and the matched topic content is taken as the target topic content.
In an embodiment, the method for controlling an eye-protection lamp further includes:
when entry into the exercise state is detected, the net lesson video is controlled to pause playing.
In an example, a user plays a net lesson video through the learning terminal. The camera of the eye-protection lamp recognizes at 7:00 that the user starts doing a question, and the screenshot of the first topic a is acquired; at 7:03 the user is recognized to have finished the first topic a, and the target frame containing answer a is acquired. However, because the user's question-making time exceeded the learning terminal's waiting time for the topic, the second topic b has already started to play, and its screenshot is acquired automatically, forming the topic screenshot of the second topic b. At this point it is ambiguous which topic screenshot the target frame of answer a is associated with (i.e., whether it belongs to the first topic a or the second topic b), so the net lesson video needs to be paused while the user is doing a question. It is desirable to acquire the corresponding target frame after acquiring one item of topic data, and to acquire new topic data only after that target frame is acquired; that is, after the data of the first topic a is acquired, the target frame of answer a should be acquired, and only then the data of the second topic b, ensuring accurate matching between topics and answers. If the net lesson video is not paused after entry into the exercise state is detected, multiple items of topic data may appear before the user finishes writing one topic, and the matching becomes disordered. In addition, pausing the net lesson video after entry into the exercise state is detected makes it easier for the user to read the topic.
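The pairing discipline described above — pause on a new topic and refuse further topics until the matching target frame arrives — can be sketched as a small state machine. The class and method names are illustrative assumptions; they are not part of the original method.

```python
class CorrectionSession:
    """Keep topics and answers in lockstep: after a topic is acquired,
    playback stays paused until the matching target frame arrives."""
    def __init__(self):
        self.paused = False
        self.pending_topic = None
        self.pairs = []

    def on_topic(self, topic):
        if self.pending_topic is not None:
            return False          # refuse a second topic before the answer
        self.pending_topic = topic
        self.paused = True        # pause the net lesson video
        return True

    def on_target_frame(self, frame):
        if self.pending_topic is None:
            return False
        self.pairs.append((self.pending_topic, frame))
        self.pending_topic = None
        self.paused = False       # resume playback
        return True

s = CorrectionSession()
s.on_topic("topic a")
assert not s.on_topic("topic b")   # blocked while answer a is pending
s.on_target_frame("answer a")
s.on_topic("topic b")
s.on_target_frame("answer b")
print(s.pairs)
```

Because a second topic is rejected while an answer is pending, the disordered matching described above (answer a against topic b) cannot arise.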
In an embodiment, the method for controlling an eye-protection lamp further includes:
generating at least one video frame group according to each topic head frame, wherein each video frame group includes video pictures of the same topic, and the topic head frame is the video frame in which each topic first appears;
selecting, from each video frame group, a video frame in which the topic area is not occluded, based on the boundary coordinates of the topic area shown in the video frames of each topic, and extracting the topic picture corresponding to each topic, wherein the video frames of one video frame group correspond to the same topic picture;
extracting the text content of each topic picture to obtain a topic content set, wherein the topic content set includes the topic content of each topic;
entering the exercise state specifically includes the following steps:
the eye-protection lamp identifies the user's question-making state according to the monitoring video, and enters the exercise state when it detects both that the question-making state indicates the user has started doing a question and that the video frame currently played by the learning terminal belongs to one of the video frame groups.
In an example, in actual use the user may perform some actions in the question-making area without actually doing a question, and these actions are easily misjudged as the start of question making. The harm of such misjudgment is that the screenshot taken by the learning terminal would not contain topic data. Therefore, whether to enter the exercise state is further determined by judging whether the video frame currently played by the learning terminal belongs to any video frame group.
S103, when it is detected that the user has finished doing the question, taking the video frame of the monitoring video at the moment the user finishes as the target frame.
Whether the user has finished doing the question can be determined by identifying the user's question-making gesture in the monitoring video; for example, when the user is recognized making a pre-agreed gesture, or an agreed mark is drawn on the exercise book in the question-making area, it can be determined that the user has finished.
In an embodiment, one lamp cap of the eye-protection lamp illuminates the screen area of the learning terminal and the other lamp cap illuminates the question-making area containing the exercise book. When the learning terminal starts to play the net lesson video, the camera of the eye-protection lamp shoots the question-making area. When, while the user solves the net lesson video's questions on the exercise book, the user's question-making gesture is detected to indicate completion, the video frame at that moment is taken as the target frame. The target frame contains the solving steps and the solving result written by the user on the exercise book. However, since the topic being answered is not on the exercise book, the camera cannot obtain the corresponding topic after obtaining the user's solving steps and result, so automatic correction cannot be performed from the target frame alone; the topic information must be obtained elsewhere. Preferably, the corresponding topic screenshot can be obtained from the learning terminal communicatively connected with the eye-protection lamp, yielding the corresponding target topic content; the specific acquisition of the topic screenshot and the target topic content may refer to the methods provided in the above embodiments and is not repeated here.
In an embodiment, detecting that the user completes doing the question specifically includes:
and detecting that the gesture of the user is a preset answer completion gesture.
In an example, the user may make an agreed gesture toward the camera, such as an OK gesture, a scissor-hand gesture, or an extended-finger gesture, to indicate that the question is finished; when the eye-protection lamp detects at least one of these gestures toward the camera, it determines that the user has finished doing the question. This detection method determines completion more accurately and avoids misjudgment.
In one embodiment, detecting that the user has completed doing the question includes:
and detecting that the question making area has a preset pattern.
In an example, the user may mark the exercise book to indicate completion; when a preset pattern is detected in the question-making area, it is determined that the user has finished answering. For example, the user may draw an agreed pattern on the exercise book, such as a triangle, circle, square, or five-pointed star, or draw a check mark such as "v" to mark the end of the answer; when the eye-protection lamp detects at least one of these patterns, it determines that the user has finished doing the question. The detection method provided by this embodiment determines completion more accurately and avoids misjudgment.
In one embodiment, a video frame containing the user's solving steps and solving result may also be captured as the target frame by voice-controlling the camera: when the user finishes a question, the camera can be told by voice to capture the current video frame as the target frame. A frame-capture control may also be provided on the camera; when the user has finished answering, pressing the control acquires the target frame.
S104, associating the target topic content with the target frame to generate job data, wherein the job data is used for automatic correction.
Two queues may be established: one is a topic queue of topic contents, used to hold topic data, in which, for example, the id of a topic screenshot or of the topic content may be stored; the other is an answer queue of the answers corresponding to the topic contents, used to hold the answers written by the user. The association is established according to the order relation of the two queues: target topic contents are added to the topic queue in the order they are obtained, answers (target frames) are added to the answer queue in the order they are obtained, and when the association is carried out, corresponding topics and answers are paired from the heads of the two queues in order.
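The two-queue scheme can be sketched with Python deques; the ids and function names below are illustrative assumptions.

```python
from collections import deque

# One queue holds topic ids, the other holds answer (target frame) ids;
# pairs are popped from the heads in arrival order to form job data.
topic_queue, answer_queue = deque(), deque()

def push_topic(topic_id):
    topic_queue.append(topic_id)

def push_answer(frame_id):
    answer_queue.append(frame_id)

def associate():
    """Pair topics and answers in arrival order to form job data."""
    job_data = []
    while topic_queue and answer_queue:
        job_data.append({"topic": topic_queue.popleft(),
                         "answer": answer_queue.popleft()})
    return job_data

push_topic("topic_a"); push_answer("frame_a")
push_topic("topic_b"); push_answer("frame_b")
print(associate())
```

Because both queues are FIFO and are fed in acquisition order, the Nth topic is always paired with the Nth answer, which is exactly the order relation the embodiment relies on.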
The job data includes topic contents and the corresponding answer data; the topic content is obtained from the target topic content, and the answer data is obtained from the target frame. In the existing method, the answer written by the user is on the exercise book while the topic is on the learning terminal, and since the camera cannot acquire a video frame containing the topic, automatic correction cannot be achieved. By associating the target topic content with the target frame in this embodiment, the job data is produced and automatic correction becomes possible.
In an embodiment, when the target topic content cannot be matched with the answer contained in the target frame, the job data may serve as the user's note data, used to record the user's question-making situation for subsequent wrong-question statistics and review of the question-making process.
In one embodiment, the eye-protection lamp may associate the obtained target topic content with the target frame to generate the job data.
In an embodiment, the eye-protection lamp may upload the obtained target topic content and the target frame to the background server, so that the background server associates the received target topic content and target frame to generate the job data.
In an embodiment, a matched partitioned exercise book may be used for solving: the user writes in area a of the exercise book the solving steps and solving result of topic a, and when the camera of the eye-protection lamp identifies area a within the question-making area, it can identify the written content of that area, thereby realizing automatic correction of the job.
In one embodiment, a partitioned display may be adopted when recording the net lesson video, so that the topic content and the teacher are displayed in separate areas of the learning terminal's screen. When the user watches such a net lesson video and the learning terminal performs the screen-capturing operation, it can directly capture the area displaying the topic content. Specifically, the page of a newly recorded net lesson video is divided into two areas, one displaying the teacher and the other displaying the teaching content (including topics, audio-video courseware, and the like); when the learning terminal captures the screen to obtain a topic, it can directly intercept the area displaying the teaching content, i.e., the area displaying the topic. This reduces the data volume of the intercepted topic content and allows the text content of the topic screenshot to be extracted more quickly and accurately later.
According to the method and the device of this embodiment, job data can be generated by associating the acquired target topic content with the target frame, the user's question-making situation can be learned in time, and personalized learning coaching can be provided for the user.
As can be seen from the foregoing, the embodiments of the application obtain the target topic content on the learning terminal, obtain through the camera the target frame at the moment the user finishes doing the question, and associate the target topic content with the target frame. Job data containing the target topic content together with the user's solving steps and solving result is thus obtained, so the user's solving steps and result can be corrected automatically and in time; that is, automatic correction of the job data is realized in time. This solves the technical problem that, in the scenario where the topic is displayed on an electronic device while the answer is written on the exercise book (i.e., the topic and the answer are in different places), the eye-protection lamp could not implement the automatic correction function.
Example 2
The embodiment of the application also provides a job-correcting method based on the eye-protection lamp, which includes the following specific procedures:
In an embodiment, another control method for the eye-protection lamp is provided. A user may write the answers to several topics on one page of the exercise book in the question-making area, making it difficult to distinguish which answer corresponds to which topic; the user may write an answer at a random position on the exercise book, and, when the page space is insufficient, the answer to one topic may be written in several different areas. By comparing video frames, the newly added portion of a video frame can be found and taken as the answer content.
In the process of playing the net lesson video, the eye-protection lamp shoots the question-making area through the camera to obtain the monitoring video;
after obtaining the question making data, the background server executes the following steps:
when entry into the exercise state is detected, executing the operation of acquiring the topic to generate target topic content, wherein the target topic content includes topic data of the net lesson video played by the learning terminal;
when the user is detected to finish doing the questions, taking a video frame of the monitoring video when the user finishes doing the questions as a target frame;
associating the target topic content with the target frame to generate job data, the job data being used for automatic correction; wherein the topic data includes the identifiers of the monitoring video and the net lesson video and a target playing progress set;
the background server also performs the following steps:
acquiring, from the monitoring video, the Nth target frame corresponding to the Nth topic and the (N-1)th target frame corresponding to the (N-1)th topic, wherein the Nth target frame is the video frame at the moment the user finishes the Nth topic, and the (N-1)th target frame is the video frame at the moment the user finishes the (N-1)th topic;
comparing the (N-1)th target frame with the Nth target frame, determining the newly added portion, and taking the newly added portion as the answer image.
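The frame comparison that isolates the newly added portion can be sketched on toy grayscale data as below. A real system would operate on camera frames (e.g. with absolute differencing in an image library); the per-pixel representation and the threshold of 30 are assumptions for illustration.

```python
def new_strokes(prev_frame, curr_frame, threshold=30):
    """Compare two grayscale frames (lists of pixel rows) and return a
    mask marking pixels that changed, i.e. the newly written answer."""
    return [
        [abs(c - p) > threshold for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev_frame, curr_frame)
    ]

# Frame N-1: blank page (255 = white); frame N: a dark stroke appeared.
prev = [[255, 255, 255], [255, 255, 255]]
curr = [[255,  40, 255], [255,  40, 255]]
mask = new_strokes(prev, curr)
print(mask)  # True where the user wrote the answer to topic N
```

The True pixels of the mask delimit the newly added portion, which is then cropped out as the answer image of the Nth topic.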
The identifier of the net lesson video is the id of the topic, and the target playing progress set includes a plurality of target playing progresses.
In an embodiment, when the newly added portion is on the same page of the exercise book, taking the newly added portion as the answer image may specifically include:
if the newly added part is at the same position of the same operation page, directly taking the newly added part as an answer image;
if the newly added part is at different positions of the same operation page, splicing the newly added part at different positions to a concentrated area according to the writing time sequence of the newly added part to form an answer image.
In an example, the N-1-th frame and the N-th frame are shown in fig. 6. The answer content of the N-1-th question is displayed in the N-1-th frame, and the answer content of both the N-1-th and the N-th question is displayed in the N-th frame. When there is not enough space at the original answer position for the N-th question, the rest of its answer is written at another position, so the written parts at the two newly added positions in the N-th frame can both be taken as answer content. The writing times of the original answer position of the N-th question and of the other position are acquired, and the newly added parts at the two positions are spliced into a concentrated area according to their writing time order: the part at the original answer position, written earlier, is placed in the front section area in fig. 6, and the part at the other position, written later, is placed in the rear section area in fig. 6, forming the answer image.
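The time-ordered splicing described above can be sketched as follows; stacking the regions vertically and padding with white are assumptions of this illustration, not requirements of the embodiment:

```python
import numpy as np

def splice_answer_regions(regions):
    """Concatenate answer-image regions top-to-bottom in writing order.

    regions: list of (write_time, image) pairs, where each image is a
    2-D uint8 array cropped from the target frame. Narrower regions are
    padded to the widest region with white (255) pixels.
    """
    ordered = [img for _, img in sorted(regions, key=lambda p: p[0])]
    width = max(img.shape[1] for img in ordered)
    padded = [np.pad(img, ((0, 0), (0, width - img.shape[1])),
                     constant_values=255) for img in ordered]
    return np.vstack(padded)

early = np.zeros((2, 3), dtype=np.uint8)   # part at the original answer position
late = np.ones((1, 2), dtype=np.uint8)     # part written later at another position
answer = splice_answer_regions([(5.0, late), (1.0, early)])
```

The earlier-written region ends up in the front (top) section and the later one in the rear section, matching the fig. 6 layout described above.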
In an embodiment, when the newly added parts are on different pages of the exercise book, taking the newly added parts as the answer image may specifically include:
acquiring the last image frame of the Nth page of the exercise book, wherein the Nth page comprises partial answer content of the Nth question;
acquiring an image frame of an N-th question answered by an N+1th page on the exercise book;
extracting the image of the last image frame, and extracting the image of the image frame of the N-th question after answering;
and combining the two extracted images to generate an answer image of the Nth question.
In an example, a user may write an answer in a random area on the exercise book. When the remaining page space is insufficient, the user may write the answer to the same question on another page; the newly written part can be found by comparing two frames, and that part together with the original part forms the answer content newly added across different pages. For example, referring to fig. 7, the Nth page of the exercise book is insufficient for the answer to the Nth question, so the remaining answer must be written on the (N+1)th page. The last image frame of the Nth page may be determined by comparing the displayed contents of several image frames of the Nth page, and the first image frame of the (N+1)th page may likewise be determined from the displayed contents; when it is detected that the (N+1)th page carries the finish-question mark, the current image frame may be acquired as the frame in which the Nth question is finished. The image of the last video frame and the image of the finished Nth question can then be extracted and combined to obtain the answer image of the Nth question. In an embodiment, in order to facilitate later review of a student's answers, the questions played by the learning terminal, the answers written by the student and the reference answers may be obtained, and the homework data may be assembled from these three parts.
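One way to sketch the cross-page frame selection described above is as a scan over per-frame annotations; the `page` and `done_mark` fields are hypothetical names for information an upstream detector would supply:

```python
def locate_cross_page_frames(frames, n):
    """Find the last frame of page N and the completion frame on page N+1.

    frames: time-ordered list of dicts with keys 'page' (int, which
    exercise-book page is visible) and 'done_mark' (bool, whether the
    finish-question mark has been detected on that page).
    Returns (index_of_last_page_n_frame, index_of_completion_frame),
    either of which is None if not found.
    """
    last_n = None
    completion = None
    for i, frame in enumerate(frames):
        if frame['page'] == n:
            last_n = i                         # keeps updating -> last occurrence
        elif frame['page'] == n + 1 and frame['done_mark'] and completion is None:
            completion = i                     # first frame showing the done mark
    return last_n, completion

frames = [
    {'page': 1, 'done_mark': False},
    {'page': 1, 'done_mark': False},   # last frame of page 1
    {'page': 2, 'done_mark': False},
    {'page': 2, 'done_mark': True},    # page 2 now carries the finish mark
]
last_n, done = locate_cross_page_frames(frames, 1)
```

The two returned frame indices correspond to the two images that are extracted and combined into the answer image of the Nth question.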
Specifically, the target playing progress of the net lesson video at which the user starts doing the questions is obtained, wherein the target playing progress is the playing progress of the net lesson video when, during shooting of the monitoring video, the user is detected to start doing the questions. The screenshot of the pure question of the Nth question is obtained based on the target playing progress; fig. 8 shows the pure question of the Nth question.
Whether the user has started doing the questions may be determined by referring to the corresponding method in the foregoing embodiment.
Further, a screenshot is obtained that contains the title and the answer: acquiring the last frame of the N-th question from a recorded video played by the net lesson video, wherein the last frame comprises the question of the N-th question and a reference answer;
and integrating the screenshot of the pure questions of the N-th question, the last frame of the N-th question and the image of the answer content solved by the user to generate a user operation record so as to know the answer condition of the user later.
The last frame of the Nth question may be determined by comparing the contents of adjacent video frames in the recorded video. For example, suppose the first question first appears in a certain frame of the recorded video and continues to appear in subsequent frames; if the M-th frame still contains the first question but the (M+1)-th frame does not, the M-th frame is the last frame of the first question. In the same way, the last frame of the Nth question can be determined by checking whether adjacent frames still contain the Nth question.
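The adjacent-frame comparison above can be sketched as follows, assuming an upstream step has already reduced each recorded frame to the set of question ids it displays (a representation this sketch introduces for illustration):

```python
def last_frame_of_question(frame_questions, n):
    """Index of the last recorded frame that still shows question n.

    frame_questions: list where element i is the set of question ids
    visible in frame i. Returns None if question n never appears.
    """
    last = None
    for i, visible in enumerate(frame_questions):
        if n in visible:
            last = i    # overwritten until question n stops appearing
    return last

# Frames 0-2 show question 1; question 2 appears from frame 2 onward.
timeline = [{1}, {1}, {1, 2}, {2}, {2, 3}]
last_q1 = last_frame_of_question(timeline, 1)
```

That last frame is the one containing both the question and the reference answer, per the step above.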
Example III
The embodiment of the present application further provides another method for controlling an eye-protection lamp, described using an example in which the eye-protection lamp, the learning terminal and the background server interact. As shown in fig. 9, a specific flow of the method for controlling an eye-protection lamp may be as follows:
s201, in the process of playing the net lesson video, the eye-protection lamp shoots a question area through the camera to obtain a monitoring video.
When the learning terminal starts to play the net lesson video, the eye-protection lamp starts to shoot a monitoring video of the question-making area through the camera. The camera can capture the content written on the exercise book in the question-making area, which may include the problem-solving steps and results for the questions on the learning terminal.
The method and the device can identify the user's question-making gestures in the monitoring video and, according to those gestures, judge whether the user has started or finished doing the questions.
S202, when the eye-protection lamp detects that the exercise state is entered, the operation of acquiring the questions is executed to generate target question contents, wherein the target question contents comprise question data of the net lesson video played by the learning terminal.
In an embodiment, when the eye-protection lamp detects that the exercise state is entered, a question trigger signal is generated, the eye-protection lamp sends the question trigger signal to the learning terminal, the learning terminal executes a screen capturing operation to obtain a question screenshot after receiving the question trigger signal, the learning terminal sends the question screenshot to the eye-protection lamp, and the eye-protection lamp generates target question content based on the question screenshot.
In one example, the exercise state is entered when the eye-shield light detects that the user begins to do a question. When the eye-protection lamp detects that the gesture of the user is a gesture agreed in advance or detects that the user starts writing on the exercise book, the user can be determined to start doing questions, or when the eye-protection lamp monitors the voice of starting doing questions sent by the user, the user can be determined to start doing questions.
In an embodiment, when the eye-protection lamp detects that the exercise state is entered, a question trigger signal is generated, the eye-protection lamp sends the question trigger signal to the learning terminal, the learning terminal executes a screen capturing operation to obtain a question screenshot after receiving the question trigger signal, the learning terminal sends the question screenshot to the eye-protection lamp, and the eye-protection lamp takes the received question screenshot as target question content.
In an embodiment, when the eye-protection lamp detects that the exercise state is entered, a question trigger signal is generated, the eye-protection lamp sends the question trigger signal to the learning terminal, the learning terminal determines a currently played video frame according to the current net lesson video playing progress after receiving the question trigger signal, and the target question content is acquired from the question content set based on the currently played video frame.
S203, when the eye-protection lamp detects that the user finishes doing the questions, taking a video frame in the monitoring video when the user finishes doing the questions as a target frame.
In an embodiment, when the eye-protection lamp detects that the gesture of the user is a preset answer completion gesture, it is determined that the user completes making the questions.
In an embodiment, when the eye-protection lamp detects that the problem-making area has the preset pattern, it is determined that the user completes making the problem.
In an example, the eye-protection lamp may determine whether the user has finished the question by recognizing a question-making gesture, a hand gesture, or a mark made on the user's exercise book. For example, the eye-protection lamp determines that the user has finished when it detects at least one of certain gestures toward the camera, such as an OK gesture, a scissor-hand gesture, or an extended finger. Alternatively, the eye-protection lamp determines that the user has finished when it detects that the user has drawn an agreed pattern on the exercise book, such as a triangle, circle, square or pentagon, or a check mark such as '√'.
S204, the background server correlates the target subject content with the target frame to generate job data, and the job data is used for automatic correction.
In an embodiment, the eye-protection lamp sends the topic screenshot as target topic content to the background server and sends a target frame containing the answer data written by the user to the background server; the background server performs image recognition on the topic screenshot and the target frame to obtain the texts of the topic and the answer, so that the computing-resource cost of the learning terminal and the eye-protection lamp controller is reduced.
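The server-side association of target topic content with the target frame into job data can be sketched as a simple record; all field names here are illustrative assumptions, since the embodiment only requires that topic and answer be linked for later automatic correction:

```python
from dataclasses import dataclass

@dataclass
class JobData:
    """One job record linking a topic to the user's answer frame."""
    topic_text: str       # text recognized from the topic screenshot
    answer_frame_id: str  # identifier of the target frame in the monitoring video
    video_id: str         # identifier of the net lesson video
    play_progress: float  # target playing progress (seconds) when the question began

# A server would build one record per completed question:
record = JobData(topic_text='Question N text',
                 answer_frame_id='frame-042',
                 video_id='lesson-7',
                 play_progress=123.5)
```

Downstream automatic correction would consume such records, comparing the recognized answer against the reference answer for each topic.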
As can be seen from the above, in this embodiment of the present application, the learning terminal may obtain the topic screenshot, and the eye-protection lamp obtains the target topic content after receiving the screenshot sent by the learning terminal. The camera installed on the eye-protection lamp captures the target video frame when the user completes the question. The eye-protection lamp may then upload the target topic content and the target frame to the background server, which associates them to obtain the target topic content together with the user's problem-solving steps and results, i.e., the above-mentioned job data. Automatic correction of the user's problem-solving steps and results, that is, automatic correction of the job data, can therefore be achieved in time, so that the user's question-making situation can be known promptly and personalized learning coaching can be provided.
Example IV
The embodiment of the present application further provides another method for controlling an eye-protection lamp, described using an example in which the eye-protection lamp, the learning terminal and the background server interact. As shown in fig. 10, a specific flow of the method for controlling an eye-protection lamp may be as follows:
s301, in the process of playing the net lesson video, the eye-protection lamps shoot the question area through the camera to obtain the monitoring video.
In an embodiment, a control system for an eye-protection lamp is provided. The control system comprises a client, a background server, the eye-protection lamp, a learning terminal and a camera, wherein the client is installed in the learning terminal and the learning terminal is in communication connection with the eye-protection lamp. A user can log in to the client on the learning terminal to attend a net lesson and answer the questions in the net lesson.
The monitoring video of the question-making area can be shot through the camera installed on the eye-protection lamp; the camera can be mounted on the lamp bracket of the eye-protection lamp so that it can capture the problem-solving steps and results written by the user on the exercise book in the question-making area. The monitoring video shot by the camera and the target frame containing the problem-solving steps and results can be sent to the background server. The learning terminal can acquire the topic screenshot corresponding to the question the user is solving, take the screenshot as the target topic content and send it to the background server, and the background server can perform image recognition on the received target topic content to obtain the corresponding topic content. The target frame and the topic content are then associated to generate job data for automatic correction. In this way, each user's problem-solving steps and results can be corrected in time, personalized learning coaching can be provided based on the correction results, and the user's learning efficiency is improved. This also solves the problem that, in existing solutions, the question is on the learning terminal while the user's answer is on the exercise book, so homework cannot be corrected in time.
S302, when the learning terminal detects that the learning terminal enters an exercise state, the operation of acquiring the questions is executed to generate target question contents, wherein the target question contents comprise question data of the video of the net lesson played by the learning terminal.
In an embodiment, when the learning terminal detects that the learning terminal enters the exercise state, the learning terminal executes a screen capturing operation to obtain a topic screen capturing, and the learning terminal takes the topic screen capturing as the target topic content after obtaining the topic screen capturing.
In an embodiment, when the learning terminal detects that the learning terminal enters the exercise state, the learning terminal executes a screen capturing operation to obtain a topic screen capture, and after the learning terminal obtains the topic screen capture, the learning terminal determines target topic content matched with the topic screen capture from the topic content set.
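Matching a (possibly occluded) topic screenshot against the topic content set can be sketched with fuzzy text similarity; the use of `difflib` here is an assumption of this illustration, since the embodiment does not name a matching algorithm:

```python
from difflib import SequenceMatcher

def match_topic(screenshot_text, topic_contents):
    """Pick the stored topic whose text best matches the OCR'd screenshot.

    screenshot_text may be partially occluded, so an exact lookup can fail;
    fuzzy ratio matching is one way to realize the 'matched' relation.
    Returns the best-matching stored topic text (or None for an empty set).
    """
    best, best_score = None, 0.0
    for topic in topic_contents:
        score = SequenceMatcher(None, screenshot_text, topic).ratio()
        if score > best_score:
            best, best_score = topic, score
    return best

topics = ['Solve 3x + 5 = 20 for x.', 'Name the capital of France.']
# Screenshot text truncated by occlusion still matches the right topic.
match = match_topic('Solve 3x + 5 = 2', topics)
```

A production system would likely add a minimum-score threshold so that a screenshot with no good match is flagged rather than force-matched.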
S303, when the eye-protection lamp detects that the user finishes doing the questions, taking a video frame in the monitoring video when the user finishes doing the questions as a target frame.
S304, the background server correlates the target subject content with the target frame to generate job data, and the job data is used for automatic correction.
In an embodiment, the learning terminal sends the topic screenshot as target topic content to the background server, the eye-protection lamp sends a target frame containing the answer data written by the user to the background server, and the background server performs image recognition on the topic screenshot and the target frame to obtain the texts of the topic and the answer, so that the computing-resource cost of the learning terminal and the eye-protection lamp controller is reduced.
In one embodiment, the eye-protection lamp may send the target frame to the learning terminal, and then the learning terminal may correlate the target topic content with the target frame to generate the job data.
As can be seen from the foregoing, in this embodiment, the learning terminal may obtain the topic screenshot, and the background server obtains the target topic content after receiving the screenshot sent by the learning terminal. The eye-protection lamp obtains, through the camera, the target frame when the user completes the question and sends the target frame to the background server. The background server then associates the target topic content with the target frame, so that job data comprising the target topic content and the user's problem-solving steps and results is obtained.
In order to better implement the above method, correspondingly, the embodiment of the application also provides a control device of the eye-protection lamp, which comprises a unit for executing the control method of the eye-protection lamp in any embodiment. It should be noted that, the control device for an eye-protection lamp provided in the embodiment of the present application and the control method for an eye-protection lamp in the above embodiment belong to the same concept, and any method provided in the embodiment of the control method for an eye-protection lamp can be implemented by using the control device for an eye-protection lamp, and detailed implementation processes of the method are shown in the embodiment of the control method for an eye-protection lamp, which is not repeated herein.
The control means of the eye-protection lamp described above may be implemented in the form of a computer program which can be run on a computer device as shown in fig. 11.
As shown in fig. 11, the embodiment of the present application provides a computer device, including a processor 111, a communication interface 112, a memory 113, and a communication bus 114, where the processor 111, the communication interface 112, and the memory 113 perform communication with each other through the communication bus 114,
a memory 113 for storing a computer program;
in one embodiment of the present application, the processor 111 is configured to implement the control method of the eye-protection lamp provided in any one of the foregoing method embodiments when executing the program stored in the memory 113.
Those skilled in the art will appreciate that all or part of the flow in a method embodying the above described embodiments may be accomplished by computer programs instructing the relevant hardware. The computer program may be stored in a storage medium that is a computer readable storage medium. The computer program is executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present application also provides a storage medium. The storage medium may be a computer readable storage medium. The storage medium stores a computer program. The computer program, when executed by the processor, causes the processor to execute the method for controlling the eye-protection lamp provided by any one of the method embodiments.
The storage medium is a physical, non-transitory storage medium, and may be, for example, a U-disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk. The computer readable storage medium may be nonvolatile or may be volatile.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the application can be combined, divided and deleted according to actual needs. In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The integrated unit may be stored in a storage medium if implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application is essentially or a part contributing to the prior art, or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a terminal, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present application.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and for those portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application cover such modifications and variations provided they fall within the scope of the claims and their equivalents.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (7)
1. A method of controlling an eye-shield lamp, comprising:
in the process of playing the net lesson video, the eye-protection lamps shoot the question making area through the camera to obtain a monitoring video;
when the eye-protection lamp detects that the exercise state is entered, executing the operation of acquiring the questions to generate target question contents, wherein the target question contents comprise question data of a learning terminal for playing a net lesson video;
wherein, detect and get into the exercise state, specifically include:
The eye-protection lamp identifies the question making state of the user according to the monitoring video;
when the eye-protection lamp detects that the question making state indicates that the user starts to make questions, the eye-protection lamp enters an exercise state;
the executing operation of acquiring the title specifically comprises the following steps:
the eye-protection lamp receives a theme screenshot returned by the learning terminal; the topic screenshot is obtained by the learning terminal executing a screen capturing operation in response to a received topic triggering signal, and the topic triggering signal is generated when the eye-protection lamp enters an exercise state;
the generating the target topic content specifically comprises the following steps:
generating target topic contents by the eye-protection lamp based on the topic screenshot;
when the eye-protection lamp detects that the user finishes doing the questions, taking a video frame in the monitoring video when the user finishes doing the questions as a target frame;
the eye-protection lamp associates the target topic content with the target frame to generate job data, and the job data can be used for automatic correction.
2. The method of claim 1, wherein the step of performing the task of capturing the task when the eye-shield lamp detects entry into the exercise state, before generating the target task content, further comprises:
generating at least one video frame group for each title head frame, wherein each video frame group comprises video pictures of the same title, and the title head frame is a video frame of each title which appears for the first time;
Selecting, based on boundary coordinates of the topic area shown in the video frames of each topic, a video frame in which the topic area is not blocked from the video frame group, and extracting a topic picture corresponding to each topic from the video frame; wherein each video frame group corresponds to the same topic picture;
extracting text content of each topic picture to obtain a topic content set, wherein the topic content set comprises topic contents of each topic;
the eye-protection lamp generates target topic contents based on the topic screen shots, and the eye-protection lamp comprises the following components:
the eye-protection lamp extracts text contents of the theme screen shots, and part of the text contents of the theme screen shots in the theme area are shielded;
and the eye-protection lamp acquires the topic text content matched with the text content of the topic screenshot from the topic content set, and takes the topic text content as target topic content.
3. The method according to claim 1, wherein detecting that the user has completed doing a question comprises:
and detecting that the gesture of the user is a preset answer completion gesture.
4. The method of claim 1, wherein detecting that the user has completed doing the task comprises:
And detecting that the question making area has a preset pattern.
5. The method according to claim 1, wherein the method further comprises:
generating at least one video frame group for each title head frame, wherein each video frame group comprises video pictures of the same title, and the title head frame is a video frame of each title which appears for the first time;
selecting, based on boundary coordinates of the topic area shown in the video frames of each topic, a video frame in which the topic area is not blocked from the video frame group, and extracting a topic picture corresponding to each topic from the video frame; wherein each video frame group corresponds to the same topic picture;
extracting text content of each topic picture to obtain a topic content set, wherein the topic content set comprises topic contents of each topic;
the entering exercise state specifically comprises the following steps:
and the eye protection lamp identifies the question making state of the user according to the monitoring video, and enters the exercise state when the eye protection lamp detects that the question making state indicates the user to start making questions and the video frame currently played by the learning terminal belongs to any video frame group.
6. A homework correction method based on an eye-protection lamp, characterized by being applied to a background server and comprising the following steps:
In the process of playing the net lesson video, the eye-protection lamps shoot the question making area through the camera to obtain a monitoring video;
after obtaining the question making data, the background server executes the following steps:
when the exercise state is detected to be entered, executing the operation of acquiring the questions to generate target question contents, wherein the target question contents comprise question data of a learning terminal for playing a net lesson video;
wherein, detect and get into the exercise state, specifically include:
the eye-protection lamp identifies the question making state of the user according to the monitoring video;
when the eye-protection lamp detects that the question making state indicates that the user starts to make questions, the eye-protection lamp enters an exercise state;
the executing operation of acquiring the title specifically comprises the following steps:
the eye-protection lamp receives a theme screenshot returned by the learning terminal; the topic screenshot is obtained by the learning terminal executing a screen capturing operation in response to a received topic triggering signal, and the topic triggering signal is generated when the eye-protection lamp enters an exercise state;
the generating the target topic content specifically comprises the following steps:
generating target topic contents by the eye-protection lamp based on the topic screenshot;
when the user is detected to finish doing the questions, taking a video frame of the monitoring video when the user finishes doing the questions as a target frame;
The target topic content and the target frame are associated to generate job data, and the job data is used for automatic correction; the question data comprise identifiers of a monitoring video and a net lesson video and a target playing progress set;
the background server also performs the following steps:
acquiring an N-th target frame corresponding to an N-th question and an N-1-th target frame corresponding to an N-1-th question from a monitoring video, wherein the N-th target frame is a video frame when a user finishes the N-th question, and the N-1-th target frame is a video frame when the user finishes the N-1-th question;
and comparing the N-1 target frame with the N target frame, determining a newly added part, and taking the newly added part as an answer image.
7. An eye-protection lamp, characterized in that it comprises a control device, a camera, the control device comprising means for performing the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311363375.XA CN117095340B (en) | 2023-10-20 | 2023-10-20 | Eye-protection lamp control method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117095340A CN117095340A (en) | 2023-11-21 |
CN117095340B true CN117095340B (en) | 2024-03-29 |
Family
ID=88773925
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311363375.XA Active CN117095340B (en) | 2023-10-20 | 2023-10-20 | Eye-protection lamp control method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117095340B (en) |
Citations (6)
Publication number | Priority date | Publication date | Title
---|---|---|---
CN111405381A (en) * | 2020-04-17 | 2020-07-10 | 深圳市即构科技有限公司 | Online video playing method, electronic device and computer readable storage medium |
CN112001210A (en) * | 2019-05-27 | 2020-11-27 | 广东小天才科技有限公司 | Homework correcting method and device, intelligent desk lamp and computer readable storage medium |
CN112381689A (en) * | 2020-11-04 | 2021-02-19 | 锐捷网络股份有限公司 | Online correction operation method and device |
CN112580503A (en) * | 2020-12-17 | 2021-03-30 | 深圳市元德教育科技有限公司 | Operation correction method, device, equipment and storage medium |
CN112785885A (en) * | 2021-01-29 | 2021-05-11 | 北京乐学帮网络技术有限公司 | Online learning method and device, electronic equipment and storage medium |
CN113516395A (en) * | 2021-07-19 | 2021-10-19 | 读书郎教育科技有限公司 | Operation monitoring method and system based on intelligent desk lamp |
Family Cites Families (1)
Publication number | Priority date | Publication date | Title
---|---|---|---
CN114120166B (en) * | 2021-10-14 | 2023-09-22 | 北京百度网讯科技有限公司 | Video question-answering method and device, electronic equipment and storage medium |
- 2023-10-20: CN application CN202311363375.XA granted as patent CN117095340B (status: Active)
Similar Documents
Publication | Title
---|---
CN109271945B (en) | Method and system for realizing job correction on line
CN109960809B (en) | Dictation content generation method and electronic equipment
CN110162164B (en) | Augmented reality-based learning interaction method, device and storage medium
CN109635772A (en) | Dictation content correcting method and electronic equipment
CN106210836A (en) | Interactive learning method and device in video playing process and terminal equipment
CN111027537B (en) | Question searching method and electronic equipment
CN109637286A (en) | Spoken language training method based on image recognition and family education equipment
JP2015161892A (en) | Analysis device and program
CN111081117A (en) | Writing detection method and electronic equipment
CN114926889B (en) | Job submission method and device, electronic equipment and storage medium
CN110210040A (en) | Text translation method, device, equipment and readable storage medium
CN111079501B (en) | Character recognition method and electronic equipment
CN111077996A (en) | Information recommendation method based on point reading and learning equipment
JP2933562B2 (en) | Exercise posture analyzer using personal computer
CN112055257B (en) | Video classroom interaction method, device, equipment and storage medium
CN111078179B (en) | Dictation progress control method and electronic equipment
CN107364256B (en) | Intelligent pen, intelligent learning system and working method thereof
CN117095340B (en) | Eye-protection lamp control method and device
CN111753715B (en) | Method and device for shooting test questions in click-to-read scene, electronic equipment and storage medium
CN111050111A (en) | Online interactive learning communication platform and learning device thereof
KR101211641B1 (en) | System for explaining workbook using image code and method thereof
CN112861591A (en) | Interactive identification method, interactive identification system, computer equipment and storage medium
CN114863448A (en) | Answer statistical method, device, equipment and storage medium
US10593366B2 (en) | Substitution method and device for replacing a part of a video sequence
CN111028590B (en) | Method for guiding user to write in dictation process and learning device
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant