CN111860121A - Reading ability auxiliary evaluation method and system based on AI vision - Google Patents
- Publication number
- CN111860121A (application number CN202010499710.9A)
- Authority
- CN
- China
- Prior art keywords
- reading
- page
- time
- difficulty
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data › G06V40/20—Movements or behaviour, e.g. gesture recognition
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks › G06N3/04—Architecture, e.g. interconnection topology › G06N3/045—Combinations of networks
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks › G06N3/08—Learning methods
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V10/00—Arrangements for image or video recognition or understanding › G06V10/20—Image preprocessing › G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion › G06V10/267—Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition › G06V30/40—Document-oriented image-based pattern recognition
Abstract
The invention relates to the technical field of reading ability evaluation and provides a reading ability auxiliary evaluation method and system based on AI vision. The method comprises the following steps. S1: the user reads freely within the range the AI device can recognize, and the AI device analyzes the user's reading from the first page turn until the user leaves the reading state. S2: the time from the first page turn until the user leaves the reading state is analyzed. S3: during the user's reading behavior, the AI device identifies and counts the number of single characters or words appearing in the reading content. S4: the reading content is identified and its difficulty is judged. S5: the user's reading ability score is calculated from factors including reading time, reading amount, and reading difficulty. The method addresses the absence, in the prior art, of any way to analyze a reader's ability in real time, especially the ability to read paper books.
Description
Technical Field
The invention relates to the technical field of reading ability evaluation, and in particular to a reading ability auxiliary evaluation method and system based on AI vision.
Background
When people read in daily life, especially when reading paper books, there is usually no way to assess their reading ability afterwards. The drawback is that no study plan better suited to the user can be recommended for subsequent reading.
It is therefore necessary to find a solution for the auxiliary evaluation of reading ability, especially for paper books; for the time being, the prior art offers no such method.
The products on the market that can analyze a reader's ability are essentially built on electronic reading devices, analyzing ability through the reading records of the electronic device. In many cases, however, people still read and study from traditional paper books, and analyzing reading ability for paper books remains difficult.
CN109567817A, "A reading ability evaluation method and system and auxiliary device", discloses a method comprising the following steps. Step 1, data acquisition: reading ability is divided into three dimensions, namely memory, comprehension, and reasoning. The person being evaluated completes three corresponding reading tasks; in each task they are given reading material and questions to answer, while an eye-movement data acquisition device records the following parameters: the number of blinks while reading, the number of fixations while reading, the fixation frequency while reading, the average fixation duration while reading, the saccade frequency while reading, the average saccade duration while reading, the average pupil diameter while reading, the number of reviews while reading, the number of blinks while answering, the number of fixations while answering, and the number of reviews while answering. Step 2, data analysis: the ability value for each reading task is computed from the following models, evaluating reading ability along the three dimensions.
Reading memory model: memory ability value = 3.139×10⁻¹⁶ + 0.012 × average pupil diameter while reading + 0.091 × blinks while answering + 0.93 × fixations while answering − 0.096 × reviews while answering. Reading comprehension model: comprehension ability value = −1.814×10⁻¹⁶ + 0.032 × blinks while reading + 0.261 × fixations while reading + 0.410 × fixation frequency while reading − 1.149 × average fixation duration while reading − 1.077 × saccade frequency while reading + 0.260 × average pupil diameter while reading − 0.379 × reviews while reading + 0.319 × reviews while answering. Reading reasoning model: reasoning ability value = −8.297×10⁻¹⁶ − 0.487 × fixation frequency while reading + 0.385 × average fixation duration while reading + 1.331 × saccade frequency while reading + 0.031 × average saccade duration while reading + 0.397 × reviews while answering. In that technical scheme, reading ability is evaluated solely through eye-movement behavior, completely detached from the specific content of the book; when book content varies in difficulty, reading ability cannot be correctly evaluated.
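For concreteness, the memory-dimension model quoted above can be transcribed directly from its coefficients. This is a sketch only; the coefficients are those quoted from CN109567817A, and the example inputs are made up.

```python
def reading_memory_ability(avg_pupil_diameter, blinks_answering,
                           fixations_answering, reviews_answering):
    """Linear model for the memory dimension, as quoted from CN109567817A:
    3.139e-16 + 0.012*pupil + 0.091*blinks + 0.93*fixations - 0.096*reviews.
    """
    return (3.139e-16
            + 0.012 * avg_pupil_diameter   # average pupil diameter in reading
            + 0.091 * blinks_answering     # number of blinks while answering
            + 0.93 * fixations_answering   # number of fixations while answering
            - 0.096 * reviews_answering)   # number of reviews while answering

# Hypothetical inputs: 4.0 mm pupil, 5 blinks, 20 fixations, 3 reviews
score = reading_memory_ability(4.0, 5, 20, 3)
```

The near-zero intercept (on the order of 10⁻¹⁶) is reproduced verbatim from the quoted model; it contributes nothing at realistic input magnitudes.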
In summary, the prior art offers no good way to analyze a reader's ability in real time, especially the ability to read paper books, and hence no way to recommend a better-adapted learning plan according to that ability.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a reading ability auxiliary evaluation method and system based on AI vision, which obtains parameters including reading time, reading amount, and reading difficulty and calculates a final reading ability score. The reader's reading ability is thereby analyzed, so that when a learning plan is drawn up, the most suitable plan can be formulated according to that ability.
The above object of the present invention is achieved by the following technical solutions:
An AI vision-based auxiliary evaluation method for reading ability comprises the following steps:
S1: open an AI device and read freely within the range the device can recognize; the AI device analyzes the user's reading from the first page turn until the user leaves the reading state;
S2: take the time from the first page turn until the user leaves the reading state as the user's reading time;
S3: during the user's reading behavior, identify and count, through the AI device, the number of single characters or words appearing in the reading content, and take that number as the reading amount of the current session;
S4: judge the difficulty of the reading content by identifying it;
S5: calculate the user's reading ability score from factors including reading time, reading amount, and reading difficulty, with the specific formula: reading ability score = reading difficulty / reading amount / reading time.
Further, in step S1, the AI device's analysis of the user's reading from the first page turn until the user leaves the reading state further comprises identifying the user's page-turning actions, specifically:
S11: capture images within the range the AI device can recognize to obtain an image of the information carrier, and use a static-image deep-learning neural network for page monitoring to locate it and detect its edge positions, so that the size and position information of the book is updated in real time; the position information comprises the left and right edges and the book's center line, where each of the left and right edges may be either the real edge of the book or the edge of the AI device's field of view;
S12: take the left and right edges of the book as the default locations where page-turning behavior occurs, and detect the image content within the book region with a deep-learning neural network capable of time-series processing; when the page content changes starting from the left edge, the behavior is detected as turning back one page, and when it changes starting from the right edge, as turning forward one page;
S13: after the time-series network finishes its work, the page-monitoring static-image network keeps working, and the page turn is confirmed once the new page is clearly distinguished from the page before the turn.
Further, analyzing the time from the first page turn until the user leaves the reading state specifically comprises:
recording the time of the first page turn, recording the time of the last page turn as the moment the user leaves the reading state, and taking the difference between the two as the reading time of the current session.
Further, in step S3, the number of single characters or words appearing in the user's reading content is identified and counted as follows:
S31: randomly sample information within the page region of the reading content as candidate focus language feature-point images, cut them with an image-character-segmentation deep-learning neural network, and output the width of each single character or word;
S32: locate the line information in the reading content with a static-image-processing deep-learning neural network to obtain the number of lines and the width of each line;
S33: from the width of each single character or word, estimate the number of characters per line, and from that calculate the number of characters on the corresponding page;
S34: accumulate the character counts of all pages in each session to obtain the total number of characters read.
Further, in step S4, the difficulty of the reading content is judged by identifying it, specifically:
S41: set different article-type difficulty coefficients for the types of reading content, including articles, poetry, and scientific literature;
S42: set different uncommon-word difficulty values for the vocabulary appearing in the reading content;
S43: during the user's reading behavior, identify and scan the reading content through AI, accumulate the difficulty of the session, and compute the reading difficulty after reading ends, with the specific formula: reading difficulty = Σ(uncommon-word difficulty values) × article-type difficulty coefficient.
An AI vision-based reading ability auxiliary evaluation system comprises a reading state analysis module, a reading time acquisition module, a reading amount acquisition module, a reading difficulty acquisition module, and a reading ability calculation module.
The reading state analysis module is used so that, once the AI device is turned on and the user reads freely within the range the device can recognize, the AI device analyzes the user's reading from the first page turn until the user leaves the reading state;
the reading time acquisition module takes the time from the first page turn until the user leaves the reading state as the user's reading time;
the reading amount acquisition module identifies and counts, through the AI device, the number of single characters or words appearing in the user's reading content during the reading behavior, as the reading amount of the current session;
the reading difficulty acquisition module judges the difficulty of the reading content by identifying it;
the reading ability calculation module calculates the user's reading ability score from factors including reading time, reading amount, and reading difficulty, with the specific formula: reading ability score = reading difficulty / reading amount / reading time.
Further, the reading state analysis module further comprises:
an image positioning and edge detection unit, which captures images within the range the AI device can recognize, obtains an image of the information carrier, and uses a static-image deep-learning neural network for page monitoring to locate it and detect its edge positions, so that the size and position information of the book is updated in real time; the position information comprises the left and right edges and the book's center line, where each of the left and right edges may be either the real edge of the book or the edge of the AI device's field of view;
a page-turning behavior detection unit, which takes the left and right edges of the book as the default locations where page-turning behavior occurs and detects the image content within the book region with a deep-learning neural network capable of time-series processing; when the page content changes starting from the left edge, the behavior is detected as turning back one page, and when it changes starting from the right edge, as turning forward one page;
and a page-turn confirmation unit, which keeps the page-monitoring static-image network working after the time-series network finishes and confirms the page turn once the new page is clearly distinguished from the page before the turn.
Further, the reading amount acquisition module further comprises:
a single-character size acquisition unit, which randomly samples information within the page region of the reading content as candidate focus language feature-point images, cuts them with an image-character-segmentation deep-learning neural network, and outputs the width of each single character or word;
a line number and line width acquisition unit, which locates the line information in the reading content with a static-image-processing deep-learning neural network to obtain the number of lines and the width of each line;
a single-page character count calculation unit, which estimates the number of characters per line from the output width of each character or word and from that calculates the number of characters on the corresponding page;
and a total character count accumulation unit, which accumulates the character counts of all pages in each session to obtain the total number of characters read.
Further, the reading difficulty acquisition module further comprises:
a text difficulty setting unit, which sets different article-type difficulty coefficients for the types of reading content, including articles, poetry, and scientific literature;
an uncommon-word difficulty setting unit, which sets different uncommon-word difficulty values for the vocabulary appearing in the reading content;
and a reading difficulty calculation unit, which identifies and scans the user's reading content through AI during the reading behavior, accumulates the difficulty of the session, and computes the reading difficulty after reading ends, with the specific formula: reading difficulty = Σ(uncommon-word difficulty values) × article-type difficulty coefficient.
Compared with the prior art, the invention has the following beneficial effects:
the AI vision-based reading ability auxiliary evaluation method comprises: S1: open an AI device and read freely within the range the device can recognize; the AI device analyzes the user's reading from the first page turn until the user leaves the reading state; S2: take the time from the first page turn until the user leaves the reading state as the user's reading time; S3: during the user's reading behavior, identify and count, through the AI device, the number of single characters or words appearing in the reading content as the reading amount of the current session; S4: judge the difficulty of the reading content by identifying it; S5: calculate the user's reading ability score from factors including reading time, reading amount, and reading difficulty, with the specific formula: reading ability score = reading difficulty / reading amount / reading time. With this technical scheme, a reader's ability, especially the ability to read paper books, can be analyzed in real time, so that a better-adapted learning plan can subsequently be recommended according to that ability.
Drawings
FIG. 1 is a general flowchart of an auxiliary assessment method for reading ability based on AI vision according to the present invention;
FIG. 2 is a diagram illustrating an overall structure of an AI vision-based reading ability auxiliary evaluation system according to the present invention;
FIG. 3 is a block diagram of a reading status analysis module in the reading ability auxiliary evaluation system based on AI vision according to the present invention;
FIG. 4 is a structural diagram of the reading amount acquisition module in the AI vision-based reading ability auxiliary evaluation system of the present invention;
FIG. 5 is a structural diagram of the reading difficulty acquisition module in the AI vision-based reading ability auxiliary evaluation system of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The application scenario of the invention is as follows: when people read in daily life, especially paper books, there is usually no way to assess their reading ability afterwards, and consequently no study plan better suited to the user can be recommended for subsequent reading. It is therefore necessary to find a solution for the auxiliary evaluation of reading ability, especially for paper books; for the time being, the prior art offers no such method.
Based on this application scenario, the core idea of the invention is as follows: a reading ability auxiliary evaluation method based on AI vision is provided, which identifies factors including reading time, reading amount, and reading difficulty and derives a reading ability score through a formula. With the derived score, a reader's ability, especially the ability to read paper books, can be analyzed in real time, so that a better-adapted learning plan can subsequently be recommended according to that ability.
First embodiment
As shown in fig. 1, the embodiment provides a reading ability auxiliary evaluation method based on AI vision, which includes the following steps:
S1: open an AI device and read freely within the range the device can recognize; the AI device analyzes the user's reading from the first page turn until the user leaves the reading state.
Specifically, the AI device may be any device with a shooting function, including a camera. The user reads within the range the AI device can recognize, ensuring that the whole book being read stays within that range. The AI device recognizes each of the user's page-turning actions; the first page turn is taken as the starting point of the reading session, and the time of the last page turn as its end point. The above is the preferred scheme; in addition, if the user reads aloud, the start and end times of the reading can be judged from the user's voice.
S2: take the time from the first page turn until the user leaves the reading state as the user's reading time.
Specifically, the difference between the end time and the start time of the user's reading is taken as the time of the current session.
S3: during the user's reading behavior, identify and count, through the AI device, the number of single characters or words appearing in the reading content, and take that number as the reading amount of the current session.
S4: judge the difficulty of the reading content by identifying it.
Specifically, this is a distinctive point of the method: it introduces the concept of reading difficulty, designs different difficulty coefficients for different text types and difficulty values for the uncommon words appearing in the text, and thus analyzes the user's reading ability objectively from multiple aspects.
S5: calculate the user's reading ability score from factors including reading time, reading amount, and reading difficulty, with the specific formula: reading ability score = reading difficulty / reading amount / reading time.
Further, in step S1, the AI device's analysis of the user's reading from the first page turn until the user leaves the reading state further comprises identifying the user's page-turning actions, specifically:
S11: capture images within the range the AI device can recognize to obtain an image of the information carrier, and use a static-image deep-learning neural network for page monitoring to locate it and detect its edge positions, so that the size and position information of the book is updated in real time; the position information comprises the left and right edges and the book's center line, where each of the left and right edges may be either the real edge of the book or the edge of the AI device's field of view;
S12: take the left and right edges of the book as the default locations where page-turning behavior occurs, and detect the image content within the book region with a deep-learning neural network capable of time-series processing; when the page content changes starting from the left edge, the behavior is detected as turning back one page, and when it changes starting from the right edge, as turning forward one page;
S13: after the time-series network finishes its work, the page-monitoring static-image network keeps working, and the page turn is confirmed once the new page is clearly distinguished from the page before the turn.
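As a rough illustration of the edge-based detection in S12, the direction of a page turn can be read off from which edge of the book region the content change starts at. The sketch below is not the patent's neural networks; it is a plain frame-difference stand-in with a hypothetical threshold and band width, assuming grayscale frames and edge column indices supplied by the page-monitoring step.

```python
import numpy as np

def detect_page_turn(prev_frame, curr_frame, left_edge, right_edge,
                     threshold=10.0, band=20):
    """Compare page content near the book's left and right edges between
    consecutive frames.  A change starting at the right edge is read as
    turning forward one page; at the left edge, as turning back one page.
    Returns "next", "previous", or None (no turn detected)."""
    left_diff = np.abs(
        curr_frame[:, left_edge:left_edge + band].astype(float)
        - prev_frame[:, left_edge:left_edge + band]).mean()
    right_diff = np.abs(
        curr_frame[:, right_edge - band:right_edge].astype(float)
        - prev_frame[:, right_edge - band:right_edge]).mean()
    if max(left_diff, right_diff) < threshold:
        return None
    return "previous" if left_diff > right_diff else "next"

# Synthetic frames: content change beginning at the right edge of the book
prev = np.zeros((100, 200))
curr = prev.copy()
curr[:, 185:] = 255.0
direction = detect_page_turn(prev, curr, left_edge=0, right_edge=200)
```

In the patent's scheme, the time-series network would perform this detection and the static-image network would then confirm the turn (S13); the threshold comparison here merely stands in for that confirmation.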
Further, analyzing the time from the first page turn until the user leaves the reading state specifically comprises:
recording the time of the first page turn, recording the time of the last page turn as the moment the user leaves the reading state, and taking the difference between the two as the reading time of the current session.
Further, in step S3, the number of single characters or words appearing in the user's reading content is identified and counted as follows:
S31: randomly sample information within the page region of the reading content as candidate focus language feature-point images, cut them with an image-character-segmentation deep-learning neural network, and output the width of each single character or word;
S32: locate the line information in the reading content with a static-image-processing deep-learning neural network to obtain the number of lines and the width of each line;
S33: from the width of each single character or word, estimate the number of characters per line, and from that calculate the number of characters on the corresponding page;
S34: accumulate the character counts of all pages in each session to obtain the total number of characters read.
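Once the two networks have produced a per-character width, a line width, and a line count, the estimate in S31 through S34 is simple arithmetic. The sketch below uses hypothetical pixel figures; the text does not fix units.

```python
def page_character_count(char_width, line_width, line_count):
    """Steps S31-S33: characters per line = line width // character width,
    multiplied by the number of lines on the page."""
    return (line_width // char_width) * line_count

def total_reading_amount(pages):
    """Step S34: accumulate the character counts of every page read.
    Each page is a (char_width, line_width, line_count) tuple."""
    return sum(page_character_count(w, lw, n) for (w, lw, n) in pages)

# Two hypothetical pages: 12 px glyphs, 360 px lines, 25 lines per page
amount = total_reading_amount([(12, 360, 25), (12, 360, 25)])
```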
Further, in step S4, the difficulty of the reading content is judged by identifying the reading content, specifically:
S41: setting different article type difficulty coefficients for the types of reading content, including articles, poems and scientific literature;
S42: setting different uncommon word difficulty values for the vocabulary appearing in the reading content;
S43: during the reading behavior of the user, identifying and scanning the reading content of the user through AI, accumulating the difficulty values encountered, and computing the reading difficulty when the reading is finished, using the formula: reading difficulty = Σ(uncommon word difficulty values) × article type difficulty coefficient.
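The difficulty formula can be sketched directly; the coefficient and per-word difficulty values would come from the tables set up in S41 and S42, and all names below are illustrative assumptions:

```python
def reading_difficulty(uncommon_word_difficulties: list,
                       article_type_coefficient: float) -> float:
    """Reading difficulty = (sum of uncommon-word difficulty values)
    x article-type difficulty coefficient, per steps S41-S43. The
    coefficient is set per content type (article, poem, scientific text)."""
    return sum(uncommon_word_difficulties) * article_type_coefficient
```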
Second embodiment
As shown in fig. 2, the present embodiment provides a reading ability auxiliary evaluation system based on AI vision, including: the reading state analysis module 1, the reading time acquisition module 2, the reading amount acquisition module 3, the reading difficulty acquisition module 4 and the reading capability calculation module 5;
The reading state analysis module 1 is configured so that, after the AI device is turned on, the user may read freely within the range the AI device can recognize; the AI device analyzes the reading of the user from the first page turn until the user leaves the reading state;
the reading time acquisition module 2 is used for analyzing the time from the first page turn to the moment the user leaves the reading state as the reading time of the user;
the reading amount obtaining module 3 is configured to identify and scan, through the AI device, the number of individual characters or words appearing in the reading content of the user during the reading behavior, as the reading amount of the current reading;
the reading difficulty acquisition module 4 is used for judging the difficulty of reading the content by identifying the reading content;
the reading ability calculating module 5 is configured to calculate a reading ability score of the user from factors including reading time, reading amount, and reading difficulty, using the formula: reading ability score = reading difficulty / reading amount / reading time.
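Read literally, the score formula above divides reading difficulty by reading amount and then by reading time; the machine translation leaves the operators ambiguous, so the sketch below is a placeholder implementation of that literal reading only (all names are illustrative assumptions), to be adjusted if the intended combination differs:

```python
def reading_ability_score(difficulty: float, amount: int, seconds: float) -> float:
    """Literal reading of the score formula:
    score = reading difficulty / reading amount / reading time."""
    if amount == 0 or seconds == 0:
        raise ValueError("reading amount and reading time must be non-zero")
    return difficulty / amount / seconds
```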
Further, as shown in fig. 2, the reading state analysis module 1 further includes:
The image positioning and edge detecting unit 11 is configured to capture images within the range the AI device can recognize, obtain the information carrier image inside that range, and locate the information carrier image and detect its edge positions with the static image deep learning neural network that monitors the page, so that the size and position information of the book are updated in real time, where the position information includes the left and right edges and the book center line, and each of the left and right edges may be either the real edge of the book or the edge of the AI device's field of view;
The page turning behavior detection unit 12 is configured to detect image content within the book region with a deep learning neural network capable of time series processing, page-turning actions being assumed by default to occur at the left and right edges of the book: a page turning behavior is detected as turning back to the previous page when the page content begins to change from the left edge, and as turning forward to the next page when it begins to change from the right edge;
The page turning confirmation unit 13 is configured to keep the static image deep learning neural network that monitors the page running after the time series network finishes its detection, and to confirm the page turning action once the current page differs clearly from the page shown before the turn.
Further, as shown in fig. 4, the reading amount obtaining module 3 further includes:
The individual character size obtaining unit 31 is configured to randomly sample information within the page region of the reading content as candidate focus language feature point images, segment the candidate focus language feature point images with a deep learning neural network for image text segmentation, and output the width of each individual character or word;
The line number and line width acquisition unit 32 is configured to locate the lines in the reading content with a deep learning neural network that processes static images, obtaining the number of lines and the width of each line;
The single-page character number calculation unit 33 is configured to use the output character or word widths to estimate the number of characters per line, and from that the number of characters on the page corresponding to the reading content;
The total text quantity accumulation unit 34 is configured to accumulate the character counts of all pages read each time, obtaining the total number of characters currently read.
Further, as shown in fig. 5, the reading difficulty obtaining module 4 further includes:
The text difficulty setting unit 41 is configured to set different article type difficulty coefficients for the types of reading content, including articles, poems and scientific literature;
The uncommon word difficulty setting unit 42 is configured to set different uncommon word difficulty values for the vocabulary appearing in the reading content;
The reading difficulty calculating unit 43 is configured to identify and scan the reading content of the user through AI during the reading behavior, accumulate the difficulty values encountered, and compute the reading difficulty when the reading is finished, using the formula: reading difficulty = Σ(uncommon word difficulty values) × article type difficulty coefficient.
A computer readable storage medium storing computer code which, when executed, performs the method as described above. Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The software program of the present invention can be executed by a processor to implement the steps or functions described above. Also, the software programs (including associated data structures) of the present invention can be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functionality of the present invention may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various functions or steps. The method disclosed by the embodiment shown in the embodiment of the present specification can be applied to or realized by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present specification may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present specification may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. 
The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
Embodiments also provide a computer readable storage medium storing one or more programs that, when executed by an electronic system including a plurality of application programs, cause the electronic system to perform the method of embodiment one. And will not be described in detail herein.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices. It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
In addition, part of the present invention may be implemented as a computer program product, such as computer program instructions which, when executed by a computer, invoke or provide the method and/or technical solution according to the present invention. Program instructions which invoke the methods of the present invention may be stored on a fixed or removable recording medium, and/or transmitted via a data stream on a broadcast or other signal-bearing medium, and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the invention comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or technical solution according to the embodiments of the invention described above.
Claims (10)
1. An AI vision-based reading ability auxiliary evaluation method is characterized by comprising the following steps:
S1: opening an AI device and reading freely within a range the AI device can recognize, the AI device analyzing the reading of a user from the first page turn until the user leaves a reading state;
S2: analyzing the time from the first page turn to the moment the user leaves the reading state as the reading time of the user;
S3: during the reading behavior of the user, identifying and scanning, through the AI device, the number of individual characters or words appearing in the reading content of the user as the reading amount of the current reading;
S4: judging the difficulty of the reading content by identifying the reading content;
S5: calculating a reading ability score of the user from factors including reading time, reading amount and reading difficulty, using the formula: reading ability score = reading difficulty / reading amount / reading time.
2. The AI vision-based reading ability auxiliary evaluation method according to claim 1, wherein the AI device analyzes the reading of the user from the first page turning until the user leaves the reading state in step S1, further comprising: identifying a page turning action of a user, specifically:
S11: capturing images within the range the AI device can recognize to obtain an information carrier image inside that range, and locating the information carrier image and detecting its edge positions with a static image deep learning neural network that monitors the page, so that the size and position information of the book are updated in real time, wherein the position information includes left and right edges and a book center line, and each of the left and right edges is either a real edge of the book or an edge of the AI device's field of view;
S12: assuming by default that page-turning actions occur at the left and right edges of the book, detecting image content within the book region with a deep learning neural network capable of time series processing, a page turning behavior being detected as turning back to the previous page when the page content begins to change from the left edge, and as turning forward to the next page when it begins to change from the right edge;
S13: after the time series network finishes its detection, keeping the static image deep learning neural network that monitors the page running, and confirming the page turning action once the current page differs clearly from the page shown before the turn.
3. The AI vision-based reading ability auxiliary evaluation method according to claim 1, wherein analyzing the time from the first page turn to the moment the user leaves the reading state specifically comprises:
recording the time of the first page turn, recording the time of the last page turn as the moment the user leaves the reading state, and taking the difference between the two as the reading time of the current reading.
4. The AI vision-based reading ability auxiliary evaluation method according to claim 1, wherein in step S3, the number of individual characters or words appearing in the reading content of the user is identified, the specific process being as follows:
S31: randomly sampling information within the page region of the reading content as candidate focus language feature point images, segmenting the candidate focus language feature point images with a deep learning neural network for image text segmentation, and outputting the width of each individual character or word;
S32: locating the lines in the reading content with a deep learning neural network that processes static images, to obtain the number of lines and the width of each line;
S33: using the character or word widths output in S31, estimating the number of characters per line, and from that the number of characters on the page corresponding to the reading content;
S34: accumulating the character counts of all pages in each reading to obtain the total number of characters currently read.
5. The AI vision-based reading ability auxiliary evaluation method according to claim 1, wherein in step S4, the difficulty of the reading content is judged by identifying the reading content, specifically:
S41: setting different article type difficulty coefficients for the types of reading content, including articles, poems and scientific literature;
S42: setting different uncommon word difficulty values for the vocabulary appearing in the reading content;
S43: during the reading behavior of the user, identifying and scanning the reading content of the user through AI, accumulating the difficulty values encountered, and computing the reading difficulty when the reading is finished, using the formula: reading difficulty = Σ(uncommon word difficulty values) × article type difficulty coefficient.
6. An auxiliary assessment system for reading ability based on AI vision, comprising: the reading system comprises a reading state analysis module, a reading time acquisition module, a reading amount acquisition module, a reading difficulty acquisition module and a reading capability calculation module;
the reading state analysis module is configured so that, after the AI device is turned on, the user may read freely within the range the AI device can recognize, the AI device analyzing the reading of the user from the first page turn until the user leaves the reading state;
the reading time acquisition module is used for analyzing the time from the first page turn to the moment the user leaves the reading state as the reading time of the user;
the reading amount acquisition module is used for identifying and scanning, through the AI device, the number of individual characters or words appearing in the reading content of the user during the reading behavior, as the reading amount of the current reading;
the reading difficulty acquisition module is used for judging the difficulty of reading contents by identifying the reading contents;
The reading ability calculating module is used for calculating the reading ability score of the user from factors including reading time, reading amount and reading difficulty, using the formula: reading ability score = reading difficulty / reading amount / reading time.
7. The AI vision-based reading ability auxiliary evaluation system according to claim 6, wherein the reading state analysis module further comprises:
the image positioning and edge detecting unit is used for capturing, through the AI device, images within the range the AI device can recognize, obtaining the information carrier image inside that range, and locating the information carrier image and detecting its edge positions with the static image deep learning neural network that monitors the page, so that the size and position information of the book are updated in real time, wherein the position information includes left and right edges and a book center line, and each of the left and right edges is either a real edge of the book or an edge of the AI device's field of view;
the page turning behavior detection unit is used for detecting image content within the book region with a deep learning neural network capable of time series processing, page-turning actions being assumed by default to occur at the left and right edges of the book, a page turning behavior being detected as turning back to the previous page when the page content begins to change from the left edge, and as turning forward to the next page when it begins to change from the right edge;
and the page turning confirmation unit is used for keeping the static image deep learning neural network that monitors the page running after the time series network finishes its detection, and for confirming the page turning action once the current page differs clearly from the page shown before the turn.
8. The AI vision-based reading ability auxiliary evaluation system according to claim 6, wherein the reading quantity acquisition module further comprises:
the single character size acquisition unit is used for randomly sampling information within the page region of the reading content as candidate focus language feature point images, segmenting the candidate focus language feature point images with a deep learning neural network for image text segmentation, and outputting the width of each individual character or word;
the line number and line width acquisition unit is used for locating the lines in the reading content with a deep learning neural network that processes static images, obtaining the number of lines and the width of each line;
the single-page character number calculating unit is used for using the output character or word widths to estimate the number of characters per line, and from that the number of characters on the page corresponding to the reading content;
and the total text quantity accumulation unit is used for accumulating the character counts of all pages read each time, obtaining the total number of characters currently read.
9. The AI vision-based reading ability auxiliary evaluation system according to claim 6, wherein the reading difficulty obtaining module further comprises:
the text difficulty setting unit is used for setting different article type difficulty coefficients for the types of reading content, including articles, poems and scientific literature;
the uncommon word difficulty setting unit is used for setting different uncommon word difficulty values for the vocabulary appearing in the reading content;
the reading difficulty calculating unit is used for identifying and scanning the reading content of the user through AI during the reading behavior, accumulating the difficulty values encountered, and computing the reading difficulty when the reading is finished, using the formula: reading difficulty = Σ(uncommon word difficulty values) × article type difficulty coefficient.
10. A computer readable storage medium storing computer code which, when executed, performs the method of any of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010499710.9A CN111860121B (en) | 2020-06-04 | 2020-06-04 | Reading ability auxiliary evaluation method and system based on AI vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010499710.9A CN111860121B (en) | 2020-06-04 | 2020-06-04 | Reading ability auxiliary evaluation method and system based on AI vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111860121A true CN111860121A (en) | 2020-10-30 |
CN111860121B CN111860121B (en) | 2023-10-24 |
Family
ID=72985379
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010499710.9A Active CN111860121B (en) | 2020-06-04 | 2020-06-04 | Reading ability auxiliary evaluation method and system based on AI vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111860121B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113986018A (en) * | 2021-12-30 | 2022-01-28 | 江西影创信息产业有限公司 | Vision impairment auxiliary reading and learning method and system based on intelligent glasses and storage medium |
CN115601825A (en) * | 2022-10-25 | 2023-01-13 | 扬州市职业大学(扬州开放大学)(Cn) | Method for evaluating reading capability based on visual positioning technology |
CN116755595A (en) * | 2023-08-11 | 2023-09-15 | 江苏中威科技软件系统有限公司 | Method for dynamically turning page based on distance between passing content and page frame |
CN117217624A (en) * | 2023-10-27 | 2023-12-12 | 广州道然信息科技有限公司 | Child reading level prediction method and system based on drawing book reading record |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105095504A (en) * | 2015-08-28 | 2015-11-25 | 广东小天才科技有限公司 | Method, device and system for recommending learning content based on learning habit |
CN105204738A (en) * | 2015-09-18 | 2015-12-30 | 北京奇虎科技有限公司 | E-book reading quantity determining and ranking methods, terminal device and server |
CN106960245A (en) * | 2017-02-24 | 2017-07-18 | 中国科学院计算技术研究所 | A kind of individualized medicine evaluation method and system based on cognitive process chain |
CN108009630A (en) * | 2017-11-24 | 2018-05-08 | 华南师范大学 | Deep learning method, apparatus, the computer equipment of analysis prediction cultural conflict |
JP2018152063A (en) * | 2017-03-14 | 2018-09-27 | オムロン株式会社 | Device, method, and program for evaluating learning results |
CN108984531A (en) * | 2018-07-23 | 2018-12-11 | 深圳市悦好教育科技有限公司 | Books reading difficulty method and system based on language teaching material |
CN109567817A (en) * | 2018-11-19 | 2019-04-05 | 北京育铭天下科技有限公司 | A kind of reading ability appraisal procedure and system and its auxiliary device |
CN109710748A (en) * | 2019-01-17 | 2019-05-03 | 北京光年无限科技有限公司 | It is a kind of to draw this reading exchange method and system towards intelligent robot |
WO2019092672A2 (en) * | 2017-11-13 | 2019-05-16 | Way2Vat Ltd. | Systems and methods for neuronal visual-linguistic data retrieval from an imaged document |
RU2691214C1 (en) * | 2017-12-13 | 2019-06-11 | Общество с ограниченной ответственностью "Аби Продакшн" | Text recognition using artificial intelligence |
CN109977408A (en) * | 2019-03-27 | 2019-07-05 | 西安电子科技大学 | The implementation method of English Reading classification and reading matter recommender system based on deep learning |
CN110111610A (en) * | 2019-05-13 | 2019-08-09 | 上海乂学教育科技有限公司 | Chinese language structure reading method in adaptive learning based on AI algorithm |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105095504A (en) * | 2015-08-28 | 2015-11-25 | 广东小天才科技有限公司 | Method, device and system for recommending learning content based on learning habit |
CN105204738A (en) * | 2015-09-18 | 2015-12-30 | 北京奇虎科技有限公司 | E-book reading quantity determining and ranking methods, terminal device and server |
CN106960245A (en) * | 2017-02-24 | 2017-07-18 | 中国科学院计算技术研究所 | A kind of individualized medicine evaluation method and system based on cognitive process chain |
JP2018152063A (en) * | 2017-03-14 | 2018-09-27 | オムロン株式会社 | Device, method, and program for evaluating learning results |
WO2019092672A2 (en) * | 2017-11-13 | 2019-05-16 | Way2Vat Ltd. | Systems and methods for neuronal visual-linguistic data retrieval from an imaged document |
CN108009630A (en) * | 2017-11-24 | 2018-05-08 | 华南师范大学 | Deep learning method, apparatus, the computer equipment of analysis prediction cultural conflict |
RU2691214C1 (en) * | 2017-12-13 | 2019-06-11 | Общество с ограниченной ответственностью "Аби Продакшн" | Text recognition using artificial intelligence |
CN108984531A (en) * | 2018-07-23 | 2018-12-11 | 深圳市悦好教育科技有限公司 | Books reading difficulty method and system based on language teaching material |
CN109567817A (en) * | 2018-11-19 | 2019-04-05 | 北京育铭天下科技有限公司 | A kind of reading ability appraisal procedure and system and its auxiliary device |
CN109710748A (en) * | 2019-01-17 | 2019-05-03 | 北京光年无限科技有限公司 | It is a kind of to draw this reading exchange method and system towards intelligent robot |
CN109977408A (en) * | 2019-03-27 | 2019-07-05 | 西安电子科技大学 | The implementation method of English Reading classification and reading matter recommender system based on deep learning |
CN110111610A (en) * | 2019-05-13 | 2019-08-09 | 上海乂学教育科技有限公司 | Chinese language structure reading method in adaptive learning based on AI algorithm |
Non-Patent Citations (2)
Title |
---|
NGUYEN A M et al.: "Reading ability and reading engagement in older adults with glaucoma", Investigative Ophthalmology & Visual Science, vol. 55, no. 8, pages 5284-5290 *
ZHAO Hua et al.: "The best reading for the most beautiful childhood: the '优+' reading activity of Jinqiao Primary School, Changsha High-tech Zone", Hunan Education, no. 9, pages 53-54 *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113986018A (en) * | 2021-12-30 | 2022-01-28 | 江西影创信息产业有限公司 | Vision impairment auxiliary reading and learning method and system based on intelligent glasses and storage medium |
CN115601825A (en) * | 2022-10-25 | 2023-01-13 | 扬州市职业大学(扬州开放大学)(Cn) | Method for evaluating reading capability based on visual positioning technology |
CN115601825B (en) * | 2022-10-25 | 2023-09-19 | 扬州市职业大学(扬州开放大学) | Method for evaluating reading ability based on visual positioning technology |
CN116755595A (en) * | 2023-08-11 | 2023-09-15 | 江苏中威科技软件系统有限公司 | Method for dynamically turning page based on distance between passing content and page frame |
CN116755595B (en) * | 2023-08-11 | 2023-10-27 | 江苏中威科技软件系统有限公司 | Method for dynamically turning page based on distance between passing content and page frame |
CN117217624A (en) * | 2023-10-27 | 2023-12-12 | 广州道然信息科技有限公司 | Child reading level prediction method and system based on drawing book reading record |
CN117217624B (en) * | 2023-10-27 | 2024-03-01 | 广州道然信息科技有限公司 | Child reading level prediction method and system based on drawing book reading record |
Also Published As
Publication number | Publication date |
---|---|
CN111860121B (en) | 2023-10-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111860121A (en) | Reading ability auxiliary evaluation method and system based on AI vision | |
CN109271401B (en) | Topic searching and correcting method and device, electronic equipment and storage medium | |
CN109817046B (en) | Learning auxiliary method based on family education equipment and family education equipment | |
CN109635772A (en) | Dictation content correcting method and electronic equipment | |
CN110175609B (en) | Interface element detection method, device and equipment | |
CN111726689B (en) | Video playing control method and device | |
CN112329663B (en) | Micro-expression time detection method and device based on face image sequence | |
CN109189895B (en) | Question correcting method and device for oral calculation questions | |
CN112580557A (en) | Behavior recognition method and device, terminal equipment and readable storage medium | |
CN109710750A (en) | Question searching method and learning equipment | |
CN111325082B (en) | Personnel concentration analysis method and device | |
CN112464904B (en) | Classroom behavior analysis method and device, electronic equipment and storage medium | |
CN109410984B (en) | Reading scoring method and electronic equipment | |
CN113763348A (en) | Image quality determination method and device, electronic equipment and storage medium | |
CN109086431B (en) | Knowledge point consolidation learning method and electronic equipment | |
CN111026949A (en) | Question searching method and system based on electronic equipment | |
CN113065757A (en) | Method and device for evaluating on-line course teaching quality | |
CN110728193B (en) | Method and device for detecting richness characteristics of face image | |
CN111860122B (en) | Method and system for identifying reading comprehensive behaviors in real scene | |
CN114005019B (en) | Method for identifying flip image and related equipment thereof | |
CN111639630B (en) | Operation modifying method and device | |
CN111753715B (en) | Method and device for shooting test questions in click-to-read scene, electronic equipment and storage medium | |
CN111081079A (en) | Dictation control method and device based on dictation condition | |
CN111078921A (en) | Subject identification method and electronic equipment | |
Rule et al. | Restoring the context of interrupted work with desktop thumbnails |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||