CN114220110A - Learning auxiliary method and system based on visual recognition - Google Patents
- Publication number
- CN114220110A (application number CN202111496137.7A)
- Authority
- CN
- China
- Prior art keywords
- words
- characters
- searched
- word
- student
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F40/226—Validation (natural language analysis, parsing)
- G06F40/232—Orthographic correction, e.g. spell checking or vowelisation
- G06F40/279—Recognition of textual entities
- G06F40/58—Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
- G06Q50/205—Education administration or guidance
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L15/26—Speech to text systems
- G10L2015/221—Announcement of recognition results
Abstract
The invention discloses a learning assistance method based on visual recognition. A student manually selects an input mode and scans the characters in a designated area of a book or homework; the characters are recognized from their image information to obtain the corresponding recognized text. While scanning, the student reads the text aloud or follows along; the student's audio information is collected, recognized and extracted to obtain the corresponding spoken text. The recognized text is compared against the spoken text for correction, and the corrected recognized text is displayed. The method obtains both the scanned text information and the audio information; by comparing the two, it quickly locks onto the words the student needs to look up, with fast recognition speed and high accuracy. Meanwhile, when a whole sentence is read aloud, the translation can draw on the context, making it more accurate. The method effectively strengthens students' word-spelling ability and sentence-reading ability.
Description
Technical Field
The invention relates to the technical field of learning assistance, and in particular to a learning assistance method and system based on visual recognition.
Background
With the development of recognition technology, manually operated electronic dictionaries are gradually exiting the market, and a large amount of learning-aid software that finds answers from photographs is now available. There are also multifunctional dictionary pens that recognize the English or Chinese meaning and pronunciation of a word with one stroke across the text and read it aloud through a built-in speaker. However, existing recognition devices are slow, their operation is cumbersome, and they support no other interaction during recognition.
The scanning width of an existing dictionary pen is fixed, so when scanning characters it may also capture images of neighboring characters, leading to misjudgment and low query efficiency. The student can only manually re-select, or cover the other content and scan again; the retrieval accuracy is low, and the operation may have to be repeated many times. When there is more content, scanning takes longer; meanwhile, the dictionary pen cannot accurately identify the key words and sentences, which must then be selected by manual operation.
Disclosure of Invention
In view of these problems, the invention aims to provide a learning assistance method and system based on visual recognition that recognize text accurately and quickly and encourage students to read aloud.
To achieve this technical purpose, the scheme of the invention is as follows: a learning assistance method based on visual recognition, the method comprising:
the student manually selects an input mode and scans the characters in a designated area of a book or homework; the characters are recognized from their image information and output, obtaining the corresponding recognized text;
while scanning, the student reads the text aloud or follows along; the student's audio information is collected, recognized and extracted, obtaining the corresponding spoken text;
and the recognized text is compared against the spoken text for correction, and the corrected recognized text is displayed.
Preferably, the recognizing and extracting of the student's audio information further comprises:
determining the pronunciation information and pause time of specified words;
according to the student's initial settings, removing high-frequency single-meaning simple words from the spoken text, using the recognized text as the base text to compare against the spoken text and mark differences, and analyzing the similarities and differences between the pronunciations of the differing words and the reference pronunciation information.
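The filtering-and-comparison step described above can be sketched as follows. This is a minimal illustration rather than the patented implementation: the stopword list, the tokenizer, and the example sentence are assumptions.

```python
import re

# Assumed "initial setting" of high-frequency single-meaning simple words.
HIGH_FREQ_SIMPLE = {"the", "a", "an", "if", "is", "that", "to"}

def tokenize(text):
    """Lowercase word tokens; punctuation acts as a separator."""
    return re.findall(r"[a-z']+", text.lower())

def unread_words(recognized_text, spoken_text):
    """Use the recognized (scanned) text as the base text, drop the
    high-frequency simple words, and return the words that never
    appear in the student's spoken text."""
    base = [w for w in tokenize(recognized_text) if w not in HIGH_FREQ_SIMPLE]
    spoken = set(tokenize(spoken_text))
    return [w for w in base if w not in spoken]

# The student skips the two unfamiliar words while reading aloud:
diff = unread_words(
    "If the dream is big enough, the facts don't count",
    "if the is big the facts don't count",
)
print(diff)  # ['dream', 'enough']
```

The difference list is exactly the set of candidate words to look up, which is why removing the high-frequency words first matters: a skipped "the" should not trigger a lookup.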
Preferably, the recognized text comprises adjacent words and the word to be searched; when the student reads the word to be searched, the recognized text and the spoken text are compared and corrected, the adjacent words are removed, and only the meaning of the word to be searched is displayed.
Preferably, when the student cannot accurately read the word to be searched, the whole or partial sentence is read aloud while the word is skipped; the recognized text and the spoken text are compared and corrected, and only the meaning of the unread word to be searched is displayed; when the spoken text is a complete sentence, the best meaning of the word in that sentence is displayed preferentially.
Preferably, when the pause before a word in the spoken text exceeds a threshold, the word is filed into a review word bank;
and words that have been searched are also filed into the review word bank after being displayed.
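The review-word-bank rule can be sketched as below; the threshold value and the data structure are assumptions, since the text only speaks of "a threshold".

```python
# Assumed pause threshold in seconds; the patent text only says "a threshold".
PAUSE_THRESHOLD_S = 1.5

review_bank = set()

def record_pronunciation(word, pause_seconds):
    """A word preceded by an over-long hesitation is filed for review."""
    if pause_seconds > PAUSE_THRESHOLD_S:
        review_bank.add(word)

def record_lookup(word):
    """A looked-up word also enters the review bank after being displayed."""
    review_bank.add(word)

record_pronunciation("dream", 2.3)   # long pause: filed for review
record_pronunciation("facts", 0.2)   # fluent: not stored
record_lookup("enough")              # displayed lookup: filed for review
print(sorted(review_bank))  # ['dream', 'enough']
```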
Preferably, when an adjacent word belongs to the high-frequency single-meaning simple words in the student's initial settings and is not part of a phrase, it is removed outright and not displayed;
when an adjacent word belongs to the review word bank and its number of lookups is below a threshold, it is placed after the word to be searched as alternate viewing content;
and when the word to be searched and an adjacent word form a phrase, the meaning of the phrase is displayed preferentially.
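One way to read the three adjacency rules above is as a priority chain, sketched below. The phrase table, the simple-word list, and the lookup-count threshold are all illustrative assumptions.

```python
# All data here is illustrative, not the patent's configuration.
HIGH_FREQ_SIMPLE = {"the", "a", "is", "if"}
PHRASES = {("big", "enough")}
LOOKUP_COUNT_THRESHOLD = 3

def plan_display(target, neighbor, review_bank, lookup_counts):
    """Return (primary content, alternate content) for one scanned pair."""
    # Rule 3: a phrase formed by target + neighbor wins outright.
    for pair in ((neighbor, target), (target, neighbor)):
        if pair in PHRASES:
            return " ".join(pair), []
    # Rule 1: high-frequency single-meaning neighbors are dropped, not shown.
    if neighbor in HIGH_FREQ_SIMPLE:
        return target, []
    # Rule 2: a review-bank neighbor with few lookups is queued as an alternate.
    if neighbor in review_bank and lookup_counts.get(neighbor, 0) < LOOKUP_COUNT_THRESHOLD:
        return target, [neighbor]
    return target, []

print(plan_display("enough", "big", set(), {}))                 # ('big enough', [])
print(plan_display("dream", "the", set(), {}))                  # ('dream', [])
print(plan_display("count", "facts", {"facts"}, {"facts": 1}))  # ('count', ['facts'])
```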
Preferably, when the text read aloud or scanned is a whole sentence, the meanings of the differing words or the words to be searched are displayed preferentially, and the translation of the whole sentence is offered as alternate viewing content.
Preferably, when the words to be searched lie in the same sentence, are not adjacent to each other, and have multiple meaning results, the best meaning of each word in that sentence is displayed preferentially, a switching control is shown at the same time, and the student can view the other meanings by swiping.
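The patent does not specify how the "best meaning" in a sentence is chosen. One plausible sketch scores each dictionary sense by its word overlap with the spoken sentence; the toy sense inventory below is an assumption.

```python
# Toy sense inventory; glosses and cue words are illustrative assumptions.
SENSES = {
    "count": [
        ("to say numbers in order", {"numbers", "order", "say"}),
        ("to matter, to be important", {"matter", "facts", "important"}),
    ],
}

def ranked_senses(word, context_words):
    """Best sense first; the rest stay behind the swipe/switch control."""
    return [gloss for gloss, cues in sorted(
        SENSES[word], key=lambda s: len(s[1] & context_words), reverse=True)]

order = ranked_senses("count", {"if", "dream", "big", "enough", "facts"})
print(order[0])  # to matter, to be important
```

The first element is what the display shows by default; swiping the switching control would step through the remainder of the ranked list.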
A learning assistance system employing the above learning assistance method based on visual recognition, comprising:
a camera module, which scans and acquires the text of a designated scanning area, the text range of the scanning area being larger than that of the word to be searched;
a voice recognition module, which acquires the student's audio information;
an audio processing module, which recognizes the spoken text, pronunciation information, and pause times corresponding to the audio information;
a comparison and analysis module, which compares the recognized text against the spoken text for correction and displays the corrected recognized text;
a display module, which displays the meaning or pronunciation of words and sentences;
and a query result display module, configured to query the text to be searched, obtain the corresponding query result, and display it.
The beneficial effects of the method are that it obtains both the scanned text information and the audio information; by comparing the two, it quickly locks onto the words the student needs to look up, with fast recognition, high accuracy, and saved study time. Meanwhile, when the whole sentence is read aloud and the word to be searched is scanned locally, the word can be translated in context, so the given meaning is closer to its true meaning in the sentence and the translation is more accurate. The method effectively strengthens students' spelling, pronunciation, and sentence-reading abilities, and finds inaccurately pronounced and unfamiliar words more quickly.
Drawings
FIG. 1 is a schematic structural diagram of the first embodiment of the present invention;
FIG. 2 is a schematic structural diagram of the second embodiment of the present invention;
FIG. 3 is a schematic structural diagram of the third embodiment of the present invention;
FIG. 4 is a schematic structural diagram of the fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments. The following detailed description is exemplary and not limiting, and the terms "including" and "having" and their variations are intended to cover non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those listed, but may include other steps or elements not listed or inherent to it. Any minor modifications, equivalent replacements, and improvements made to the following embodiments according to the technical essence of the present application shall fall within the protection scope of the technical solution of the present application.
Embodiment one:
As shown in FIG. 1, the invention discloses a learning assistance method based on visual recognition. While scanning text, the student can actively read the words and sentences from experience, try to spell out unfamiliar words that can be sounded out, and skip words that cannot. The method locks onto the words that need to be searched more quickly, and the student actively participates and checks whether the pronunciation is accurate. The method comprises the following steps:
S101. According to the student's initial settings, high-frequency single-meaning simple words (articles and simple common words such as "the", "a", "if", "that", and "apple") are excluded from the spoken text; the student manually selects the input mode as reading-aloud mode (whole-sentence or partial-sentence scanning) and scans the characters in a designated area of the book, and the characters are recognized from their image information to obtain the corresponding recognized text;
S102. While scanning, the student reads the text aloud; the student's audio information is collected, recognized, and extracted to obtain the corresponding spoken text together with the pronunciation information and pause time of specified words;
S103. The student skips the word to be searched when reading the whole or partial sentence aloud; the recognized text is compared with the spoken text, and only the meaning of the unread word to be searched is displayed; when the spoken text is a complete sentence, the best meaning of the word in that sentence is displayed preferentially;
S104. When the pause before a word in the spoken text exceeds a threshold, the word is filed into the review word bank; searched words are also filed into the review word bank after being displayed.
Firstly, students who use a translation pen on their own usually have some language and grammar foundation and can recognize part of the words; they use the pen to resolve the words and sentences whose meanings are unclear. Many students have also learned phonics, so on meeting an unknown word they can read out its approximate pronunciation from its spelling. Because the translation pen scans slowly, the student can read aloud or follow along while the pen sweeps the designated text; reading and pronouncing strengthens the understanding of the sentence or words and uses the study time more efficiently (while also exercising spelling and reading ability). Take "If the dream is big enough, the facts don't count" as an example. In reading-aloud mode the student reads along with the text swept by the translation pen but does not know "dream" and "enough"; the student can roughly sound out "dream" from its spelling but cannot pronounce "enough" at all. The translation pen detects an obvious pause at the pronunciation of "dream", and "enough" is scanned but never read aloud, so the pen judges that the two words are not mastered, displays their Chinese meanings, and offers the translation of the whole sentence as alternate viewing content. The two words are also filed into the review word bank.
Embodiment two:
The invention discloses a learning assistance method based on visual recognition, as shown in FIG. 2. Through follow-reading, students use the scanning time more efficiently; meanwhile, follow-reading is easy to accept because its difficulty is low, and spoken-language practice makes students more familiar with the words to be searched. The method comprises the following steps:
S201. The student manually selects the input mode as follow-reading mode (generally whole-sentence scanning); the translation pen scans the characters in the designated area (punctuation marks delimit the segments), plays the recognized text aloud, and the student reads along;
S202. The translation pen synchronously collects the student's audio information, recognizes and extracts it, and obtains the corresponding spoken text together with the pronunciation information and pause time of specified words;
S203. High-frequency single-meaning simple words are removed from the spoken text; the recognized text is used as the base text to compare against the spoken text and mark differences, and the similarities and differences between the pronunciations of the differing words and the reference pronunciation information are analyzed;
S204. When the pause before a word in the spoken text exceeds a threshold, the word is filed into the review word bank; searched words are also filed into the review word bank after being displayed.
In follow-reading mode, the student reads along with the pronunciation played by the translation pen; the student's proficiency with each word or phrase is judged from the accuracy and pause time of the pronunciation, and insufficiently standard pronunciations are flagged as words to review. In addition, follow-reading lowers the participation barrier for students, strengthens the understanding of words and sentences, and improves the learning effect.
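The per-word follow-reading judgment can be sketched as below. The accuracy check and the threshold are assumptions: a real system would use phonetic scoring of the audio rather than string equality.

```python
def judge_followed_word(expected, heard, pause_s, pause_threshold=1.0):
    """Classify one followed word as 'mastered' or 'review' from the
    accuracy of its pronunciation and the pause before it."""
    mispronounced = heard != expected   # stand-in for real phonetic scoring
    hesitant = pause_s > pause_threshold
    return "review" if (mispronounced or hesitant) else "mastered"

print(judge_followed_word("dream", "dream", 0.3))   # mastered
print(judge_followed_word("enough", "enuf", 0.2))   # review (mispronounced)
print(judge_followed_word("facts", "facts", 2.0))   # review (long hesitation)
```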
Embodiment three:
The invention discloses a learning assistance method based on visual recognition, as shown in FIG. 3, whose aim is to perform local scanning recognition efficiently and grasp the meaning of a word in its sentence quickly without interrupting the overall reading process. The method comprises the following steps:
S301. The student manually selects the input mode as reading-aloud mode and reads the sentences aloud; meanwhile, the translation pen collects, recognizes, and extracts the student's audio information and obtains the corresponding spoken text;
S302. On meeting a word to be searched, the student stops reading aloud or attempts to spell it, and scans the word (since the scanning area may be larger than the word, adjacent words may also be recognized); the characters are recognized from their image information to obtain the corresponding recognized text;
S303. The recognized text contains adjacent words and the word to be searched; the student reads the whole sentence while skipping the word, pronouncing it inaccurately, or pausing too long while spelling it; the recognized text and the spoken text are compared and corrected, the adjacent words are removed, and only the meaning of the word to be searched is displayed;
S304. The best meaning of the word in the sentence is displayed together with a switching control, and the student can view the other meanings by swiping.
Usually, when the student reads a whole sentence or passage, the speaking speed is slightly faster than the scanning speed of the translation pen. With the pen's reading-aloud mode started, the student reads normally; on meeting a word whose meaning is unknown and that cannot be sounded out, the student skips it but scans it with the translation pen, and checks the meaning after finishing the reading.
Firstly, because the translation pen continuously records the student's audio, the word to be searched can be translated through its context or the surrounding words and sentences, making the translation more accurate. The reading speed is unaffected: the student only waits a few seconds while scanning the word. With this method the whole sentence need not be scanned, the input efficiency is high, and reading aloud strengthens the student's understanding of the sentence.
Embodiment four:
The invention discloses a learning assistance method based on visual recognition, as shown in FIG. 4. The method comprises the following steps:
S401. The student manually selects the input mode as reading-aloud mode and reads the sentences aloud; meanwhile, the translation pen collects, recognizes, and extracts the student's audio information and obtains the corresponding spoken text;
S402. The recognized text contains adjacent words and the word to be searched. When the student reads only the word to be searched but its pronunciation is inaccurate or the spelling time exceeds a threshold, the meaning of the word is displayed;
when the student reads only the adjacent words, pronouncing them accurately and without pause, the meaning of the word to be searched is displayed;
if an adjacent word and the word to be searched form a phrase, the meaning of the phrase is displayed.
For example, in "If the dream is big enough, the facts don't count", the student wants to query the meaning of "enough" ("big" is an adjacent word); since "big enough" forms a phrase, the meaning of "big enough" is displayed. If the student wants to query the meaning of "dream" ("the" and "is" are adjacent words), since "the" and "is" are both high-frequency single-meaning simple words, the meaning of "dream" is displayed.
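The worked example can be traced through a small decision sketch; as before, the phrase table and word lists are illustrative assumptions, not the patent's data.

```python
# Illustrative data for the example sentence.
HIGH_FREQ_SIMPLE = {"the", "is", "a"}
PHRASES = {("big", "enough")}

def decide_display(target, neighbors):
    """Decide what embodiment four shows for a scanned target word."""
    # A phrase formed with an adjacent word takes priority.
    for n in neighbors:
        if (n, target) in PHRASES or (target, n) in PHRASES:
            return f"phrase meaning: {n} {target}"
    # Otherwise high-frequency single-meaning neighbors are ignored and
    # only the target word's meaning is shown.
    return f"word meaning: {target}"

print(decide_display("enough", ["big"]))       # phrase meaning: big enough
print(decide_display("dream", ["the", "is"]))  # word meaning: dream
```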
A learning assistance system employing the above learning assistance method based on visual recognition comprises: a camera module, which scans and acquires the text of a designated scanning area, the text range of the scanning area being larger than that of the word to be searched; a voice recognition module, which acquires the student's audio information; an audio processing module, which recognizes the spoken text, pronunciation information, and pause times corresponding to the audio information; a comparison and analysis module, which compares the recognized text against the spoken text for correction and displays the corrected recognized text; a display module, which displays the meaning or pronunciation of words and sentences; and a query result display module, configured to query the text to be searched, obtain the corresponding query result, and display it.
In the specific embodiments of the present application, the serial numbers of the processes do not imply an execution order; the execution order of the processes is determined by their functions and internal logic, and shall not limit the implementation of the embodiments of the present application in any way.
The functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the functional units are implemented in software and sold or used as an independent product, they may be stored in a memory accessible by a computer device. On this understanding, the part of the technical solution of the present application that in essence contributes to the prior art, or all or part of the technical solution, may be embodied as a software product stored in a memory and including several instructions for causing a computer device to perform all or part of the steps of the methods of the various embodiments of the present application.
Claims (9)
1. A learning assistance method based on visual recognition, the method comprising:
the student manually selects an input mode and scans the characters in a designated area of a book or homework; the characters are recognized from their image information and output, obtaining the corresponding recognized text;
while scanning, the student reads the text aloud or follows along; the student's audio information is collected, recognized and extracted, obtaining the corresponding spoken text;
and the recognized text is compared against the spoken text for correction, and the corrected recognized text is displayed.
2. The learning assistance method based on visual recognition according to claim 1, characterized in that the recognizing and extracting of the student's audio information further comprises:
determining the pronunciation information and pause time of specified words;
and, according to the student's initial settings, removing high-frequency single-meaning simple words from the spoken text, using the recognized text as the base text to compare against the spoken text and mark differences, and analyzing the similarities and differences between the pronunciations of the differing words and the reference pronunciation information.
3. The learning assistance method based on visual recognition according to claim 1 or 2, characterized in that: the recognized text comprises adjacent words and the word to be searched; when the student in reading-aloud mode reads the word to be searched, the recognized text and the spoken text are compared and corrected, the adjacent words are removed, and only the meaning of the word to be searched is displayed.
4. The learning assistance method based on visual recognition according to claim 3, characterized in that: when the student cannot accurately read the word to be searched, the whole or partial sentence is read aloud while the word is skipped; the recognized text and the spoken text are compared and corrected, and only the meaning of the unread word to be searched is displayed; when the spoken text is a complete sentence, the best meaning of the word in that sentence is displayed preferentially.
5. The learning assistance method based on visual recognition according to claim 3, characterized in that: when the pause before a word in the spoken text exceeds a threshold, the word is filed into a review word bank;
and words that have been searched are also filed into the review word bank after being displayed.
6. The learning assistance method based on visual recognition according to claim 5, characterized in that: when an adjacent word belongs to the high-frequency single-meaning simple words in the student's initial settings and is not part of a phrase, it is removed outright and not displayed;
when an adjacent word belongs to the review word bank and its number of lookups is below a threshold, it is placed after the word to be searched as alternate viewing content;
and when the word to be searched and an adjacent word form a phrase, the meaning of the phrase is displayed preferentially.
7. The learning assistance method based on visual recognition according to claim 4, characterized in that: when the text read aloud or scanned is a whole sentence, the meanings of the differing words or the words to be searched are displayed preferentially, and the translation of the whole sentence is offered as alternate viewing content.
8. The learning assistance method based on visual recognition according to claim 3, characterized in that: when the words to be searched lie in the same sentence, are not adjacent to each other, and have multiple meaning results, the best meaning of each word in that sentence is displayed preferentially, a switching control is shown at the same time, and the student can view the other meanings by swiping.
9. A learning assistance system employing the visual recognition-based learning assistance method according to any one of claims 1 to 8, comprising:
a camera module, which scans and acquires character information in a specified scanning area, the character range of the scanning area being larger than the range of the words to be searched;
a speech recognition module, which acquires audio information from the student;
an audio processing module, which identifies the read-aloud characters, pronunciation information, and pause durations corresponding to the audio information;
a comparison and analysis module, which compares the recognized characters with the read-aloud characters to correct the recognized characters and displays the corrected recognized characters;
a display module, which displays the meanings or pronunciations of words and sentences;
and a query result display module, which queries the characters to be searched to obtain corresponding query results and displays the query results.
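The module pipeline of claim 9 can be sketched end to end as follows. All class names, method signatures, and the word-level correction rule are illustrative assumptions; the patent does not disclose implementation details.

```python
# Hypothetical end-to-end sketch of the claim-9 modules.
class CameraModule:
    def scan(self, area):
        """Return OCR text from the scanning area (here with a typo)."""
        return "teh quick fox"

class SpeechRecognitionModule:
    def listen(self):
        """Return the student's audio, decoded into words and pauses."""
        return {"words": "the quick fox", "pauses": [0.2, 0.1]}

class ComparisonAnalysisModule:
    def correct(self, ocr_text, spoken_text):
        # Simplified rule: where OCR and read-aloud words disagree,
        # trust the read-aloud word to correct the recognized character.
        fixed = [spoken if ocr != spoken else ocr
                 for ocr, spoken in zip(ocr_text.split(), spoken_text.split())]
        return " ".join(fixed)

class DisplayModule:
    def show(self, text):
        print("DISPLAY:", text)

ocr = CameraModule().scan(area="pen tip")
audio = SpeechRecognitionModule().listen()
corrected = ComparisonAnalysisModule().correct(ocr, audio["words"])
DisplayModule().show(corrected)  # corrected recognized characters displayed
```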
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111496137.7A CN114220110A (en) | 2021-12-08 | 2021-12-08 | Learning auxiliary method and system based on visual recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114220110A (en) | 2022-03-22 |
Family
ID=80700433
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111496137.7A Pending CN114220110A (en) | 2021-12-08 | 2021-12-08 | Learning auxiliary method and system based on visual recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114220110A (en) |
History
- 2021-12-08: Application filed in China as CN202111496137.7A, published as CN114220110A; status Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115019572A (en) * | 2022-06-29 | 2022-09-06 | 张磊 | System and method for improving learning and memory effects of electronic dictionary pen |
CN115019572B (en) * | 2022-06-29 | 2024-04-05 | 张磊 | System and method for improving learning and memory effects of electronic dictionary pen |
Similar Documents
Publication | Title |
---|---|
CN109036464B (en) | Pronunciation error detection method, apparatus, device and storage medium | |
CN109817046B (en) | Learning auxiliary method based on family education equipment and family education equipment | |
Ellis et al. | The role of spelling in learning to read | |
CN106695826A (en) | Robot device with scanning and reading functions | |
CN108153915B (en) | Internet-based educational information rapid acquisition method | |
WO2021033865A1 (en) | Method and apparatus for learning written korean | |
CN108280065B (en) | Foreign text evaluation method and device | |
CN113205729A (en) | Foreign student-oriented speech evaluation method, device and system | |
JP2010282058A (en) | Method and device for supporting foreign language learning | |
Rokhman et al. | EFL learners’ phonemic awareness: A correlation between English phoneme identification skill toward word processing | |
CN114220110A (en) | Learning auxiliary method and system based on visual recognition | |
CN118069783A (en) | Text query method, text query device and dictionary pen | |
CN112163513A (en) | Information selection method, system, device, electronic equipment and storage medium | |
KR20090038335A (en) | Studying system using touch screen | |
KR20130058840A (en) | Foreign language learnning method | |
Hao et al. | The effect of second-language orthographic input on the phonological encoding of Mandarin words | |
KR20090054951A (en) | Method for studying word and word studying apparatus thereof | |
CN115168534A (en) | Intelligent retrieval method and device | |
CN109166356B (en) | English system dynamic part-of-speech structure expression training system and method thereof | |
KR20090096952A (en) | Method and system for providing speed reading training services used recognition word group | |
CN111859943A (en) | Study method and system for identifying source of examination words and sentences | |
US20160225285A1 (en) | Spanish Language Teaching Systems and Methods | |
Daulton | The effect of Japanese loanwords on written English production-A pilot study | |
Rytting et al. | ArCADE: An Arabic corpus of auditory dictation errors | |
Yu | Chinese text presentations and reading efficiency |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||