CN111027556B - Question searching method and learning device based on image preprocessing - Google Patents


Info

Publication number
CN111027556B
Authority
CN
China
Prior art keywords
color
image
character
recognition
learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910178750.0A
Other languages
Chinese (zh)
Other versions
CN111027556A (en)
Inventor
徐杨 (Xu Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201910178750.0A priority Critical patent/CN111027556B/en
Publication of CN111027556A publication Critical patent/CN111027556A/en
Application granted granted Critical
Publication of CN111027556B publication Critical patent/CN111027556B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

A question searching method and learning device based on image preprocessing. The method includes: identifying the color of a specified object in a first image; identifying the color of the character in the first image; adjusting the color of the specified object in the first image to the color opposite to the color of the character, so as to obtain a second image; and performing character recognition on the second image to obtain the topic content to be searched, and searching for learning content corresponding to the topic content to be searched. By implementing the embodiments of the invention, the accuracy of character recognition in the image can be improved, so that the topic the user may want to search can be recognized more accurately, the accuracy of topic searching is improved, and the learning content fed back to the user better matches expectations.

Description

Question searching method and learning device based on image preprocessing
Technical Field
The invention relates to the technical field of education, in particular to a question searching method and learning equipment based on image preprocessing.
Background
At present, more and more learning devices (such as home-education machines, learning tablets and the like) provide a question-searching function. Most learning devices support searching by image: the user photographs the question to be searched with the learning device, and the learning device identifies the corresponding topic content from the captured image and, based on it, searches for corresponding learning content such as answers or problem-solving approaches.
In practice, however, it has been found that when the captured image contains an object whose color is similar or close to that of the characters, the topic content identified from the captured image contains many errors, so the retrieved learning content does not meet the user's needs and the accuracy of question searching is low.
Disclosure of Invention
The embodiment of the invention discloses a question searching method and learning equipment based on image preprocessing, which can improve the accuracy of character recognition in images and improve the accuracy of question searching.
The first aspect of the embodiment of the invention discloses a question searching method based on image preprocessing, which comprises the following steps:
identifying the color of a specified object in a first image;
identifying the color of the character in the first image;
adjusting the color of the specified object in the first image to the color opposite to the color of the character, so as to obtain a second image;
and performing character recognition on the second image to obtain the topic content to be searched, and searching for learning content corresponding to the topic content to be searched.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the method further includes:
identifying a question keyword from the input voice information;
identifying a first position coordinate specified by the specified object in the first image;
and the performing character recognition on the second image to obtain the topic content to be searched includes:
determining a second position coordinate specified by the specified object in the second image according to the first position coordinate;
determining the range of a second search area in the second image according to the question keyword and the second position coordinate; the range of the second search area is part or all of the second image;
and performing character recognition on the second search area, wherein the recognition result is the topic content to be searched.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the identifying the color of the character includes:
determining, according to a preset region height, the range of a color recognition area centered on the first position coordinate; the region height is used for indicating the number of character lines contained in the color recognition area;
identifying the color of the character in the color recognition area;
and the adjusting the color of the specified object in the first image to the color opposite to the color of the character to obtain a second image includes:
judging whether the color difference between the color of the specified object and the color of the character in the color recognition area is lower than a preset threshold;
and if the color difference is lower than the threshold, adjusting the color of the specified object in the first image to the color opposite to the color of the character in the color recognition area, so as to obtain the second image.
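The color-difference judgment above can be sketched as follows (a minimal illustration; the Euclidean distance metric, the threshold value and the names are assumptions, since the embodiment leaves the preset threshold unspecified):

```python
import math

def color_difference(c1, c2):
    """Euclidean distance between two RGB colors (0 = identical, ~441 = max)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

THRESHOLD = 100  # assumed value; the embodiment does not fix the preset threshold

hand, ink = (60, 50, 45), (20, 20, 20)
if color_difference(hand, ink) < THRESHOLD:
    # colors are too close: adjust the specified object to the opposite color
    hand = tuple(255 - c for c in hand)
print(hand)  # -> (195, 205, 210)
```

When the distance is at or above the threshold, the hand color is left untouched and, as described below, the search falls back to the first image.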
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the method further includes:
if the color difference is not lower than the threshold, determining the range of a first search area in the first image according to the question keyword and the first position coordinate; the range of the first search area is part or all of the first image;
and performing character recognition on the first search area, wherein the recognition result is the topic content to be searched, and executing the step of searching for the learning content corresponding to the topic content to be searched.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before the identifying the color of the object specified in the first image, the method further includes:
when a preset voice wake-up word is detected, controlling a shooting module of the learning device to photograph the mirror image in a light-reflecting device as the first image; the light-reflecting device is arranged on the learning device, and the mirror surface of the light-reflecting device forms a preset angle with the lens surface of the shooting module.
A second aspect of an embodiment of the present invention discloses a learning apparatus, including:
a first recognition unit configured to recognize a color of a specified object in the first image;
a second recognition unit configured to recognize a color of a character in the first image;
an adjustment unit configured to adjust a color of the specified object in the first image to a color opposite to a color of the character, to obtain a second image;
the third recognition unit is used for carrying out character recognition on the second image so as to obtain the topic content to be searched;
and the searching unit is used for searching the learning content corresponding to the topic content to be searched.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the learning device further includes:
a fourth recognition unit for recognizing a question keyword from the inputted voice information;
a fifth identifying unit configured to identify a first position coordinate specified by the specified object in the first image;
and, the third identifying unit includes:
a position determining subunit, configured to determine a second position coordinate specified by the specified object in the second image according to the first position coordinate;
a range determining subunit, configured to determine the range of a second search area in the second image according to the question keyword and the second position coordinate; the range of the second search area is part or all of the second image;
and a character recognition subunit, configured to perform character recognition on the second search area, wherein the recognition result is the topic content to be searched.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the second identifying unit includes:
a region determining subunit, configured to determine, according to a preset region height, a range of a color identification region centered on the first position coordinate; the region height is used for indicating the number of character lines contained in the color recognition region;
a color recognition subunit, configured to recognize a color of a character in the color recognition area;
and, the adjusting unit includes:
a judging subunit, configured to judge whether a color difference between the color of the specified object and the color of the character in the color recognition area is lower than a preset threshold;
and an adjusting subunit, configured to adjust, when the judging subunit judges that the color difference is lower than the threshold, the color of the specified object in the first image to the color opposite to the color of the character in the color recognition area, so as to obtain the second image.
As an alternative implementation manner, in the second aspect of the embodiment of the present invention:
the range determining subunit is further configured to determine, when the judging subunit judges that the color difference is not lower than the threshold, the range of a first search area in the first image according to the question keyword and the first position coordinate; the range of the first search area is part or all of the first image;
and the character recognition subunit is further configured to perform character recognition on the first search area, wherein the recognition result is the topic content to be searched.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the learning device further includes:
a control unit, configured to control, when a preset voice wake-up word is detected, the shooting module of the learning device to photograph the mirror image in a light-reflecting device as the first image; the light-reflecting device is arranged on the learning device, and the mirror surface of the light-reflecting device forms a preset angle with the lens surface of the shooting module.
A third aspect of an embodiment of the present invention discloses a learning apparatus, including:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform any of the methods disclosed in the first aspect of the embodiments of the present invention.
A fourth aspect of the invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to perform any of the methods disclosed in the first aspect of the embodiments of the invention.
A fifth aspect of an embodiment of the invention discloses a computer program product which, when run on a computer, causes the computer to perform any of the methods disclosed in the first aspect of the embodiment of the invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
after the color of the specified object and the color of the character in the first image are identified, the color of the specified object in the image is adjusted to the color opposite to the color of the character, so as to obtain a second image. After this adjustment, the color difference between the character and the specified object in the second image is larger, so that the specified object interferes less with character recognition performed on the second image. This improves the accuracy of character recognition in the image, allows the topic the user may want to search to be recognized more accurately, further improves the accuracy of topic searching, and makes the learning content fed back to the user better match expectations.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a method for searching questions based on image preprocessing, disclosed in the embodiment of the invention;
FIG. 2 is an exemplary diagram of an image binarized according to an embodiment of the present invention;
FIG. 3 is an exemplary diagram of an image binarized according to another embodiment of the present invention;
FIG. 4 is an exemplary diagram of a histogram obtained after projecting characters onto the Y-axis, according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of another method for searching questions based on image preprocessing according to the embodiment of the invention;
FIG. 6 is a flow chart of another method for searching questions based on image preprocessing according to an embodiment of the present invention;
fig. 7 is an exemplary diagram of a photographing process of photographing an image by a learning device according to an embodiment of the present invention;
Fig. 8 is a schematic structural view of a learning apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural view of another learning apparatus disclosed in an embodiment of the present invention;
fig. 10 is a schematic structural view of still another learning apparatus disclosed in an embodiment of the present invention;
fig. 11 is a schematic structural view of still another learning apparatus disclosed in the embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that the terms "comprising" and "having" and any variations thereof in the embodiments of the present invention and the accompanying drawings are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a question searching method and learning equipment based on image preprocessing, which are used for improving the accuracy of character recognition in images and improving the accuracy of question searching. The following will describe in detail.
Example 1
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for searching questions based on image preprocessing according to an embodiment of the invention. The method described in fig. 1 is applicable to learning devices such as home-education machines and learning tablets, and the embodiment of the invention is not limited thereto. The operating system of the learning device may include, but is not limited to, the Android operating system, the iOS operating system, the Symbian operating system, the BlackBerry operating system, the Windows Phone 8 operating system, and the like, and the embodiment of the invention is not limited thereto. As shown in fig. 1, the method for searching questions based on image preprocessing may include the following steps:
101. the learning device identifies a color of the object specified in the first image.
In the embodiment of the invention, the first image may be an image captured by a shooting module of the learning device, or an image captured by a shooting module of an electronic device in communication connection with the learning device. For example, the electronic device in communication connection with the learning device may be an intelligent desk lamp equipped with a camera; when the intelligent desk lamp is placed on a desktop, the lens of its camera faces the desktop and can capture images of the desktop and the objects placed on it. After capturing the first image, the intelligent desk lamp can send it to the learning device via Wi-Fi, Bluetooth, 4G, 5G or a wired data transmission mode.
In addition, the specified object is the object the user uses to point at certain content; it may be a preset specific object, such as a human hand, or stationery such as a pen or a ruler.
As an alternative implementation manner, the learning device may perform the identification of the color of the specified object in two steps: locating the specified object, and performing color statistics on the area where it is located. The locating can be achieved through feature matching, convolutional neural networks (Convolutional Neural Networks, CNN) and the like: the specified object is recognized in the first image and its position in the first image is determined, so that the image area of the specified object can be determined in the first image as the location area; by counting the pixel colors in the location area, the color of the specified object can be identified. Preferably, when the specified object is a human hand, the image area where the hand is located may be identified from the first image directly by a skin-color model, such as RGB-based skin detection or elliptic-skin-model-based skin detection, as the above-mentioned location area; the color of the hand is then identified by counting the pixel colors in that area.
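The color-statistics step can be sketched as follows (a minimal illustration assuming the location area is given as a bounding box; the coarse quantization step and the function name are illustrative, not part of the embodiment):

```python
import numpy as np

def dominant_color(image, box):
    """Most frequent (coarsely quantized) RGB color inside a bounding box.

    image: H x W x 3 uint8 array; box: (x0, y0, x1, y1) location area.
    Quantizing to 32-wide bins pools near-identical shades together.
    """
    x0, y0, x1, y1 = box
    region = image[y0:y1, x0:x1].reshape(-1, 3)
    quantized = (region // 32) * 32
    colors, counts = np.unique(quantized, axis=0, return_counts=True)
    return tuple(int(c) for c in colors[counts.argmax()])

# A patch that is mostly skin-toned with a few dark pixels
img = np.full((10, 10, 3), (224, 160, 128), dtype=np.uint8)
img[0, 0] = (10, 10, 10)
print(dominant_color(img, (0, 0, 10, 10)))  # -> (224, 160, 128)
```

In a real pipeline the bounding box would come from the feature-matching or CNN locating step described above.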
As another alternative, if the specified object is a human hand, the color of the hand in the first image may also be directly set to a preset skin color.
102. The learning device recognizes the color of the character in the first image.
In the embodiment of the invention, the characters may include text of various languages such as Chinese characters and English letters, punctuation marks, graphic symbols, numerals and the like, and the embodiment of the invention is not limited thereto.
As an alternative implementation manner, the learning device may locate a text region (i.e. a region containing characters) in the first image by means of deep learning or the like, and determine the foreground portion (the characters) and the background portion in the text region by analyzing the proportions of different colors in the text region, thereby identifying the color of the characters. In question-searching scenarios, most characters come from learning materials such as books, test papers and exercise books, on which the characters are mostly black; alternatively, therefore, the character color in the first image may be set directly to black.
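The foreground/background analysis can be sketched as follows (assuming the located text region is dominated by two colors, so the majority color is the paper and the runner-up is the ink; the quantization and the function name are illustrative):

```python
import numpy as np

def character_color(text_region):
    """Estimate the character (foreground) color in a located text region.

    Assumes the region is dominated by two colors: the majority color is
    the background (paper) and the runner-up is the character color (ink).
    """
    pixels = text_region.reshape(-1, 3)
    quantized = (pixels // 32) * 32
    colors, counts = np.unique(quantized, axis=0, return_counts=True)
    order = counts.argsort()[::-1]               # most frequent first
    pick = order[1] if len(order) > 1 else order[0]
    return tuple(int(c) for c in colors[pick])

# A white page with a stroke of black text pixels
region = np.full((20, 20, 3), 255, dtype=np.uint8)
region[5:7, 2:18] = 0
print(character_color(region))  # -> (0, 0, 0)
```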
103. The learning device adjusts the color of the specified object in the first image to a color opposite to the color of the character to obtain a second image.
In the embodiment of the invention, it can be understood that in a color space such as RGB or HSV, a color can be represented by a unique numerical value; after the learning device identifies the color of the character, it can obtain the color opposite to the character color by inverting the character color, or it can look up, based on a preset correspondence between complementary colors, the color complementary to the character color and use it as the opposite color.
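The inversion variant can be sketched as follows (a minimal illustration in RGB, where inverting means replacing each channel value c with 255 - c):

```python
def opposite_color(rgb):
    """Invert an RGB color: each channel c becomes 255 - c."""
    return tuple(255 - c for c in rgb)

print(opposite_color((0, 0, 0)))        # black -> (255, 255, 255), i.e. white
print(opposite_color((224, 160, 128)))  # a skin tone -> (31, 95, 127)
```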
Based on the location area of the specified object identified in step 101, the outline of the specified object can be further refined by means such as edge detection, so that the specified object in the image is accurately selected; the color adjustment of the specified object is then completed by changing the pixel colors in the selected area to the color opposite to the character color.
104. The learning device performs character recognition on the second image to obtain the topic content to be searched, and searches learning content corresponding to the topic content to be searched.
In the embodiment of the invention, character recognition can be performed through OCR. OCR generally comprises operations such as image preprocessing, character recognition, recognition result optimization and the like; wherein the image preprocessing generally comprises the following steps: graying, binarizing, noise reduction, tilt correction, character segmentation, and the like.
The principle of binarization is to select an appropriate gray threshold, set the pixels on one side of the threshold to black, and set the pixels on the other side to white. When the colors of the specified object and the character are similar, the two may fall on the same side of the gray threshold and be set to the same color during binarization; when their colors are opposite, they are likely to fall on opposite sides of the threshold and be set to opposite colors. Referring to fig. 2 and 3 together, fig. 2 is an exemplary diagram of a binarized image according to an embodiment of the present invention, and fig. 3 is an exemplary diagram of a binarized image according to another embodiment of the present invention. In fig. 2 and 3, the specified object is a human hand; fig. 2 is a binarization result that may be obtained when the color of the hand is similar to that of the character, and fig. 3 is a binarization result that may be obtained when the color of the hand is opposite to that of the character. The dotted box in fig. 3 is used to show the position of the hand; it is understood that the dotted box does not exist in the actual binarization result.
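The binarization step can be sketched as follows (the fixed threshold is an assumption; practical OCR pipelines often pick it adaptively, e.g. with Otsu's method). The example reproduces the fig. 2 vs fig. 3 situation with single pixels:

```python
import numpy as np

def binarize(gray, threshold=128):
    """Pixels below the gray threshold become black (0), the rest white (255)."""
    return np.where(gray < threshold, 0, 255).astype(np.uint8)

gray = np.full((4, 4), 220, dtype=np.uint8)  # paper background
gray[1, 1] = 30                              # a character pixel (dark ink)
gray[2, 2] = 60                              # a hand pixel similar to the ink
b = binarize(gray)
print(b[1, 1], b[2, 2])        # 0 0 -- hand and character merge, as in fig. 2
gray[2, 2] = 255 - 60          # after inversion the hand pixel is light
print(binarize(gray)[2, 2])    # 255 -- hand separates from the text, as in fig. 3
```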
Further, character segmentation includes line segmentation and character segmentation. Line segmentation projects the characters onto the Y axis and accumulates the projection values to obtain a histogram on the Y axis (as shown in fig. 4); the troughs of the histogram are the background, and the peaks are the areas where the character lines are located, so each character line can be identified. Continuing with fig. 2 and 3 as an example: if the binarization result is the image shown in fig. 2, the specified object (such as the hand) may disturb the division of character lines, so that the character line where the pointed-at word "creation" is located cannot be divided from the character line below it; if the binarization result is the image shown in fig. 3, the character lines can be divided normally. Therefore, adjusting the color of the specified object to the color opposite to the character color reduces the influence of the specified object on character recognition, and in particular on character segmentation, so that the learning device can identify the topic content to be searched more accurately.
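The Y-axis projection and line segmentation can be sketched as follows (a minimal illustration on a binarized array where 0 denotes ink; the function name and the minimum-ink parameter are illustrative):

```python
import numpy as np

def segment_lines(binary, min_ink=1):
    """Split a binarized page into character lines via Y-axis projection.

    binary: 2-D array where 0 = ink and 255 = background.
    Returns (start_row, end_row) pairs, one per text line.
    """
    ink_per_row = (binary == 0).sum(axis=1)   # the histogram on the Y axis
    lines, start = [], None
    for y, n in enumerate(ink_per_row):
        if n >= min_ink and start is None:
            start = y                          # a peak begins
        elif n < min_ink and start is not None:
            lines.append((start, y))           # back to background
            start = None
    if start is not None:
        lines.append((start, len(ink_per_row)))
    return lines

page = np.full((12, 20), 255, dtype=np.uint8)
page[2:4, 3:17] = 0    # first character line
page[7:9, 3:17] = 0    # second character line
print(segment_lines(page))  # -> [(2, 4), (7, 9)]
```

A hand whose pixels binarize to ink between the two lines would fill the trough of the histogram and merge them, which is exactly the failure mode the color adjustment avoids.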
In addition, in the embodiment of the present invention, the topic content to be searched is the recognition result of the character recognition, and may include, but is not limited to, the stem of a question-and-answer question, the stem of a multiple-choice question, a composition topic, and individual words, phrases, sentences, and the like.
In the method described in fig. 1, adjusting the color of the specified object in the captured image to the color opposite to the character color improves the accuracy of identifying the topic content to be searched from the image, thereby improving the accuracy of topic searching and feeding back to the user learning content that better matches expectations.
Example two
Referring to fig. 5, fig. 5 is a flow chart of another method for searching questions based on image preprocessing according to an embodiment of the present invention. As shown in fig. 5, the method for searching questions based on image preprocessing includes the following steps:
501. the learning device recognizes a color of a specified object in the first image, a first position coordinate specified by the specified object in the first image, and a color of a character in the first image.
In the embodiment of the invention, after the location area of the specified object in the first image is identified, the position of a specific part of the specified object in the first image can be further identified. For example, the position of a fingertip or a pen tip in the first image may be identified to obtain the first position coordinate specified by the fingertip or pen tip in the first image.
502. The learning device recognizes a question keyword from the input voice information.
In the embodiment of the invention, the learning device may also have a voice input function: the user can input voice information, and preset question keywords can be identified from the voice information through speech recognition technology. The question keywords may include, but are not limited to, the following words and phrases: "word", "letter", "sentence", "question", "how to do", "how to read", "what meaning", "how to write".
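The keyword-spotting step can be sketched as follows (a minimal substring-matching illustration; a real system would run speech recognition first, and the keyword list here is a translated assumption, ordered longest-first so phrases win over single words):

```python
# Assumed keyword list, translated from the embodiment's examples
QUESTION_KEYWORDS = ["what meaning", "how to write", "how to read",
                     "how to do", "sentence", "question", "letter", "word"]

def extract_keyword(transcript):
    """Return the first (longest-first) question keyword found in a transcript."""
    for kw in QUESTION_KEYWORDS:          # already ordered longest first
        if kw in transcript:
            return kw
    return None

print(extract_keyword("how is this question done"))  # -> question
```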
503. The learning device adjusts the color of the specified object in the first image to a color opposite to the color of the character to obtain a second image.
504. The learning device determines a second coordinate position specified by the specified object in the second image based on the first coordinate position.
In the embodiment of the present invention, the second image is obtained after the colors of some pixels in the first image are adjusted. As an alternative implementation manner, the value of the first position coordinate may be directly used as the value of the second position coordinate; that is, the first position coordinate and the second position coordinate are the same.
505. The learning device determines a range of the second search area in the second image based on the question keyword and the second position coordinates.
In the embodiment of the invention, different question keywords may correspond to different search-area ranges; further, considering that the second position coordinate is the position the specified object points at in the image, the image area above the second position coordinate can be searched.
For example, if the voice information input by the user is "how is this question done", the question keyword "question" is identified, and the range of the corresponding second search area may be: a line segment passing through the second position coordinate serves as the lower boundary of the second search area, and the area contains the character lines belonging to the same topic. The lower boundary is parallel to the character lines, and whether two character lines belong to the same topic can be identified from the spacing between them: if the spacing between two adjacent character lines is smaller than a preset line spacing, the two lines can be considered to belong to the same topic; otherwise, they can be considered to belong to different topics.
For another example, if the voice information input by the user is "how is this word read", the question keyword "word" is identified, and the range of the corresponding second search area may be: a line segment passing through the second position coordinate serves as the lower boundary of the second search area, and the area contains the characters belonging to the same word. The lower boundary is parallel to the character line closest to it, and whether two characters belong to the same word can be identified from the spacing between them: if the spacing between two adjacent characters is smaller than a preset word spacing, the two characters can be considered to belong to the same word; otherwise, they can be considered to belong to different words.
Further, if the voice information input by the user is "what does this word mean", the question keyword "word" is identified, and the range of the corresponding second search area may be: a line segment passing through the second position coordinate serves as the lower boundary of the second search area, and the area contains the single character closest to the second position coordinate. The characters in a character line can be divided into individual characters through character segmentation, so the character closest to the second position coordinate can be determined based on the second position coordinate.
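The line-spacing rule used in the examples above can be sketched as follows (a minimal illustration for the "question" case; the line boxes could come from the Y-axis projection described earlier, and the gap threshold stands in for the preset line spacing):

```python
def topic_region(line_boxes, pointer_y, max_gap=12):
    """Rows of the second search area above a pointed-at position.

    line_boxes: (start_row, end_row) pairs sorted top to bottom (e.g. from
    Y-axis projection). Adjacent lines whose vertical gap is below max_gap
    are treated as belonging to the same topic.
    """
    above = [b for b in line_boxes if b[1] <= pointer_y]
    if not above:
        return None
    topic = [above[-1]]                    # the line just above the pointer
    for line in reversed(above[:-1]):
        gap = topic[0][0] - line[1]        # spacing to the line below it
        if gap < max_gap:
            topic.insert(0, line)          # same topic: keep climbing
        else:
            break                          # a larger gap: a different topic
    return (topic[0][0], topic[-1][1])     # top and bottom rows of the area

lines = [(2, 4), (7, 9), (30, 32), (36, 38)]  # two topics, big gap between
print(topic_region(lines, pointer_y=40))      # -> (30, 38)
```

The "word" cases work the same way along the X axis, comparing character spacing against a preset word spacing.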
It can be understood that, for question keywords such as "letter", "word" and "sentence", the range of the corresponding second search area contains the number of characters corresponding to that language structure; the second search area may therefore cover part or all of the second image. By implementing step 505, the area over which character recognition is performed can be reduced, so that the calculation amount of character recognition is reduced and its speed improved; in addition, the object of character recognition can accurately contain the content specified by the user, character recognition of content the user is not concerned with is reduced as much as possible, unnecessary interference factors are reduced during searching, and the search accuracy is improved.
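The line-spacing rule described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the line bounding boxes and the gap threshold are hypothetical inputs, and a real system would obtain them from line detection.

```python
# Hypothetical sketch of the rule: character lines whose vertical gap is
# below a preset line spacing are grouped into the same question.

def group_lines_into_questions(line_boxes, max_line_gap):
    """line_boxes: list of (top_y, bottom_y) tuples for each character line.
    Returns a list of groups, each group being the lines of one question."""
    groups = []
    for box in sorted(line_boxes, key=lambda b: b[0]):
        if groups and box[0] - groups[-1][-1][1] < max_line_gap:
            groups[-1].append(box)   # small gap: same question as previous line
        else:
            groups.append([box])     # large gap: a new question starts here
    return groups

lines = [(10, 30), (35, 55), (60, 80), (120, 140)]  # y-spans of 4 lines
print(group_lines_into_questions(lines, max_line_gap=20))
```

With the sample values, the first three lines (gaps of 5 pixels) form one question and the last line (gap of 40 pixels) starts another.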
Furthermore, in other possible embodiments, the second position coordinates may also define the upper boundary or one of the side boundaries of the second search area; specifically, which of the lower, upper, left and right boundaries of the second search area is defined by the second position coordinates may be determined by a direction specified by the user, for example through voice input. If the voice information input by the user is "what does the word below mean", the specified direction is recognized as "below", and the second position coordinates define the upper boundary of the second search area; if the user inputs the voice information "how is the word on the left read", the specified direction is recognized as "left", and the second position coordinates define the right boundary of the second search area. By recognizing the direction specified by the user, the range of the second search area can be determined more accurately, and the content specified by the user can thus be recognized more accurately.
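The direction-to-boundary correspondence just described can be written as a small lookup table. The English direction words and the default are assumptions for this sketch; the patent only requires that the direction be recognized from the voice input.

```python
# Map a recognized spoken direction to the boundary of the second search
# area that the fingertip coordinate defines (illustrative wording).
DIRECTION_TO_BOUNDARY = {
    "below": "upper",   # content lies below the coordinate: it bounds the top
    "above": "lower",   # content lies above the coordinate: it bounds the bottom
    "left":  "right",   # content lies to the left: it bounds the right side
    "right": "left",    # content lies to the right: it bounds the left side
}

def boundary_defined_by_coordinate(spoken_direction):
    # Default to the lower boundary, matching the earlier examples where
    # the fingertip sits just under the content of interest.
    return DIRECTION_TO_BOUNDARY.get(spoken_direction, "lower")

print(boundary_defined_by_coordinate("left"))
```

For "left", the coordinate defines the right boundary, as in the "how is the word on the left read" example.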
506. The learning device performs character recognition on the second search area, and takes the recognition result as the topic content to be searched.
507. The learning device searches for learning content corresponding to the topic content to be searched.
In the embodiment of the invention, the learning device takes all searched content related to the topic content to be searched as the corresponding learning content.
As another alternative embodiment, the learning device may also recognize, among the question keywords contained in the voice information, keywords related to the user's intention, such as "how to do", "how to read", "what meaning" and "how to write". If the question keywords include "how to do", the answer and/or the solution idea corresponding to the topic content to be searched may be used as the learning content; if they include "how to read", the pronunciation corresponding to the topic content to be searched may be used as the learning content; if they include "what meaning", the definition of the word corresponding to the topic content to be searched may be used as the learning content; and if they include "how to write", the stroke order corresponding to the topic content to be searched may be used as the learning content. That is, the learning device may search for, as the learning content, content that is related to the topic content to be searched and corresponds to the user's intention.
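The intent-keyword dispatch described above can be sketched as a lookup. The content-type labels here are illustrative names invented for the sketch, not identifiers from the patent.

```python
# Hypothetical mapping from an intent keyword to the kind of learning
# content that should be searched for.
INTENT_TO_CONTENT = {
    "how to do":    "answer_and_solution_idea",
    "how to read":  "pronunciation",
    "what meaning": "definition",
    "how to write": "stroke_order",
}

def learning_content_type(question_keywords):
    """Return the type of learning content to search for, given the
    question keywords recognized in the voice input."""
    for keyword, content_type in INTENT_TO_CONTENT.items():
        if keyword in question_keywords:
            return content_type
    # No intent keyword recognized: fall back to any related content.
    return "general_related_content"

print(learning_content_type(["word", "how to read"]))
```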
It can be seen that, in the method shown in fig. 5, the color of the specified object is adjusted based on the color of the characters, so that the influence of the specified object's color on character recognition can be reduced, thereby improving the accuracy of character recognition and of the question search; in addition, by recognizing the question keywords in the voice information and determining different second search area ranges for different question keywords, character recognition of content the user is not concerned with can be reduced as much as possible, unnecessary interference factors can be reduced during searching, the accuracy of the question search can be further improved, and the calculation amount of character recognition can be reduced while its speed is improved.
Example III
Referring to fig. 6, fig. 6 is a flowchart of another method for searching questions based on image preprocessing according to an embodiment of the present invention. As shown in fig. 6, the method for searching questions based on image preprocessing includes the following steps:
601. When the learning device detects a preset voice wake-up word, it controls the shooting module to photograph the mirror image in the reflecting device as the first image.
In the embodiment of the invention, the reflecting device is arranged on the learning device, and the mirror surface of the reflecting device forms a preset angle with the lens surface of the shooting module. Referring to fig. 7, fig. 7 is a diagram illustrating the photographing process of the learning device. As shown in fig. 7, the learning device 10 may be provided with a shooting module 20 used for photographing to obtain an image; a reflecting device 30 (e.g., a reflector, a prism or a convex lens) may further be disposed right in front of the shooting module 20, and the reflecting device 30 is used for changing the light path of the shooting module, so that the shooting module 20 photographs the carrier 40. By making the shooting module 20 of the learning device 10 photograph the image of the carrier 40 in the reflecting device 30, without manually changing the placement of the learning device 10, the shooting process can be simplified and the shooting efficiency improved. The carrier 40 may be a book, an exercise book, a drawing book, a test paper, etc. placed on a desktop, which is not limited in the embodiment of the present invention.
In addition, the voice wake-up word can be set to a word with a low frequency of use in daily dialogue, so that false triggering of the shooting function can be reduced; and since the shooting module is started only after the voice wake-up word is detected, it does not need to be kept in a normally-open state, and power consumption can be reduced.
602. The learning device recognizes a question keyword from the input voice information.
603. The learning device identifies a color of the specified object in the first image, and a first position coordinate of the specified object specified in the first image.
604. The learning device determines a range of a color recognition area centered on the first position coordinates according to a preset area height, and recognizes colors of characters in the color recognition area.
In the embodiment of the present invention, the region height is used to indicate the number of character lines included in the color recognition area. Assuming that the region height indicates that the color recognition area includes N character lines (N is a positive integer, which may be set manually based on experience), the N/2 character lines closest above the first position coordinates and the N/2 character lines closest below them may be selected as the color recognition area. Alternatively, if the character lines below the first position coordinates are blocked by the specified object, it can be considered that character recognition of those lines is unnecessary, and the 1 character line closest below the first position coordinates together with the N-1 character lines closest above them may be selected as the color recognition area. By recognizing the color of the characters in the 1 character line closest below, it can be judged whether the color of the specified object is similar to the color of those characters, so that the color of the specified object can be adjusted when the colors are similar; in this way, when the two adjacent character lines closest above and below the first position coordinates are segmented, the influence of the specified object's color on the segmentation is reduced.
By implementing this embodiment, only the characters within a certain range near the specified object need to undergo color recognition; the range of characters subjected to color recognition is reduced, and the time required for character recognition can be shortened.
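The region-height rule of step 604 can be sketched as follows. The function signature and the clamping behavior at the top of the page are assumptions made for the sketch; the patent only specifies how many lines to take above and below the coordinate.

```python
# Sketch of step 604: pick N character lines around the fingertip line,
# or 1 below + N-1 above when the lines below are occluded by the
# pointing object.

def select_color_region_lines(num_lines_above, occluded_below, n):
    """Return (lines_above, lines_below) counts for the color recognition
    area, where n is the preset region height in character lines."""
    if occluded_below:
        above, below = n - 1, 1        # only the nearest line below is usable
    else:
        above, below = n // 2, n // 2  # split evenly around the coordinate
    above = min(above, num_lines_above)  # clamp to what the page offers
    return above, below

print(select_color_region_lines(num_lines_above=10, occluded_below=True, n=4))
```

With N = 4 and the lines below occluded, 3 lines above and 1 line below are selected, matching the N-1/1 split in the text.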
605. The learning device judges whether the color difference between the color of the specified object and the color of the characters in the color recognition area is lower than a preset threshold; if so, steps 606 to 608 are performed, and if not, steps 609 to 610 are performed.
In the embodiment of the present invention, if the color difference is lower than the threshold, the color of the specified object may be considered similar to the color of the characters, and steps 606 to 608 are executed to adjust the color of the specified object and perform character recognition on the image generated after the adjustment; otherwise, the color of the specified object is considered sufficiently different from the color of the characters, so character recognition can be performed directly on the first image without adjusting the color of the specified object, thereby reducing the operation steps and shortening the time required for character recognition. The preset threshold may be set according to the gray threshold used in binarization.
606. The learning device adjusts the color of the specified object in the first image to a color opposite to the color of the character in the color recognition area to obtain a second image.
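Steps 605 and 606 can be sketched together as follows. This is a minimal illustration: the averaged per-channel RGB difference used as the color-difference measure and the threshold value 60 are assumptions, since the patent only requires "a preset threshold", e.g. one derived from the binarization gray threshold.

```python
# Hedged sketch of steps 605-606: compare the pointing object's color with
# the character color, and invert only when the two are too close apart.

def color_difference(color_a, color_b):
    # Per-channel absolute difference, averaged; colors are (R, G, B).
    return sum(abs(a - b) for a, b in zip(color_a, color_b)) / 3

def adjust_object_color(object_color, char_color, threshold=60):
    if color_difference(object_color, char_color) < threshold:
        # Too similar to separate: use the opposite (inverted) character color.
        return tuple(255 - c for c in char_color)
    return object_color  # distinct enough: no adjustment needed

skin = (200, 160, 130)
dark_char = (30, 30, 30)
print(adjust_object_color(skin, dark_char))  # far apart, so unchanged
```

A skin-colored object against dark characters passes the test unchanged; a dark object near dark characters would be replaced by the inverted (light) character color.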
607. The learning device determines the second position coordinates of the specified object in the second image according to the first position coordinates, and determines the range of the second search area in the second image according to the recognized question keyword and the second position coordinates.
608. The learning device performs character recognition on the second search area, takes the recognition result as the topic content to be searched, and proceeds directly to step 611.
609. The learning device determines a range of the first search area in the first image based on the question keyword and the first position coordinates.
In the embodiment of the present invention, the specific implementation of step 609 is the same as that of step 505 in the second embodiment: the learning device uses different search area ranges for different question keywords, which will not be described again here.
610. The learning device performs character recognition on the first search area, takes the recognition result as the topic content to be searched, and performs step 611.
611. The learning device searches for learning content corresponding to the topic content to be searched.
It can be seen that, in the method described in fig. 6, the color of the specified object is adjusted based on the color of the characters, so that the influence of the specified object's color on character recognition can be reduced, thereby improving its accuracy; different search area ranges can also be used for different question keywords, which reduces unnecessary interference factors, further improves the accuracy of the question search, reduces the calculation amount of character recognition and improves its speed. In addition, in the method described in fig. 6, the shooting module is started only after the voice wake-up word is detected, so it does not need to be kept in a normally-open state and power consumption can be reduced; further, the shooting module photographs the image in the reflecting device without manually changing the placement of the learning device, so the shooting process can be simplified and the shooting efficiency improved. Furthermore, setting the color recognition area narrows the range over which color recognition is performed on the characters, and character recognition can be performed directly on the first image when the color difference between the specified object and the characters is large, so the time required for character recognition can be shortened, the response speed of the learning device improved, and the user experience enhanced.
Example IV
Referring to fig. 8, fig. 8 is a schematic structural diagram of a learning device according to an embodiment of the present invention. As shown in fig. 8, the learning device may include:
a first identifying unit 801 for identifying a color of a specified object in the first image;
in the embodiment of the present invention, the first recognition unit 801 may acquire, as the first image, an image captured by a shooting module of the learning device, or an image captured by an electronic device in communication with the learning device; the specified object is an object used by the user to point at certain content, and may be a preset specific object, such as a human hand, or stationery such as a pen or a ruler; specifically, the first recognition unit 801 can recognize the color of the specified object in two steps: locating the specified object and performing color statistics on the located area. Alternatively, if the specified object is a human hand, the first recognition unit 801 may directly set the color of the hand in the first image to a preset skin color;
a second recognition unit 802 for recognizing the color of the character in the first image;
in the embodiment of the present invention, the second recognition unit 802 may locate a text region (i.e. a region containing characters) in the first image by means of deep learning or the like, and determine the foreground portion (the characters) and the background portion in the text region by analyzing the proportions of the different colors in the text region, so as to recognize the color of the characters; alternatively, the second recognition unit 802 may also directly set the character color in the first image to black;
An adjusting unit 803 for adjusting the color of the specified object in the first image recognized by the first recognizing unit 801 to a color opposite to the color of the character recognized by the second recognizing unit 802 to obtain a second image;
in the embodiment of the present invention, the adjusting unit 803 may obtain a color opposite to the color of the character by inverting the color of the character; or, based on a preset correspondence between complementary colors, it may look up the color complementary to the color of the character as the color opposite to it;
a third recognition unit 804, configured to perform character recognition on the second image to obtain the topic content to be searched;
a search unit 805 for searching for learning content corresponding to topic content to be searched.
Therefore, by implementing the learning device shown in fig. 8, the color of the specified object in the captured image can be adjusted to a color opposite to that of the characters, which improves the accuracy of recognizing the topic content to be searched from the image, thereby improving the accuracy of the question search and feeding back learning content that better matches the user's expectation.
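The color-proportion analysis attributed to the second recognition unit 802 above can be sketched as follows. The exact-color counting is a simplification made for this sketch (real systems would cluster similar colors), and the black fallback mirrors the unit's option of directly assuming black characters.

```python
from collections import Counter

def recognize_character_color(region_pixels):
    """region_pixels: iterable of (R, G, B) tuples from a located text
    region. The majority color is taken as background; the runner-up is
    presumed to be the foreground (character) color."""
    counts = Counter(region_pixels).most_common(2)
    if len(counts) < 2:
        return (0, 0, 0)  # single-color region: fall back to black
    return counts[1][0]   # second most frequent color = the characters

pixels = [(255, 255, 255)] * 90 + [(20, 20, 20)] * 10  # mostly white page
print(recognize_character_color(pixels))
```

On the sample region, white dominates (the page background) and the dark runner-up is returned as the character color.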
Example five
Referring to fig. 9, fig. 9 is a schematic structural diagram of another learning device according to an embodiment of the present invention. The learning device shown in fig. 9 is optimized by the learning device shown in fig. 8. As shown in fig. 9, the learning device may further include:
A fourth recognition unit 806, configured to recognize a question keyword from the input voice information; the question keywords may include, but are not limited to, the following words and phrases: "character", "letter", "word", "sentence", "question", "how to do", "how to read", "what meaning", "how to write";
a fifth recognition unit 807, configured to recognize the first position coordinates of the specified object in the first image; the position of a specific part of the specified object in the first image can be identified as the first position coordinates;
accordingly, the third identifying unit 804 may specifically include:
a position determination subunit 8041, configured to determine the second position coordinates specified by the specified object in the second image according to the first position coordinates; the value of the first position coordinates can be directly taken as the value of the second position coordinates, i.e. the two are identical;
a range determining subunit 8042, configured to determine a range of the second search area in the second image according to the question keyword identified by the fourth identifying unit 806 and the second position coordinate determined by the position determining subunit 8041;
in the embodiment of the invention, different question keywords can correspond to different search area ranges, and the range of the second search area corresponding to each question keyword comprises the number of characters corresponding to the question keyword; for example, the range of the second search area corresponding to the question keyword "question" uses the line segment passing through the second position coordinate as the lower boundary of the second search area, and includes character lines belonging to the same question; it can be seen that the second search area may be in the range of part or all of the second image;
Optionally, the range determination subunit 8042 may further identify the direction specified in the voice information recognized by the fourth recognition unit 806, and determine, according to that direction, which of the lower, upper, left and right boundaries of the second search area is defined by the second position coordinates;
a character recognition subunit 8043, configured to perform character recognition on the second search area and take the recognition result as the topic content to be searched;
alternatively, in the learning apparatus shown in fig. 9, the manner in which the search unit 805 searches for learning content corresponding to the topic content to be searched for may specifically be:
a search unit 805, configured to identify, among the question keywords contained in the voice information, the keyword related to the user's intention, and to search, as the learning content, content that is related to the topic content to be searched and corresponds to the user's intention.
As can be seen, by implementing the learning device shown in fig. 9, adjusting the color of the specified object based on the color of the characters can reduce the influence of the specified object's color on character recognition, thereby improving the accuracy of character recognition and of the question search; in addition, by recognizing the question keywords in the voice information and determining different second search area ranges for different question keywords, character recognition of content the user is not concerned with can be reduced as much as possible, unnecessary interference factors can be reduced during searching, the accuracy of the question search can be further improved, and the calculation amount of character recognition can be reduced while its speed is improved.
Example six
Referring to fig. 10, fig. 10 is a schematic structural diagram of another learning device according to an embodiment of the present invention. The learning device shown in fig. 10 is optimized by the learning device shown in fig. 9. As shown in fig. 10, in the learning apparatus:
the second identifying unit 802 may specifically include:
a region determination subunit 8021, configured to determine, based on a preset region height, the range of a color recognition area centered on the first position coordinates recognized by the fifth recognition unit 807; the region height is used to indicate the number of character lines contained in the color recognition area. Assuming that the region height indicates that the color recognition area includes N character lines (N is a positive integer, which may be set manually based on experience), the region determination subunit 8021 may select, as the color recognition area, the N/2 character lines closest above the first position coordinates and the N/2 character lines closest below them; or, when the character lines below the first position coordinates are blocked by the specified object, select the 1 character line closest below the first position coordinates and the N-1 character lines closest above them as the color recognition area;
A color recognition subunit 8022 for recognizing the color of the character in the color recognition area;
accordingly, the adjusting unit 803 may specifically include:
a judging subunit 8031, configured to judge whether the color difference between the color of the specified object and the color of the character in the color recognition area is lower than a preset threshold;
an adjustment subunit 8032 is configured to adjust, when the determination subunit 8031 determines that the color difference is lower than the threshold value, the color of the object specified in the first image to a color opposite to the color of the character in the color recognition area, so as to obtain a second image.
It may be understood that, after the adjustment subunit 8032 performs the color adjustment on the first image to obtain the second image, the position determination subunit 8041 may be triggered to determine, according to the first position coordinates, the second position coordinates specified by the specified object in the second image; the range determination subunit 8042 then determines the range of the second search area in the second image according to the question keyword identified by the fourth recognition unit 806 and the second position coordinates determined by the position determination subunit 8041, and triggers the character recognition subunit 8043 to perform character recognition on the second search area and take the recognition result as the topic content to be searched.
In addition, the range determination subunit 8042 is further configured to determine, when the judgment subunit 8031 determines that the color difference is not lower than the threshold, the range of the first search area in the first image according to the question keyword identified by the fourth recognition unit 806 and the first position coordinates identified by the fifth recognition unit 807; the range of the first search area is part or all of the first image;
the character recognition subunit 8043 is further configured to perform character recognition on the first search area and take the recognition result as the topic content to be searched.
Optionally, the learning device shown in fig. 10 may further include:
a control unit 808, configured to, when a preset voice wake-up word is detected, control a shooting module of the learning device to shoot a mirror image in the light reflecting device as a first image, so as to trigger the first recognition unit 801 to recognize a color of a specified object in the first image, trigger the second recognition unit 802 to recognize a color of a character in the first image, and trigger the fifth recognition unit 807 to recognize a first position coordinate of the specified object specified in the first image;
the reflecting device is arranged on the learning device, and the mirror surface of the reflecting device forms a preset angle with the lens surface of the shooting module. The image in the reflecting device is photographed by the shooting module without manually changing the placement of the learning device, which simplifies the shooting process and improves the shooting efficiency.
As can be seen, by implementing the learning device shown in fig. 10, the color of the specified object can be adjusted based on the color of the characters, reducing the influence of the specified object's color on character recognition and thereby improving its accuracy; different search area ranges can also be used for different question keywords, which reduces unnecessary interference factors, further improves the accuracy of the question search, reduces the calculation amount of character recognition and improves its speed. Further, the shooting module is started only after the voice wake-up word is detected, so that power consumption is reduced; the image in the reflecting device is photographed by the shooting module without manually changing the placement of the learning device, so the shooting process can be simplified and the shooting efficiency improved. Furthermore, setting the color recognition area narrows the range over which color recognition is performed on the characters, and character recognition can be performed directly on the first image when the color difference between the specified object and the characters is large, so the time required for character recognition can be shortened, the response speed of the learning device improved, and the user experience enhanced.
Example seven
Referring to fig. 11, fig. 11 is a schematic structural diagram of another learning device according to an embodiment of the present invention. As shown in fig. 11, the learning device may include:
A memory 901 storing executable program code;
a processor 902 coupled to the memory 901;
the processor 902 invokes executable program codes stored in the memory 901, and performs any of the question searching methods based on image preprocessing shown in fig. 1, 5 and 6.
It should be noted that, the learning device shown in fig. 11 may further include components not shown, such as a power supply, an input key, a speaker, a microphone, a screen, an RF circuit, a Wi-Fi module, a bluetooth module, and a sensor, which are not described in detail in this embodiment.
The embodiment of the invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute any of the question searching methods based on image preprocessing shown in fig. 1, 5 and 6.
Embodiments of the present invention disclose a computer program product comprising a non-transitory computer readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform any of the image preprocessing-based question searching methods shown in fig. 1, 5 and 6.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art will also appreciate that the embodiments described in the specification are alternative embodiments and that the acts and modules referred to are not necessarily required for the present invention.
In the various embodiments of the present invention, it should be understood that the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers should not be construed as limiting the implementation of the embodiments of the present invention.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on this understanding, the technical solution of the present invention, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc., in particular a processor in a computer device) to execute some or all of the steps of the above-mentioned methods of the various embodiments of the present invention.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be implemented by a program that instructs associated hardware, the program may be stored in a computer readable storage medium including Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), programmable Read-Only Memory (Programmable Read-Only Memory, PROM), erasable programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), one-time programmable Read-Only Memory (OTPROM), electrically erasable programmable Read-Only Memory (EEPROM), compact disc Read-Only Memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disk Memory, magnetic disk Memory, tape Memory, or any other medium that can be used for carrying or storing data that is readable by a computer.
The above describes in detail the question searching method based on image preprocessing and the learning device disclosed in the embodiments of the present invention. Specific examples are used herein to illustrate the principles and embodiments of the present invention, and the above description of the embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in accordance with the idea of the present invention; in view of the above, the contents of this description should not be construed as limiting the present invention.

Claims (10)

1. A question searching method based on image preprocessing, characterized by comprising:
identifying a color of a specified object in a first image;
identifying a color of a character in the first image;
adjusting the color of the specified object in the first image to be opposite to the color of the character, so as to obtain a second image; and
performing character recognition on the second image to obtain question content to be searched, and searching learning content corresponding to the question content to be searched;
wherein the adjusting the color of the specified object in the first image to be opposite to the color of the character so as to obtain the second image comprises:
judging whether a color difference between the color of the specified object and the color of the character in a color recognition area is lower than a preset threshold; and
if the color difference is lower than the threshold, adjusting the color of the specified object in the first image to be opposite to the color of the character in the color recognition area, so as to obtain the second image.
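A minimal Python sketch of the adjustment step in claim 1, assuming 8-bit RGB tuples, a Euclidean color-difference metric, per-channel inversion as the "opposite" color, and exact-match pointer pixels; all names and the threshold value are illustrative, not specified by the patent:

```python
# Sketch of claim 1: recolor the pointer ("specified object") to the
# inverse of the character color when the two colors are too similar.
# The image is a list of rows; each pixel is an (R, G, B) tuple.

def color_difference(c1, c2):
    """Euclidean distance in RGB space (one plausible difference metric)."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def invert(color):
    """The 'opposite' color: per-channel inversion in 8-bit RGB."""
    return tuple(255 - c for c in color)

def adjust_pointer_color(image, pointer_color, char_color, threshold=80):
    """Return a second image with pointer pixels recolored to the inverse
    of the character color, if the two colors are below the threshold."""
    if color_difference(pointer_color, char_color) >= threshold:
        return image  # difference large enough: keep the first image
    new_color = invert(char_color)
    return [[new_color if px == pointer_color else px for px in row]
            for row in image]
```

For example, a dark pointer over dark text ((30, 30, 30) vs. (25, 25, 25)) is recolored to the near-white inverse (230, 230, 230), which keeps the pointer from being mistaken for a character stroke during OCR.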
2. The method according to claim 1, wherein the method further comprises:
identifying a question keyword from input voice information; and
identifying a first position coordinate specified by the specified object in the first image;
and wherein the performing character recognition on the second image to obtain the question content to be searched comprises:
determining, according to the first position coordinate, a second position coordinate specified by the specified object in the second image;
determining a range of a second search area in the second image according to the question keyword and the second position coordinate, wherein the range of the second search area is part or all of the second image; and
performing character recognition on the second search area, so that a recognition result is the question content to be searched.
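A hedged sketch of the range determination in claim 2: the question keyword decides whether the search area is the whole image or a band around the pointed coordinate. The keyword strings, band size, and rectangle convention are illustrative assumptions, not the patent's actual vocabulary:

```python
# Sketch of claim 2's range determination. Returns a search rectangle
# (left, top, right, bottom) in pixel coordinates.

QUESTION_BAND_PX = 120  # assumed vertical extent of a single question

def search_area(image_height, image_width, keyword, point_xy):
    """Pick part or all of the image depending on the question keyword."""
    x, y = point_xy
    if keyword in ("whole page", "this page"):
        # "all of the second image"
        return (0, 0, image_width, image_height)
    # "part of the second image": a horizontal band around the pointer
    top = max(0, y - QUESTION_BAND_PX // 2)
    bottom = min(image_height, y + QUESTION_BAND_PX // 2)
    return (0, top, image_width, bottom)
```

OCR is then run only on the returned rectangle, which avoids recognizing (and searching) unrelated questions on the same page.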
3. The method according to claim 2, wherein the identifying the color of the character in the first image comprises:
determining, according to a preset region height, a range of a color recognition area centered on the first position coordinate, wherein the region height indicates the number of character lines contained in the color recognition area; and
recognizing the color of the character in the color recognition area.
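A minimal sketch of claim 3: crop a color recognition area centered on the pointed coordinate, sized by a region height expressed in character lines, then take the most frequent dark pixel as the character color. The per-line pixel height, the brightness cutoff, and the assumption that characters are darker than the page are all illustrative:

```python
# Sketch of claim 3: color recognition area + dominant character color.
from collections import Counter

LINE_HEIGHT_PX = 24  # assumed pixel height of one character line

def color_recognition_region(image, center_xy, region_lines=3):
    """Crop a band of `region_lines` character lines centered on the
    pointed coordinate (the first position coordinate)."""
    cx, cy = center_xy
    half = (region_lines * LINE_HEIGHT_PX) // 2
    top = max(0, cy - half)
    bottom = min(len(image), cy + half)
    return image[top:bottom]

def character_color(region, brightness_cutoff=128):
    """Most common pixel darker than the cutoff; characters are assumed
    darker than the page background. Returns None if no dark pixel."""
    dark = [px for row in region for px in row
            if sum(px) / 3 < brightness_cutoff]
    return Counter(dark).most_common(1)[0][0] if dark else None
```

Restricting color recognition to a few lines around the pointer keeps distant headings or illustrations from skewing the estimated character color.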
4. The method according to claim 3, wherein the method further comprises:
if the color difference is not lower than the threshold, determining a range of a first search area in the first image according to the question keyword and the first position coordinate, wherein the range of the first search area is part or all of the first image; and
performing character recognition on the first search area, so that a recognition result is the question content to be searched, and executing the step of searching learning content corresponding to the question content to be searched.
5. The method according to any one of claims 1 to 4, wherein before the identifying the color of the specified object in the first image, the method further comprises:
when a preset voice wake-up word is detected, controlling a camera module of the learning device to capture a mirror image in a light-reflecting device as the first image, wherein the light-reflecting device is arranged on the learning device, and a mirror surface of the light-reflecting device forms a preset angle with a lens surface of the camera module.
6. A learning device, characterized by comprising:
a first recognition unit, configured to recognize a color of a specified object in a first image;
a second recognition unit, configured to recognize a color of a character in the first image;
an adjustment unit, configured to adjust the color of the specified object in the first image to be opposite to the color of the character, so as to obtain a second image;
a third recognition unit, configured to perform character recognition on the second image to obtain question content to be searched; and
a search unit, configured to search learning content corresponding to the question content to be searched;
wherein the adjustment unit comprises:
a judging subunit, configured to judge whether a color difference between the color of the specified object and the color of the character in a color recognition area is lower than a preset threshold; and
an adjusting subunit, configured to adjust, when the judging subunit judges that the color difference is lower than the threshold, the color of the specified object in the first image to be opposite to the color of the character in the color recognition area, so as to obtain the second image.
7. The learning device according to claim 6, further comprising:
a fourth recognition unit, configured to recognize a question keyword from input voice information; and
a fifth recognition unit, configured to recognize a first position coordinate specified by the specified object in the first image;
wherein the third recognition unit comprises:
a position determining subunit, configured to determine, according to the first position coordinate, a second position coordinate specified by the specified object in the second image;
a range determining subunit, configured to determine a range of a second search area in the second image according to the question keyword and the second position coordinate, wherein the range of the second search area is part or all of the second image; and
a character recognition subunit, configured to perform character recognition on the second search area, so that a recognition result is the question content to be searched.
8. The learning device according to claim 7, wherein the second recognition unit comprises:
a region determining subunit, configured to determine, according to a preset region height, a range of a color recognition area centered on the first position coordinate, wherein the region height indicates the number of character lines contained in the color recognition area; and
a color recognition subunit, configured to recognize the color of the character in the color recognition area.
9. The learning device according to claim 8, wherein:
the range determining subunit is further configured to determine, when the judging subunit judges that the color difference is not lower than the threshold, a range of a first search area in the first image according to the question keyword and the first position coordinate, wherein the range of the first search area is part or all of the first image; and
the character recognition subunit is further configured to perform character recognition on the first search area, so that a recognition result is the question content to be searched.
10. The learning device according to any one of claims 6 to 9, further comprising:
a control unit, configured to control, when a preset voice wake-up word is detected, a camera module of the learning device to capture a mirror image in a light-reflecting device as the first image, wherein the light-reflecting device is arranged on the learning device, and a mirror surface of the light-reflecting device forms a preset angle with a lens surface of the camera module.
CN201910178750.0A 2019-03-11 2019-03-11 Question searching method and learning device based on image preprocessing Active CN111027556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910178750.0A CN111027556B (en) 2019-03-11 2019-03-11 Question searching method and learning device based on image preprocessing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910178750.0A CN111027556B (en) 2019-03-11 2019-03-11 Question searching method and learning device based on image preprocessing

Publications (2)

Publication Number Publication Date
CN111027556A CN111027556A (en) 2020-04-17
CN111027556B (en) 2023-12-22

Family

ID=70203435

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910178750.0A Active CN111027556B (en) 2019-03-11 2019-03-11 Question searching method and learning device based on image preprocessing

Country Status (1)

Country Link
CN (1) CN111027556B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628196A (en) * 2021-08-16 2021-11-09 广东艾檬电子科技有限公司 Image content extraction method, device, terminal and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999021122A1 (en) * 1997-10-22 1999-04-29 Ascent Technology, Inc. Voice-output reading system with gesture-based navigation
JP2002366899A (en) * 2001-06-06 2002-12-20 Toppan Printing Co Ltd Method and device for character information recognition
CN1924777A (en) * 2005-08-01 2007-03-07 索尼株式会社 Information processing apparatus and method, and program
CN101599124A (en) * 2008-06-03 2009-12-09 汉王科技股份有限公司 Method and apparatus for separating characters from a video image
CN102782680A (en) * 2010-02-26 2012-11-14 乐天株式会社 Information processing device, information processing method, and recording medium that has recorded information processing program
CN105096347A (en) * 2014-04-24 2015-11-25 富士通株式会社 Image processing device and method
CN106610761A (en) * 2015-10-21 2017-05-03 中兴通讯股份有限公司 Icon color adjusting method and device
CN107992483A (en) * 2016-10-26 2018-05-04 深圳超多维科技有限公司 Method, apparatus and electronic device for gesture-based pointing translation
CN108073922A (en) * 2017-12-21 2018-05-25 广东小天才科技有限公司 Information search method based on color restriction, and electronic device
CN109192204A (en) * 2018-08-31 2019-01-11 广东小天才科技有限公司 Voice control method based on a smart device camera, and smart device
CN109327657A (en) * 2018-07-16 2019-02-12 广东小天才科技有限公司 Camera-based photographing question-search method and tutoring device


Also Published As

Publication number Publication date
CN111027556A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
CN111753767A (en) Method and device for automatically correcting operation, electronic equipment and storage medium
EP1953675A1 (en) Image processing apparatus, image processing method, and storage medium
CN111652223A (en) Certificate identification method and device
CN111353501A (en) Book point-reading method and system based on deep learning
CN111563512B (en) Method and device for automatically smearing answers, electronic equipment and storage medium
CN111126394A (en) Character recognition method, reading aid, circuit and medium
CN111652141A (en) Question segmentation method, device, equipment and medium based on question number and text line
CN111027556B (en) Question searching method and learning device based on image preprocessing
CN112434640B (en) Method, device and storage medium for determining rotation angle of document image
CN111081103A (en) Dictation answer obtaining method, family education equipment and storage medium
Ovodov Optical braille recognition using object detection neural network
CN107147786B (en) Image acquisition control method and device for intelligent terminal
CN111079736B (en) Dictation content identification method and electronic equipment
CN110795918B (en) Method, device and equipment for determining reading position
CN111090343B (en) Method and device for identifying click-to-read content in click-to-read scene
CN111079726B (en) Image processing method and electronic equipment
CN112163513A (en) Information selection method, system, device, electronic equipment and storage medium
CN111753168A (en) Method and device for searching questions, electronic equipment and storage medium
CN111711758B (en) Multi-pointing test question shooting method and device, electronic equipment and storage medium
CN111432131B (en) Photographing frame selection method and device, electronic equipment and storage medium
CN111582281B (en) Picture display optimization method and device, electronic equipment and storage medium
CN115984859A (en) Image character recognition method and device and storage medium
CN111027353A (en) Search content extraction method and electronic equipment
CN111079769B (en) Identification method of writing content and electronic equipment
CN111563511B (en) Method and device for intelligent frame questions, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant