CN113673405B - Problem correction method and system based on problem recognition and intelligent home education learning machine

Info

Publication number: CN113673405B (application CN202110933419.2A)
Authority: CN (China)
Prior art keywords: frame, image, perspective, determining, correction
Legal status: Active (granted)
Inventor: 曹宝军
Assignee (original and current): Shenzhen Kuaiyidian Education Science & Technology Co., Ltd.
Other languages: Chinese (zh)
Other versions: CN113673405A
Application filed by Shenzhen Kuaiyidian Education Science & Technology Co., Ltd.; granted and published as CN113673405B (earlier publication CN113673405A)

Classifications

    • G (Physics) > G06 (Computing; calculating or counting) > G06T (Image data processing or generation, in general)
    • G06T5/80
    • G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume (under G06T7/00 Image analysis; G06T7/60 Analysis of geometric attributes)
    • G06T2207/30176: Document (under G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/30 Subject of image, context of image processing)

Abstract

The application relates to a problem correction method and system based on problem recognition, and to an intelligent home education learning machine. The method comprises: determining a base image; determining a work frame based on the base image; determining a perspective frame based on the work frame; and performing image correction on the base image based on the perspective frame to determine a final image. Using the perspective frame as a template for correcting the base image converts the shape of the base image to match the shape of the perspective frame. The resulting final image is closer in shape to the actual work carrier and faithfully reflects the problems photographed on it, so that picture text recognition in the subsequent problem correction step can be performed more quickly and accurately, improving the user experience.

Description

Problem correction method and system based on problem recognition and intelligent home education learning machine
Technical Field
The application relates to the field of homework correction, and in particular to a problem correction method and system based on problem recognition, and to an intelligent home education learning machine.
Background
With the continuous advance of computer technology and the informatization of education, computer technology has gradually been applied to everyday teaching activities. Existing intelligent terminal products offer many software applications for correcting problems in homework or test papers. In use, a user photographs the problems that have already been answered and uploads the photograph to the software; the software performs picture text recognition on the photograph to identify the problems and answers, then searches a question bank online for the corresponding problems and correct answers and compares them, thereby judging whether the answers in the photograph are correct and completing the correction of the problems.
However, when a user photographs a problem, a handheld photographing device is usually aimed at a work carrier such as an exercise book or a test paper, and the device is inclined relative to the carrier. Because of perspective, the near part of the content images larger while the far part images smaller, which makes the image content harder to identify in the subsequent picture text recognition, reduces the accuracy of the recognition result, and degrades the user experience.
Disclosure of Invention
The first purpose of the application is to provide an image correction method based on problem correction, which improves the accuracy of picture text recognition.
The first object of the present invention is achieved by the following technical solutions:
An image correction method based on problem correction comprises the following steps:
determining a base image, the content of which includes a work carrier on which problems are recorded;
determining a work frame based on the base image, the work frame reflecting the geometry of the work carrier within the base image;
determining a perspective frame based on the work frame, the perspective frame reflecting the geometry of the work frame after perspective conversion and matching the geometry of the work carrier; and
performing image correction on the base image based on the perspective frame to determine a final image.
By adopting the above technical solution, the base image is an image acquired from the work carrier, and its content includes the work carrier and the problems on it; extracting the work frame from the base image captures the shape of the work carrier as imaged. Because the user is likely to hold the camera at an angle when photographing the work carrier, perspective makes the near side of the carrier image larger and the far side smaller, so the shape of the work frame usually differs from the actual shape of the work carrier. Using the geometric features of the work frame's outline, a corresponding perspective frame can be generated; it reflects the shape of the work frame after correction by geometric perspective, and its shape matches the actual shape of the work carrier. Correcting the base image with the perspective frame as a template therefore removes the tilt from the imaged work carrier, problems and answers, yielding a final image whose shape is closer to the actual work carrier and that faithfully reflects the photographed problems, so that picture text recognition in the subsequent problem correction step can be performed more quickly and accurately, improving the user experience.
Optionally, determining the perspective frame based on the work frame specifically includes:
determining a frame upper base, a frame lower base and a frame height based on the work frame, where the work frame is a quadrilateral, the frame upper base reflects one side of the work frame, the frame lower base reflects the opposite side, and the frame height reflects the distance between them;
determining a perspective wide side and a perspective long side based on the frame upper base, the frame lower base and the frame height, where the geometry of the work carrier is rectangular, the perspective wide side reflects the width of the perspective frame, and the perspective long side reflects its length; and
determining the perspective frame based on the perspective wide side and the perspective long side.
By adopting the above technical solution, the work carrier is rectangular, and a rectangle and a general quadrilateral can be converted into each other by geometric perspective transformation. Since the work frame is a quadrilateral, the perspective wide side and the perspective long side can be constructed by geometric perspective from two opposite sides of that quadrilateral and the distance between them, and combining the two sides yields the perspective frame.
Optionally, the frame upper base, the frame lower base and the frame height together form a trapezoid reflecting the geometry of the work frame, and the frame upper base is longer than the frame lower base.
Determining the perspective wide side and the perspective long side based on the frame upper base, the frame lower base and the frame height specifically includes:
determining the perspective wide side based on the frame upper base; and
determining the perspective long side based on the frame upper base, the frame lower base and the frame height.
By adopting the above technical solution, when the work carrier is photographed from obliquely above one of its sides, the carrier images as a trapezoid in the base image, i.e. the work frame as a whole is trapezoidal. The longer side of the work frame corresponds to the side of the carrier closest to the shooting area, so the frame upper base best matches the actual dimensions of the work carrier; the perspective wide side can therefore be taken directly from the frame upper base, and the perspective long side can be determined by geometric perspective from the frame upper base, the frame lower base and the frame height. Geometric perspective converts the trapezoid directly into the corresponding rectangle, which is accurate and convenient; taking the frame upper base as the perspective wide side improves the match between the perspective frame and the work carrier, so the perspective frame better embodies the carrier's actual shape. When photographing, the placement of the work carrier and the shooting area can be fixed, so a trapezoidal work frame can be stably extracted from the base image, improving computational efficiency.
Optionally, determining the work frame based on the base image specifically includes:
determining an initial frame based on the base image;
frame detection: judging whether the number of vertices of the initial frame is greater than an approximation value; if so, executing the frame approximation step, otherwise executing the frame determination step;
frame approximation: determining a fuzzy point pair of the initial frame, determining an approximation point based on the fuzzy point pair, replacing the fuzzy point pair with the approximation point, and returning to the frame detection step, where the fuzzy point pair consists of the two vertices of the initial frame that are closest to each other; and
frame determination: determining the initial frame to be the work frame.
By adopting the above technical solution, the approximation value is the maximum number of vertices the initial frame may have; when the number of vertices exceeds it, the initial frame is a polygon with too many sides and some vertices must be removed. The intended shape of the initial frame is fixed in advance: when extraction is normal, the vertices are regularly distributed, and in particular, when the preset shape is a quadrilateral, each edge of the initial frame lies between two adjacent vertices. When extraction is abnormal, the initial frame has more vertices, and the redundant vertices lie close to their neighbors. The two closest vertices of the initial frame can therefore be taken as the fuzzy point pair, and analyzing this pair yields an approximation point that replaces it and brings the frame closer to the preset shape. As long as an abnormally extracted initial frame is within a reasonable processing range, repeatedly replacing fuzzy point pairs with approximation points reduces the number of vertices and drives the frame towards the preset shape, so the geometric perspective transformation in the subsequent steps can still be completed accurately, improving processing efficiency.
Optionally, the frame approximation step specifically includes:
determining the fuzzy point pair based on the vertices of the initial frame;
determining a first reference edge and a second reference edge based on the position of the fuzzy point pair, where the first and second reference edges are two edges of the initial frame that can form an included angle and are adjacent to the fuzzy point pair;
determining the approximation point based on the intersection between the first reference edge and the second reference edge; and
removing the fuzzy point pair from the initial frame and taking the approximation point as a vertex of the initial frame.
By adopting the above technical solution, the first and second reference edges are the edges closest to the two vertices of the fuzzy point pair respectively; taking their intersection as the approximation point places it closer to the actual boundary of the work carrier, so the perspective frame obtained later matches the carrier's actual shape more closely, improving the accuracy of subsequent processing.
Optionally, performing image correction on the base image based on the perspective frame to determine the final image specifically includes:
performing image correction on the base image based on the perspective frame to determine an initial corrected image;
sequentially performing shadow removal and contour enhancement on the initial corrected image to determine an optimized corrected image; and
fusing the initial corrected image and the optimized corrected image to determine the final image.
By adopting the above technical solution, removing most shadows in the image whitens its background as a whole, and enhancing the contours on that basis brings out contour detail, so the optimized corrected image retains the basic outlines of the image content; fusing it with the initial corrected image yields the final image. The final image combines the highlighted contours of the optimized corrected image with the content preserved in the initial corrected image, so detail is emphasized, content loss is reduced, and the accuracy of subsequent processing is improved.
Optionally, determining the work frame based on the base image further includes:
judging whether the base image satisfies the correction condition; if so, determining the perspective frame based on the work frame; if not, executing a failure detection step;
where, when the work frame cannot be extracted from the base image, extraction may be attempted repeatedly, and the base image fails the correction condition once the number of repeated extraction attempts reaches an upper limit; and
the failure detection step: judging whether the base image has undergone histogram processing; if not, performing histogram processing on the base image and returning to the step of determining the base image.
By adopting the above technical solution, the failure detection step is triggered when the base image does not satisfy the correction condition, meaning that the work frame temporarily cannot be extracted normally from the current base image. To check whether the base image still has the potential to yield a work frame, every base image gets one opportunity to be enhanced by histogram processing. Each base image is thus given a fair chance to retry work-frame extraction, and a single failed extraction does not immediately cause the image to be discarded, which effectively reduces how often the user must re-photograph the work carrier and improves the user experience.
The second purpose of the application is to provide a work correction method based on problem recognition, which improves the accuracy of picture text recognition.
The second object of the present invention is achieved by the following technical solutions:
A work correction method based on problem recognition comprises the image correction method based on problem correction described above, and further comprises:
performing format conversion based on the final image to determine an output image; and
sending the output image to a correction server for searching and judging, so as to obtain a correction result.
The third purpose of the application is to provide a work correction system based on problem recognition, which improves the accuracy of picture text recognition.
The third object of the present invention is achieved by the following technical solutions: a work correction system based on problem recognition comprises an image correction module based on problem correction, the image correction module comprising:
an object acquisition sub-module for determining a base image, the content of which includes a work carrier on which problems are recorded;
a frame extraction sub-module for determining a work frame based on the base image, the work frame reflecting the geometry of the work carrier within the base image;
a template construction sub-module for determining a perspective frame based on the work frame, the perspective frame reflecting the geometry of the work frame after perspective conversion and matching the geometry of the work carrier; and
a correction processing sub-module for performing image correction on the base image based on the perspective frame to determine a final image.
By adopting the above technical solution, the same effects as those of the image correction method are obtained: the work frame extracted from the base image captures the imaged shape of the work carrier, the perspective frame generated from the work frame's outline reflects its shape after geometric perspective correction and matches the carrier's actual shape, and correcting the base image with the perspective frame as a template removes the tilt from the imaged work carrier, problems and answers. The resulting final image is closer in shape to the actual work carrier and faithfully reflects the photographed problems, so picture text recognition in the subsequent problem correction step can be performed more quickly and accurately, improving the user experience.
Optionally, the object acquisition sub-module includes:
an image acquisition unit capable of capturing images of a shooting area to acquire the base image; and
a reflective mirror that reflects light, so that light from the shooting area enters the image acquisition unit after being reflected by the mirror.
By adopting the above technical solution, when the user needs to photograph the work carrier, the carrier can be placed in the shooting area and photographed. Because of the light reflection of the mirror, the work carrier images as a trapezoid in the base image, which makes the corresponding work frame easier to transform by geometric perspective, so picture text recognition in the subsequent problem correction step can be performed more quickly and accurately, improving the user experience.
The fourth purpose of the application is to provide an intelligent home education learning machine, which improves the accuracy of picture text recognition.
The fourth object of the present invention is achieved by the following technical solutions:
an intelligent home education learning machine comprising a memory and a processor, the memory storing a computer program that can be loaded by the processor to execute any of the image correction methods or the work correction method described above.
The fifth purpose of the application is to provide a computer storage medium capable of storing a corresponding program, which improves the accuracy of picture text recognition.
The fifth object of the present invention is achieved by the following technical solutions:
a computer-readable storage medium storing a computer program that can be loaded by a processor to execute any of the image correction methods or the work correction method described above.
Drawings
Fig. 1 is a flowchart of an image correction method based on problem correction according to the first embodiment of the present application.
Fig. 2 is a schematic diagram of a shooting area in the shooting state.
Fig. 3 is a schematic sub-flowchart of the image correction method according to the first embodiment of the present application.
Fig. 4 is a schematic flowchart of step S24 in the image correction method according to the first embodiment of the present application.
Fig. 5 is a schematic diagram of generating an approximation point and substituting it for a fuzzy point pair.
Fig. 6 is a schematic flowchart of steps S3 and S4 in the image correction method according to the first embodiment of the present application.
Fig. 7 is a schematic diagram of the process of generating the perspective frame from the work frame.
Fig. 8 is a schematic view of the work carrier in the shooting state.
Fig. 9 is a flowchart of a problem correction method based on problem recognition according to the second embodiment of the present application.
Fig. 10 is a schematic diagram of the sub-modules of an image correction module based on problem correction according to the third embodiment of the present application.
Fig. 11 is an exploded schematic view of the mirror and the image acquisition unit of the third embodiment of the present application.
Fig. 12 is a schematic block diagram of a work correction system based on problem recognition according to the fourth embodiment of the present application.
Fig. 13 is a schematic block diagram of an intelligent home education learning machine according to the fifth embodiment of the present application.
Fig. 14 is a schematic view of the structure of the intelligent home education learning machine according to the sixth embodiment of the present application with the mirror mounted.
Fig. 15 is an exploded schematic view of the intelligent home education learning machine and the mirror according to the sixth embodiment of the present application.
In the figures: 1, object acquisition sub-module; 11, image acquisition unit; 12, reflective mirror; 121, lens; 122, dust-proof light-transmitting plate; 123, optical channel; 2, frame extraction sub-module; 3, template construction sub-module; 4, correction processing sub-module; 5, image conversion module; 6, work correction module.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The reference numerals of the steps in this embodiment are only for convenience of description, and do not represent limitation of the execution sequence of the steps, and the execution sequence of the steps may be adjusted or simultaneously performed according to the needs in practical application, and these adjustments or substitutions all belong to the protection scope of the present invention.
Embodiments of the present application are described in further detail below in conjunction with figures 1-15 of the specification.
Embodiment one:
This embodiment provides an image correction method based on problem correction; the main flow of the method is described below.
Referring to fig. 1: S1, determining a base image.
The content of the base image includes a work carrier, which may be a carrier on which problems are recorded, such as an exercise book or a test paper; the work carrier carries the problems and the answers the user has written on it. In this embodiment, the overall shape of the work carrier is rectangular.
Step S1 includes:
referring to fig. 2 and 3, an initial image is acquired S11.
Wherein the initial image refers to an image obtained by photographing the work carrier based on the image obtaining unit 11.
In this embodiment, the image acquisition unit 11 refers to a front camera disposed on an intelligent mobile terminal, which may be a tablet computer or a tablet home teaching machine, and the image acquisition unit 11 is provided with a reflective mirror 12 in a matching manner. When the user finishes writing the answer to the problem on the operation carrier, the user can shoot the operation carrier, and during shooting, the reflector 12 can be installed at the shooting end of the front camera, and the operation carrier is placed on the shooting area of the intelligent mobile terminal. Under the light reflection effect of the reflector 12, the light passing through the work carrier on the shooting area can enter the shooting end of the front camera, so that the work carrier is shot to acquire an initial image.
In this embodiment, the reflective mirror 12 and the front camera are both located at one side of the shooting area, because the front camera is not just shooting the operation carrier, based on the principle of geometric perspective, the imaging of the whole operation carrier in the initial image has an inclination problem, the imaging length of one side of the operation carrier, which is close to the reflective mirror 12, in the initial image is longer, the imaging length of one side of the operation carrier, which is far away from the reflective mirror 12, in the initial image is shorter, the imaging shape of the whole operation carrier in the initial image is trapezoidal, the influence of the inclination problem is provided, and correspondingly, the imaging of the problems and answers recorded on the operation carrier in the initial image is also influenced by the inclination.
S12, performing image processing based on the initial image to determine the base image.
The image processing consists of format conversion, grayscale conversion and file scaling applied to the initial image in sequence. In this embodiment, the initial image is acquired as an Android bitmap, while the subsequent image conversion is performed with OpenCV, so format conversion turns the bitmap into a Mat image, and grayscale conversion turns it into a grayscale image. File scaling reduces the file size into a preset range to speed up subsequent processing; in this embodiment the image file is scaled to within 100 KB. Applying format conversion, grayscale conversion and file scaling to the initial image in sequence yields the base image.
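The following is a minimal Python/OpenCV sketch of step S12 under stated assumptions: the Android-specific bitmap-to-Mat conversion is taken as already done, JPEG encoding is used to measure the file size, and repeatedly halving the resolution stands in for the file scaling; only the 100 KB target comes from the text above.

    import cv2

    def make_base_image(initial_bgr, max_bytes=100 * 1024):
        """Grayscale the initial image and shrink it until it encodes within max_bytes."""
        gray = cv2.cvtColor(initial_bgr, cv2.COLOR_BGR2GRAY)   # grayscale conversion
        while True:
            ok, buf = cv2.imencode(".jpg", gray)               # measure encoded file size
            if ok and buf.nbytes <= max_bytes:
                return gray
            h, w = gray.shape[:2]
            if min(h, w) < 64:                                 # stop shrinking very small images
                return gray
            gray = cv2.resize(gray, (w // 2, h // 2), interpolation=cv2.INTER_AREA)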
S2, determining a work frame based on the base image and judging whether the base image satisfies the correction condition; if so, executing S3, otherwise executing S5.
The work frame is the closed figure formed by the outline of the work carrier in the base image, and it reflects the geometry of the carrier as imaged. Because the carrier images as a trapezoid, the work frame is a quadrilateral when extraction is normal, and ideally a trapezoid; in both cases the base image satisfies the correction condition.
When extraction is mildly abnormal, the work frame may be a polygon with more than four sides. Some of these abnormally extracted frames can be converted back to a normally extracted shape by further processing, so the base image can still satisfy the correction condition.
When extraction is severely abnormal, the work frame may be a polygon with more than four sides, may have too small an area, or may not be extracted as a closed shape at all. The work frame then has to be re-extracted from the base image, and if it still cannot be extracted normally after several attempts, the base image does not satisfy the correction condition.
Referring to fig. 3, step S2 includes:
S21, judging whether the error parameter has reached the processing threshold; if not, executing S22; if so, executing S5.
The error parameter is the number of times the work frame has been re-extracted from the base image, and its value increases with each re-extraction; the processing threshold is a preset limit on that number. In this embodiment the error parameter starts at 0 and is incremented by 1 each time the work frame is re-extracted from the current base image; the processing threshold is set to 4, i.e. a base image may have the work frame re-extracted at most 4 times.
When the number of re-extractions equals the processing threshold, the re-extraction limit has been reached, extraction of the work frame from this base image has failed, the base image does not satisfy the correction condition, and S5 is executed. When the number of re-extractions is below the threshold, the base image has either just begun extraction or has few repeated attempts, and S22 is executed.
S22, determining an initial frame based on the base image.
Step S22 specifically includes:
S221, judging whether a candidate frame can be extracted from the base image; if so, executing S222, otherwise executing S28.
A candidate frame is a frame that can be extracted from the base image. In this embodiment the conditions for forming a candidate frame are: the candidate frame is a polygon, its number of vertices is not less than 4, and its number of vertices is not greater than a point threshold; that is, a candidate frame has at least 4 and at most the point-threshold number of vertices. If a frame extracted from the base image has fewer than 4 vertices, it does not form a candidate frame; if it has more vertices than the point threshold, its shape deviates too far from the imaged shape of the work carrier, the subsequent frame approximation would involve too much processing and seriously affect efficiency, and it is not used as a candidate frame.
When at least one candidate frame can be extracted from the base image, the base image has the basic conditions for extracting the outline of the work carrier, and S222 is executed; when no candidate frame can be extracted, S28 is executed.
Preferably, to make the outline of the frame easier to extract, the base image is preprocessed before judging whether a candidate frame can be extracted.
The preprocessing specifically includes:
first, applying Gaussian filtering to the base image for noise reduction and smoothing;
then, performing an edge detection operation on the base image to convert it into a black-and-white image containing only outlines;
finally, binarizing the base image using the THRESH_OTSU type of thresholding to further deepen the outlines. The preprocessed base image reflects its outlines more accurately, candidate frames are obtained more easily, the probability that the frame must be re-extracted because a candidate frame was missed is effectively reduced, and processing efficiency is improved.
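A sketch of this preprocessing in Python/OpenCV. Canny is assumed as the edge detector, and the kernel size and Canny thresholds are illustrative values; the text above only names Gaussian filtering, an edge detection operation, and THRESH_OTSU binarization.

    import cv2

    def preprocess(base_gray):
        smoothed = cv2.GaussianBlur(base_gray, (5, 5), 0)   # noise reduction and smoothing
        edges = cv2.Canny(smoothed, 50, 150)                # black-and-white outline image
        # Otsu thresholding deepens the outlines before contour extraction.
        _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        return binary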
S222, determining a candidate frame set based on the base image.
The candidate frame set contains all candidate frames extracted from the base image.
S223, determining the candidate frame with the largest area in the candidate frame set to be the initial frame.
The area of each candidate frame is calculated, and the candidate frame with the largest area is selected as the initial frame. Since the user photographs the initial image mainly around the work carrier, the largest contour that can be formed in the base image should be the outline of the work carrier.
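A sketch of steps S221-S223 in Python/OpenCV: candidate frames are taken to be polygon approximations of the detected contours with between 4 and point-threshold vertices, and the largest-area candidate becomes the initial frame. The point-threshold value and the approxPolyDP epsilon factor are assumptions, as the text does not fix them.

    import cv2

    def find_initial_frame(binary, point_threshold=10):
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        candidates = []
        for contour in contours:
            poly = cv2.approxPolyDP(contour, 0.01 * cv2.arcLength(contour, True), True)
            if 4 <= len(poly) <= point_threshold:            # candidate-frame vertex condition
                candidates.append(poly.reshape(-1, 2))
        if not candidates:
            return None                                      # no candidate frame: go to S28
        return max(candidates, key=cv2.contourArea)          # largest-area candidate = initial frame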
S23, frame detection: judging whether the number of vertices of the initial frame is greater than the approximation value; if so, executing S24, otherwise executing S25.
The approximation value is a threshold on the number of vertices of the initial frame; it determines the maximum number of vertices, and hence the maximum number of edges, the initial frame may have. In this embodiment, since the work carrier normally images as a trapezoid, the approximation value is set to 4.
When the number of vertices of the initial frame is equal to 4, the initial frame is a quadrilateral and S25 can be executed; when the number of vertices is greater than 4, the initial frame is a polygon with more than four sides, for example a pentagon, its shape must be approximated from that polygon towards a quadrilateral, and S24 is executed.
S24, frame approximation: determining a fuzzy point pair based on the initial frame, determining an approximation point based on the fuzzy point pair, replacing the fuzzy point pair with the approximation point, and returning to S23.
Here the initial frame has at least 5 vertices. Every two vertices form a vertex pair; comparing the distance between the two vertices of each pair gives the pair with the smallest distance, which is the fuzzy point pair. The distance between the two vertices of the fuzzy point pair is smaller than that between any other two vertices of the initial frame.
Since the work carrier actually has only 4 vertices, when the initial frame has 5 or more, the usual cause is that one actual vertex of the carrier was not recognized while other points near it were wrongly recognized as vertices. Meanwhile, the work carrier is rectangular and its 4 imaged vertices are far apart, so the two closest vertices of the initial frame can be taken to be the wrongly recognized ones, and from these two vertices, i.e. from the fuzzy point pair, an approximation point closer to the actual imaged shape of the carrier can be estimated.
Replacing the fuzzy point pair with the approximation point reduces the number of vertices, so the initial frame gradually approaches a quadrilateral and, at the same time, the actual imaged shape of the work carrier.
Referring to fig. 4 and 5, step S24 specifically includes:
S241, determining the fuzzy point pair based on the vertex pairs of the initial frame.
Each vertex pair consists of two vertices; the distances between the two vertices of every pair are calculated and compared, and the pair with the smallest distance is determined to be the fuzzy point pair.
S242, determining a first reference edge and a second reference edge based on the position of the fuzzy point pair.
The first and second reference edges are edges of the initial frame between which an included angle can be formed. The first reference edge is the edge closest to one vertex of the fuzzy point pair, and the second reference edge is the edge closest to the other vertex.
S243, determining the approximation point based on the intersection between the first reference edge and the second reference edge.
The approximation point is the intersection of the first and second reference edges, or of their extensions. Since each vertex of the work carrier should be the intersection of two adjacent sides in the carrier's imaging, estimating the approximation point from the first and second reference edges brings the initial frame, once the point is added, closer to the imaged shape of the carrier.
S244, removing the fuzzy point pair from the initial frame, taking the approximation point as a vertex of the initial frame, and returning to S23.
The approximation point is added to the initial frame as a new vertex, so that the initial frame approaches a quadrilateral.
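A sketch of one frame-approximation pass (S241-S244) in Python/NumPy, under stated assumptions: the vertices are assumed to be in contour order, and the fuzzy-point-pair search is restricted to neighboring vertices for simplicity (the text compares all vertex pairs, whose minimum will normally be an adjacent pair). The helper names are illustrative.

    import numpy as np

    def line_intersection(p1, p2, p3, p4):
        """Intersection of line p1-p2 with line p3-p4 (or of their extensions)."""
        d1, d2 = p2 - p1, p4 - p3
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < 1e-9:                         # parallel edges: fall back to the pair midpoint
            return (p2 + p3) / 2.0
        t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
        return p1 + t * d1

    def approximate_once(frame):
        """One S241-S244 pass; frame is an (N, 2) float array of vertices in contour order, N >= 5."""
        n = len(frame)
        # Fuzzy point pair: the two vertices with the smallest mutual distance.
        pairs = [(k, (k + 1) % n) for k in range(n)]
        i, j = min(pairs, key=lambda ij: np.linalg.norm(frame[ij[0]] - frame[ij[1]]))
        # First and second reference edges: the other edge incident to each vertex of the pair.
        a, b = frame[(i - 1) % n], frame[i]
        c, d = frame[j], frame[(j + 1) % n]
        approx = line_intersection(a, b, c, d)        # approximation point (S243)
        # Replace the fuzzy point pair with the approximation point (S244).
        kept = [approx if k == i else frame[k] for k in range(n) if k != j]
        return np.array(kept)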
Referring to fig. 3: S25, secondary detection, judging whether the number of vertices of the initial frame is equal to the approximation value; if so, executing S26, otherwise executing S28.
After the fuzzy point pair is removed and the approximation point added, the shape and number of edges of the initial frame change; if the changed initial frame does not have exactly 4 vertices, it may not form a quadrilateral and must be selected again.
S26, area detection, judging whether the area of the initial frame is greater than or equal to an area threshold; if so, executing S27, otherwise executing S28.
Referring to fig. 2 and 3, the area threshold is the minimum allowed area of the initial frame. In this embodiment the wide-angle setting of the front camera is fixed, as are the positions between the mirror 12 and the front camera and between the shooting area and the mirror 12, so when the work carrier is photographed in the shooting area, the ratio between its actual area and its imaged area is essentially constant, and the actual area of the carrier can be converted from its imaged area by that ratio. Because the carrier occupies a large part of the initial image when the user photographs it, the area of its imaged outline has a minimum value; if the area of the initial frame is too small, the imaged outline of the carrier is too small and will be hard to recognize in the subsequent picture text recognition, affecting accuracy.
When the area of the initial frame is greater than or equal to the area threshold, the initial frame passes the area screening and the next step can be executed; when it is smaller, the initial frame fails the screening, may not properly frame the outline of the work carrier, and must be selected again.
S27, frame determination: the initial frame is determined to be the work frame.
The initial frame at this step has passed both the frame-detection screening and the area-detection screening, so it can be determined to be the work frame.
S28, accumulating the error parameter and returning to S21.
When the initial frame fails the vertex-count screening or the area screening, it must be re-extracted from the current base image, so the error parameter is incremented by 1 to record that one extraction attempt on the current base image has failed; after repeated failures, extraction of the work frame from this base image is abandoned.
The error parameter thus reflects how many times the current base image has had the work frame re-extracted: 0 means no re-extraction yet, 1 means one re-extraction, 2 means two, and so on. When the current base image loses the opportunity for re-extraction, the error parameter is cleared.
S3, determining the perspective frame based on the work frame.
The perspective frame is the frame obtained after geometric perspective conversion of the work frame, and it is rectangular.
The work frame obtained above is trapezoidal as a whole, and its outline matches the outline of the work carrier as imaged in the base image; the perspective frame reflects the shape obtained after geometric perspective conversion of the work frame, so its outline matches the actual outline of the work carrier.
If the shooting end of the image acquisition unit 11 faced the work carrier squarely when capturing the image, the imaged outline of the carrier would match the outline of the perspective frame.
Step S3 includes:
Referring to fig. 6 and 7: S31, determining the frame upper base, the frame lower base and the frame height based on the work frame.
The work frame is trapezoidal or nearly trapezoidal as a whole, with a lower base, an upper base and two legs, the upper base being longer than the lower base.
In this embodiment, the frame upper base is the upper base of the work frame, with endpoints A1 and A2; the frame lower base is the lower base of the work frame, with endpoints B1 and B2; the frame height is the distance between the frame upper base and the frame lower base, i.e. the height of the work frame.
S32, determining the perspective wide side and the perspective long side based on the frame upper base, the frame lower base and the frame height.
The perspective frame is rectangular as a whole; the perspective wide side and the perspective long side are two adjacent sides of the perspective frame and meet at a right angle, the wide side corresponding to the width of the perspective frame and the long side to its length.
According to the principle of geometric perspective, the trapezoidal work frame can be converted into the rectangular perspective frame, and during the conversion the length and width of the perspective frame are calculated from the lower base, upper base and height of the work frame.
S321, determining the perspective wide side based on the frame upper base.
Referring to fig. 7 and 8, according to the principle of geometric perspective, the side of the actual work carrier close to the mirror 12 images longer and the side far from the mirror 12 images shorter. The frame upper base, being the longer base of the work frame, therefore accurately reflects the actual length of the carrier side nearest the mirror 12, so its length is taken as the length of the perspective wide side, and its endpoints become the endpoints of the perspective wide side, namely endpoint A1' and endpoint A2'.
Referring to fig. 6 and 7: S322, determining the perspective long side based on the frame upper base, the frame lower base and the frame height.
According to the principle of geometric perspective, perspective conversion through the frame upper base, the frame lower base and the frame height gives the length of the perspective long side.
The perspective conversion in this step proceeds as follows: the length of the frame upper base, the length of the frame lower base, the frame height and the perspective proportion coefficient are substituted into formula (1) to calculate the length of the perspective long side. The perspective proportion coefficient is a parameter value preset by the system; in this embodiment it is 1.
(1)
where a is the length of the frame upper base, b is the length of the frame lower base, c is the frame height, y is the length of the perspective long side, and k is the perspective proportion coefficient.
Since the perspective wide side is already determined, the endpoint B1' of the perspective long side can be obtained from endpoint A1' (or A2') of the perspective wide side using the calculated length of the perspective long side.
S33, determining the perspective frame based on the perspective wide side and the perspective long side.
With the perspective wide side and the perspective long side as two adjacent sides of the perspective frame, three of its vertices are fixed; from the positions of vertices A1', A2' and B1', the last vertex B2' can be calculated to obtain the complete perspective frame.
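A sketch of steps S321-S33 in Python/NumPy, under the assumption that the long-side length y has already been computed from formula (1); the formula itself is not reproduced in the text above, so y is passed in as a parameter, and the direction of the unit normal depends on the image coordinate convention.

    import numpy as np

    def build_perspective_frame(a1, a2, long_side_y):
        """a1, a2: endpoints of the frame upper base; returns the rectangle A1', A2', B2', B1'."""
        a1, a2 = np.asarray(a1, dtype=float), np.asarray(a2, dtype=float)
        wide = a2 - a1                                       # perspective wide side (S321)
        # Unit normal to the wide side; flip its sign if the rectangle should extend the other way.
        normal = np.array([-wide[1], wide[0]]) / np.linalg.norm(wide)
        b1 = a1 + long_side_y * normal                       # B1' at distance y from A1' (S322)
        b2 = a2 + long_side_y * normal                       # B2' completes the rectangle (S33)
        return np.array([a1, a2, b2, b1])                    # vertices in contour order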
S4, performing image correction on the base image based on the perspective frame to determine the final image.
The perspective frame reflects the outline of the work frame after geometric perspective transformation, so the work frame is perspective-transformed based on the perspective frame, its outline is converted from a trapezoid into a quadrilateral with equal opposite sides, and the base image is corrected accordingly to obtain the final image.
Referring to fig. 6 and 7, step S4 includes:
S41, performing image correction on the base image based on the perspective frame to determine an initial corrected image.
The four vertices of the perspective frame, A1', A2', B1' and B2', determine the four vertices of the rectangle obtained by perspective-converting the work frame. Based on these four vertices, the imaged content corresponding to the work frame in the base image, including the imaging of the problems and answers on the work carrier, can be perspective-converted, so the influence of the tilt angle on the problems and answers is cancelled out and the initial corrected image is obtained.
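A sketch of step S41 using OpenCV's perspective transform; the corner ordering and the output size (taken here from the extent of the destination rectangle) are assumptions of this sketch rather than details given in the text.

    import cv2
    import numpy as np

    def correct_image(base_image, work_frame, perspective_frame):
        """work_frame and perspective_frame are 4x2 arrays of corresponding corners
        (A1, A2, B2, B1 and A1', A2', B2', B1')."""
        src = np.float32(work_frame)
        dst = np.float32(perspective_frame)
        matrix = cv2.getPerspectiveTransform(src, dst)        # trapezoid -> rectangle mapping
        width = int(np.ceil(dst[:, 0].max()))
        height = int(np.ceil(dst[:, 1].max()))
        # Warping the whole base image also straightens the problems and answers inside the frame.
        return cv2.warpPerspective(base_image, matrix, (width, height))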
S42, sequentially performing shadow removal and contour enhancement on the initial corrected image to determine the optimized corrected image.
Shadow removal means removing most of the shadows in the initial corrected image through high-threshold filtering and a division operation, so that its background is whitened as a whole. Contour enhancement means applying Gaussian filtering with preset parameters and a subtraction operation, so that contour details in the initial corrected image become richer and the deepened contours are more continuous.
S43, fusing the initial corrected image and the optimized corrected image to determine the final image.
The initial corrected image retains the relatively complete problem and answer content of the work carrier, while the optimized corrected image has deepened contours; superimposing the two combines their advantages and enhances the image as a whole, yielding the final image. The problem and answer content in the final image is clearer and more continuous, so the corresponding content can be extracted more quickly and accurately by the subsequent picture text recognition algorithm.
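A sketch of steps S42-S43 under stated assumptions: background estimation by dilation and median blur followed by a division stands in for the "high-threshold filtering and division" of the text, an unsharp-mask style Gaussian subtraction stands in for the contour enhancement, and the kernel sizes, sigma and fusion weights are illustrative.

    import cv2
    import numpy as np

    def optimize_and_fuse(initial_corrected):
        """initial_corrected: 8-bit grayscale initial corrected image."""
        # Shadow removal: estimate the background and divide it out to whiten it overall.
        background = cv2.medianBlur(cv2.dilate(initial_corrected, np.ones((7, 7), np.uint8)), 21)
        shadow_free = cv2.divide(initial_corrected, background, scale=255)
        # Contour enhancement: subtract a Gaussian-blurred copy to deepen the outlines.
        blurred = cv2.GaussianBlur(shadow_free, (0, 0), 3)
        optimized = cv2.addWeighted(shadow_free, 1.5, blurred, -0.5, 0)
        # Fusion: superimpose the optimized contours on the initial corrected content.
        return cv2.addWeighted(initial_corrected, 0.5, optimized, 0.5, 0)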
S5, failure detection: judging whether the current base image has undergone histogram processing; if so, executing S6; if not, resetting the error parameter, performing histogram processing on the base image, and returning to S12.
The failure detection step is triggered when the base image does not satisfy the correction condition, meaning the work frame temporarily cannot be extracted normally from the current base image. To determine whether the base image still has the potential to yield a work frame, every base image gets one opportunity to be enhanced by histogram processing; if the work frame still cannot be extracted after histogram processing, the failure processing step is executed.
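A simplified sketch of the retry logic around S5. Histogram equalization is assumed as the "histogram processing" (the text does not name a specific operation), and the frame-extraction callable is a placeholder for steps S21-S28.

    import cv2

    def extract_with_retries(base_image, extract_work_frame, max_retries=4):
        """extract_work_frame is a placeholder for steps S21-S28; returns (work_frame, base_image)."""
        equalized = False
        while True:
            for _ in range(max_retries + 1):                  # first attempt plus up to 4 retries
                frame = extract_work_frame(base_image)
                if frame is not None:
                    return frame, base_image
            if equalized:
                return None, base_image                       # hand over to failure processing (S6)
            base_image = cv2.equalizeHist(base_image)         # one round of histogram processing
            equalized = True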
Referring to fig. 3: S6, failure processing, determining the work frame based on remodeling points, and executing S31.
A remodeling point is a vertex specified manually by the user. In this embodiment, the initial image is displayed to the user on the intelligent mobile terminal; the user can manually mark four remodeling points in the initial image, and these four points form the work frame as its four vertices, after which the subsequent steps are executed.
The implementation principle of the image correction method based on problem correction in the embodiment of the application is as follows: the basic image is an image acquired based on the operation carrier, the image content of the basic image comprises the operation carrier, problems on the operation carrier and answers on the operation carrier, the shape of the operation carrier in the basic image can be extracted by extracting the operation frame from the basic image, and the problems and the answers are also arranged in the basic image corresponding to the contents of the operation frame. However, since the user is likely to tilt when photographing the work carrier, and the work carrier in the base image has a large tilt angle and a small tilt angle due to the perspective problem, the shape of the work frame is often different from the actual shape of the work carrier. By utilizing the geometric features of the outline of the operation frame, a perspective frame corresponding to the operation frame can be generated, the perspective frame can reflect the shape of the operation frame after graphic correction according to geometric structure perspective, and the shape of the perspective frame corresponds to the actual shape of the operation carrier.
Therefore, the perspective frame is used as a template to carry out image correction on the basic image, so that the imaging content of the corresponding operation carrier, problem and answer in the basic image is subjected to inclination correction, a final image is obtained, the shape of the final image is closer to the actual shape of the operation carrier, the problem originally shot on the operation carrier can be reflected, the picture and character recognition can be carried out more rapidly and accurately in the follow-up problem correction step, and the user experience is improved.
In the process of obtaining the operation frame, its precursor, the initial frame, is screened several times. Ensuring that the shape of the operation frame conforms to a trapezoid improves the accuracy of the subsequent perspective conversion step, and limiting the minimum area of the operation frame reduces the loss of problem content or answer content in the image and improves the accuracy of subsequent image character recognition. In addition, every initial image is given a fair chance to retry operation frame extraction, and a single extraction failure does not directly cause the initial image to be discarded, which effectively reduces the number of times the user has to re-photograph the initial image and improves the user experience.
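The screening can be illustrated with the short OpenCV sketch below: pick the largest candidate contour, approximate it to a polygon, and accept it only if it has four vertices and a sufficient area. The blur/Canny parameters, the approximation tolerance and the area threshold are assumed values, and the fuzzy-point merging (frame approximation) step is omitted for brevity.

```python
import cv2

def extract_operation_frame(base_bgr, area_ratio_threshold=0.2):
    """Screen candidate frames: keep the largest contour, approximate it to a
    polygon, and return it as the operation frame only if it is a
    quadrilateral covering enough of the image; otherwise return None so
    the caller can accumulate the error parameter and retry."""
    gray = cv2.cvtColor(base_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    # OpenCV >= 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                    # no candidate frames at all
    initial = max(contours, key=cv2.contourArea)       # candidate with the largest area
    approx = cv2.approxPolyDP(initial, 0.02 * cv2.arcLength(initial, True), True)
    if len(approx) != 4:
        return None                                    # not a quadrilateral (trapezoid)
    h, w = gray.shape
    if cv2.contourArea(approx) < area_ratio_threshold * w * h:
        return None                                    # too small: content could be lost
    return approx.reshape(4, 2)
```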
Embodiment Two:
This embodiment of the application provides a problem correction method based on problem identification. The method includes the entire image correction method of the first embodiment and further comprises the following steps:
Referring to fig. 9, S7, format conversion is performed based on the final image to determine an output image.
The format conversion refers to converting the final image from a Mat image into a Bitmap image, so that the Android side can upload the output image to the correction server.
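On Android this conversion is typically done with the OpenCV utility `Utils.matToBitmap`. As a hedged stand-in in Python, the sketch below performs the analogous step of encoding the corrected image into an in-memory byte buffer ready for upload; the function name is hypothetical.

```python
import cv2

def to_upload_bytes(final_bgr, ext=".png"):
    """Stand-in for the Mat-to-Bitmap conversion: encode the corrected image
    into an in-memory image file that can be sent to the correction server."""
    ok, buf = cv2.imencode(ext, final_bgr)
    if not ok:
        raise RuntimeError("image encoding failed")
    return buf.tobytes()
```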
S8, sending the output image to the correction server for searching and judging, so as to obtain the correction results.
After the output image is uploaded to the correction server, all questions and answers in the output image are identified through OCR character recognition, the standard answer corresponding to each question is looked up, and the judgment result for each question is obtained by comparing the recognized answer with the standard answer, thereby producing the correction results for all the questions on the operation carrier.
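A hedged sketch of the upload step is given below; the server URL, the form field name and the response format are all hypothetical, since the patent does not define the correction server's API.

```python
import requests

def request_correction(image_bytes, server_url="https://example.com/correct"):
    """Send the output image to the correction server and return its
    correction results; the endpoint and response schema are assumptions."""
    resp = requests.post(
        server_url,
        files={"image": ("page.png", image_bytes, "image/png")},
        timeout=30,
    )
    resp.raise_for_status()
    # e.g. a list of {question, answer, standard_answer, correct} entries
    return resp.json()
```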
The problem correction method provided in this embodiment achieves the same technical effects as the first embodiment; for the principle analysis, refer to the related description of the method steps, which is not repeated here.
Embodiment Three:
In one embodiment, an image correction module based on problem correction is provided, corresponding to the image correction method based on problem correction in the first embodiment. Referring to fig. 10, the image correction module includes an object acquisition sub-module 1, a frame extraction sub-module 2, a template construction sub-module 3 and a correction processing sub-module 4. The functional sub-modules are described in detail as follows:
The object acquisition sub-module 1 is configured to determine the basic image and send the basic image information to the frame extraction sub-module 2. The content of the basic image includes an operation carrier on which problems are recorded.
The frame extraction submodule 2 is used for determining the operation frame based on the basic image and sending the operation frame information to the template construction submodule 3. The operation frame can reflect the geometric shape of the operation carrier in the basic image.
The template construction submodule 3 is used for determining a perspective frame based on the operation frame and sending template construction information to the correction processing submodule 4. The perspective frame is used for reflecting the geometric shape of the operation frame after perspective conversion, and the geometric shape of the perspective frame is matched with the geometric shape of the operation carrier.
The correction processing sub-module 4 is used for carrying out image correction on the basic image based on the perspective frame to determine the final image.
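To make the division of labour among the four sub-modules concrete, the sketch below wires hypothetical counterparts of the earlier helper functions into one pipeline; the class and attribute names are illustrative only and do not appear in the patent.

```python
class ImageCorrectionModule:
    """Pipeline mirroring the four sub-modules: object acquisition, frame
    extraction, template construction and correction processing."""

    def __init__(self, acquire, extract_frame, build_template, correct):
        self.acquire = acquire                # object acquisition sub-module
        self.extract_frame = extract_frame    # frame extraction sub-module
        self.build_template = build_template  # template construction sub-module
        self.correct = correct                # correction processing sub-module

    def run(self):
        base = self.acquire()                        # basic image
        frame = self.extract_frame(base)             # operation frame (4 vertices)
        if frame is None:
            raise RuntimeError("operation frame could not be extracted")
        width, height = self.build_template(frame)   # perspective frame dimensions
        return self.correct(base, frame, width, height)  # final image
```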
Referring to fig. 10 and 11, specifically, the object acquisition sub-module 1 includes:
The image acquisition unit 11, which uses a monocular camera, is capable of capturing images of the shooting area to acquire the basic image.
The reflector 12 comprises a shell with a light channel 123 arranged inside it; a lens 121 for reflecting light is fixedly mounted in the light channel 123, a dustproof light-transmitting plate 122 is mounted at the opening of the light channel, and light from the shooting area can enter the image acquisition unit 11 after being reflected by the lens 121.
Owing to the functions of the modules and the logical connections between them, the image correction module provided in this embodiment achieves the same technical effects as the first embodiment; for the principle analysis, refer to the related descriptions of the method steps, which are not repeated here.
Embodiment Four:
In one embodiment, a system for correcting work based on problem identification is provided, corresponding to the problem correction method based on problem identification in the second embodiment. Referring to fig. 12, the system includes an image correction module, an image conversion module 5 and a job correction module 6. The functional modules are described in detail as follows:
The image correction module comprises an object acquisition sub-module 1, a frame extraction sub-module 2, a template construction sub-module 3 and a correction processing sub-module 4. The object acquisition sub-module 1 includes an image acquisition unit 11 and a reflector 12.
The image conversion module 5 is used for performing format conversion based on the final image, determining the output image, and transmitting the output image information to the job correction module 6.
The job correction module 6 is used for sending the output image to the correction server for searching and judging, so as to obtain the correction results.
Owing to the functions of the modules and the logical connections between them, the system for correcting work provided in this embodiment achieves the same technical effects as the second embodiment; for the principle analysis, refer to the related descriptions of the method steps, which are not repeated here.
Embodiment Five:
In one embodiment, an intelligent home education learning machine is provided. Referring to fig. 13, the intelligent home education learning machine includes a memory, a processor, and a computer program stored on the memory and executable on the processor. The processor is configured to provide computing and control capabilities, and when executing the computer program it performs the following steps:
S11, acquiring an initial image.
S12, performing image processing based on the initial image, and determining a basic image.
S21, judging whether the error parameter is equal to the processing threshold; if not, executing S22; if so, executing S5.
S221, judging whether alternative frames can be extracted from the basic image; if so, executing S222; if not, executing S28.
S222, determining an alternative frame set based on the basic image.
S223, determining the candidate frame with the largest area in the candidate frame set as an initial frame.
S23, frame detection, judging whether the number of vertices of the initial frame is larger than the approximation value; if so, executing S24; otherwise, executing S25.
S241, determining fuzzy point pairs based on each group of vertex pairs of the initial frame.
S242, determining a first reference edge and a second reference edge based on the positions of the fuzzy point pairs.
S243, determining an approximation point based on the intersection point between the first reference edge and the second reference edge.
S244, eliminating the fuzzy point pairs from the initial frame, taking the approximation points as the vertexes of the initial frame, and returning to S23.
S25, performing secondary detection, judging whether the number of the vertexes of the initial frame is equal to an approximation value, if so, executing S26, otherwise, executing S28.
S26, area detection, judging whether the area of the initial frame is larger than or equal to the area threshold; if so, executing S27; otherwise, executing S28.
S27, determining the frame, wherein the initial frame is determined to be the operation frame.
S28, accumulating the error parameter, and returning to S21.
S31, determining the upper bottom of the frame, the lower bottom of the frame and the height of the frame based on the operation frame.
S32, determining a perspective broadside and a perspective long side based on the upper frame bottom, the lower frame bottom and the frame height.
S321, determining the perspective broadside based on the upper bottom of the frame.
S322, determining the perspective long side based on the upper bottom of the frame, the lower bottom of the frame and the height of the frame.
S41, performing image correction on the basic image based on the perspective frame, and determining an initial correction image.
S42, shadow removal and contour enhancement are sequentially carried out based on the initial correction image, and the optimized correction image is determined.
S43, fusing the initial correction image and the optimized correction image to determine a final image.
S5, failure detection, judging whether the current basic image has already undergone histogram processing; if so, executing S6; if not, resetting the error parameter, performing histogram processing on the basic image, and returning to S12.
S6, failure processing, determining the operation frame based on the remodelling points, and executing S31.
S7, performing format conversion based on the final image, and determining an output image.
S8, sending the output image to the correction server for searching and judging, so as to obtain the correction results.
Since the computer program in the memory, when run on the processor, implements the steps of the foregoing embodiments, the intelligent home education learning machine provided in this embodiment achieves the same technical effects; for the principle analysis, refer to the related descriptions of the foregoing method steps, which are not repeated here.
Embodiment Six:
This embodiment provides an intelligent home education learning machine; the difference between this embodiment and the fifth embodiment is as follows:
Referring to fig. 14, the intelligent home education learning machine may acquire the basic image through a built-in front camera, and a shooting area is provided on the operation face of the learning machine. The learning machine is further provided with a reflector 12; the reflector 12 comprises a shell with a light channel 123 arranged inside it, a lens 121 for reflecting light is fixedly mounted in the light channel 123, a dustproof light-transmitting plate 122 is mounted at the opening of the light channel, and light from the shooting area can enter the image acquisition unit 11 after being reflected by the lens 121.
The reflector 12 is also provided with a magnetic attraction piece, and the intelligent home education learning machine is fixedly provided with a magnetic force part for attracting the magnetic attraction piece, so that the reflector 12 can be detachably fixed at the position of the front camera.
Since the computer program in the memory, when run on the processor, implements the steps of the foregoing embodiment, the intelligent home education learning machine provided in this embodiment achieves the same technical effects as the fifth embodiment; for the principle analysis, refer to the related descriptions of the foregoing method steps, which are not repeated here.
Embodiment Seven:
In one embodiment, a computer-readable storage medium is provided, storing a computer program capable of being loaded by a processor to execute the problem correction method based on problem identification; when executed by the processor, the computer program implements the following steps:
S11, acquiring an initial image.
S12, performing image processing based on the initial image, and determining a basic image.
S21, judging whether the error parameter is equal to the processing threshold; if not, executing S22; if so, executing S5.
S221, judging whether alternative frames can be extracted from the basic image; if so, executing S222; if not, executing S28.
S222, determining an alternative frame set based on the basic image.
S223, determining the candidate frame with the largest area in the candidate frame set as an initial frame.
S23, frame detection, judging whether the number of vertices of the initial frame is larger than the approximation value; if so, executing S24; otherwise, executing S25.
S241, determining fuzzy point pairs based on each group of vertex pairs of the initial frame.
S242, determining a first reference edge and a second reference edge based on the positions of the fuzzy point pairs.
S243, determining an approximation point based on the intersection point between the first reference edge and the second reference edge.
S244, eliminating the fuzzy point pairs from the initial frame, taking the approximation points as the vertexes of the initial frame, and returning to S23.
S25, performing secondary detection, judging whether the number of the vertexes of the initial frame is equal to an approximation value, if so, executing S26, otherwise, executing S28.
S26, area detection, judging whether the area of the initial frame is larger than or equal to the area threshold; if so, executing S27; otherwise, executing S28.
S27, determining the frame, wherein the initial frame is determined to be the operation frame.
S28, accumulating the error parameter, and returning to S21.
S31, determining the upper bottom of the frame, the lower bottom of the frame and the height of the frame based on the operation frame.
S32, determining a perspective broadside and a perspective long side based on the upper frame bottom, the lower frame bottom and the frame height.
S321, determining the perspective broadside based on the upper bottom of the frame.
S322, determining the perspective long side based on the upper bottom of the frame, the lower bottom of the frame and the height of the frame.
S41, performing image correction on the basic image based on the perspective frame, and determining an initial correction image.
S42, shadow removal and contour enhancement are sequentially carried out based on the initial correction image, and the optimized correction image is determined.
S43, fusing the initial correction image and the optimized correction image to determine a final image.
S5, failure detection, judging whether the current basic image has already undergone histogram processing; if so, executing S6; if not, resetting the error parameter, performing histogram processing on the basic image, and returning to S12.
S6, failure processing, determining the operation frame based on the remodelling points, and executing S31.
S7, performing format conversion based on the final image, and determining an output image.
S8, sending the output image to the correction server for searching and judging, so as to obtain the correction results.
Since the computer program stored on the readable storage medium of this embodiment, when loaded and executed by the processor, implements the steps of the foregoing embodiments, the same technical effects can be achieved; for the principle analysis, refer to the related descriptions of the foregoing method steps, which are not repeated here.
The computer-readable storage medium includes, for example: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The above embodiments are all preferred embodiments of the present application and are not intended to limit its protection scope; therefore, all equivalent changes made according to the method and principle of the present application shall fall within the protection scope of the present application.

Claims (9)

1. The image correction method based on problem correction is characterized by comprising the following steps:
determining a basic image; wherein, the content of the basic image comprises a job carrier for recording problems;
determining a work frame based on the base image; wherein the job frame is capable of reflecting a geometry of the job carrier in the base image, comprising:
determining an initial frame based on the base image;
frame detection, judging whether the number of vertexes of the initial frame is larger than an approximation value, if so, executing a frame approximation step, otherwise, executing a frame determination step;
frame approximation, determining a fuzzy point pair of the initial frame, determining an approximation point based on the fuzzy point pair, replacing the fuzzy point pair with the approximation point, and returning to the frame detection step; the fuzzy point pair comprises the two vertices of the initial frame that are closest to each other, and the frame approximation step comprises:
determining the fuzzy point pair based on the vertices of the initial frame;
determining a first reference edge and a second reference edge based on the positions of the fuzzy point pair; the first reference edge and the second reference edge are two edges of the initial frame that can form an included angle and are close to the fuzzy point pair;
determining the approximation point based on the intersection point between the first reference edge and the second reference edge;
removing the fuzzy point pair from the initial frame, and taking the approximation point as a vertex of the initial frame;
the frame determination, namely determining the initial frame as an operation frame;
determining a perspective frame based on the operation frame; the perspective frame is used for reflecting the geometric shape of the operation frame after perspective conversion, the geometric shape of the perspective frame is matched with the geometric shape of the operation carrier, and the determining of the perspective frame comprises:
determining the upper bottom of the frame, the lower bottom of the frame and the height of the frame based on the operation frame; the operation frame is quadrilateral, the upper frame bottom can reflect one side of the operation frame, the lower frame bottom can reflect the opposite side of the upper frame bottom in the operation frame, and the frame height can reflect the distance between the upper frame bottom and the lower frame bottom;
Determining a perspective broadside and a perspective long side based on the upper frame bottom, the lower frame bottom and the frame height; wherein the geometry of the work carrier is rectangular; the perspective broadside can reflect the width of the perspective frame; the perspective long side can reflect the length of the perspective frame;
determining a perspective frame based on the perspective broadside and the perspective long side;
and (3) carrying out image correction on the basic image based on the perspective frame, and determining a final image.
2. The image correction method according to claim 1, characterized in that: the upper frame bottom, the lower frame bottom and the frame height can be combined to form a trapezoid reflecting the geometric shape of the operation frame, and the length of the upper frame bottom is greater than that of the lower frame bottom;
the specific method for determining the perspective broadside and the perspective long side based on the upper frame bottom, the lower frame bottom and the frame height comprises the following steps:
determining a perspective broadside based on the upper bottom of the frame;
and determining the perspective long side based on the upper bottom of the frame, the lower bottom of the frame and the height of the frame.
3. The method for correcting an image according to claim 1, wherein the specific method for correcting an image of a base image based on a perspective frame and determining a final image comprises the following steps:
Based on the perspective frame, carrying out image correction on the basic image, and determining an initial correction image;
shadow removal and contour enhancement are sequentially carried out on the basis of the initial correction image, and an optimized correction image is determined;
and fusing the initial correction image and the optimized correction image to determine a final image.
4. The image correction method according to claim 1, wherein in the specific method of determining the work frame based on the base image, further comprising:
judging whether the basic image meets the correction condition, if so, determining a perspective frame based on the operation frame; if not, executing a failure detection step;
when the operation frame cannot be extracted from the basic image, extraction of the operation frame from the basic image can be attempted repeatedly, and when the number of repeated extraction attempts reaches the upper limit, the basic image does not meet the correction condition;
and a failure detection step, judging whether the basic image is subjected to histogram processing, if not, carrying out the histogram processing on the basic image, and returning to the determination of the basic image.
5. A method for modifying a work based on topic identification, comprising the method for correcting an image according to any one of claims 1 to 3, further comprising:
Performing format conversion based on the final image to determine an output image;
and sending the output image to a correction server for searching and judging, so as to obtain correction results.
6. An image correction module based on problem correction according to claim 1, wherein the image correction module is configured to implement the problem correction-based image correction method, and the image correction module includes:
an object acquisition sub-module (1) for determining a base image; wherein, the content of the basic image comprises a job carrier for recording problems;
the frame extraction submodule (2) is used for determining a working frame based on the basic image; wherein the job frame is capable of reflecting a geometry of the job carrier in the base image;
the template construction submodule (3) is used for determining a perspective frame based on the operation frame; the perspective frame is used for reflecting the geometric shape of the operation frame after perspective conversion, and the geometric shape of the perspective frame is matched with the geometric shape of the operation carrier;
and the correction processing sub-module (4) is used for carrying out image correction on the basic image based on the perspective frame to determine a final image.
7. The image correction module of claim 6, wherein the object acquisition submodule includes:
An image acquisition unit (11) capable of performing image capturing based on a capturing area to acquire a base image;
the reflector (12) can reflect light rays, and the light rays from the shooting area can enter the image acquisition unit (11) after being reflected by the reflector (12).
8. Intelligent home teaching learning machine, characterized in that it comprises a memory and a processor, said memory having stored thereon a computer program capable of being loaded by the processor and of executing the method according to any of claims 1 to 5.
9. Computer readable storage medium, characterized in that a computer program is stored which can be loaded by a processor and which performs the method according to any of claims 1 to 5.

