CN112818797B - Consistency detection method and storage device for online examination answer document images - Google Patents

Consistency detection method and storage device for online examination answer document images

Info

Publication number
CN112818797B
Authority
CN
China
Prior art keywords
image
target block
singular
points
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110102061.9A
Other languages
Chinese (zh)
Other versions
CN112818797A (en)
Inventor
苏松志 (Su Songzhi)
李明月 (Li Mingyue)
谢作源 (Xie Zuoyuan)
洪学敏 (Hong Xuemin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University
Priority to CN202110102061.9A
Publication of CN112818797A
Application granted
Publication of CN112818797B
Active legal status: Current
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/41 Analysis of document content
    • G06V30/418 Document matching, e.g. of document images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a consistency detection method and storage device for online-examination answer-sheet document images. The method comprises the steps of: acquiring a first image and a second image; locating the singular point positions of the two images and aligning them at the pixel level according to those positions; intercepting a first target block from the first image and a second target block from the second image; and comparing whether the first target block is consistent with the second target block, and if so, judging that the first image is consistent with the second image. In these steps, pixel-level alignment of the first image and the second image is achieved using the singular points, and the target blocks used for comparison contain no fewer than a preset number of singular points, which ensures the richness of the information in the compared target blocks and thus the accuracy of the image comparison result.

Description

Consistency detection method and storage device for online examination answer document images
Technical Field
The invention relates to the technical field of image processing, and in particular to a consistency detection method and storage device for online-examination answer-sheet document images.
Background
With the development of networks, online examinations are becoming popular. In a so-called online examination, the test paper is displayed on a computer while students still answer on paper; when the examination time is up, the students photograph their paper answers and submit them. To monitor whether students cheat, a common online-examination environment uses two camera positions: the first is the computer camera, located directly in front of the examinee; the second is a mobile-device camera, located at the examinee's side. After the examination time ends, the examinee first photographs the answer sheet at the first (computer) position, this image being retained, then moves to the second position, photographs the answer sheet again, and submits the test paper. To prevent the examinee from modifying the answer content without permission in the gap between the two shots, in which case the finally submitted answer sheet would not be the one completed within the set time, it is necessary to perform consistency detection on the two successive answer-sheet images.
The consistency detection algorithms commonly used at present are generally designed for high-definition color images, on which they match well, but they are not robust to low-quality handwritten document images; under such systems, even completely inconsistent handwritten document images can obtain a high similarity. Moreover, the feature extraction used for image consistency detection is usually oriented to the global image and ignores tiny local differences, whereas document-image consistency detection in an online examination environment places high demands on detecting detail changes, which existing methods struggle to meet.
Disclosure of Invention
Therefore, it is necessary to provide a method for detecting the consistency of online-examination answer-sheet document images, to solve the technical problem of low consistency-detection precision for handwritten documents in an online examination environment. The specific technical scheme is as follows:
a consistency detection method for on-line examination answer sheet document images comprises the following steps:
acquiring a first image and a second image;
locating singular point positions of the first image and the second image, and aligning the first image and the second image in pixel level according to the singular point positions;
intercepting a first target block of the first image, intercepting a second target block of the second image, wherein the number of singular points contained in the first target block is not smaller than a preset number, and the number of singular points contained in the second target block is not smaller than the preset number;
comparing whether the first target block is consistent with the second target block, and if so, judging that the first image is consistent with the second image;
and if not, judging that the first image is inconsistent with the second image.
Further, the "locating the singular point positions of the first image and the second image" specifically further includes the steps of:
step 1: carrying out convolution operation on an input image and l convolution kernels with different sizes, outputting l element subgraphs, carrying out differential operation between every two adjacent element subgraphs to obtain (l-1) subgraphs, traversing pixel points in the middle layer of the subgraphs, and making differences with pixel points in the space adjacent region, wherein if the difference value of a certain pixel point is constant positive or constant negative, the pixel point is a potential singular point;
step 2: downsampling the input image, and repeating the step 1 until all potential singular points are found;
step 3: the absolute value of the second order taylor expansion of the difference function at the potential singular point is compared to 0.025, if greater than 0.025, then the position coordinates of the singular point are found by surface fitting, if less than 0.025, and the points are considered to be low-contrast points and are eliminated from the potential singular point.
Further, the "and pixel-level alignment of the first image and the second image according to the singular point position" specifically further includes the steps of:
calibrating the characteristic information of the singular points, and carrying out scanning pairing on the singular points of the first image and the second image according to the characteristic information;
calculating a transformation matrix of the first image and the second image through the matrix by using the robust singular points;
and carrying out pixel level alignment on the first image and the second image according to the transformation matrix.
Further, the number of the first target blocks is two or more, and the number of the second target blocks is two or more.
Further, the "comparing whether the first target block is consistent with the second target block" specifically further includes the steps of:
and inputting the first target block and the second target block to a preset network module, outputting feature description vectors of the first target block and the second target block, calculating similarity of the feature description vectors, and judging consistency of the first image and the second image according to the similarity.
Further, the "calculating, by matrix operation, a transformation matrix between the first image and the second image using the robust singular points; performing pixel-level alignment of the first image and the second image according to the transformation matrix" specifically further includes the steps of:
generating a 256-dimensional feature vector for each singular point using the directional-derivative histogram, and pairing two singular points if the distance between their feature vectors is smaller than a specific threshold;
dividing the paired singular points into two sets, linear pairs and nonlinear pairs, according to their correspondence; filtering out the nonlinear pairs; determining a mapping matrix T using the linear pairs; evaluating the result with all singular-point pairs and iterating until the calculation error of the point-pair mapping relation is less than 0.6%; each original point then obtains its current position through the mapping matrix T, so that the first image and the second image are aligned at the pixel level.
Further, the step of inputting the first target block and the second target block to a preset network module to output the feature description vectors of the first target block and the second target block specifically further includes the steps of:
inputting the first target block and the second target block each, in turn, to a head network, an intermediate network and a post network;
the head network includes seven convolutional layers and four pooling layers.
In order to solve the above technical problem, the invention also provides a storage device. The specific technical scheme is as follows:
a storage device having stored therein a set of instructions for performing any of the steps mentioned above.
The beneficial effects of the invention are as follows: in response to an examination-time-end instruction, the first device photographs the test paper to obtain a first image, and the second device photographs the test paper to obtain a second image; the singular point positions of the first image and the second image are located, and the two images are aligned at the pixel level according to those positions; a first target block is intercepted from the first image and a second target block from the second image, each containing no fewer than a preset number of singular points; whether the first target block is consistent with the second target block is then compared, and if so, the first image is judged consistent with the second image, otherwise inconsistent. In these steps, pixel-level alignment of the first image and the second image is achieved using the singular points, and the target blocks used for comparison contain no fewer than the preset number of singular points, which ensures the richness of the information in the compared target blocks and thus the accuracy of the image comparison result.
Drawings
Fig. 1 is a flowchart of a method for detecting consistency of an answer sheet document image for an online examination according to an embodiment;
FIG. 2 is a schematic diagram of an inter-network structure according to an embodiment;
FIG. 3 is a schematic diagram of a network structure of a consistency detection network according to an embodiment;
fig. 4 is a schematic block diagram of a memory device according to an embodiment.
Description of reference numerals:
400: storage device.
Detailed Description
In order to describe the technical content, structural features, objects and effects of the technical solution in detail, the following description is given with reference to specific embodiments and the accompanying drawings.
Referring to fig. 1 to 3, in the present embodiment the first image comes from a first camera, the computer camera located directly in front of the examinee, while the second camera is a mobile-device camera located at the examinee's side. After the examination time ends, the examinee first photographs the answer sheet at the first (computer) position to obtain the first image, and then photographs the answer sheet again at the second position to obtain the second image.
Therefore, the core technical idea of the present application is as follows: the consistency of the successively submitted first image and second image is compared, to assist in judging whether an examinee has committed the violation of altering the answer content in the gap between the two submissions. By introducing singular points and processing the first image and the second image with a dedicated network module, even small traces of document alteration can be detected, assisting the judgment of the subtle changes an examinee may make in that gap. The specific technical scheme is as follows:
step S101: a first image and a second image are acquired. In the process, the quality of the first image and the quality of the second image are subjected to preliminary evaluation, including evaluation of definition, illumination, angle and the like, and real-time feedback is performed on an examinee, so that the quality of the photographed image is ensured.
Step S102: locate the singular point positions of the first image and the second image. This specifically includes the following steps:
step 1: perform a convolution operation between the input image and l convolution kernels of different sizes to output l element subgraphs of different degrees of blur; perform a difference operation between every two adjacent element subgraphs to obtain (l-1) difference subgraphs; traverse the pixel points in the middle layers of the difference subgraphs and compute their differences with the pixel points in the spatial neighborhood; if the differences at a certain pixel point are all positive or all negative, that pixel point is a potential singular point.
step 2: downsample the input image and repeat step 1 until all potential singular points are found;
step 3: compare the absolute value of the second-order Taylor expansion of the difference function at each potential singular point with 0.025; if it is greater than 0.025, the position coordinates of the singular point are determined by surface fitting; if it is less than 0.025, the point is considered a low-contrast point and is eliminated from the potential singular points.
Step S103: align the first image and the second image at the pixel level according to the singular point positions. This specifically includes the following steps:
calibrating the characteristic information of the singular points, and scanning and pairing the singular points of the first image and the second image according to the characteristic information;
calculating, by matrix operation, a transformation matrix between the first image and the second image using the robust singular points;
and carrying out pixel level alignment on the first image and the second image according to the transformation matrix. The method specifically comprises the following steps:
step a: generate a 256-dimensional feature vector for each singular point using the directional-derivative histogram, and pair two singular points if the distance between their feature vectors is smaller than a specific threshold;
step b: the n pairs of pairing singular points obtained in the step a are { (a) 1 ,b 1 ),(a 2 ,b 2 ),…,(a n ,b n ) The } is divided into linear pairs { (a) according to the corresponding relation 11 ,b 11 ),(a 12 ,b 12 ),…,(a 1p ,b 1p ) Non-linear pair { (a) 21 ,b 21 ),(a 22 ,b 22 ),…,(a 2q ,b 2q ) The two sets are filtered, nonlinear pairs are filtered, a mapping matrix T is determined by using the linear pairs, the calculation result is evaluated by using all singular point pairs, and iteration is continued until the calculation error of the singular point pair mapping relation is smaller than 0.6%. The original corner point obtains the position of the present corner point through the mapping matrix T, thereby realizing the alignment of the document pixel level.
Step S104: intercept a first target block from the first image and a second target block from the second image, where the number of singular points contained in the first target block is not smaller than a preset number, and likewise for the second target block. Specifically: search for regions with discriminative power. Because singular points carry rich feature information and usually cover most of the text regions and correction regions in document images, several rectangular blocks of a fixed size are intercepted as candidate blocks for consistency detection, each containing no fewer than 30 singular points. Combining this with the mapping matrix T obtained in step b above ensures that the candidate blocks appear in pairs. Thus, in the present application, the number of first target blocks is two or more, as is the number of second target blocks.
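A minimal sketch of this block search, assuming the images are already aligned so that paired blocks can be cropped at identical coordinates; the block size (64 × 64, matching the network input below) and the stride are assumptions:

    import numpy as np

    def extract_block_pairs(points, img1, img2, size=64, stride=32, min_pts=30):
        # points: singular-point coordinates in the aligned images.
        pts = np.asarray(points, dtype=np.float32)
        pairs = []
        if len(pts) == 0:
            return pairs
        h, w = img1.shape[:2]
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                inside = ((pts[:, 0] >= x) & (pts[:, 0] < x + size) &
                          (pts[:, 1] >= y) & (pts[:, 1] < y + size)).sum()
                if inside >= min_pts:  # no fewer than 30 singular points
                    # Identical coordinates in both images give paired blocks.
                    pairs.append((img1[y:y + size, x:x + size],
                                  img2[y:y + size, x:x + size]))
        return pairs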
The first target block and the second target block are input to a preset network module, which outputs their feature description vectors; the similarity of the feature description vectors is calculated, and the consistency of the first image and the second image is judged according to the similarity.
The step of inputting the first target block and the second target block to a preset network module to output the feature description vectors of the first target block and the second target block specifically further includes the steps of:
inputting the first target block and the second target block each, in turn, to a head network, an intermediate network and a post network;
the head network includes seven convolutional layers and four pooling layers. Specifically:
Head network: comprises 7 convolutional layers and 4 pooling layers. The input layer size is 64 × 64 pixels; the convolution kernel sizes of Conv1 to Conv7 are 3 × 3 × 1, 3 × 3 × 32, 3 × 3 × 64, 3 × 3 × 64, 3 × 3 × 128, 3 × 3 × 128 and 3 × 3 × 256, all with stride 1. The pooling layers all use max pooling with stride 2. The layer parameters of the head network are shown in Table 1 below. The output image is X.
TABLE 1
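Since Table 1 is not reproduced in this text, the following PyTorch sketch of the head network is partly assumed: the output channel widths are inferred from the kernel-size list above, and the placement of the four max-pooling layers among the seven convolutions is a guess:

    import torch
    import torch.nn as nn

    head = nn.Sequential(
        nn.Conv2d(1, 32, 3, stride=1, padding=1), nn.ReLU(),    # Conv1: 3x3x1
        nn.Conv2d(32, 64, 3, stride=1, padding=1), nn.ReLU(),   # Conv2: 3x3x32
        nn.MaxPool2d(2, stride=2),                              # Pool1: 64 -> 32
        nn.Conv2d(64, 64, 3, stride=1, padding=1), nn.ReLU(),   # Conv3: 3x3x64
        nn.MaxPool2d(2, stride=2),                              # Pool2: 32 -> 16
        nn.Conv2d(64, 128, 3, stride=1, padding=1), nn.ReLU(),  # Conv4: 3x3x64
        nn.MaxPool2d(2, stride=2),                              # Pool3: 16 -> 8
        nn.Conv2d(128, 128, 3, stride=1, padding=1), nn.ReLU(), # Conv5: 3x3x128
        nn.Conv2d(128, 256, 3, stride=1, padding=1), nn.ReLU(), # Conv6: 3x3x128
        nn.MaxPool2d(2, stride=2),                              # Pool4: 8 -> 4
        nn.Conv2d(256, 256, 3, stride=1, padding=1), nn.ReLU(), # Conv7: 3x3x256
    )
    X = head(torch.randn(1, 1, 64, 64))  # output image X: (1, 256, 4, 4)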
Intermediate network: let the input image be X and the image obtained after the convolution transformation be Z, with X ∈ R^(W×H×C) and Z ∈ R^(W'×H'×C'). The transformation process is shown in formula 1:

z_c = f_c ∗ X = Σ_{s=1..C} f_c^s ∗ x^s    (1)

where f_c is a three-dimensional spatial convolution kernel, f_c^s is a two-dimensional spatial convolution kernel, ∗ denotes the convolution operation, and z_c is the image of Z in a single channel. For the image Z, its global information is obtained by the common global average pooling of formula 2, generating a pixel statistic t ∈ R^(C×1):

t_c = (1 / (W' × H')) Σ_{i=1..W'} Σ_{j=1..H'} z_c(i, j)    (2)
Then t is activated to capture the correlation among the channels. The activation operation is shown in formula 3:

l = σ_2(g(t, w)) = σ_2(w_2 σ_1(w_1 t))    (3)
where σ_1 denotes the ReLU activation function, σ_2 denotes the Sigmoid activation function, and w_1, w_2 ∈ R^(C×C). The recalibrated image is then output:

x'_c = G(z_c, l_c) = l_c · z_c    (4)
where X' = [x'_1, x'_2, …, x'_C] is the multi-channel image after feature recalibration, and G denotes the product function of the convolved image z_c and the scalar l_c. The image X' is subjected to global average pooling (AvgP) and max pooling (MaxP), and a 5 × 5 convolution kernel is applied to enhance the local feature information, yielding the finally generated feature map F(X'), as shown in formula 5:

F(X') = σ_2(f^(5×5)([AvgP(X'); MaxP(X')]))    (5)
a schematic diagram of the network structure of the intermediate network is shown in fig. 2.
Post network: the output feature map is flattened to generate an initial description vector. To further refine and simplify the feature vector, two fully connected layers are constructed to reduce the dimensionality of the initial description vector. Finally, a normalization operation yields a feature description vector of unit length. The parameters are shown in Table 2 below:
TABLE 2
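Since Table 2 is not reproduced in this text, the dimensions in this sketch of the post network (input size following the head-network sketch above, 512 and 256 units for the two fully connected layers) are assumptions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PostNetwork(nn.Module):
        def __init__(self, in_features=256 * 4 * 4, dim=256):
            super().__init__()
            self.fc1 = nn.Linear(in_features, 512)  # first dimension reduction
            self.fc2 = nn.Linear(512, dim)          # second dimension reduction

        def forward(self, x):
            v = torch.flatten(x, 1)                 # straighten the feature map
            v = self.fc2(torch.relu(self.fc1(v)))
            return F.normalize(v, dim=1)            # module length 1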
Loss function
The cosine of the angle between the feature description vectors is adopted as the similarity of the target blocks, as shown in formula 6, where M_1 and M_2 are the feature description vectors of the two target blocks; the larger the cosine value, the smaller the angle between the feature description vectors and the more similar the target blocks:

R = cos⟨M_1, M_2⟩ = M_1^T · M_2    (6)

Since every dimension of the output feature description vectors is positive, the target-block similarity R ∈ [0, 1], which matches the range of the network labels; an error function is therefore constructed based on cross entropy. The two network channels are arranged in parallel, and the network is trained on this model so that the similarity between consistent target blocks tends to 1 and the similarity between inconsistent target blocks tends to 0.
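A sketch of this objective: formula 6 as a dot product of unit-length vectors, pushed towards the 0/1 labels with binary cross entropy; the clamping epsilon is a numerical-stability detail added here, not from the text:

    import torch
    import torch.nn.functional as F

    def consistency_loss(m1, m2, label):
        # m1, m2: (B, D) unit-length description vectors; label: 1.0 for
        # consistent pairs, 0.0 for inconsistent ones.
        r = (m1 * m2).sum(dim=1)       # formula 6: R = M1^T . M2
        r = r.clamp(1e-6, 1 - 1e-6)    # numerical stability (added here)
        return F.binary_cross_entropy(r, label)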
Step S105: determine whether the first target block is consistent with the second target block. Specifically: the pairs of first and second target blocks to be detected, obtained in step S104, are input into the trained network, which outputs the similarity of each pair. If any inconsistent block pair exists, the two answer-sheet images are judged inconsistent, indicating that the examinee has made an illicit change to the answer sheet; this assists the judgment of examinee violations. A sketch of this decision step is given after the two branches below.
If so, step S106 is executed: the first image and the second image are judged consistent.
If not, step S107 is executed: the first image and the second image are judged inconsistent.
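A sketch of the decision in steps S105 to S107, assuming `model` wraps the head, intermediate and post networks and that each block is already a (1, 1, 64, 64) tensor; the similarity threshold is an assumption:

    import torch

    def images_consistent(model, block_pairs, threshold=0.5):
        # block_pairs: list of (block1, block2) tensors of shape (1, 1, 64, 64).
        model.eval()
        with torch.no_grad():
            for b1, b2 in block_pairs:
                m1, m2 = model(b1), model(b2)
                if (m1 * m2).sum(dim=1).item() < threshold:
                    return False  # one inconsistent block pair => inconsistent sheets
        return True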
The network structure diagram of the consistency detection network is shown in fig. 3.
In response to an examination-time-end instruction, the first device photographs the test paper to obtain a first image, and the second device photographs the test paper to obtain a second image; the singular point positions of the first image and the second image are located, and the two images are aligned at the pixel level according to those positions; a first target block is intercepted from the first image and a second target block from the second image, each containing no fewer than a preset number of singular points; whether the first target block is consistent with the second target block is then compared, and if so, the first image is judged consistent with the second image, otherwise inconsistent. In these steps, pixel-level alignment of the first image and the second image is achieved using the singular points, and the target blocks used for comparison contain no fewer than the preset number of singular points, which ensures the richness of the information in the compared target blocks and thus the accuracy of the image comparison result.
Referring to fig. 4, in this embodiment the storage device 400 stores a set of instructions for performing the steps of the above method for detecting the consistency of online-examination answer-sheet document images. These steps are the same as described above, so the description is not repeated here.
It should be noted that, although the foregoing embodiments have been described herein, the scope of protection of the present invention is not limited thereby. Any alterations and modifications to the embodiments described herein based on the innovative concepts of the present invention, or equivalent structures or equivalent process transformations made using the present description and drawings, applied directly or indirectly in other relevant technical fields, are likewise included within the scope of protection of the present invention.

Claims (6)

1. A consistency detection method for an online examination answer sheet document image is characterized by comprising the following steps:
acquiring a first image and a second image;
locating singular point positions of the first image and the second image, and aligning the first image and the second image in pixel level according to the singular point positions;
intercepting a first target block of the first image, intercepting a second target block of the second image, wherein the number of singular points contained in the first target block is not smaller than a preset number, and the number of singular points contained in the second target block is not smaller than the preset number;
comparing whether the first target block is consistent with the second target block, and if so, judging that the first image is consistent with the second image;
if not, judging that the first image is inconsistent with the second image;
wherein the performing pixel-level alignment of the first image and the second image according to the singular point positions specifically further comprises the steps of:
calibrating the characteristic information of the singular points, and carrying out scanning pairing on the singular points of the first image and the second image according to the characteristic information;
calculating, by matrix operation, a transformation matrix between the first image and the second image using the robust singular points;
performing pixel level alignment on the first image and the second image according to the transformation matrix;
wherein the calculating, by matrix operation, of the transformation matrix between the first image and the second image using the robust singular points, and the performing pixel-level alignment of the first image and the second image according to the transformation matrix, specifically further comprise the steps of:
generating a 256-dimensional feature vector for each singular point using the directional-derivative histogram, and pairing two singular points if the distance between their feature vectors is smaller than a specific threshold;
dividing the paired singular points into two sets, linear pairs and nonlinear pairs, according to their correspondence; filtering out the nonlinear pairs; determining a mapping matrix T using the linear pairs; evaluating the result with all singular-point pairs and iterating until the calculation error of the point-pair mapping relation is less than 0.6%; each original point then obtains its current position through the mapping matrix T, so that the first image and the second image are aligned at the pixel level.
2. The method for detecting the consistency of the answer document images for the online examination according to claim 1, wherein the step of locating the singular point positions of the first image and the second image specifically further comprises the steps of:
step 1: performing a convolution operation between an input image and l convolution kernels of different sizes to output l element subgraphs; performing a difference operation between every two adjacent element subgraphs to obtain (l-1) difference subgraphs; traversing the pixel points in the middle layers of the difference subgraphs and computing their differences with the pixel points in the spatial neighborhood, wherein if the differences at a certain pixel point are all positive or all negative, that pixel point is a potential singular point;
step 2: downsampling the input image and repeating step 1 until all potential singular points are found;
step 3: comparing the absolute value of the second-order Taylor expansion of the difference function at each potential singular point with 0.025; if it is greater than 0.025, the position coordinates of the singular point are determined by surface fitting; if it is less than 0.025, the point is considered a low-contrast point and is eliminated from the potential singular points.
3. The method for detecting the consistency of online-examination answer-sheet document images according to claim 1, wherein the number of the first target blocks is two or more, and the number of the second target blocks is two or more.
4. The method for detecting the consistency of an online examination answer document image according to claim 1, wherein the comparing whether the first target block is consistent with the second target block is specifically further comprising the steps of:
inputting the first target block and the second target block into a preset network module, which outputs feature description vectors of the first target block and the second target block; calculating the similarity of the feature description vectors; and judging the consistency of the first image and the second image according to the similarity.
5. The method for detecting the consistency of the answer sheet document image for the online examination according to claim 4, wherein the step of inputting the first target block and the second target block to a preset network module to output feature description vectors of the first target block and the second target block comprises the following steps:
inputting the first target block and the second target block each, in turn, to a head network, an intermediate network and a post network;
the head network includes seven convolutional layers and four pooling layers.
6. A storage device having stored therein a set of instructions for performing the steps of any of claims 1 to 5.
CN202110102061.9A 2021-01-26 2021-01-26 Consistency detection method and storage device for online examination answer document images Active CN112818797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110102061.9A CN112818797B (en) 2021-01-26 2021-01-26 Consistency detection method and storage device for online examination answer document images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110102061.9A CN112818797B (en) 2021-01-26 2021-01-26 Consistency detection method and storage device for online examination answer document images

Publications (2)

Publication Number Publication Date
CN112818797A CN112818797A (en) 2021-05-18
CN112818797B true CN112818797B (en) 2024-03-01

Family

ID=75859588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110102061.9A Active CN112818797B (en) 2021-01-26 2021-01-26 Consistency detection method and storage device for online examination answer document images

Country Status (1)

Country Link
CN (1) CN112818797B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187570B (en) * 2022-07-27 2023-04-07 北京拙河科技有限公司 Singular traversal retrieval method and device based on DNN deep neural network

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1818927A (en) * 2006-03-23 2006-08-16 北京中控科技发展有限公司 Fingerprint identifying method and system
CN101145196A (en) * 2006-09-13 2008-03-19 中国科学院自动化研究所 Quick fingerprint identification method based on strange topology structure
CN101271577A (en) * 2008-03-28 2008-09-24 北京工业大学 Wrong-page on-line fast image detection method of binder
CN102855495A (en) * 2012-08-22 2013-01-02 苏州多捷电子科技有限公司 Method for implementing electronic edition standard answer, and application system thereof
WO2015132191A1 (en) * 2014-03-03 2015-09-11 Advanced Track & Trace Methods and devices for identifying and recognizing objects
CN109800692A (en) * 2019-01-07 2019-05-24 重庆邮电大学 A kind of vision SLAM winding detection method based on pre-training convolutional neural networks
CN110175506A (en) * 2019-04-08 2019-08-27 复旦大学 Pedestrian based on parallel dimensionality reduction convolutional neural networks recognition methods and device again
CN110879965A (en) * 2019-10-12 2020-03-13 中国平安财产保险股份有限公司 Automatic reading and amending method of test paper objective questions, electronic device, equipment and storage medium
CN110991374A (en) * 2019-12-10 2020-04-10 电子科技大学 Fingerprint singular point detection method based on RCNN
CN111986126A (en) * 2020-07-17 2020-11-24 浙江工业大学 Multi-target detection method based on improved VGG16 network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4409035B2 (en) * 2000-03-22 2010-02-03 本田技研工業株式会社 Image processing apparatus, singular part detection method, and recording medium recording singular part detection program

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1818927A (en) * 2006-03-23 2006-08-16 北京中控科技发展有限公司 Fingerprint identifying method and system
CN101145196A (en) * 2006-09-13 2008-03-19 中国科学院自动化研究所 Quick fingerprint identification method based on strange topology structure
CN101271577A (en) * 2008-03-28 2008-09-24 北京工业大学 Wrong-page on-line fast image detection method of binder
CN102855495A (en) * 2012-08-22 2013-01-02 苏州多捷电子科技有限公司 Method for implementing electronic edition standard answer, and application system thereof
WO2015132191A1 (en) * 2014-03-03 2015-09-11 Advanced Track & Trace Methods and devices for identifying and recognizing objects
CN109800692A (en) * 2019-01-07 2019-05-24 重庆邮电大学 A kind of vision SLAM winding detection method based on pre-training convolutional neural networks
CN110175506A (en) * 2019-04-08 2019-08-27 复旦大学 Pedestrian based on parallel dimensionality reduction convolutional neural networks recognition methods and device again
CN110879965A (en) * 2019-10-12 2020-03-13 中国平安财产保险股份有限公司 Automatic reading and amending method of test paper objective questions, electronic device, equipment and storage medium
CN110991374A (en) * 2019-12-10 2020-04-10 电子科技大学 Fingerprint singular point detection method based on RCNN
CN111986126A (en) * 2020-07-17 2020-11-24 浙江工业大学 Multi-target detection method based on improved VGG16 network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Perspective invariant binary feature descriptor based image matching algorithm; Geng Li-chuan, Su Song-zhi et al.; Journal on Communications, Vol. 36, No. 4; full text *
Fingerprint retrieval based on the orientation field of singular-point regions; Zhao Qi et al.; Microcomputer Information; page 1, right column, paragraph 2; page 3, section 2 *
Transmission line fault location algorithm based on frequency-domain matching; Fan Hao, Hao Ning; Techniques of Automation and Applications; 2020-10-23, No. 10; full text *

Also Published As

Publication number Publication date
CN112818797A (en) 2021-05-18

Similar Documents

Publication Publication Date Title
CN110427937B (en) Inclined license plate correction and indefinite-length license plate identification method based on deep learning
US8509536B2 (en) Character recognition device and method and computer-readable medium controlling the same
US8401333B2 (en) Image processing method and apparatus for multi-resolution feature based image registration
CN104067312B (en) There are the method for registering images and system of robustness to noise
CN106447601B (en) Unmanned aerial vehicle remote sensing image splicing method based on projection-similarity transformation
DE102016013274A1 (en) IMAGE PROCESSING DEVICE AND METHOD FOR RECOGNIZING AN IMAGE OF AN OBJECT TO BE DETECTED FROM ENTRY DATA
Wang et al. Recognition and location of the internal corners of planar checkerboard calibration pattern image
CN101147159A (en) Fast method of object detection by statistical template matching
CN111507976B (en) Defect detection method and system based on multi-angle imaging
US9767383B2 (en) Method and apparatus for detecting incorrect associations between keypoints of a first image and keypoints of a second image
CN108765476A (en) A kind of polarization image method for registering
CN111709980A (en) Multi-scale image registration method and device based on deep learning
CN112183325B (en) Road vehicle detection method based on image comparison
CN110490924B (en) Light field image feature point detection method based on multi-scale Harris
Alemán-Flores et al. Line detection in images showing significant lens distortion and application to distortion correction
KR102195826B1 (en) Keypoint identification
CN107154017A (en) A kind of image split-joint method based on SIFT feature Point matching
CN112818797B (en) Consistency detection method and storage device for online examination answer document images
CN112734832A (en) Method for measuring real size of on-line object in real time
CN116258663A (en) Bolt defect identification method, device, computer equipment and storage medium
CN112102379B (en) Unmanned aerial vehicle multispectral image registration method
CN113688846A (en) Object size recognition method, readable storage medium, and object size recognition system
CN113642397A (en) Object length measuring method based on mobile phone video
CN112991159A (en) Face illumination quality evaluation method, system, server and computer readable medium
CN115239801B (en) Object positioning method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant