CN112818797A - Consistency detection method and storage device for answer sheet document images of online examination - Google Patents


Info

Publication number
CN112818797A
Authority
CN
China
Prior art keywords
image
target block
singular
point
answer sheet
Prior art date
Legal status
Granted
Application number
CN202110102061.9A
Other languages
Chinese (zh)
Other versions
CN112818797B (en)
Inventor
苏松志
李明月
谢作源
洪学敏
Current Assignee
Xiamen University
Original Assignee
Xiamen University
Priority date
Filing date
Publication date
Application filed by Xiamen University filed Critical Xiamen University
Priority to CN202110102061.9A priority Critical patent/CN112818797B/en
Publication of CN112818797A publication Critical patent/CN112818797A/en
Application granted granted Critical
Publication of CN112818797B publication Critical patent/CN112818797B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/41 Analysis of document content
    • G06V30/418 Document matching, e.g. of document images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention relates to the technical field of image processing, and in particular to a consistency detection method and storage device for answer sheet document images of online examinations. The consistency detection method comprises the following steps: acquiring a first image and a second image; locating the singular point positions of the two images and performing pixel-level alignment of the two images according to those positions; intercepting a first target block from the first image and a second target block from the second image; and comparing whether the first target block is consistent with the second target block, and if so, judging that the first image is consistent with the second image. In these steps, pixel-level alignment of the first image and the second image is achieved using singular points, and the number of singular points in each target block used for comparison is not less than a preset number, which ensures the richness of information in the compared target blocks and thereby the accuracy of the image comparison result.

Description

Consistency detection method and storage device for answer sheet document images of online examination
Technical Field
The invention relates to the technical field of image processing, in particular to a consistency detection method and storage equipment for answer sheet document images of online examination.
Background
With the development of networks, online examinations have become more popular: examination papers are displayed on a computer, students still answer on paper, and when the examination time is up, students photograph their paper answers and submit them. To supervise against cheating, a common online examination environment uses two machine positions: the first machine position is a computer camera placed directly in front of the student, and the second is a mobile-device camera placed to the student's side. When the examination time ends, the examinee first photographs the answer sheet with the first (computer) machine position as a background image, then takes down the second machine position to photograph the sheet again, and submits the paper. To prevent the examinee from modifying the answering content during the interval between the two shots, which would mean the finally submitted answer sheet is not the one completed within the specified time, consistency detection must be performed on the two successive answer sheet images.
At present, commonly used consistency detection algorithms are generally oriented toward high-definition color images, where they match well, but they are not robust on low-quality handwritten document images; completely inconsistent handwritten document images can even receive a high similarity score under such systems. Moreover, feature extraction for image consistency detection is usually oriented toward the global image and ignores tiny detail differences, whereas document image consistency detection in an online examination environment places high demands on detecting detail changes, which existing methods struggle to meet.
Disclosure of Invention
Therefore, a consistency detection method for answer sheet document images in online examination is needed to be provided, so as to solve the technical problem of low consistency detection accuracy of handwritten documents in an online examination environment, and the specific technical scheme is as follows:
a consistency detection method for on-line examination paper answering document images comprises the following steps:
acquiring a first image and a second image;
positioning singular point positions of the first image and the second image, and performing pixel level alignment on the first image and the second image according to the singular point positions;
intercepting a first target block of the first image, and intercepting a second target block of the second image, wherein the number of singular points contained in the first target block is not less than a preset number, and the number of singular points contained in the second target block is not less than the preset number;
comparing whether the first target block is consistent with the second target block, and if so, judging that the first image is consistent with the second image;
and if not, judging that the first image and the second image are inconsistent.
Further, the "positioning the singular point positions of the first image and the second image" specifically includes the steps of:
step 1: performing convolution operations on an input image with convolution kernels of different sizes to output l element sub-images with different degrees of blur, performing a difference operation between every two adjacent element sub-images to obtain (l-1) difference sub-images, traversing the pixel points in the middle layers of the difference sub-images, differencing each pixel point against its spatial neighborhood, and determining a potential singular point if the difference values of a certain pixel point are all positive or all negative;
step 2: down-sampling the input image, and repeating the step 1 until all potential singular points are found;
and step 3: comparing the absolute value of the second-order Taylor expansion of the difference function at each potential singular point with 0.025; if it is greater than 0.025, the point is retained, and if it is less than 0.025, the point is regarded as having low contrast and is removed from the potential singular points; the position coordinates of the singular points are then found through surface fitting.
Further, the "performing pixel level alignment on the first image and the second image according to the position of the singular point" specifically includes the following steps:
calibrating the characteristic information of the singular point, and scanning and pairing the singular point of the first image and the singular point of the second image according to the characteristic information;
calculating the transformation matrix between the first image and the second image by matrix calculation using the robust singular points;
and performing pixel-level alignment of the first image and the second image according to the transformation matrix.
Further, the number of the first target blocks is two or more, and the number of the second target blocks is two or more.
Further, the comparing whether the first target block is consistent with the second target block specifically includes the steps of:
inputting the first target block and the second target block to a preset network module to output feature description vectors of the first target block and the second target block, calculating the similarity of the feature description vectors, and judging the consistency of the first image and the second image according to the similarity.
Further, the transformation matrix between the first image and the second image is obtained by matrix calculation using the robust singular points; and pixel-level alignment of the first image and the second image is performed according to the transformation matrix, specifically comprising the steps of:
generating a 256-dimensional feature vector for each singular point by using a directional derivative histogram, and pairing the two singular points if the distance between the two feature vectors is smaller than a specific threshold value;
dividing paired singular points into two sets of linear pairs and nonlinear pairs according to the corresponding relation, filtering the nonlinear pairs, determining a mapping matrix T by using the linear pairs, evaluating the calculation results by using all the singular point pairs, continuously iterating until the calculation error of the point pair mapping relation is less than 0.6%, obtaining the current corner point position by the original corner point through the mapping matrix T, and aligning the first image and the second image at the pixel level.
Further, the step of inputting the first target block and the second target block to a preset network module and outputting the feature description vectors of the first target block and the second target block specifically includes the steps of:
inputting the first target block and the second target block into a first network, an inter-network and a post-network in sequence;
the first network contains seven convolutional layers and four pooling layers.
In order to solve the above technical problem, a storage device is further provided. The specific technical scheme is as follows:
a storage device having stored therein a set of instructions for performing any of the steps mentioned above.
The invention has the following beneficial effects: in response to the examination-time-end instruction, the first device photographs the test paper to obtain a first image, and the second device photographs the test paper to obtain a second image; the singular point positions of the first image and the second image are located, and pixel-level alignment of the two images is performed according to those positions; a first target block is intercepted from the first image and a second target block from the second image, where the number of singular points contained in each target block is not less than a preset number; whether the first target block is consistent with the second target block is compared, and if so, the first image is judged consistent with the second image; otherwise, the two images are judged inconsistent. In these steps, pixel-level alignment of the first image and the second image is achieved using singular points, and the number of singular points in each compared target block is not less than the preset number, which ensures the richness of information in the compared target blocks and thereby the accuracy of the image comparison result.
Drawings
Fig. 1 is a flowchart of a method for consistency detection of an image of an on-line examination paper-answering document according to an embodiment;
FIG. 2 is a schematic diagram of an inter-network architecture according to an embodiment;
FIG. 3 is a schematic diagram of a network structure of a conformance detection network according to an embodiment;
fig. 4 is a block diagram of a storage device according to an embodiment.
Description of reference numerals:
400. a storage device.
Detailed Description
To explain technical contents, structural features, and objects and effects of the technical solutions in detail, the following detailed description is given with reference to the accompanying drawings in conjunction with the embodiments.
Referring to fig. 1 to 3, in the present embodiment, the first image is from a first machine position, the first machine position is a computer camera and is located in the front direction of the examinee, and the second machine position is a mobile device camera and is located on the side of the examinee. After the examination time is over, the examinee firstly shoots the answer sheet on the first machine position of the computer to obtain a first image, and then takes down the second machine position to shoot the answer sheet again to obtain a second image.
Therefore, the core technical idea of the application is as follows: by comparing the consistency of the successively submitted first and second images, it is judged in an auxiliary manner whether the examinee has committed the violation of altering the answer sheet content during the interval between the two submissions; and by introducing singular points and a dedicated network module to process the first and second images respectively, even small, detailed traces of document alteration can be detected, assisting in the judgment of fine changes made by the examinee during the submission interval. The specific technical scheme is as follows:
step S101: a first image and a second image are acquired. In the process, the quality of the first image and the quality of the second image are preliminarily evaluated, including the evaluation of definition, illumination, angle and the like, and the examinee is fed back in real time to ensure the quality of the shot images.
Step S102: and positioning the positions of the singular points of the first image and the second image. The positioning of the singular point positions of the first image and the second image specifically comprises the following steps:
Step 1: performing convolution operations on an input image with convolution kernels of different sizes to output element sub-images with different degrees of blur, performing a difference operation between every two adjacent element sub-images to obtain (l-1) difference sub-images, traversing the pixel points in the middle layers of the difference sub-images, differencing each pixel point against its spatial neighborhood, and determining a potential singular point if the difference values of a certain pixel point are all positive or all negative.
Step 2: down-sampling the input image, and repeating the step 1 until all potential singular points are found;
And step 3: comparing the absolute value of the second-order Taylor expansion of the difference function at each potential singular point with 0.025; if it is greater than 0.025, the point is retained, and if it is less than 0.025, the point is regarded as having low contrast and is removed from the potential singular points; the position coordinates of the singular points are then found through surface fitting.
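Steps 1-3 describe a difference-of-Gaussians (DoG) scale-space search. A minimal NumPy sketch of the extremum scan in step 1 follows; the sigma values, kernel radius, tie handling, and the reduction of step 3's Taylor-expansion contrast test to a plain magnitude threshold are all illustrative assumptions:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with edge padding (illustrative)."""
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img.astype(np.float64), r, mode="edge")
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, "same"), 1, pad)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, "same"), 0, tmp)
    return out[r:-r, r:-r]

def potential_singular_points(img, sigmas=(1.0, 1.4, 2.0, 2.8), min_abs=1.0):
    """Scan the middle DoG layers for pixels whose difference to every
    neighbour in the 3x3x3 scale-space cube keeps a constant sign
    (local extrema), as in step 1; `min_abs` stands in for the
    contrast test of step 3."""
    blurred = [gaussian_blur(img, s) for s in sigmas]
    dogs = np.stack([b - a for a, b in zip(blurred, blurred[1:])])
    pts = []
    nlayers, h, w = dogs.shape
    for s in range(1, nlayers - 1):          # middle layers only
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                cube = dogs[s - 1:s + 2, i - 1:i + 2, j - 1:j + 2]
                v = dogs[s, i, j]
                if abs(v) >= min_abs and (v >= cube.max() or v <= cube.min()):
                    pts.append((i, j))
    return pts

img = np.zeros((32, 32))
img[13:18, 13:18] = 255.0          # a single bright blob
pts = potential_singular_points(img)
print(len(pts) > 0)                # the blob produces scale-space extrema
```

The down-sampling loop of step 2 would wrap this routine over an image pyramid.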
Step S103: and performing pixel-level alignment on the first image and the second image according to the singular point position. The method specifically comprises the following steps:
calibrating the characteristic information of the singular point, and scanning and pairing the singular point of the first image and the singular point of the second image according to the characteristic information;
calculating the transformation matrix between the first image and the second image by matrix calculation using the robust singular points;
and performing pixel-level alignment of the first image and the second image according to the transformation matrix. The method specifically comprises the following steps:
step a: generating a 256-dimensional feature vector for each singular point by using a directional derivative histogram, and pairing the two singular points if the distance between the two feature vectors is smaller than a specific threshold value;
step b: for the n pairs of singular points {(a1, b1), (a2, b2), …, (an, bn)} matched in step a, divide them according to their correspondence into two sets, linear pairs {(a11, b11), (a12, b12), …, (a1p, b1p)} and nonlinear pairs {(a21, b21), (a22, b22), …, (a2q, b2q)}. Filter out the nonlinear pairs, determine the mapping matrix T using the linear pairs, evaluate the calculation result using all singular point pairs, and iterate continuously until the calculation error of the singular-point-pair mapping relation is less than 0.6%. The original corner points are mapped through the matrix T to their current positions, achieving pixel-level alignment of the documents.
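The iterative pair filtering of step b can be sketched as a least-squares fit with outlier rejection. The sketch below estimates a 2×3 affine matrix T; the patent's exact mapping-matrix form and its 0.6% error criterion are not reproduced, and the 3×median residual cut-off is an illustrative choice:

```python
import numpy as np

def estimate_mapping(src, dst, iters=10, min_tol=1.0):
    """Fit an affine T mapping src -> dst, iteratively discarding pairs
    with large residuals ("nonlinear pairs") and refitting on the rest
    ("linear pairs"), in the spirit of step b."""
    keep = np.ones(len(src), dtype=bool)
    ones = np.ones((len(src), 1))
    T = np.zeros((3, 2))
    for _ in range(iters):
        A = np.hstack([src[keep], ones[keep]])           # rows [x, y, 1]
        T, *_ = np.linalg.lstsq(A, dst[keep], rcond=None)
        res = np.linalg.norm(np.hstack([src, ones]) @ T - dst, axis=1)
        thr = max(min_tol, 3 * float(np.median(res)))    # illustrative cut-off
        new_keep = res < thr
        if new_keep.sum() < 3 or np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return T.T                                           # 2x3: [[a, b, tx], [c, d, ty]]

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (40, 2))
true_T = np.array([[1.01, 0.02, 5.0], [-0.02, 0.99, -3.0]])
dst = src @ true_T[:, :2].T + true_T[:, 2]
dst[:5] += rng.uniform(20, 40, (5, 2))   # five corrupted ("nonlinear") pairs
T = estimate_mapping(src, dst)
print(np.allclose(T, true_T, atol=0.05))
```

Once T is recovered, warping one image through it yields the pixel-level alignment used by the later steps.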
Step S104: a first target block is intercepted from the first image and a second target block from the second image, where the number of singular points contained in each target block is not less than a preset number. Specifically, regions with discriminative power are sought. Since singular points carry rich feature information and usually cover most of the character areas and correction areas in a document image, several rectangular blocks of a fixed size are intercepted as candidate blocks for consistency detection, each containing no fewer than 30 singular points. Combined with the mapping matrix T obtained in step b, the candidate blocks are guaranteed to appear in pairs. Therefore, in the present application, the number of first target blocks is two or more, and the number of second target blocks is two or more.
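A minimal sketch of the candidate-block search follows. Only the "at least 30 singular points" rule comes from the text; the grid tiling, the 64-pixel block size, and the return format are illustrative assumptions:

```python
import numpy as np

def select_target_blocks(points, img_shape, block=64, min_pts=30):
    """Tile the image into fixed-size blocks and keep those containing
    at least `min_pts` singular points (30 in the embodiment)."""
    h, w = img_shape
    pts = np.asarray(points, dtype=float)
    blocks = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            inside = ((pts[:, 0] >= y) & (pts[:, 0] < y + block) &
                      (pts[:, 1] >= x) & (pts[:, 1] < x + block))
            if int(inside.sum()) >= min_pts:
                blocks.append((y, x, block, block))   # (row, col, height, width)
    return blocks

# 40 points clustered in the top-left corner: only that block qualifies.
pts = [(i, i) for i in range(40)]
print(select_target_blocks(pts, (128, 128)))   # [(0, 0, 64, 64)]
```

Applying the same block coordinates to both aligned images produces the paired first and second target blocks.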
Inputting the first target block and the second target block to a preset network module to output feature description vectors of the first target block and the second target block, calculating the similarity of the feature description vectors, and judging the consistency of the first image and the second image according to the similarity.
The step of inputting the first target block and the second target block to a preset network module to output the feature description vectors of the first target block and the second target block specifically comprises the steps of:
inputting the first target block and the second target block into a first network, an inter-network and a post-network in sequence;
the first network contains seven convolutional layers and four pooling layers. The method specifically comprises the following steps:
First network: it comprises 7 convolutional layers and 4 pooling layers. The input layer size is 64 × 64 pixels; the kernel sizes of convolutional layers Conv1-Conv7 are 3 × 3 × 1, 3 × 3 × 32, 3 × 3 × 64, 3 × 3 × 64, 3 × 3 × 128, 3 × 3 × 128 and 3 × 3 × 256 respectively, all with stride 1. All pooling layers use max pooling with stride 2. The first-network layer parameters are shown in Table 1 below. The output image is X.
Table 1 (first-network layer parameters)
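The layer table is only available as an image in the source, so the shape trace below reconstructs plausible feature-map sizes under stated assumptions: 'same' padding for the 3×3 stride-1 convolutions, a 2×2 stride-2 max pool after Conv1, Conv2, Conv4 and Conv6, and 256 output channels for Conv7 — none of which are confirmed by the text:

```python
def first_network_shapes(size=64):
    """Trace (height, width, channels) through the 7-conv / 4-pool
    first network. Channel counts follow the kernel depths given in
    the text (3x3x1 ... 3x3x256); pooling placement is assumed."""
    out_channels = [32, 64, 64, 128, 128, 256, 256]  # Conv1..Conv7 (last assumed)
    pool_after = {1, 2, 4, 6}                        # assumed pool positions
    shapes = []
    pools = 0
    for i, c in enumerate(out_channels, start=1):
        shapes.append((f"Conv{i}", size, size, c))   # 'same' conv keeps H x W
        if i in pool_after:
            size //= 2                               # 2x2 max pool, stride 2
            pools += 1
            shapes.append((f"Pool{pools}", size, size, c))
    return shapes

for name, h, w, c in first_network_shapes():
    print(f"{name}: {h}x{w}x{c}")
```

Under these assumptions the first network ends at a 4 × 4 × 256 feature map, which the inter-network then recalibrates.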
Inter-network: let the input image be X and the image obtained after the convolution transformation be Z, with X ∈ R^(W×H×C) and Z ∈ R^(W'×H'×C'). The transformation process is shown in formula (1):

zc = fc * X = Σ_{s=1}^{C} fc^s * x^s (1)

where fc is a three-dimensional spatial convolution kernel, fc^s is a two-dimensional spatial convolution kernel, and * represents the convolution operation; zc is the image of Z in a single channel. For the image Z, in order to obtain its global information, formula (2) performs average pooling to generate a pixel statistic t, t ∈ R^(C×1):

tc = (1 / (W' × H')) Σ_{i=1}^{W'} Σ_{j=1}^{H'} zc(i, j) (2)

Then t is activated to obtain the correlation among the channels. The activation operation is shown in formula (3):

l = σ2(g(t, w)) = σ2(w2 σ1(w1 t)) (3)

where σ1 represents the activation function ReLU, σ2 represents the activation function Sigmoid, and w1, w2 ∈ R^(C×C). The recalibrated image is then output:

xc' = G(zc, lc) = lc · zc (4)

where X' = [x1', x2', …, xC'] is the multi-channel image after feature recalibration, and G denotes the product function of the convolved image zc and the scalar lc. A 5 × 5 convolution kernel is applied to the image X' together with global average pooling (AvgP) and max pooling (MaxP) to add local feature information, yielding the final generated feature map F(X'), as shown in formula (5):

F(X') = σ2(f^(5×5)([AvgP(X'); MaxP(X')])) (5)
a schematic network structure diagram of the intermediate network is shown in fig. 2.
Post-network: the output feature map is flattened to generate an initial description vector. To further refine and simplify the feature vector, two fully connected layers reduce the dimension of the initial description vector. Finally, a normalization operation yields a feature description vector of unit length. The parameters are shown in Table 2 below:
Table 2 (post-network layer parameters)
Loss function
The cosine of the angle between the feature description vectors is used as the similarity of the target blocks, as shown in formula (6), where M1 and M2 are the feature description vectors of the two target blocks; the larger the cosine value, the smaller the angle between the feature description vectors and the more similar the target blocks. Since the vectors are normalized to unit length, the cosine equals the inner product:

R = cos<M1, M2> = M1^T · M2 (6)
Because all dimensions of the output feature description vector are positive, the target-block similarity R ∈ [0, 1]; with this range corresponding to the network labels, an error function is constructed based on cross entropy. The network channels are arranged in parallel, and the network is trained on this model so that the similarity between consistent target blocks tends to 1 and the similarity between inconsistent target blocks tends to 0.
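The similarity of formula (6) and the training objective can be sketched as follows; the patent does not state its exact error function, so standard binary cross entropy is used as a stand-in:

```python
import numpy as np

def similarity(m1, m2):
    """Formula (6): for unit-length descriptors the cosine similarity
    reduces to the inner product; all-positive dimensions keep it in [0, 1]."""
    return float(m1 @ m2)

def bce_loss(r, label):
    """Binary cross entropy pushing consistent pairs (label 1) toward
    similarity 1 and inconsistent pairs (label 0) toward 0."""
    eps = 1e-12
    r = min(max(r, eps), 1.0 - eps)
    return -(label * np.log(r) + (1 - label) * np.log(1.0 - r))

m = np.array([0.6, 0.8])                       # unit length, all positive
print(round(similarity(m, m), 6))               # identical descriptors: ~1.0
print(similarity(m, np.array([0.8, 0.6])))      # similar but not identical
print(bce_loss(0.99, 1) < bce_loss(0.50, 1))    # True: better match, lower loss
```

During training, the loss for a pair is evaluated on the similarity of the two descriptor vectors produced by the parallel network channels.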
Step S105: are the first target block and the second target block consistent? Specifically: the paired first and second target blocks to be detected obtained in step S104 are input into the trained network, which outputs their similarity. If any inconsistent block pair exists, the two answer sheet images are judged inconsistent, indicating that the examinee made an unauthorized change to the answer sheet, which assists in judging the examinee's violation.
If yes, go to step S106: and judging that the first image and the second image are consistent.
If not, go to step S107: determining that the first image and the second image are inconsistent.
The network structure of the consistency check network is schematically shown in fig. 3.
In response to the examination-time-end instruction, the first device photographs the test paper to obtain a first image, and the second device photographs the test paper to obtain a second image; the singular point positions of the first image and the second image are located, and pixel-level alignment of the two images is performed according to those positions; a first target block is intercepted from the first image and a second target block from the second image, where the number of singular points contained in each target block is not less than a preset number; whether the first target block is consistent with the second target block is compared, and if so, the first image is judged consistent with the second image; otherwise, the two images are judged inconsistent. In these steps, pixel-level alignment of the first image and the second image is achieved using singular points, and the number of singular points in each compared target block is not less than the preset number, which ensures the richness of information in the compared target blocks and thereby the accuracy of the image comparison result.
Referring to fig. 4, in the present embodiment, the storage device 400 includes, but is not limited to: a personal computer, a server, a general-purpose computer, a special-purpose computer, a network device, an embedded device, a programmable device, an intelligent mobile terminal, and the like. The steps of the consistency detection method for answer sheet document images of the online examination performed by the stored instructions are the same as in the embodiments above and are not repeated here.
It should be noted that, although the above embodiments have been described herein, the invention is not limited thereto. Therefore, based on the innovative concepts of the present invention, the technical solutions of the present invention can be directly or indirectly applied to other related technical fields by making changes and modifications to the embodiments described herein, or by using equivalent structures or equivalent processes performed in the content of the present specification and the attached drawings, which are included in the scope of the present invention.

Claims (8)

1. A consistency detection method for an image of an answer sheet document of an online examination is characterized by comprising the following steps:
acquiring a first image and a second image;
positioning singular point positions of the first image and the second image, and performing pixel level alignment on the first image and the second image according to the singular point positions;
intercepting a first target block of the first image, and intercepting a second target block of the second image, wherein the number of singular points contained in the first target block is not less than a preset number, and the number of singular points contained in the second target block is not less than the preset number;
comparing whether the first target block is consistent with the second target block, and if so, judging that the first image is consistent with the second image;
and if not, judging that the first image and the second image are inconsistent.
2. The method for detecting consistency of an image of an answer sheet document in an online examination, according to claim 1, wherein the step of locating the singular point position of the first image and the second image further comprises the steps of:
step 1: performing convolution operations on an input image with convolution kernels of different sizes to output l element sub-images with different degrees of blur, performing a difference operation between every two adjacent element sub-images to obtain (l-1) difference sub-images, traversing the pixel points in the middle layers of the difference sub-images, differencing each pixel point against its spatial neighborhood, and determining a potential singular point if the difference values of a certain pixel point are all positive or all negative;
step 2: down-sampling the input image, and repeating the step 1 until all potential singular points are found;
and step 3: comparing the absolute value of the second-order Taylor expansion of the difference function at each potential singular point with 0.025; if it is greater than 0.025, the point is retained, and if it is less than 0.025, the point is regarded as having low contrast and is removed from the potential singular points; the position coordinates of the singular points are then found through surface fitting.
3. The method according to claim 1, wherein said "pixel-level aligning said first image and said second image according to said singular point position" further comprises the steps of:
calibrating the characteristic information of the singular point, and scanning and pairing the singular point of the first image and the singular point of the second image according to the characteristic information;
calculating the transformation matrix between the first image and the second image by matrix calculation using the robust singular points;
and performing pixel-level alignment of the first image and the second image according to the transformation matrix.
4. The method for detecting the consistency of answer sheet document images in an online examination according to claim 1, wherein there are two or more first target blocks and two or more second target blocks.
5. The method for detecting the consistency of answer sheet document images in an online examination according to claim 1, wherein said comparing whether the first target block is consistent with the second target block further comprises:
inputting the first target block and the second target block into a preset network module to output feature description vectors of the first target block and the second target block, calculating the similarity of the feature description vectors, and determining the consistency of the first image and the second image according to the similarity.
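The similarity comparison in claim 5 can be sketched with cosine similarity between the two feature description vectors; the 0.9 decision threshold is an assumed value, since the claim specifies neither the similarity measure nor a threshold:

```python
import numpy as np

def consistency_score(vec_a, vec_b, threshold=0.9):
    """Cosine similarity between two feature description vectors.
    The blocks are judged consistent when the score reaches the threshold."""
    a = np.asarray(vec_a, dtype=float)
    b = np.asarray(vec_b, dtype=float)
    sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sim, sim >= threshold
```

Identical descriptors score 1.0 and pass; orthogonal descriptors score 0.0 and fail, flagging the image pair as inconsistent.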
6. The method for detecting the consistency of answer sheet document images in an online examination according to claim 3, wherein the steps of calculating a transformation matrix between the first image and the second image from the robust singular points, and performing pixel-level alignment of the first image and the second image according to the transformation matrix, specifically comprise:
generating a 256-dimensional feature vector for each singular point by using a histogram of directional derivatives, and pairing two singular points if the distance between their feature vectors is smaller than a specific threshold;
dividing the paired singular points into two sets, linear pairs and nonlinear pairs, according to their correspondence; filtering out the nonlinear pairs; determining a mapping matrix T from the linear pairs; evaluating the result with all singular point pairs and iterating until the calculation error of the point-pair mapping relation is less than 0.6%; obtaining the current corner positions by applying the mapping matrix T to the original corner points; and aligning the first image and the second image at the pixel level.
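A rough sketch of the pairing and iterative mapping estimation described above, assuming Euclidean descriptor distances, an affine mapping T estimated by least squares, and a median-based filter for the nonlinear (outlier) pairs — the patent fixes only the 0.6% error criterion, not these details:

```python
import numpy as np

def pair_by_descriptor(desc1, desc2, pts1, pts2, dist_thresh=0.7):
    """Pair singular points whose descriptors are closer than a threshold."""
    pairs = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < dist_thresh:
            pairs.append((pts1[i], pts2[j]))
    return pairs

def estimate_mapping(pairs, max_iter=50, err_thresh=0.006):
    """Least-squares affine mapping T, refined by dropping nonlinear pairs."""
    src = np.array([p for p, _ in pairs], dtype=float)
    dst = np.array([q for _, q in pairs], dtype=float)
    keep = np.ones(len(pairs), dtype=bool)
    for _ in range(max_iter):
        A = np.hstack([src[keep], np.ones((keep.sum(), 1))])  # rows [x y 1]
        T, *_ = np.linalg.lstsq(A, dst[keep], rcond=None)     # 3x2 affine map
        pred = np.hstack([src, np.ones((len(src), 1))]) @ T
        err = np.linalg.norm(pred - dst, axis=1)
        rel = err / (np.linalg.norm(dst, axis=1) + 1e-9)
        if rel[keep].mean() < err_thresh:       # error below 0.6% -> done
            break
        keep = rel < np.median(rel) + 1e-9      # filter the nonlinear pairs
    return T
```

With noise-free correspondences the first least-squares solve already satisfies the error criterion; with contaminated pairs the median filter progressively discards the nonlinear set before re-fitting T.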
7. The method for detecting the consistency of answer sheet document images in an online examination according to claim 5, wherein the step of inputting the first target block and the second target block into a preset network module and outputting feature description vectors of the first target block and the second target block comprises the following steps:
inputting the first target block and the second target block into a front network, an intermediate network and a rear network in sequence;
the front network contains seven convolutional layers and four pooling layers.
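As a sanity check on the stated layout, a small sketch traces feature-map sizes through seven convolutional layers and four pooling layers; the interleaving order and the 3x3-conv / 2x2-pool geometry are assumptions, since claim 7 gives only the layer counts:

```python
def feature_map_sizes(h, w, layers):
    """Trace spatial size through a stack of conv ('C') and pool ('P') layers.
    Assumes 3x3 convolutions with stride 1 and padding 1 (size-preserving)
    and 2x2 max pooling with stride 2 (halves each dimension)."""
    sizes = [(h, w)]
    for kind in layers:
        if kind == 'C':
            sizes.append((h, w))        # padded 3x3 conv keeps the size
        else:
            h, w = h // 2, w // 2       # 2x2 pooling halves the size
            sizes.append((h, w))
    return sizes

# One plausible front-network layout: 7 conv layers, 4 pooling layers
front = ['C', 'C', 'P', 'C', 'C', 'P', 'C', 'P', 'C', 'P', 'C']
```

For a 64x64 target block this layout ends at a 4x4 feature map (four halvings), which a later network stage could flatten into a fixed-length feature description vector.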
8. A storage device having a set of instructions stored therein, wherein the set of instructions is adapted to perform the steps of the method according to any one of claims 1 to 7.
CN202110102061.9A 2021-01-26 2021-01-26 Consistency detection method and storage device for online examination answer document images Active CN112818797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110102061.9A CN112818797B (en) 2021-01-26 2021-01-26 Consistency detection method and storage device for online examination answer document images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110102061.9A CN112818797B (en) 2021-01-26 2021-01-26 Consistency detection method and storage device for online examination answer document images

Publications (2)

Publication Number Publication Date
CN112818797A true CN112818797A (en) 2021-05-18
CN112818797B CN112818797B (en) 2024-03-01

Family

ID=75859588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110102061.9A Active CN112818797B (en) 2021-01-26 2021-01-26 Consistency detection method and storage device for online examination answer document images

Country Status (1)

Country Link
CN (1) CN112818797B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187570A (en) * 2022-07-27 2022-10-14 北京拙河科技有限公司 Singular traversal retrieval method and device based on DNN deep neural network

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010024514A1 (en) * 2000-03-22 2001-09-27 Shinichi Matsunaga Image processing device, singular spot detection method, and recording medium upon which singular spot detection program is recorded
CN1818927A (en) * 2006-03-23 2006-08-16 北京中控科技发展有限公司 Fingerprint identifying method and system
CN101145196A (en) * 2006-09-13 2008-03-19 中国科学院自动化研究所 Quick fingerprint identification method based on strange topology structure
CN101271577A (en) * 2008-03-28 2008-09-24 北京工业大学 Wrong-page on-line fast image detection method of binder
CN102855495A (en) * 2012-08-22 2013-01-02 苏州多捷电子科技有限公司 Method for implementing electronic edition standard answer, and application system thereof
WO2015132191A1 (en) * 2014-03-03 2015-09-11 Advanced Track & Trace Methods and devices for identifying and recognizing objects
CN109800692A (en) * 2019-01-07 2019-05-24 重庆邮电大学 A kind of vision SLAM winding detection method based on pre-training convolutional neural networks
CN110175506A (en) * 2019-04-08 2019-08-27 复旦大学 Pedestrian based on parallel dimensionality reduction convolutional neural networks recognition methods and device again
CN110879965A (en) * 2019-10-12 2020-03-13 中国平安财产保险股份有限公司 Automatic reading and amending method of test paper objective questions, electronic device, equipment and storage medium
CN110991374A (en) * 2019-12-10 2020-04-10 电子科技大学 Fingerprint singular point detection method based on RCNN
CN111986126A (en) * 2020-07-17 2020-11-24 浙江工业大学 Multi-target detection method based on improved VGG16 network


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GENG Li-chuan; SU Song-zhi et al.: "Perspective invariant binary feature descriptor based image matching algorithm", Journal on Communications, vol. 36, no. 4 *
FAN Hao; HAO Ning: "Transmission line fault location algorithm based on frequency-domain matching", Techniques of Automation and Applications, no. 10, 23 October 2020 (2020-10-23) *
ZHAO Qi et al.: "Fingerprint retrieval based on the orientation field of singular point regions", Microcomputer Information, pages 1 *


Also Published As

Publication number Publication date
CN112818797B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
EP3816928A1 (en) Image super-resolution reconstruction method, image super-resolution reconstruction apparatus, and computer-readable storage medium
CN104008538B (en) Based on single image super-resolution method
CN110991266B (en) Binocular face living body detection method and device
CN110400278B (en) Full-automatic correction method, device and equipment for image color and geometric distortion
CN103530599A (en) Method and system for distinguishing real face and picture face
CN109829924B (en) Image quality evaluation method based on principal feature analysis
CN111709980A (en) Multi-scale image registration method and device based on deep learning
CN108171735B (en) Billion pixel video alignment method and system based on deep learning
CN111915485B (en) Rapid splicing method and system for feature point sparse workpiece images
CN109685772B (en) No-reference stereo image quality evaluation method based on registration distortion representation
CN111401266A (en) Method, device, computer device and readable storage medium for positioning corner points of drawing book
CN108765476A (en) Polarized image registration method
CN112734832B (en) Method for measuring real size of on-line object in real time
CN113159158B (en) License plate correction and reconstruction method and system based on generation countermeasure network
CN104036468A (en) Super-resolution reconstruction method for single-frame images on basis of pre-amplification non-negative neighbor embedding
CN107154017A (en) A kind of image split-joint method based on SIFT feature Point matching
CN113012234A (en) High-precision camera calibration method based on plane transformation
Sun et al. Image adaptation and dynamic browsing based on two-layer saliency combination
CN116342519A (en) Image processing method based on machine learning
CN116580028A (en) Object surface defect detection method, device, equipment and storage medium
CN112818797A (en) Consistency detection method and storage device for answer sheet document images of online examination
CN111047618A (en) Multi-scale-based non-reference screen content image quality evaluation method
CN110321452A (en) A kind of image search method based on direction selection mechanism
CN112446926B (en) Relative position calibration method and device for laser radar and multi-eye fish-eye camera
CN113642397A (en) Object length measuring method based on mobile phone video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant