CN116486344A - Cheating identification method and device for drawing examination, electronic equipment and storage medium - Google Patents

Cheating identification method and device for drawing examination, electronic equipment and storage medium

Info

Publication number: CN116486344A
Application number: CN202310487998.1A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 陈浩 (Chen Hao)
Applicant and current assignee: Hubei Yikangsi Technology Co., Ltd.
Prior art keywords: image, target object, outline, examination, fisheye
Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)

Application filed by Hubei Yikangsi Technology Co., Ltd.; priority to CN202310487998.1A; publication of CN116486344A.
Classifications

    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06T5/80 Geometric correction
    • G06T7/12 Edge-based segmentation
    • G06T7/13 Edge detection
    • G06T7/85 Stereo camera calibration
    • G06V10/82 Image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20088 Trinocular vision calculations; trifocal tensor
    • G06V2201/07 Target detection
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The embodiment of the invention discloses a cheating identification method and device for a drawing examination, an electronic device and a storage medium, wherein the method comprises the following steps: acquiring, in real time, the handwriting track drawn by a target object in the drawing examination; if it is detected that the target object has finished the drawing examination, acquiring a fisheye image currently shot by a fisheye camera of the drawing desktop of the target object; acquiring a first drawing outline from the fisheye image, and generating a second drawing outline of the target object in the drawing examination according to the handwriting track; and identifying whether the target object has cheated according to the first drawing outline and the second drawing outline. The invention can accurately identify whether an examinee has cheated in a drawing-type online examination, ensuring the fairness and impartiality of the examination and achieving its real purpose.

Description

Cheating identification method and device for drawing examination, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of computer science, in particular to a cheating identification method, a device, electronic equipment and a storage medium for drawing examination.
Background
In recent years, online examination systems have gradually replaced traditional examinations thanks to advantages such as strong confidentiality, speed, accuracy, fairness, low cost and high efficiency, and are becoming the mainstream of future examinations. Because they are simple to install and operate, flexible in test-paper assembly, backed by strong question banks, easy to manage, capable of automatic marking, unrestricted by region, and safe and efficient, online examination systems have been widely applied to examinations in various industries. However, while examinees enjoy this convenience, the impersonation, cheating and similar phenomena of traditional examinations have gradually migrated into online examination systems.
Therefore, how to prevent cheating in an online examination system, ensure the fairness and impartiality of the examination, and realize the real purpose of the examination has become a technical problem to be solved urgently.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a cheating identification method and device for drawing examinations, an electronic device and a storage medium, which can accurately identify whether an examinee has cheated in an online drawing examination.
In order to solve the above problems, in a first aspect, an embodiment of the present invention provides a method for identifying cheating in a drawing examination, including:
acquiring handwriting tracks of a target object drawn in a drawing examination in real time;
if the target object is detected to finish the drawing examination, acquiring a fisheye image which is currently shot by a fisheye camera on a drawing desktop of the target object;
acquiring a first drawing outline in the fish-eye image, and generating a second drawing outline of the target object in the drawing examination according to the handwriting track;
and identifying whether the target object has cheated according to the first drawing outline and the second drawing outline.
In a second aspect, an embodiment of the present invention further provides a cheating identification apparatus for drawing examination, including:
the first acquisition unit is used for acquiring handwriting tracks of the target object drawn in the drawing examination in real time;
the second acquisition unit is used for acquiring a fisheye image which is currently shot by the fisheye camera on a drawing desktop of the target object if the target object is detected to complete the drawing examination;
the first generation unit is used for acquiring a first drawing outline in the fisheye image and generating a second drawing outline of the target object in the drawing examination according to the handwriting track;
And the identification unit is used for identifying whether the target object has cheated according to the first drawing outline and the second drawing outline.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the cheating identification method for drawing exams according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, where the computer readable storage medium stores a computer program, where the computer program when executed by a processor causes the processor to execute the cheating identification method for drawing exams according to the first aspect above.
The embodiments of the invention provide a cheating identification method and device for a drawing examination, an electronic device and a storage medium. The method acquires, in real time, the handwriting track drawn by the target object in the drawing examination; after the target object finishes the drawing examination, a fisheye camera shoots the target object's drawing desktop to obtain a fisheye image; a first drawing contour is acquired from the fisheye image, and a second drawing contour of the target object in the drawing examination is generated from the handwriting track acquired in real time. By comparing the first drawing contour with the second drawing contour, whether the target object has cheated in the drawing examination can be accurately identified, ensuring a fair and impartial examination for the target object.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a cheating identification method for drawing examination according to an embodiment of the present invention;
FIG. 2 is another flow chart of a cheating identification method for drawing exams according to an embodiment of the present invention;
FIG. 3 is another flow chart of a cheating identification method for drawing exams according to an embodiment of the present invention;
FIG. 4 is another flow chart of a cheating identification method for drawing exams according to an embodiment of the present invention;
FIG. 5 is another flow chart of a cheating identification method for drawing exams according to an embodiment of the present invention;
FIG. 6 is another flow chart of a cheating identification method for drawing exams according to an embodiment of the present invention;
FIG. 7 is another flow chart of a cheating identification method for drawing exams according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a cheating identification device for drawing exams according to an embodiment of the present invention;
Fig. 9 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for identifying cheating in a drawing examination according to an embodiment of the present invention. The cheating identification method for the drawing examination is applied to the terminal equipment and is executed through application software installed in the terminal equipment. The terminal equipment can be a desktop computer, a notebook computer, a tablet personal computer, a mobile phone and the like.
It should be noted that, the application scenario described in the embodiment of the present application is for more clearly describing the technical solution of the embodiment of the present application, and does not constitute a limitation on the technical solution provided in the embodiment of the present application, and as a person of ordinary skill in the art can know, with the appearance of the new application scenario, the technical solution provided in the embodiment of the present application is also applicable to similar technical problems.
The cheating identification method for drawing examination is described in detail below.
As shown in fig. 1, the method includes the following steps S110 to S140.
S110, acquiring handwriting tracks of the target object drawn in the drawing examination in real time.
In this embodiment, the target object is an examinee taking a drawing examination, and the handwriting track is the motion track with which the target object draws the corresponding image contour on the drawing desktop of an electronic device with a display screen, i.e. the set of coordinate points of that contour formed on the drawing desktop. By acquiring the handwriting track drawn by the target object in the drawing examination in real time, the image contour drawn by the target object on the drawing desktop can be generated. The drawing examination mentioned in this application may be an examination in drawing two-dimensional engineering drawings.
And S120, if the target object is detected to complete the drawing examination, acquiring a fisheye image which is currently shot on a drawing desktop of the target object by a fisheye camera.
Specifically, after the online examination system detects that the target object has completed the drawing examination, it can send instruction information identifying the drawing desktop where the target object is located to the fisheye camera monitoring the target object in the examination room; after receiving the instruction information, the fisheye camera shoots the drawing desktop of the target object to obtain a fisheye image. When the fisheye camera shoots the drawing desktop of the target object, the desktop displays the image drawn by the target object.
S130, acquiring a first drawing outline in the fish-eye image, and generating a second drawing outline of the target object in the drawing examination according to the handwriting track.
Specifically, the first drawing outline is the outline of the image drawn by the target object as shot by the fisheye camera on the target object's drawing desktop, and the second drawing outline is the outline of the image that the online examination system generates from the handwriting track acquired in real time during the drawing examination. By comparing the first drawing outline with the second drawing outline, whether the target object has cheated can be determined.
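The patent does not prescribe a particular comparison metric for the two outlines. As one plausible, purely illustrative sketch (function names and the 0.8 threshold are assumptions), the two contours can be rasterized into binary masks and compared by intersection-over-union (IoU):

```python
import numpy as np

def contour_iou(mask_a, mask_b):
    """Intersection-over-union of two binary drawing masks.

    mask_a / mask_b: boolean arrays of the same shape, True where
    the drawn stroke covers a pixel.
    """
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 1.0

def is_cheating(first_outline, second_outline, threshold=0.8):
    """Flag cheating when the camera-derived outline and the
    handwriting-track outline agree too little (IoU below threshold)."""
    return contour_iou(first_outline, second_outline) < threshold
```

If the desktop image was genuinely produced by the recorded handwriting, the two masks should largely coincide; a low IoU suggests the displayed drawing did not come from the captured strokes.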
In other embodiments of the invention, as shown in fig. 2, step S130 includes steps S131 and S132.
S131, carrying out distortion correction on the fisheye image to obtain a corrected fisheye image;
and S132, performing target detection on the corrected fisheye image to obtain the first drawing contour.
Specifically, due to the imaging principle of the fisheye camera, the fisheye images it shoots in the examination room exhibit large distortion, so distortion correction must be applied to the fisheye image before target detection so that the first drawing contour can be detected accurately.
In some embodiments, the application may employ a pre-trained fisheye image correction model to correct the distortion of a fisheye image, where the pre-trained fisheye image correction model may be constructed from a pre-trained generative adversarial network (GAN). The specific construction method comprises the following steps: acquiring a pre-trained generative adversarial network, wherein the generative adversarial network includes a discriminator and a generator; constructing a fisheye image correction model from the generator; using the generator as a teacher model and the fisheye image correction model as a student model; and performing knowledge distillation on the fisheye image correction model according to the generator to obtain a distilled fisheye image correction model. The distilled fisheye image correction model can quickly correct the distortion of a fisheye image.
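The teacher–student distillation described above can be sketched in miniature. In the following illustrative numpy example (an assumption, not the patent's implementation), a fixed linear "teacher" stands in for the GAN generator and a randomly initialized "student" is trained by gradient descent on a mean-squared distillation loss to mimic the teacher's outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Teacher": a fixed pre-trained mapping (stands in for the generator).
W_teacher = np.array([[2.0, -1.0], [0.5, 3.0]])
def teacher(x):
    return x @ W_teacher

# "Student": a smaller/compressed model distilled to mimic the teacher.
W_student = rng.normal(size=(2, 2)) * 0.1

X = rng.normal(size=(256, 2))   # unlabeled inputs (e.g. image features)
lr = 0.05
for _ in range(500):
    pred_t = teacher(X)          # first predicted value (teacher output)
    pred_s = X @ W_student       # second predicted value (student output)
    # gradient of the MSE distillation loss w.r.t. the student weights
    grad = X.T @ (pred_s - pred_t) / len(X)
    W_student -= lr * grad

distill_loss = float(np.mean((X @ W_student - teacher(X)) ** 2))
```

After training, the student reproduces the teacher's predictions with a tiny fraction of a real generator's parameter count, which is the point of the compression scheme described here.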
In this embodiment, the generative adversarial network is trained in advance on image samples and is used for correcting distorted images. The pre-trained generative adversarial network includes a discriminator and a generator, and the generator can be used to correct a distorted image. However, the generator's parameter count is large, resulting in a heavy computational load on the device. Therefore, to increase the operation speed of the device, after the pre-trained generative adversarial network is acquired, the generator in it is used as a teacher model to construct a student model of the fisheye correction model.
The teacher model is a single complex network or a set of a plurality of networks and has good performance and generalization capability, the student model is a model with small network scale and limited expression capability, and the teacher model has strong learning capability and can transfer learned knowledge to a student model with relatively weak learning capability, so that the generalization capability of the student model is enhanced. According to the method and the device, training of the student model is assisted by utilizing the teacher model, so that the student model has the performance equivalent to that of the teacher model, but the parameter quantity is greatly reduced, and model compression and acceleration are achieved.
In the process of constructing the fisheye image correction model by adopting the generator, the fisheye image correction model can be constructed by pruning, parameter sharing and other modes of the generator, and the fisheye image correction model is assisted to train by adopting the generator, so that the knowledge learned by the generator is migrated into the fisheye image correction model, and the distilled fisheye image correction model has the same image correction function as the generator.
In some embodiments, the generator is used to construct the fisheye image correction model through the following steps: performing network parameter pruning on the generator to obtain a middle fisheye image correction model; performing knowledge distillation on the middle fisheye image correction model to obtain a distilled middle fisheye image correction model; and performing network parameter pruning on the distilled middle fisheye image correction model to obtain the final fisheye image correction model.
In this embodiment, the middle fisheye image correction model is also a student model of the generator, but the network parameters of the middle fisheye image correction model are still relatively large, so after knowledge distillation is performed on the middle fisheye image correction model, at least one parameter pruning needs to be performed on the middle fisheye image correction model until the parameter amount of the final fisheye image correction model reaches a minimum.
In the process of performing network parameter pruning on the distilled middle fisheye image correction model, image correction models with different parameter numbers can be obtained, and then performing distillation and pruning again until the parameter amount of the finally pruned fisheye image correction model reaches the minimum.
Before parameter pruning is performed on the generator, the network structure of the generator needs to be constructed separately and the generator's weights loaded, so that the generator can be stripped from the generative adversarial network. The number of pruning passes on the generator is related to the generator's minimum functional unit. Likewise, when pruning the generator's parameters, pruning may be performed in units of the generator's basic building blocks. For example, when the network structure of the generator is a ResNet, pruning may be performed in units of ResBlocks.
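The patent prunes in structural units such as ResBlocks; as a simplified stand-in for parameter pruning in general, the sketch below performs unstructured magnitude pruning on a single weight array (an assumed criterion — the patent does not prescribe how weights are selected for removal):

```python
import numpy as np

def prune_weights(w, keep_ratio):
    """Magnitude-based parameter pruning: zero out the smallest weights,
    keeping only the top `keep_ratio` fraction by absolute value."""
    flat = np.abs(w).ravel()
    k = max(1, int(len(flat) * keep_ratio))
    # threshold = magnitude of the k-th largest weight
    thresh = np.partition(flat, -k)[-k]
    return np.where(np.abs(w) >= thresh, w, 0.0)
```

In a real compression pipeline the surviving weights would then be fine-tuned (here, by the distillation step) before the next pruning pass.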
In other embodiments, after the middle fisheye image correction model M′_g finishes training, a preset test sample set can be used to test M′_g and calculate its first accuracy acc₁ on the test set; at the same time, the same test sample set is used to test the generator M_g and calculate its second accuracy acc₂. The difference Δacc = acc₁ − acc₂ between the first accuracy and the second accuracy is then calculated; if Δacc is larger than a preset first threshold Thr, the middle fisheye image correction model can be directly used as the final fisheye image correction model. The judgment formula is:

S = 1 if Δacc > Thr, otherwise S = 0.

When S = 1, pruning and training can be stopped, and the middle fisheye image model after the last pruning is taken as the final fisheye image correction model.
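The stopping criterion can be written out directly. Note that the machine translation leaves the sign convention of Δacc ambiguous; the sketch below follows the text literally (first accuracy minus second accuracy, stop when the difference exceeds Thr), which should be treated as an assumption:

```python
def should_stop_pruning(acc_student, acc_teacher, thr):
    """Judgment S from the text: delta_acc = first accuracy (pruned
    student model on the test set) minus second accuracy (generator).
    Returns 1 -> stop pruning and keep the current model as final."""
    delta_acc = acc_student - acc_teacher
    return 1 if delta_acc > thr else 0
```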
In some embodiments, the specific step of knowledge-distilling the fisheye image correction model with the generator may include: acquiring a first predicted value output by a generator and a second predicted value output by a fisheye image correction model; and determining a fisheye image correction model after distillation according to the first predicted value and the second predicted value.
In this embodiment, when the generator is used to perform knowledge distillation on the fisheye image correction model, the first image sample set formed from the training images and the distorted training images may be input to train the fisheye image model, which outputs the second predicted value, while the corresponding prediction, i.e. the first predicted value, is obtained from the generator; whether the knowledge distillation of the fisheye image correction model is complete is then determined from the deviation between the first predicted value and the second predicted value.
Specifically, the first predicted value and the second predicted value at time t are obtained, the deviation between them is computed, and whether the deviation is larger than a preset second threshold is judged; if it is, the fisheye image correction model at time t−1 is taken as the final fisheye image correction model after knowledge distillation.
In some embodiments, the specific training process of the generative adversarial network may be: processing the training image with the internal parameters and distortion coefficients of the fisheye camera to obtain a distorted training image; and training the generative adversarial network on the training image and the distorted training image to obtain the trained generative adversarial network.
Specifically, the training image is an undistorted high-resolution image, and the distorted training image can be obtained by applying the distortion mapping formed by the internal parameters and distortion coefficients of the fisheye camera. The training image and the distorted training image serve as a first image sample set for training the generative adversarial network, with the distorted training image as the network's input and the training image as its label.
The internal parameters and distortion coefficients of the fisheye camera can be obtained with a checkerboard calibration method: the fisheye camera shoots a calibration checkerboard from multiple angles and positions, and a fisheye calibration algorithm computes the internal parameters and distortion coefficients. The matrix K of internal parameters may be:

K = | f_x  0    c_x |
    | 0    f_y  c_y |
    | 0    0    1   |

The vector D of distortion coefficients may be:

D = (k₁, k₂, k₃, k₄)

where f_x and f_y are the focal-length parameters, c_x and c_y are the horizontal and vertical offsets of the image origin relative to the imaging point of the optical center, and k₁, k₂, k₃, k₄ are the radial and tangential distortion coefficients of the camera.
In some embodiments, the specific process of generating the distorted training image includes:

calculating the corrected camera internal reference matrix R from K and D, where R = f_e(K, D);

decomposing the camera internal reference matrix R by singular value decomposition to obtain its inverse matrix iR, where iR = SVD(R);

converting the two-dimensional coordinates (u, v) of the training image into the camera coordinate system (x, y, z) according to the inverse matrix iR, where (x, y, z) = (u, v, 1)·iR;

normalizing along the z-axis, i.e. x ← x/z, y ← y/z;

calculating the radius r of the cross-section of the fisheye hemisphere, where r = √(x² + y²);

calculating the incidence angle θ between the light ray and the optical axis, where θ = atan(r);

correcting the incidence angle θ to obtain the corrected incidence angle θ_d, where θ_d = θ(1 + k₁θ² + k₂θ⁴ + k₃θ⁶ + k₄θ⁸);

generating the corrected camera coordinate system coordinates (x′, y′) from the corrected incidence angle, where x′ = (θ_d/r)·x and y′ = (θ_d/r)·y;

converting from the camera coordinate system to the pixel coordinate system (u′, v′), i.e. (u′, v′) are the two-dimensional coordinates of the distorted training image, where u′ = f_x·x′ + c_x and v′ = f_y·y′ + c_y.
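The per-pixel mapping described in the steps above can be sketched as follows. This is an illustrative implementation of the standard equiangular fisheye model: the intrinsics and distortion coefficients are hypothetical values, and the direct inverse of K is used in place of the SVD-derived iR:

```python
import numpy as np

# Hypothetical intrinsics and distortion coefficients, for illustration only.
fx, fy, cx, cy = 300.0, 300.0, 320.0, 240.0
k1, k2, k3, k4 = 0.1, 0.01, 0.001, 0.0001

def distort_pixel(u, v):
    """Map an undistorted pixel (u, v) to its fisheye-distorted
    position (u', v') following the steps above."""
    # back-project to normalized camera coordinates (inverse of K)
    x = (u - cx) / fx
    y = (v - cy) / fy
    r = np.hypot(x, y)          # radius of the hemisphere cross-section
    if r == 0.0:                # the principal point maps to itself
        return cx, cy
    theta = np.arctan(r)        # incidence angle with the optical axis
    theta_d = theta * (1 + k1*theta**2 + k2*theta**4
                         + k3*theta**6 + k4*theta**8)
    scale = theta_d / r
    xd, yd = x * scale, y * scale    # corrected camera coordinates
    return fx * xd + cx, fy * yd + cy  # project back through K
```

Applying this mapping to every pixel of an undistorted training image yields the distorted training image used as GAN input; points away from the principal point are pulled inward, as expected of a fisheye lens.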
In some embodiments, the specific process of training the generative adversarial network with the training image and the distorted training image includes: inputting the distorted training image into the generator to obtain a pseudo-undistorted image; constructing a second image sample set from the training image and the pseudo-undistorted image; training the discriminator on the second image sample set to obtain a trained discriminator; and training the generator on the distorted training image to obtain a trained generator.
In this embodiment, the pseudo-undistorted image is obtained by correcting the distorted training image through the generator, and the second image sample set is composed of the training image and the pseudo-undistorted image and is used for training the discriminator, so that the discriminator can better distinguish the pseudo-image and the real image.
In addition, after the discriminator finishes training, it is frozen, and the distorted training image is input into the generator to train the generator until the discriminator can no longer tell the generator's output apart as a pseudo image, yielding the fully trained generator.
In other inventive embodiments, as shown in fig. 3, step S132 includes steps S1321, S1322, and S1323.
S1321, carrying out background segmentation on the corrected fisheye image to obtain the fisheye image after background segmentation;
S1322, cutting the fish-eye image after background segmentation into a plurality of small block images, and extracting the characteristics of each small block image to obtain the characteristic information of each small block image;
S1323, generating the first drawing outline according to the characteristic information of each small image.
Specifically, the background segmentation mainly separates the image drawn on the drawing desktop where the target object is located from the fisheye image. After the background segmentation is completed, the corrected fisheye image is cut into a plurality of small block images, and feature extraction is performed on the small block images to obtain the feature information of each small block image; the feature information of each small block image is then fed through a multi-layer neural network to predict the first drawing outline in the fisheye image.
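Cutting the segmented image into small block images can be sketched as follows; the patch size is an illustrative choice, as the application does not specify one:

```python
import numpy as np

def split_into_patches(img, patch=32):
    """Cut a grayscale H x W image into non-overlapping patch x patch tiles,
    zero-padding the right/bottom edges when H or W is not a multiple of patch."""
    h, w = img.shape
    img = np.pad(img, ((0, -h % patch), (0, -w % patch)), mode="constant")
    rows, cols = img.shape[0] // patch, img.shape[1] // patch
    tiles = img.reshape(rows, patch, cols, patch).swapaxes(1, 2)
    return tiles.reshape(rows * cols, patch, patch)
```

Each returned tile can then be passed independently through the feature-extraction network.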
In other inventive embodiments, as shown in fig. 4, step S1321 includes steps S13211, S13212, and S13213.
S13211, carrying out gray processing on the corrected fisheye image to obtain the fisheye image after gray processing;
S13212, determining an optimal threshold for segmenting the background of the grayscale-processed fisheye image;
S13213, performing background segmentation on the grayscale-processed fisheye image according to the optimal threshold to obtain the fisheye image after background segmentation.
In this embodiment, the corrected fisheye image is converted into a grayscale image and the colors are inverted, so that the background of the fisheye image is dark and the foreground of the drawn image is bright. The optimal threshold for background segmentation is determined by an image binarization method; meanwhile, a truncation method may be applied to the RGB image to separate the image drawn by the target object on the drawing desktop, thereby achieving background segmentation of the fisheye image.
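One common binarization method that fits this description is Otsu's method, which picks the threshold maximizing between-class variance. This is a generic sketch, not necessarily the exact procedure of the application:

```python
import numpy as np

def otsu_threshold(gray):
    """Find the threshold that maximizes between-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    w_b = sum_b = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]                      # background pixel count
        if w_b == 0:
            continue
        w_f = total - w_b                   # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                   # background mean
        m_f = (sum_all - sum_b) / w_f       # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def segment_background(gray):
    t = otsu_threshold(gray)
    return (gray > t).astype(np.uint8)      # 1 = foreground (drawing), 0 = background
```

With the inverted grayscale image described above (dark background, bright drawing), the resulting mask keeps only the drawn strokes.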
In other inventive embodiments, as shown in fig. 5, step S1322 includes steps S13221 and S13222.
S13221, performing convolution operation on each small image to obtain convolution characteristics of each small image;
S13222, performing pooling operation on the convolution characteristics to obtain characteristic information of each small image.
In this embodiment, a convolution operation may be performed on each small image using, but not limited to, a ResNet50 network to obtain a vector matrix corresponding to the shallow features of each small image; a pooling operation is then performed on this vector matrix to obtain a vector matrix corresponding to the deep features of each small image, that is, the feature information of each small image.
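A minimal numpy sketch of the convolution-then-pooling step follows; ResNet50 itself is far deeper, and this only illustrates the two operations on a single grayscale patch:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a grayscale patch with a kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(feat, size=2):
    """Non-overlapping size x size max pooling (edges truncated)."""
    h, w = feat.shape
    h, w = h - h % size, w - w % size
    blocks = feat[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))
```

In a real network the kernel weights are learned; here a fixed kernel suffices to show the shape transformations.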
And S140, identifying whether the target object is cheated or not according to the first drawing outline and the second drawing outline.
Specifically, after the first drawing outline and the second drawing outline are obtained, the two outlines can be compared to determine whether the target object has cheated in the drawing examination.
In other embodiments of the invention, as shown in fig. 6, step S140 includes steps S141 and S142.
S141, obtaining the similarity between the first drawing outline and the second drawing outline;
S142, determining whether the target object is cheated or not according to the similarity.
In this embodiment, the similarity between the first drawing outline and the second drawing outline is calculated to determine whether the target object has cheated in the drawing examination. When the similarity between the first drawing outline and the second drawing outline is lower than a preset threshold, it is determined that the target object has cheated in the drawing examination; when the similarity is higher than the preset threshold, it can be determined that the target object has not cheated. The similarity between the first drawing outline and the second drawing outline can be calculated using cosine similarity, the Pearson correlation coefficient, Euclidean distance, Jaccard distance, or other measures.
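Taking cosine similarity as one of the listed options, the comparison against a preset threshold can be sketched as follows; the 0.85 threshold is an illustrative placeholder, not a value from this application:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two contour feature vectors."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def is_cheating(contour_a, contour_b, threshold=0.85):
    """Flag cheating when the two contour feature vectors diverge too much."""
    return cosine_similarity(contour_a, contour_b) < threshold
```

Identical outlines score 1.0 and pass; dissimilar outlines fall below the threshold and are flagged.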
In other inventive embodiments, as shown in fig. 7, step S140 includes steps S1401, S1402, and S1403.
S1401, determining a reference base point in the drawing desktop;
S1402, obtaining a first interval between the reference base point and the first drawing outline and a second interval between the reference base point and the second drawing outline;
S1403, determining whether the target object is cheated according to the first interval and the second interval.
In this embodiment, the reference base point is the origin of coordinates on the drawing desktop where the target object is located. When calculating the first distance between the reference base point and the first drawing outline, target detection may be performed on the fisheye image again to obtain the feature information of the reference base point, and the first distance is calculated from this feature information. When calculating the second distance between the reference base point and the second drawing outline, the coordinates of the reference base point may be obtained in advance, the key point in the second drawing outline closest to the reference base point is found, and the distance between this key point and the reference base point is taken as the second distance. After the two distances are obtained, whether the target object has cheated in the drawing examination can be determined by checking whether the error between them is within a preset range.
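The distance check against the reference base point might look like this sketch; the pixel tolerance is an assumed parameter, not one specified by the application:

```python
import numpy as np

def min_distance_to_contour(base_point, contour_points):
    """Distance from the reference base point to the nearest contour key point."""
    pts = np.asarray(contour_points, dtype=float)
    base = np.asarray(base_point, dtype=float)
    return float(np.min(np.linalg.norm(pts - base, axis=1)))

def consistent_distances(d1, d2, tol=5.0):
    """No cheating flagged when the two distances agree within `tol` pixels."""
    return abs(d1 - d2) <= tol
```

The first distance would come from detection on the fisheye image, the second from the handwriting-track outline; `consistent_distances` then applies the preset error range.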
It should be noted that the present application preferably first determines whether the target object has cheated in the drawing examination from the similarity between the first drawing outline and the second drawing outline; if no cheating is found, steps S1401, S1402 and S1403 may then be executed to further determine whether the target object has cheated, so as to identify cheating by the examinee more accurately.
In the cheating identification method for the drawing examination provided by the embodiment of the invention, the handwriting track of the target object drawn in the drawing examination is obtained in real time; if the target object is detected to finish the drawing examination, acquiring a fisheye image which is currently shot by a fisheye camera on a drawing desktop of the target object; acquiring a first drawing outline in the fish-eye image, and generating a second drawing outline of the target object in the drawing examination according to the handwriting track; and identifying whether the target object is cheated or not according to the first drawing outline and the second drawing outline. The method can accurately identify whether the examinee has the cheating behavior in the drawing-type online examination, ensures the fairness and impartiality of the examination, and achieves the real purpose of the examination.
The embodiment of the invention also provides a cheating identification device 100 for drawing examination, which is used for executing any embodiment of the cheating identification method for drawing examination.
In particular, referring to fig. 8, fig. 8 is a schematic block diagram of a cheating identification apparatus 100 for drawing exams according to an embodiment of the present invention.
As shown in fig. 8, the cheating identification device 100 for drawing examination includes: the first acquisition unit 110, the second acquisition unit 120, the first generation unit 130, and the identification unit 140.
The first obtaining unit 110 is configured to obtain, in real time, a handwriting track of a drawing of the target object in the drawing examination.
The second obtaining unit 120 is configured to obtain, if it is detected that the target object completes the drawing examination, a fisheye image currently captured by the fisheye camera on the drawing desktop of the target object.
The first generating unit 130 is configured to obtain a first drawing outline in the fisheye image, and generate a second drawing outline of the target object in the drawing examination according to the handwriting track.
In other inventive embodiments, the first generating unit 130 includes: and a correction unit and a detection unit.
The correcting unit is used for carrying out distortion correction on the fisheye image to obtain a corrected fisheye image; and the detection unit is used for carrying out target detection on the corrected fisheye image to obtain the first drawing outline.
In other inventive embodiments, the detection unit comprises: the device comprises a first dividing unit, a clipping unit and a second generating unit.
The first segmentation unit is used for carrying out background segmentation on the corrected fisheye image to obtain the fisheye image after background segmentation; the cutting unit is used for cutting the fish-eye image after background segmentation into a plurality of small block images, and extracting the characteristics of each small block image to obtain the characteristic information of each small block image; and the second generation unit is used for generating the first drawing outline according to the characteristic information of each small image.
In other inventive embodiments, the first dividing unit includes: the device comprises a processing unit, a first determining unit and a second dividing unit.
The processing unit is used for carrying out gray processing on the corrected fisheye image to obtain the fisheye image after gray processing; a first determining unit, configured to determine an optimal threshold for segmenting the background of the grayscale-processed fisheye image; and the second segmentation unit is used for carrying out background segmentation on the fisheye image subjected to gray level processing according to the optimal threshold value to obtain the fisheye image subjected to background segmentation.
In other inventive embodiments, the second generating unit includes: a convolution unit and a pooling unit.
The convolution unit is used for carrying out convolution operation on each small image to obtain the convolution characteristic of each small image; and the pooling unit is used for pooling the convolution characteristics to obtain the characteristic information of each small image.
And the identifying unit 140 is configured to identify whether the target object is cheated according to the first drawing outline and the second drawing outline.
In other inventive embodiments, the identification unit 140 comprises: and a third acquisition unit and a second determination unit.
A third obtaining unit configured to obtain a similarity between the first drawing contour and the second drawing contour; and the second determining unit is used for determining whether the target object is cheated or not according to the similarity.
In other inventive embodiments, the identification unit 140 comprises: a third determination unit, a fourth acquisition unit, and a fourth determination unit.
A third determining unit, configured to determine a reference base point in the drawing desktop; a fourth acquisition unit configured to acquire a first pitch between the reference base point and the first drawing contour and a second pitch between the reference base point and the second drawing contour; and the fourth determining unit is used for determining whether the target object is cheated or not according to the first interval and the second interval.
The cheating identification device 100 for drawing examination provided by the embodiment of the invention is used for executing the real-time acquisition of the handwriting track of the target object drawn in the drawing examination; if the target object is detected to finish the drawing examination, acquiring a fisheye image which is currently shot by a fisheye camera on a drawing desktop of the target object; acquiring a first drawing outline in the fish-eye image, and generating a second drawing outline of the target object in the drawing examination according to the handwriting track; and identifying whether the target object is cheated or not according to the first drawing outline and the second drawing outline.
It should be noted that, as those skilled in the art can clearly understand, the specific implementation process of the cheating identification apparatus 100 and each unit in the drawing examination can refer to the corresponding description in the foregoing method embodiment, and for convenience and brevity of description, the detailed description is omitted here.
The cheating identification means of drawing examinations described above may be implemented in the form of a computer program that is executable on an electronic device as shown in fig. 9.
Referring to fig. 9, fig. 9 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Referring to fig. 9, the device 500 includes a processor 502, a memory, and a network interface 505, which are connected by a system bus 501, wherein the memory may include a storage medium 503 and an internal memory 504.
The storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, may cause the processor 502 to perform a cheating identification method for drawing tests.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall device 500.
The internal memory 504 provides an environment for the execution of a computer program 5032 in the non-volatile storage medium 503, which computer program 5032, when executed by the processor 502, causes the processor 502 to perform a method of cheating identification for drawing examinations.
The network interface 505 is used for network communication, such as providing for transmission of data information, etc. It will be appreciated by those skilled in the art that the structure shown in fig. 9 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the apparatus 500 to which the present inventive arrangements are applied, and that a particular apparatus 500 may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Wherein the processor 502 is configured to execute a computer program 5032 stored in a memory to perform the following functions: acquiring handwriting tracks of a target object drawn in a drawing examination in real time; if the target object is detected to finish the drawing examination, acquiring a fisheye image which is currently shot by a fisheye camera on a drawing desktop of the target object; acquiring a first drawing outline in the fish-eye image, and generating a second drawing outline of the target object in the drawing examination according to the handwriting track; and identifying whether the target object is cheated or not according to the first drawing outline and the second drawing outline.
In an embodiment, the processor 502 further specifically implements the following steps when implementing the acquiring the first drawing contour in the fisheye image: carrying out distortion correction on the fisheye image to obtain a corrected fisheye image; and performing target detection on the corrected fish-eye image to obtain the first drawing outline.
In an embodiment, when the processor 502 performs the target detection on the corrected fisheye image to obtain the first drawing contour, the following steps are specifically further implemented: performing background segmentation on the corrected fisheye image to obtain the fisheye image after background segmentation; cutting the fish-eye image after background segmentation into a plurality of small block images, and extracting the characteristics of each small block image to obtain the characteristic information of each small block image; and generating the first drawing outline according to the characteristic information of each small image.
In an embodiment, when the processor 502 performs the background segmentation on the corrected fisheye image to obtain the fisheye image after the background segmentation, the following steps are specifically further implemented: gray processing is carried out on the corrected fisheye image, and the fisheye image after gray processing is obtained; determining an optimal threshold for segmenting the background of the grayscale-processed fisheye image; and carrying out background segmentation on the fisheye image subjected to gray level processing according to the optimal threshold value to obtain the fisheye image subjected to background segmentation.
In an embodiment, when the processor 502 performs the feature extraction on each of the small images to obtain feature information of each of the small images, the following steps are specifically further implemented: performing convolution operation on each small image to obtain convolution characteristics of each small image; and carrying out pooling operation on the convolution characteristics to obtain the characteristic information of each small image.
In an embodiment, when implementing the identifying whether the target object is cheating according to the first drawing outline and the second drawing outline, the processor 502 specifically further implements the following steps: obtaining the similarity between the first drawing outline and the second drawing outline; and determining whether the target object is cheated or not according to the similarity.
In an embodiment, when implementing the identifying whether the target object is cheating according to the first drawing outline and the second drawing outline, the processor 502 specifically further implements the following steps: determining a reference base point in the drawing desktop; acquiring a first interval between the reference base point and the first drawing outline and a second interval between the reference base point and the second drawing outline; and determining whether the target object is cheated or not according to the first interval and the second interval.
Those skilled in the art will appreciate that the embodiment of the apparatus 500 shown in fig. 9 does not limit the specific construction of the apparatus 500; in other embodiments, the apparatus 500 may include more or fewer components than illustrated, certain components may be combined, or the components may be arranged differently. For example, in some embodiments, the device 500 may include only the memory and the processor 502; in such embodiments, the structure and function of the memory and the processor 502 are consistent with the embodiment shown in fig. 9 and will not be described herein.
It should be appreciated that in an embodiment of the invention, the processor 502 may be a central processing unit (Central Processing Unit, CPU); the processor 502 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In another embodiment of the invention, a computer storage medium is provided. The storage medium may be a nonvolatile computer-readable storage medium or a volatile storage medium. The storage medium stores a computer program 5032, wherein the computer program 5032 when executed by the processor 502 performs the steps of: acquiring handwriting tracks of a target object drawn in a drawing examination in real time; if the target object is detected to finish the drawing examination, acquiring a fisheye image which is currently shot by a fisheye camera on a drawing desktop of the target object; acquiring a first drawing outline in the fish-eye image, and generating a second drawing outline of the target object in the drawing examination according to the handwriting track; and identifying whether the target object is cheated or not according to the first drawing outline and the second drawing outline.
In an embodiment, when the processor executes the program instructions to implement the acquiring the first drawing contour in the fisheye image, the method specifically further includes the following steps: carrying out distortion correction on the fisheye image to obtain a corrected fisheye image; and performing target detection on the corrected fish-eye image to obtain the first drawing outline.
In an embodiment, when the processor executes the program instructions to implement the target detection on the corrected fisheye image to obtain the first drawing contour, the method specifically further includes the following steps: performing background segmentation on the corrected fisheye image to obtain the fisheye image after background segmentation; cutting the fish-eye image after background segmentation into a plurality of small block images, and extracting the characteristics of each small block image to obtain the characteristic information of each small block image; and generating the first drawing outline according to the characteristic information of each small image.
In an embodiment, when the processor executes the program instruction to perform the background segmentation on the corrected fisheye image to obtain the fisheye image after the background segmentation, the method specifically further includes the following steps: gray processing is carried out on the corrected fisheye image, and the fisheye image after gray processing is obtained; determining an optimal threshold for segmenting the background of the grayscale-processed fisheye image; and carrying out background segmentation on the fisheye image subjected to gray level processing according to the optimal threshold value to obtain the fisheye image subjected to background segmentation.
In an embodiment, when the processor executes the program instructions to implement the feature extraction on each of the small images to obtain feature information of each of the small images, the processor specifically further implements the following steps: performing convolution operation on each small image to obtain convolution characteristics of each small image; and carrying out pooling operation on the convolution characteristics to obtain the characteristic information of each small image.
In an embodiment, when the processor executes the program instructions to implement the identifying whether the target object is cheating according to the first drawing outline and the second drawing outline, the method specifically further includes the following steps: obtaining the similarity between the first drawing outline and the second drawing outline; and determining whether the target object is cheated or not according to the similarity.
In an embodiment, when the processor executes the program instructions to implement the identifying whether the target object is cheating according to the first drawing outline and the second drawing outline, the method specifically further includes the following steps: determining a reference base point in the drawing desktop; acquiring a first interval between the reference base point and the first drawing outline and a second interval between the reference base point and the second drawing outline; and determining whether the target object is cheated or not according to the first interval and the second interval.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus, device and unit described above may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein. Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the units is merely a logical function division, there may be another division manner in actual implementation, or units having the same function may be integrated into one unit, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices, or elements, or may be an electrical, mechanical, or other form of connection.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment of the present invention.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units may be stored in a storage medium if implemented in the form of software functional units and sold or used as stand-alone products. Based on such understanding, the technical solution of the present invention may be essentially or a part contributing to the prior art, or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing an apparatus 500 (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions of equivalents may be made and equivalents will be apparent to those skilled in the art without departing from the scope of the invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. The cheating identification method for drawing examination is characterized by comprising the following steps:
acquiring handwriting tracks of a target object drawn in a drawing examination in real time;
if the target object is detected to finish the drawing examination, acquiring a fisheye image which is currently shot by a fisheye camera on a drawing desktop of the target object;
acquiring a first drawing outline in the fish-eye image, and generating a second drawing outline of the target object in the drawing examination according to the handwriting track;
and identifying whether the target object is cheated or not according to the first drawing outline and the second drawing outline.
2. The method for cheating identification in drawing exams according to claim 1, wherein said obtaining a first drawing profile in said fish-eye image comprises:
Carrying out distortion correction on the fisheye image to obtain a corrected fisheye image;
and performing target detection on the corrected fish-eye image to obtain the first drawing outline.
3. The method for identifying cheating in a drawing examination according to claim 2, wherein the performing object detection on the corrected fisheye image to obtain the first drawing outline includes:
performing background segmentation on the corrected fisheye image to obtain the fisheye image after background segmentation;
cutting the fish-eye image after background segmentation into a plurality of small block images, and extracting the characteristics of each small block image to obtain the characteristic information of each small block image;
and generating the first drawing outline according to the characteristic information of each small image.
4. The method for identifying cheating in a drawing examination according to claim 3, wherein the performing background segmentation on the corrected fisheye image to obtain the fisheye image after background segmentation comprises:
gray processing is carried out on the corrected fisheye image, and the fisheye image after gray processing is obtained;
determining an optimal threshold for segmenting the background of the grayscale-processed fisheye image;
And carrying out background segmentation on the fisheye image subjected to gray level processing according to the optimal threshold value to obtain the fisheye image subjected to background segmentation.
5. The method for identifying cheating in drawing exams according to claim 3, wherein the feature extraction of each of the small images to obtain feature information of each of the small images comprises:
performing convolution operation on each small image to obtain convolution characteristics of each small image;
and carrying out pooling operation on the convolution characteristics to obtain the characteristic information of each small image.
6. The cheating identification method for drawing exams according to claim 1, wherein the identifying whether the target object is cheating according to the first drawing outline and the second drawing outline comprises:
obtaining the similarity between the first drawing outline and the second drawing outline;
and determining whether the target object is cheated or not according to the similarity.
7. The cheating identification method for drawing examinations according to claim 1, wherein the identifying whether the target object is cheating according to the first drawing outline and the second drawing outline comprises:
determining a reference base point on the drawing desktop;
acquiring a first distance between the reference base point and the first drawing outline and a second distance between the reference base point and the second drawing outline; and
determining whether the target object is cheating according to the first distance and the second distance.
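The geometric check of claim 7 compares how far each outline sits from a fixed desktop point. A minimal sketch, with point-to-outline distance taken as the minimum point distance and an illustrative tolerance (both assumptions):

```python
import math

def min_distance(base_point, outline_points):
    """Minimum Euclidean distance from a desktop reference base point
    to a drawing outline given as a list of (x, y) points."""
    return min(math.dist(base_point, p) for p in outline_points)

def is_cheating_by_offset(base_point, outline_cam, outline_pen, tolerance=5.0):
    """Flag cheating when the photographed outline and the pen-track outline
    sit at clearly different distances from the same reference point
    (the tolerance value is illustrative, not taken from the patent)."""
    d1 = min_distance(base_point, outline_cam)   # first distance, to the camera outline
    d2 = min_distance(base_point, outline_pen)   # second distance, to the pen-track outline
    return abs(d1 - d2) > tolerance

base = (0.0, 0.0)                        # e.g. a corner of the drawing desktop
cam_outline = [(3.0, 4.0), (6.0, 8.0)]   # nearest point is distance 5 from base
pen_outline = [(30.0, 40.0)]             # distance 50: far from where it was drawn
print(is_cheating_by_offset(base, cam_outline, pen_outline))  # True
```

Unlike the similarity test of claim 6, this check catches a drawing that matches in shape but was placed somewhere other than where the pen actually moved.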
8. A cheating identification device for drawing examinations, comprising:
a first acquisition unit, configured to acquire, in real time, a handwriting track drawn by the target object in the drawing examination;
a second acquisition unit, configured to acquire, if it is detected that the target object has completed the drawing examination, a fisheye image currently captured by a fisheye camera of the drawing desktop of the target object;
a first generation unit, configured to acquire a first drawing outline in the fisheye image and to generate a second drawing outline of the target object in the drawing examination according to the handwriting track; and
an identification unit, configured to identify whether the target object is cheating according to the first drawing outline and the second drawing outline.
9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the cheating identification method for drawing examinations according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to perform the cheating identification method for drawing examinations according to any one of claims 1 to 7.
CN202310487998.1A 2023-04-24 2023-04-24 Cheating identification method and device for drawing examination, electronic equipment and storage medium Pending CN116486344A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310487998.1A CN116486344A (en) 2023-04-24 2023-04-24 Cheating identification method and device for drawing examination, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116486344A true CN116486344A (en) 2023-07-25

Family

ID=87221204


Country Status (1)

Country Link
CN (1) CN116486344A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination