CN114419008A - Image quality evaluation method and system - Google Patents

Image quality evaluation method and system Download PDF

Info

Publication number
CN114419008A
Authority
CN
China
Prior art keywords
image
classification
certificate
score
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210077125.9A
Other languages
Chinese (zh)
Inventor
陶坚坚
饶顶锋
刘伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yitu Zhixun Technology Co ltd
Original Assignee
Beijing Yitu Zhixun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yitu Zhixun Technology Co ltd
Priority to CN202210077125.9A
Publication of CN114419008A
Legal status: Pending

Classifications

    • G06T 7/0002 Image analysis; inspection of images, e.g. flaw detection
    • G06F 18/24 Pattern recognition; classification techniques
    • G06N 3/045 Neural networks; combinations of networks
    • G06N 3/08 Neural networks; learning methods
    • G06T 5/80 Image enhancement or restoration; geometric correction
    • G06T 7/10 Image analysis; segmentation; edge detection
    • G06T 2207/10004 Still image; photographic image
    • G06T 2207/10008 Still image from scanner, fax or copier
    • G06T 2207/10024 Color image
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image processing and computer vision, and discloses an image quality evaluation method and system. The method comprises the following steps: image preprocessing: scaling the image proportionally, normalizing it and padding the missing part with a default value; image effective position prediction and image filtering: determining the position and the certificate type of the certificate image with a pixel-level segmentation-classification technology model, and filtering images by setting a threshold; correction of the image to be evaluated: segmenting the certificate image according to its position information and rectifying it; multi-dimensional quality assessment: judging the image in five dimensions, namely type, integrity, definition, light spots and PS, with models based on deep learning and convolutional neural network technology, and scoring the quality; and outputting the result in a structured form. The invention effectively isolates the certificate image to be evaluated by pixel-level image segmentation-classification, performs multi-dimensional evaluation and scoring by combining deep learning, convolutional neural networks and traditional methods, extracts prediction results efficiently and accurately, and produces accurate evaluation results.

Description

Image quality evaluation method and system
Technical Field
The invention belongs to the technical field of image processing and computer vision, and particularly relates to an image quality evaluation method and system.
Background
In the daily data auditing process, whether the acquired image meets the relevant specifications is an important prerequisite for auditing tasks or other downstream tasks. General image quality-related specifications include, but are not limited to, whether an image is original, whether an image is complete, whether an image is blurred, whether an image has PS traces, and the like. How to rapidly and automatically judge and grade the image quality is very important for automatic data auditing and data approving procedures.
The existing data auditing methods fall roughly into two types: (1) manual auditing. When large batches of data must be audited and approved, a great deal of labor and time has to be spent on reviewing the images, so the cost is high and the error rate remains considerable; (2) semi-manual, semi-automatic auditing. This approach usually uses traditional image processing to judge a single aspect of the image, such as definition, followed by one round of manual proofreading; because it relies on traditional techniques it cannot handle pictures from all acquisition modes (such as scanners, high-speed cameras and mobile phones) well, especially pictures taken with mobile phones, so the semi-manual, semi-automatic method is still time-consuming and labor-intensive. In current data auditing services, images need multi-dimensional, multi-functional and comprehensive quality judgment rather than a one-sided judgment, and pictures from every acquisition mode must be supported.
With the development of deep learning and convolutional neural network technology, many fields of computer vision have advanced rapidly. Compared with traditional techniques, deep learning and convolutional neural networks extract feature encodings automatically, need no hand-crafted fixed features, and can improve their prediction quality through training on labeled samples, so their performance is markedly better than that of traditional methods. The invention therefore provides a multi-dimensional image quality evaluation method and system based on deep learning and convolutional neural network technology, which has good application prospects.
Disclosure of Invention
Aiming at the problems in the prior art, the invention derives a specific multi-dimensional evaluation scheme for image quality from actual service scenarios: (1) image type: in the approval process it must be judged whether the uploaded picture is an original, a copy, or a shot of a mobile-phone or computer screen, so as to determine first of all whether it meets the specification; (2) image integrity: it must be judged whether the uploaded picture to be evaluated is complete and whether its edges or corners are missing; (3) image definition: it must be judged whether the uploaded picture is clearly legible, in preparation for subsequent manual inspection; (4) image light spots: whether the image contains light spots; (5) PS traces: whether the image carries PS traces, whether information in the image has been tampered with by PS, and whether the image has been spliced or otherwise altered.
Based on the multi-dimensional evaluation mode, the invention provides an image quality evaluation method and system, the effective area and the background area of the certificate are effectively distinguished by adopting a pixel-level image segmentation-classification technology, multi-dimensional quality evaluation prediction is carried out on the image by combining deep learning, a convolutional neural network technology, some traditional methods and priori knowledge, the prediction result is extracted efficiently and accurately, and the image quality evaluation processing speed and the accuracy of the evaluation result are effectively improved.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
an image quality evaluation method, the method comprising the steps of:
step S1: image preprocessing: scaling the original image proportionally, normalizing the image, and padding the missing part of the image with a default value to obtain a preprocessed image;
step S2: image effective position prediction and image filtering: detecting and determining, based on a pixel-level segmentation-classification technology model, the position information and category information of the certificate image in the preprocessed image, the certificate image category information including but not limited to certificates, cards, documents and bills; if the image does not contain a certificate image to be evaluated, setting the corresponding certificate image category result to null; and deleting and filtering the images whose certificate image category information is null according to the obtained category information;
step S3: correcting the image to be evaluated: after the position of the certificate image to be evaluated in the image has been obtained in step S2, segmenting out and rectifying the certificate image, so that the image to be processed afterwards contains only the certificate image whose quality needs to be evaluated;
step S4: multi-dimensional quality assessment: after the certificate image whose quality needs to be evaluated has been obtained, judging the certificate image in five dimensions, namely image type, image integrity, image definition, image light spots and PS, with models based on deep learning and convolutional neural network technology as the main means, and giving a corresponding score for each dimension;
step S5: normalizing the score of each dimension to a percentage scale, structuring the result into the json data format, and finally outputting the quality evaluation result.
Further, the step S2 specifically includes the following steps:
step S21: the image preprocessed in step S1 is fed into a CNN convolutional neural network and processed to obtain a feature map; the CNN convolutional neural network includes, but is not limited to, one of the resnet, vgg and mobilenet convolutional neural networks;
step S22: setting a fixed number of ROIs (regions of interest) at each pixel position of the feature map, and then sending the ROI regions into an RPN (Region Proposal Network) for foreground/background binary classification and coordinate regression so as to obtain refined ROI regions;
step S23: performing regional feature aggregation on the ROI regions obtained in step S22, that is, first mapping the pixels of the original image to the feature map, then mapping the feature map to fixed-size features, and finally performing multi-class classification and candidate-box regression on the ROI regions and introducing a fully convolutional network to generate a Mask, completing the segmentation and classification tasks and finally obtaining the effective, accurate position of the certificate image to be evaluated within the image together with the certificate image category information;
step S24: if the image does not contain a certificate image to be evaluated, setting the corresponding certificate image category result to null; and deleting and filtering the images whose certificate image category information is null according to the obtained certificate image category information.
Further, in the step S21, the CNN convolutional neural network uses resnet18 as a feature extraction network.
Further, the multi-dimensional quality assessment in step S4 specifically includes:
image type judgment: processing and analyzing the image to be evaluated with a traditional check and with a cascade of classification models based on convolutional neural networks to obtain the image type and a corresponding score, and finally outputting the image type with the highest score and the corresponding image type score; the image types comprise originals, copies, computer screen shots and non-computer screen shots;
image integrity judgment: according to the image type judgment result, analyzing and classifying the image to be evaluated with an integrity classification model trained by deep learning to obtain an image integrity score;
image definition judgment: on one hand, analyzing and classifying the image to be evaluated with a definition classification model trained by deep learning to obtain a definition score; on the other hand, converting the image to grayscale, computing its gradient values and judging the image definition from them; finally, combining the two scores and outputting an image definition score;
image light spot detection: based on an image light spot detection model, first detecting candidate light-spot regions in the image to be evaluated; then recalibrating the candidate regions using the shape and color information of the spots; finally, giving a corresponding score according to the number and size of the spots;
PS judgment: first, analyzing and classifying the image to be evaluated with a PS classification model trained by deep learning to obtain a score for possible PS tampering; then, analyzing the data inside the image file to judge whether tampered data exist or whether information related to PS (Photoshop software) is present, for example whether the data contain the string "photoshop", screening the image again, and obtaining a final score for PS modification.
Further, in step S4, the image type determination specifically includes:
step S401: based on the traditional mode, acquiring the bit depth of the image, and if the bit depth is not the 24-bit true color image, directly returning the image type as a copy and giving a score; if the bit depth is 24-bit true color image, continuing to enter the next step;
step S402: based on a cascade type classification mode of a convolutional neural network, performing cascade type classification on the images by using a plurality of classification models, respectively performing classification screening on the copies, the computer screen shots, the non-computer screen shots and the original in sequence, and returning the image type with the highest score and the corresponding image type score; the step S402 specifically includes:
step S4021: classifying the images by using a full image type classification model, if the highest score in the classification result is a copy, returning the image type as the copy and giving a score, and if not, continuing to enter the next step;
step S4022: classifying the images by using an original-computer screen shooting classification model, if the highest score in the classification result is the computer screen shooting, returning the image type as the computer screen shooting and giving a score, and if not, continuing to enter the next step;
step S4023: classifying the images by using an original-non-computer screen shooting classification model, and if the highest score in the classification results is the non-computer screen shooting, returning the image type as the non-computer screen shooting and giving a score; if the highest score in the classification result is the original, returning the image type as the original and giving a score;
step S4024: outputting the image classification result.
Meanwhile, the present invention also provides an image quality evaluation system for executing any one of the above image quality evaluation methods, the system comprising: an image preprocessing module, an image effective position prediction and filtering module, an image correction module for the image to be evaluated, a multi-dimensional quality evaluation module and a normalization output module;
the image preprocessing module is used for scaling the original image in equal proportion, carrying out image normalization, and filling a missing part of the image with a default value to obtain a preprocessed image;
the image effective position prediction and filtering module comprises a pixel-level segmentation-classification technology model, and based on this model it detects and determines the position information and corresponding category information of the certificate image in the preprocessed image, the certificate image category information including but not limited to certificates, cards, documents and bills; if the image contains no certificate image to be evaluated, the corresponding category result is set to null; images whose certificate image category information is null are deleted and filtered out according to the obtained category information;
after the image correction module to be evaluated obtains the position of the certificate image in the image, the certificate image is segmented and corrected, and then the image to be processed by the module only contains the certificate image of which the quality needs to be evaluated;
after obtaining a certificate image of which the quality needs to be evaluated, the multi-dimensional quality evaluation module uses a model based on deep learning and convolutional neural network technology as a main means to judge 5 dimensions of the certificate image from image type, image integrity, image definition, image facula and PS respectively for quality evaluation, and gives out corresponding scores;
and the normalization output module normalizes the score of each dimension, normalizes the score to a percentage system, structures the score into a json data format, and finally outputs a quality evaluation result.
Further, the segmentation-classification technology model comprises a CNN convolutional neural network, an RPN network, a region feature aggregation model and an image filtering model;
the CNN convolutional neural network processes the input preprocessed image to obtain a characteristic diagram; the CNN convolutional neural network comprises one of, but is not limited to, resnet, vgg and mobilenet convolutional neural network;
the RPN sets a fixed number of ROI at each pixel position of the feature map, and then sends the ROI area into the RPN to carry out foreground and background secondary classification and coordinate regression so as to obtain a refined ROI area;
the regional feature aggregation model performs regional feature aggregation operation on the obtained ROI, firstly, the original image and the pixels of the feature image are corresponded, then, the feature image and the fixed features are corresponded, finally, the ROI is subjected to multi-class classification, candidate frame regression and introduction of a full convolution network to generate a Mask, segmentation and classification tasks are completed, and finally, the effective accurate position of the certificate image to be evaluated in the image and the certificate image class information are obtained;
and the image filtering model deletes and filters the images with empty certificate image category information according to the obtained certificate image category information.
Further, the CNN convolutional neural network employs resnet18 as a feature extraction network.
Furthermore, the multi-dimensional quality evaluation module comprises an image type analysis model, an image integrity analysis model, an image definition analysis model, an image light spot detection model and a PS analysis model;
the image type analysis model is as follows: processing and analyzing the image to be evaluated by adopting a traditional mode and a cascade type classification model mode based on a convolutional neural network to obtain an image type and give a corresponding score, and finally outputting the image type with the highest score and the corresponding image type score; the image types comprise an original, a copy, a computer screen shooting piece and a non-computer screen shooting piece;
the image integrity analysis model is as follows: according to the image type judgment result, analyzing and classifying the image to be evaluated by adopting an integrity classification model which is trained in deep learning to obtain an image integrity score;
the image definition analysis model comprises: on one hand, a definition classification model which is trained through deep learning is adopted to analyze and classify the image to be evaluated, and definition scores are obtained; on the other hand, after the image is grayed, the gradient value of the image is calculated, and the image definition is judged according to the gradient value; finally, combining the scores of the two aspects, and outputting an image definition score;
the image light spot detection model comprises: firstly, detecting an image to be evaluated to obtain a possible light spot area; then, combining some shape and color information of the light spots, and recalibrating the possible area; finally, corresponding score judgment is given according to the number and the size of the light spots;
the PS analysis model: first, analyzing and classifying the image to be evaluated with a PS classification model trained by deep learning to obtain a score for possible PS tampering; then, analyzing the data inside the image file to judge whether tampered data exist or whether information related to PS (Photoshop software) is present, for example whether the data contain the string "photoshop", screening the image again, and obtaining the final score for PS modification.
Still further, the image type analysis model comprises a bit depth classification model and a cascade type classification model;
the bit depth classification model acquires the bit depth of the image based on a traditional mode, and if the bit depth is not a 24-bit true color image, the image type is directly returned to be a copy; if the bit depth is 24-bit true color image, continuously entering a cascade type classification model for classification;
the cascade type classification model is based on a cascade type classification mode of a convolutional neural network, a plurality of classification models are used for carrying out cascade type classification on images, the images are classified and screened respectively according to a copy, a computer screen shooting piece, a non-computer screen shooting piece and an original in sequence, and the image type with the highest score and the corresponding image type score are returned; the cascade type classification model specifically comprises a full image type classification model, an original-computer screen shooting classification model and an original-non-computer screen shooting classification model; the specific classification steps of the cascade type classification model are as follows:
firstly, classifying the images by using a full image type classification model, if the highest score in the classification result is a copy, returning the image type as the copy and giving a score, and if not, continuously entering an original-computer screen shooting classification model;
secondly, classifying the images by using an original-computer screen shooting classification model, if the highest score in the classification result is the computer screen shooting, returning the image type as the computer screen shooting and giving a score, and if not, continuously entering the original-non-computer screen shooting classification model;
then, classifying the images by using an original-non-computer screen shooting classification model, and if the highest classification result is a non-computer screen shooting, returning the image type as the non-computer screen shooting and giving a score; if the highest score in the classification result is the original, returning the image type as the original and giving a score;
and finally, outputting an image classification result.
Compared with the prior art, the invention has the following beneficial effects:
(1) according to the method, effective position prediction segmentation and certificate image category detection are carried out on the certificate image to be evaluated in the image based on the pixel-level segmentation-classification technology model, interference of other regions which are not the certificate image to be evaluated on subsequent evaluation work can be eliminated as much as possible through the effective position segmentation of the certificate image, and the whole extraction and analysis result is more accurate and efficient; by deleting and filtering the images with empty certificate image category information, the images without the certificate images to be evaluated can be automatically filtered, for example, when the quality of the images of the identity cards needs to be evaluated, the position of the images of the identity cards can be automatically found out, the images without the identity cards can be directly returned, and the processing flow and the processing speed can be effectively accelerated;
(2) after the multi-dimensional quality evaluation module obtains the certificate image of which the quality needs to be evaluated, the model based on the deep learning and convolutional neural network technology is used as a main means, and the certificate image is subjected to feature extraction, quality evaluation and scoring from multiple dimensions of image types (original, copy, computer screen shot and non-computer screen shot), image integrity, image definition (whether fuzzy), image spots (whether spots exist) and PS judgment (whether PS processing traces exist) respectively;
(3) the existing image type judgment approaches rely either on the traditional image bit-depth check or on a single classification model, and can only separate an original from one other image type, for example original versus copy, or original versus computer screen shot; they support few image types, their classification precision is low, and wrong classification results are easily produced. The image type judgment of the invention is based on an image type classification model that combines a traditional check with a cascade of classification models based on convolutional neural networks: the images are first coarsely screened in a simple traditional way, and the cascade based on convolutional neural networks then screens them in the order copy, computer screen shot, non-computer screen shot, original; starting from the characteristics of the image samples, several different classification models screen from easy to difficult, so image types with obvious characteristics are screened out first and images whose characteristics are hard to distinguish are left to the later models for accurate classification, which markedly improves the efficiency of image type judgment, effectively improves the classification precision of the images, and guarantees the accuracy of the classification results.
Drawings
FIG. 1 is a flowchart of an algorithm of an image quality evaluation method according to an embodiment of the present invention;
FIG. 2 is a flowchart of an image segmentation technique according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the effect of image pixel level segmentation techniques according to an embodiment of the present invention;
FIG. 4 is a flow chart of multi-dimensional quality detection according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating image type quality assessment according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 to 5, the present invention provides an image quality evaluation method, including the steps of:
step S1: image preprocessing: scaling the original image in equal proportion, carrying out image normalization, and filling a missing part of the image with a default value to obtain a preprocessed image;
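As an illustration of step S1, the following is a minimal sketch of the preprocessing stage, assuming OpenCV and NumPy; the 512-pixel target size and the grey fill value are illustrative assumptions, not values fixed by the invention.

```python
import cv2
import numpy as np

def preprocess(img_bgr, target=512, fill_value=127):
    """Proportionally scale the original image, pad the missing part with a
    default value, and normalize pixel values. Sizes and fill value are assumed."""
    h, w = img_bgr.shape[:2]
    scale = target / max(h, w)
    resized = cv2.resize(img_bgr, (int(round(w * scale)), int(round(h * scale))))
    # Pad the shorter side so the output is square; the padded area is the
    # "missing part" filled with the default value.
    canvas = np.full((target, target, 3), fill_value, dtype=np.uint8)
    canvas[:resized.shape[0], :resized.shape[1]] = resized
    # Simple normalization to [0, 1]; a mean/std normalization could follow.
    return canvas.astype(np.float32) / 255.0, scale
```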
step S2: image effective position prediction and image filtering: detecting and determining, based on a pixel-level segmentation-classification technology model, the position information and corresponding category information of the certificate image in the preprocessed image, the certificate image category information including but not limited to certificates, cards, documents and bills; if the image does not contain a certificate image to be evaluated, setting the corresponding certificate image category result to null; deleting and filtering the images whose certificate image category information is null according to the obtained category information; specifically, step S2 can be divided into the following steps:
step S21: the image preprocessed in step S1 is transmitted to a CNN convolutional neural network and processed to obtain a feature map; the CNN convolutional neural network includes, but is not limited to, one of the resnet, vgg and mobilenet convolutional neural networks; as a preferred scheme, the CNN convolutional neural network adopts resnet18 as the feature extraction network to meet the requirements of processing speed and detection accuracy.
Step S22: setting a fixed number of ROI (Region of interest) at each pixel position of the feature map, and then sending the ROI Region into an RPN (Region Proposal Net) network for foreground and background secondary classification and coordinate regression to obtain a refined ROI Region;
step S23: performing regional feature aggregation on the ROI region obtained in step S22, that is, first corresponding pixels of the original image and the feature image, then corresponding the feature image and the fixed feature, and finally performing multi-class classification, candidate frame regression, and introducing a Full Convolution Network (FCN) to the ROI region to generate a Mask, completing segmentation and classification tasks, and finally obtaining an effective accurate position of the image to be evaluated in the image and certificate image class information;
step S24: if the image does not contain a certificate image to be evaluated, setting the corresponding certificate image category information result to null, and deleting and filtering the images whose certificate image category information is null according to the obtained certificate image category information.
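Steps S21 to S24 follow the structure of a Mask R-CNN style detector (backbone feature map, RPN, region feature aggregation, parallel classification/box/mask heads). The following is a minimal sketch of how such a model could be assembled, assuming a recent torchvision; the class list and the score threshold are illustrative assumptions, and the model would still need to be trained on labeled certificate images.

```python
import torch
from torchvision.models.detection import MaskRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# Hypothetical label set: background plus the certificate image categories named above.
CLASSES = ["__background__", "certificate", "card", "document", "bill"]

def build_segmenter():
    # resnet18 backbone as the feature extraction network (the preferred choice above);
    # MaskRCNN supplies the RPN, the region feature aggregation and the
    # classification / box regression / mask heads.
    backbone = resnet_fpn_backbone(backbone_name="resnet18", weights=None)
    return MaskRCNN(backbone, num_classes=len(CLASSES))

def predict_certificates(model, image_chw, score_thr=0.5):
    """Return (mask, box, category) triples above a score threshold; an empty list
    corresponds to the 'category information is null' case that gets filtered out."""
    model.eval()
    with torch.no_grad():
        out = model([image_chw])[0]
    keep = out["scores"] > score_thr
    return [(m, b, CLASSES[int(l)])
            for m, b, l in zip(out["masks"][keep], out["boxes"][keep], out["labels"][keep])]
```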
Step S3: and (3) correcting the image to be evaluated: after the position of the certificate image to be evaluated in the image is obtained through the step S2, the certificate image is segmented and corrected, and then the image needing to be processed only comprises the certificate image needing to be evaluated for quality;
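A possible realization of step S3, assuming the binary mask produced by step S2 is available as a NumPy array: the largest mask contour is fitted with a minimum-area rectangle and warped to an upright crop. The invention does not fix the exact correction algorithm, so this is only a sketch.

```python
import cv2
import numpy as np

def segment_and_rectify(image_bgr, mask):
    """Cut out and deskew the certificate region given its binary mask (sketch only;
    the corner ordering returned by boxPoints may still leave a 90-degree rotation)."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    quad = cv2.boxPoints(cv2.minAreaRect(max(contours, key=cv2.contourArea)))
    w = int(max(np.linalg.norm(quad[0] - quad[1]), np.linalg.norm(quad[2] - quad[3])))
    h = int(max(np.linalg.norm(quad[1] - quad[2]), np.linalg.norm(quad[3] - quad[0])))
    dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype=np.float32)
    M = cv2.getPerspectiveTransform(quad.astype(np.float32), dst)
    return cv2.warpPerspective(image_bgr, M, (w, h))
```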
step S4: multi-dimensional quality assessment: after obtaining a certificate image needing to evaluate the quality, judging 5 dimensions of the certificate image from the image type, the image integrity, the image definition, the image facula and the PS respectively by using a model based on deep learning and convolutional neural network technology as a main means to evaluate the quality, and giving a corresponding score; the multi-dimensional quality assessment in step S4 specifically includes:
image type judgment: processing and analyzing the image to be evaluated with a traditional check and with a cascade of classification models based on convolutional neural networks to obtain the image type and a corresponding score, and finally outputting the image type with the highest score and the corresponding image type score; the image types comprise originals, copies, computer screen shots and non-computer screen shots (non-computer screen shots refer to shots of other screens, such as mobile phone and video camera screens);
image integrity judgment: according to the image type judgment result, analyzing and classifying the image to be evaluated with an integrity classification model trained by deep learning to obtain an image integrity score; for some special certificate images, such as second-generation identity cards that contain text content, the image integrity analysis model can additionally combine traditional cues such as the text information and text position information in the certificate image to make the final integrity score judgment;
image definition judgment: on one hand, a definition classification model which is trained through deep learning is adopted to analyze and classify the image to be evaluated, and definition scores are obtained; on the other hand, after the image is grayed, the gradient value of the image is calculated, and the image definition is judged according to the gradient value; finally, combining the scores of the two aspects, and outputting an image definition score;
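The traditional half of the definition judgment (grayscale the image, then compute gradient values) is commonly realized as a Laplacian-variance focus measure; the sketch below assumes OpenCV, and the saturation constant and the weighting against the deep-learning score are illustrative assumptions.

```python
import cv2

def gradient_definition_score(image_bgr, grad_full=500.0):
    """Map the variance of the Laplacian (a gradient-based focus measure) to a
    0-1 definition score; grad_full is an assumed saturation point."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    grad = cv2.Laplacian(gray, cv2.CV_64F).var()
    return float(min(grad / grad_full, 1.0))

def combined_definition_score(cnn_score, image_bgr, w_cnn=0.7):
    # Combine the deep-learning classifier output with the gradient measure;
    # the 0.7 / 0.3 weighting is illustrative only.
    return w_cnn * cnn_score + (1.0 - w_cnn) * gradient_definition_score(image_bgr)
```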
image light spot detection: based on an image light spot detection model, first detecting candidate light-spot regions in the image to be evaluated; then recalibrating the candidate regions using the shape and color information of the spots; finally, giving a corresponding score according to the number and size of the spots;
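The recalibration of candidate spot regions by shape and color, and the score based on spot number and size, might look like the sketch below; all thresholds and the scoring formula are assumptions, since the description above does not fix them.

```python
import cv2

def light_spot_score(image_bgr, v_thr=240, min_area=80, max_elongation=1.6):
    """Find near-saturated bright blobs, keep roughly round ones (likely specular
    spots), and lower the score as their count and total area grow."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    _, bright = cv2.threshold(hsv[:, :, 2], v_thr, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    spots = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        (_, _), (w, h), _ = cv2.minAreaRect(c)
        if max(w, h) / (min(w, h) + 1e-6) <= max_elongation:  # keep compact blobs
            spots.append(area)
    ratio = sum(spots) / float(image_bgr.shape[0] * image_bgr.shape[1])
    return max(0.0, 1.0 - 5.0 * ratio - 0.02 * len(spots))
```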
PS judgment: first, analyzing and classifying the image to be evaluated with a PS classification model trained by deep learning to obtain a score for possible PS tampering; then, analyzing the data inside the image file to judge whether tampered data exist or whether information related to PS (Photoshop software) is present, for example whether the data contain the string "photoshop", screening the image again, and obtaining the final PS score.
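The metadata half of the PS judgment (looking for traces of editing software such as the string "photoshop" in the file's data) can be sketched with Pillow's EXIF reader; the tag used, the extra keyword and the way the metadata flag is merged with the classifier score are assumptions.

```python
from PIL import Image

SOFTWARE_TAG = 305  # standard EXIF "Software" tag id

def metadata_mentions_editor(path):
    """True if the EXIF Software field mentions known editing software; a coarse
    proxy for the 'data contains the string photoshop' check described above."""
    software = str(Image.open(path).getexif().get(SOFTWARE_TAG, "")).lower()
    return "photoshop" in software or "gimp" in software

def final_ps_score(cnn_tamper_score, path):
    # If the metadata already betrays an editor, force the tampering score high;
    # otherwise keep the classifier's judgment. The 0.9 floor is illustrative.
    return max(cnn_tamper_score, 0.9) if metadata_mentions_editor(path) else cnn_tamper_score
```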
The process of judging the image type is shown in fig. 5, and specifically includes the following steps:
step S401: based on the traditional mode, acquiring the bit depth of the image, and if the bit depth is not the 24-bit true color image, directly returning the image type as a copy; if the bit depth is 24-bit true color image, continuing to enter the next step;
step S402: based on a cascade type classification mode of a convolutional neural network, performing cascade type classification on the images by using a plurality of classification models, respectively performing classification screening on the copies, the computer screen shots, the non-computer screen shots and the original in sequence, and returning the image type with the highest score and the corresponding image type score; the step S402 specifically includes:
step S4021: classifying the images by using a full image type classification model, if the highest score in the classification result is a copy, returning the image type as the copy and giving a score, and if not, continuing to enter the next step;
step S4022: classifying the images by using an original-computer screen shooting classification model, if the highest score in the classification result is the computer screen shooting, returning the image type as the computer screen shooting and giving a score, and if not, continuing to enter the next step;
step S4023: classifying the images by using an original-non-computer screen shooting classification model, and if the highest score in the classification results is the non-computer screen shooting, returning the image type as the non-computer screen shooting and giving a score; if the highest score in the classification result is the original, returning the image type as the original and giving a score;
step S4024: outputting the image classification result.
Therefore, the classification screening is carried out in the order copy, computer screen shot, non-computer screen shot, original, for the following reasons: (1) with only a single classification model the classification precision is low; in particular, the cameras in current electronic products have ever higher resolution, a mobile-phone screen shot and an original are therefore extremely similar, and the classification effect of a single model is not ideal; (2) based on the characteristics of the image samples, several different classification models screen from easy to difficult, so image types with obvious characteristics are screened out first and images whose characteristics are hard to distinguish are left to the later models for accurate classification, which markedly improves the efficiency of image type judgment, effectively improves the classification precision of the images, and guarantees the accuracy of the classification results.
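Putting steps S401 and S4021 to S4024 together, the cascade could be wired as below; full_cls, orig_vs_computer and orig_vs_other are hypothetical stand-ins for the three trained classifiers (callables returning a (label, score) pair), and using Pillow's mode check as the bit-depth test is an assumption.

```python
from PIL import Image

def classify_image_type(path, full_cls, orig_vs_computer, orig_vs_other):
    """Easy-to-hard cascade: bit depth, then copy, then computer screen shot,
    then non-computer screen shot, otherwise original."""
    img = Image.open(path)
    # S401: traditional bit-depth check; anything that is not 24-bit true color
    # (PIL mode "RGB") is returned as a copy straight away.
    if img.mode != "RGB":
        return "copy", 1.0
    # S4021: the full image-type classifier screens out copies first.
    label, score = full_cls(img)
    if label == "copy":
        return label, score
    # S4022: original versus computer screen shot.
    label, score = orig_vs_computer(img)
    if label == "computer_screen_shot":
        return label, score
    # S4023: original versus non-computer screen shot (phone, camera, ...).
    return orig_vs_other(img)
```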
Step S5: and normalizing the score of each dimension, uniformly normalizing to a percentile system, structuring the result into a json data format, and finally outputting a quality evaluation result.
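Step S5 then amounts to a percentage normalization and json serialization; below is a sketch with assumed field names, since only the json format itself is fixed above.

```python
import json

def structure_result(certificate_type, scores_0_to_1):
    """Normalize per-dimension scores from [0, 1] to a 0-100 scale and emit json;
    the field names are assumptions."""
    payload = {
        "certificate_type": certificate_type,
        "scores": {dim: round(100.0 * s, 1) for dim, s in scores_0_to_1.items()},
    }
    return json.dumps(payload, ensure_ascii=False)

# Example: structure_result("id_card", {"type": 0.98, "integrity": 0.91,
#                                       "definition": 0.76, "light_spot": 0.88, "ps": 0.95})
```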
In addition, the present invention also provides an image quality evaluation system for executing the above-described image quality evaluation method, the system comprising: an image preprocessing module, an image effective position prediction and filtering module, an image correction module for the image to be evaluated, a multi-dimensional quality evaluation module and a normalization output module. Specifically:
(1) the image preprocessing module: used for scaling the original image proportionally, normalizing the image, and padding the missing part of the image with a default value to obtain a preprocessed image;
(2) the image effective position prediction and image filtering module: it comprises a pixel-level segmentation-classification technology model, and based on this model it detects and determines the position information and corresponding category information of the certificate image in the preprocessed image, the certificate image category information including but not limited to common categories such as certificates, cards, documents and bills; if the image contains no certificate image to be evaluated, the corresponding category information is null; images whose certificate image category information is null are deleted and filtered out according to the obtained category information;
the segmentation-classification technology model comprises a CNN convolutional neural network, an RPN network, a regional characteristic aggregation model and an image filtering model;
the CNN convolutional neural network processes the input preprocessed image to obtain a characteristic diagram; the CNN convolutional neural network comprises one of, but is not limited to, resnet, vgg and mobilenet convolutional neural network; as a preferable scheme, the CNN convolutional neural network adopts renet 18 as a feature extraction network to meet the requirements of processing speed and detection accuracy.
The RPN network receives a fixed number of ROIs set at each pixel position of the feature map and performs foreground/background binary classification and coordinate regression on them to obtain refined ROI regions;
the regional feature aggregation model performs regional feature aggregation operation on the obtained ROI, firstly, the original image and the pixels of the feature image are corresponded, then, the feature image and the fixed features are corresponded, finally, the ROI is subjected to multi-class classification, candidate frame regression and introduction of a full convolution network to generate a Mask, segmentation and classification tasks are completed, and finally, the effective accurate position of the certificate image to be evaluated in the image and the certificate image class information are obtained;
and the image filtering model deletes and filters the images with empty certificate image category information according to the obtained certificate image category information.
(3) The image correction module to be evaluated: after the position of the certificate image in the image is obtained, the certificate image is segmented and corrected, and then the image needing to be processed only contains the certificate image needing to be evaluated for quality;
(4) the multi-dimensional quality assessment module: after obtaining a certificate image needing to evaluate the quality, using a model based on deep learning and convolutional neural network technology as a main means, respectively judging 5 dimensions of the certificate image from image type, image integrity, image definition, image facula and PS to evaluate the quality, and giving a corresponding score;
the multi-dimensional quality evaluation module comprises an image type analysis model, an image integrity analysis model, an image definition analysis model, an image light spot detection model and a PS analysis model;
the image type analysis model processes and analyzes the image to be evaluated by adopting a traditional mode and a cascade type classification model mode based on a convolutional neural network to obtain the image type and give a corresponding score, and finally outputs the image type with the highest score and the corresponding image type score; the image types comprise an original, a copy, a computer screen shooting piece and a non-computer screen shooting piece;
the image integrity analysis model is as follows: according to the image type judgment result, analyzing and classifying the images to be evaluated by adopting the integrity classification model trained by deep learning to obtain image integrity scores; for some special certificate images, such as second-generation identity cards containing text contents, and the like, the image integrity analysis model can make final score judgment on the image integrity by combining the traditional modes of text information, text position information and the like in the certificate images;
the image definition analysis model comprises: on one hand, a definition classification model which is trained through deep learning is adopted to analyze and classify the image to be evaluated, and definition scores are obtained; on the other hand, after the image is grayed, the gradient value of the image is calculated, and the image definition is judged according to the gradient value; finally, combining the scores of the two aspects, and outputting an image definition score;
the image light spot detection model comprises: firstly, detecting an image to be evaluated to obtain a possible light spot area; then, combining some shape and color information of the light spots, and recalibrating the possible area; finally, corresponding score judgment is given according to the number and the size of the light spots;
the PS analysis model: first, analyzing and classifying the image to be evaluated with a PS classification model trained by deep learning to obtain a score for possible PS tampering; then, analyzing the data inside the image file to judge whether tampered data exist or whether information related to PS (Photoshop software) is present, for example whether the data contain the string "photoshop", screening the image again, and obtaining the final score for PS modification.
The image type analysis model comprises a bit depth classification model and a cascade type classification model;
the bit depth classification model acquires the bit depth of the image based on a traditional mode, and if the bit depth is not a 24-bit true color image, the image type is directly returned to be a copy; if the bit depth is 24-bit true color image, continuously entering a cascade type classification model for classification;
the cascade type classification model is based on a cascade classification mode of convolutional neural networks: a plurality of classification models classify the images in cascade, screening in turn for the copy, the computer screen shooting, the non-computer screen shooting and the original, and returning the image type with the highest score and the corresponding image type score; the cascade type classification model specifically comprises a full image type classification model, an original-computer screen shooting classification model and an original-non-computer screen shooting classification model; the specific classification steps of the cascade type classification model are as follows:
firstly, classifying the images by using a full image type classification model, if the highest score in the classification result is a copy, returning the image type as the copy and giving a score, and if not, continuously entering an original-computer screen shooting classification model;
secondly, classifying the images by using an original-computer screen shooting classification model, if the highest score in the classification result is the computer screen shooting, returning the image type as the computer screen shooting and giving a score, and if not, continuously entering the original-non-computer screen shooting classification model;
then, classifying the images by using an original-non-computer screen shooting classification model, and if the highest classification result is a non-computer screen shooting, returning the image type as the non-computer screen shooting and giving a score; if the highest score in the classification result is the original, returning the image type as the original and giving a score;
and finally, outputting an image classification result.
(5) The normalization output module: and normalizing the score of each dimension, normalizing the score to a percentile system, structuring the score to a json data format, and finally outputting a quality evaluation result.
The above description is only an example of the present application and is not intended to limit the present invention. Any modification, equivalent replacement, and improvement made within the scope of the application of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An image quality evaluation method, characterized by comprising the steps of:
step S1: image preprocessing: scaling the original image in equal proportion, carrying out image normalization, and filling a missing part of the image with a default value to obtain a preprocessed image;
step S2: image effective position prediction and image filtering: detecting and determining, based on a pixel-level segmentation-classification technology model, the position information and category information of the certificate image in the preprocessed image, wherein the certificate image category information includes but is not limited to certificates, cards, documents and bills; if the image does not contain a certificate image to be evaluated, setting the corresponding certificate image category information result to null; and deleting and filtering the images whose certificate image category information is null according to the obtained certificate image category information;
step S3: and (3) correcting the image to be evaluated: after the position of the certificate image to be evaluated in the image is obtained through the step S2, the certificate image is segmented and corrected, and then the image needing to be processed only comprises the certificate image needing to be evaluated for quality;
step S4: multi-dimensional quality assessment: after obtaining a certificate image needing to evaluate the quality, judging 5 dimensions of the certificate image from the image type, the image integrity, the image definition, the image facula and the PS respectively by using a model based on deep learning and convolutional neural network technology as a main means to evaluate the quality, and giving a corresponding score;
step S5: and normalizing the score of each dimension to a percentile system, structuring the result into a json data format, and finally outputting a quality evaluation result.
2. The image quality evaluation method according to claim 1, wherein the step S2 specifically comprises the steps of:
step S21: the image preprocessed in the step S1 is transmitted to a CNN convolutional neural network and processed to obtain a characteristic diagram; the CNN convolutional neural network comprises one of, but is not limited to, resnet, vgg and mobilenet convolutional neural network;
step S22: setting a fixed number of ROIs (regions of interest) at each pixel position of the feature map, and then sending the ROI regions into an RPN (Region Proposal Network) for foreground and background binary classification and coordinate regression so as to obtain refined ROI regions;
step S23: performing regional feature aggregation on the ROI obtained in the step S22, namely, firstly, corresponding pixels of an original image and a feature image, then, corresponding the feature image and fixed features, finally, performing multi-class classification and candidate frame regression on the ROI and introducing a full convolution network to generate a Mask, completing segmentation and classification tasks, and finally obtaining effective accurate positions of the certificate image to be evaluated in the image and certificate image class information;
step S24: if the image does not contain the certificate image to be evaluated, setting the corresponding certificate image type information result as null; and deleting and filtering the images with the certificate image category information being null according to the obtained certificate image category information.
3. The image quality assessment method according to claim 2, wherein in said step S21, said CNN convolutional neural network adopts resnet18 as a feature extraction network.
4. The image quality assessment method according to claim 2 or 3, wherein the multi-dimensional quality assessment in the step S4 specifically comprises:
image type judgment: the image to be evaluated is processed and analyzed using both a traditional method and a cascaded classification model based on a convolutional neural network to obtain image types with corresponding scores, and the image type with the highest score and its score are finally output; the image types include original, copy, computer-screen shot and non-computer-screen shot;
image integrity judgment: according to the image type judgment result, the image to be evaluated is analyzed and classified using a deep-learning-trained integrity classification model to obtain an image integrity score;
image sharpness judgment: on the one hand, a deep-learning-trained sharpness classification model is used to analyze and classify the image to be evaluated to obtain a sharpness score; on the other hand, the image is converted to grayscale, its gradient values are calculated, and the image sharpness is judged from the gradient values; finally, the scores from both branches are combined and an image sharpness score is output;
image light spot detection: based on an image light spot detection model, the image to be evaluated is first detected to obtain candidate light spot regions; the candidate regions are then re-calibrated using the shape and color information of the light spots; finally, a corresponding score is given according to the number and size of the light spots;
PS judgment: first, a deep-learning-trained PS classification model is used to analyze and classify the image to be evaluated, obtaining a score for the likelihood that the image has been tampered with in PS; then, the image file data are analyzed to judge whether tampering traces or PS-software-related information exist, and the image is screened again to obtain a final PS-modification score.
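For the gradient branch of the sharpness judgment in claim 4, a minimal sketch is given below, assuming an OpenCV Laplacian-variance measure and an equal-weight combination with the classifier score; the measure, the calibration bounds and the weight are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def gradient_sharpness_score(bgr_image, low=50.0, high=500.0):
    """Grayscale the image and map its gradient energy to [0, 1].

    The Laplacian-variance measure and the low/high calibration bounds
    are illustrative assumptions.
    """
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gradient = cv2.Laplacian(gray, cv2.CV_64F)
    energy = float(gradient.var())
    return float(np.clip((energy - low) / (high - low), 0.0, 1.0))

def combined_sharpness_score(bgr_image, cnn_score, weight=0.5):
    """Combine the deep-learning classifier score with the gradient branch."""
    return weight * cnn_score + (1.0 - weight) * gradient_sharpness_score(bgr_image)
```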
5. The image quality evaluation method according to claim 4, wherein in the step S4, the image type judgment specifically comprises:
step S401: based on the traditional method, acquiring the bit depth of the image; if the image is not a 24-bit true-color image, directly returning the image type as a copy and giving a score; if the image is a 24-bit true-color image, proceeding to the next step;
step S402: based on a cascaded classification approach using convolutional neural networks, performing cascaded classification of the image with a plurality of classification models, screening in sequence for copies, computer-screen shots, non-computer-screen shots and originals, and returning the image type with the highest score and the corresponding image type score; step S402 specifically includes:
step S4021: classifying the image using a full image-type classification model; if the class with the highest score in the classification result is copy, returning the image type as copy and giving a score; otherwise, proceeding to the next step;
step S4022: classifying the image using an original vs. computer-screen-shot classification model; if the class with the highest score is computer-screen shot, returning the image type as computer-screen shot and giving a score; otherwise, proceeding to the next step;
step S4023: classifying the image using an original vs. non-computer-screen-shot classification model; if the class with the highest score is non-computer-screen shot, returning the image type as non-computer-screen shot and giving a score; if the class with the highest score is original, returning the image type as original and giving a score;
step S4024: outputting the image classification result.
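A minimal sketch of the cascade of claim 5 follows, treating "not a 24-bit true-color image" as any non-RGB PIL mode and representing the three classifiers as placeholder callables returning {label: score} dictionaries; the label names, thresholds and model interfaces are all assumptions for illustration.

```python
from PIL import Image

def classify_image_type(path, full_type_model, screen_model, non_screen_model):
    """Cascaded image-type classification. The three model callables are
    placeholders assumed to return {label: score} dictionaries."""
    img = Image.open(path)

    # Step S401: treat anything that is not 24-bit true colour (8 bits x RGB)
    # as a copy. Using the PIL mode as a proxy for bit depth is an assumption.
    if img.mode != "RGB":
        return "copy", 1.0

    # Step S4021: full image-type classifier, screening for copies first
    scores = full_type_model(img)
    if max(scores, key=scores.get) == "copy":
        return "copy", scores["copy"]

    # Step S4022: original vs computer-screen-shot classifier
    scores = screen_model(img)
    if max(scores, key=scores.get) == "screen_shot":
        return "screen_shot", scores["screen_shot"]

    # Step S4023: original vs non-computer-screen-shot classifier; whichever
    # of the two classes scores higher is returned with its score
    scores = non_screen_model(img)
    best = max(scores, key=scores.get)
    return best, scores[best]
```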
6. An image quality evaluation system for executing the image quality evaluation method according to any one of claims 1 to 5, characterized in that the system comprises an image preprocessing module, an effective image position prediction and filtering module, a to-be-evaluated image correction module, a multi-dimensional quality evaluation module and a normalization output module;
the image preprocessing module is used for proportionally scaling the original image, performing image normalization, and padding the missing part of the image with a default value to obtain a preprocessed image;
the effective image position prediction and filtering module comprises a pixel-level segmentation-classification model, and uses this model to detect and determine the certificate image position information and the corresponding certificate image category information in the preprocessed image, wherein the certificate image categories include, but are not limited to, certificates, cards, documents and bills; if the image does not contain a certificate image to be evaluated, the corresponding certificate image category information is set to null; and, according to the obtained certificate image category information, the images whose certificate image category information is null are deleted and filtered out;
after obtaining the position of the certificate image within the image, the to-be-evaluated image correction module segments out and rectifies the certificate image, so that the image processed by subsequent modules contains only the certificate image whose quality needs to be evaluated;
after the certificate image whose quality needs to be evaluated is obtained, the multi-dimensional quality evaluation module judges the certificate image in 5 dimensions, namely image type, image integrity, image sharpness, image light spots and PS tampering, mainly by means of models based on deep learning and convolutional neural network technology, and gives corresponding scores;
the normalization output module normalizes the score of each dimension to a 100-point scale, structures the scores into JSON data format, and finally outputs the quality evaluation result.
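Purely as an illustration of how the five modules of claim 6 hand data to one another, a hypothetical wiring sketch follows; the module interfaces, attribute names and method names are assumptions, not the patent's classes.

```python
from dataclasses import dataclass

@dataclass
class QualityPipeline:
    """Illustrative wiring of the five modules of claim 6.
    All attribute and method names below are assumptions."""
    preprocessor: object   # proportional scaling, normalization, padding
    locator: object        # segmentation-classification model + null-category filtering
    rectifier: object      # segments out and rectifies the certificate image
    evaluator: object      # the five-dimension quality judgments
    reporter: object       # 100-point normalization and JSON structuring

    def run(self, raw_image):
        image = self.preprocessor.process(raw_image)
        regions = self.locator.locate(image)   # empty list => image filtered out
        reports = []
        for region in regions:
            certificate = self.rectifier.rectify(image, region)
            scores = self.evaluator.evaluate(certificate, region.category)
            reports.append(self.reporter.to_json(region.category, scores))
        return reports
```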
7. The image quality assessment system according to claim 6, wherein said segmentation-classification model comprises a CNN convolutional neural network, an RPN network, a region feature aggregation model and an image filtering model;
the CNN convolutional neural network processes the input preprocessed image to obtain a feature map; the CNN convolutional neural network includes, but is not limited to, one of the resnet, vgg and mobilenet convolutional neural networks;
the RPN network sets a fixed number of ROIs at each pixel position of the feature map, and the ROIs are then fed into the RPN for foreground/background binary classification and coordinate regression, so as to obtain refined ROI regions;
the region feature aggregation model performs a region feature aggregation operation on the obtained ROIs: first, pixels of the original image are mapped to the feature map; then, the feature map is mapped to fixed-size features; finally, multi-class classification and candidate-box regression are performed on the ROIs and a fully convolutional network is introduced to generate a Mask, completing the segmentation and classification tasks and finally obtaining the accurate position of the certificate image to be evaluated within the image and its certificate image category information;
the image filtering model deletes and filters out the images whose certificate image category information is null, according to the obtained certificate image category information.
8. The image quality assessment system according to claim 7, wherein said CNN convolutional neural network employs resnet18 as a feature extraction network.
9. The image quality assessment system according to claim 7 or 8, wherein said multi-dimensional quality assessment module comprises an image type analysis model, an image integrity analysis model, an image sharpness analysis model, an image light spot detection model and a PS analysis model;
the image type analysis model: the image to be evaluated is processed and analyzed using both a traditional method and a cascaded classification model based on a convolutional neural network to obtain image types with corresponding scores, and the image type with the highest score and its score are finally output; the image types include original, copy, computer-screen shot and non-computer-screen shot;
the image integrity analysis model: according to the image type judgment result, the image to be evaluated is analyzed and classified using a deep-learning-trained integrity classification model to obtain an image integrity score;
the image sharpness analysis model: on the one hand, a deep-learning-trained sharpness classification model is used to analyze and classify the image to be evaluated to obtain a sharpness score; on the other hand, the image is converted to grayscale, its gradient values are calculated, and the image sharpness is judged from the gradient values; finally, the scores from both branches are combined and an image sharpness score is output;
the image light spot detection model: the image to be evaluated is first detected to obtain candidate light spot regions; the candidate regions are then re-calibrated using the shape and color information of the light spots; finally, a corresponding score is given according to the number and size of the light spots;
the PS analysis model: first, a deep-learning-trained PS classification model is used to analyze and classify the image to be evaluated, obtaining a score for the likelihood that the image has been tampered with in PS; then, the image file data are analyzed to judge whether tampering traces or PS-software-related information exist, and the image is screened again to obtain a final PS-modification score.
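For the image light spot detection model of claim 9, one plausible realisation is to threshold very bright regions, keep the roughly round ones, and score by spot count and coverage; the OpenCV sketch below and all of its thresholds are assumptions, not the patent's stated implementation.

```python
import cv2
import numpy as np

def detect_light_spots(bgr_image, brightness_threshold=240,
                       min_area_ratio=0.0005, max_area_ratio=0.05):
    """Find bright, roughly circular regions and score the image accordingly.
    All thresholds and the scoring formula are illustrative assumptions."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, brightness_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    h, w = gray.shape
    image_area = float(h * w)
    spots = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if not (min_area_ratio * image_area <= area <= max_area_ratio * image_area):
            continue
        perimeter = cv2.arcLength(contour, True)
        if perimeter == 0:
            continue
        circularity = 4.0 * np.pi * area / (perimeter * perimeter)
        if circularity > 0.4:  # keep roughly round, glare-like regions
            spots.append(area)

    covered = sum(spots) / image_area
    score = max(0.0, 1.0 - 5.0 * covered - 0.05 * len(spots))
    return spots, score
```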
10. The image quality evaluation system according to claim 9, wherein the image type analysis model includes a bit depth classification model and a cascade classification model;
the bit depth classification model acquires the bit depth of the image based on the traditional method; if the image is not a 24-bit true-color image, the image type is directly returned as copy; if the image is a 24-bit true-color image, classification continues with the cascaded classification model;
the cascaded classification model is based on a cascaded classification approach using convolutional neural networks: a plurality of classification models are used to classify the image in cascade, screening in sequence for copies, computer-screen shots, non-computer-screen shots and originals, and the image type with the highest score and the corresponding image type score are returned; the cascaded classification model specifically comprises a full image-type classification model, an original vs. computer-screen-shot classification model and an original vs. non-computer-screen-shot classification model; the specific classification steps of the cascaded classification model are as follows:
first, the image is classified using the full image-type classification model; if the class with the highest score is copy, the image type is returned as copy and a score is given; otherwise, classification continues with the original vs. computer-screen-shot classification model;
secondly, the image is classified using the original vs. computer-screen-shot classification model; if the class with the highest score is computer-screen shot, the image type is returned as computer-screen shot and a score is given; otherwise, classification continues with the original vs. non-computer-screen-shot classification model;
then, the image is classified using the original vs. non-computer-screen-shot classification model; if the class with the highest score is non-computer-screen shot, the image type is returned as non-computer-screen shot and a score is given; if the class with the highest score is original, the image type is returned as original and a score is given;
finally, the image classification result is output.
CN202210077125.9A 2022-01-24 2022-01-24 Image quality evaluation method and system Pending CN114419008A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210077125.9A CN114419008A (en) 2022-01-24 2022-01-24 Image quality evaluation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210077125.9A CN114419008A (en) 2022-01-24 2022-01-24 Image quality evaluation method and system

Publications (1)

Publication Number Publication Date
CN114419008A true CN114419008A (en) 2022-04-29

Family

ID=81274832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210077125.9A Pending CN114419008A (en) 2022-01-24 2022-01-24 Image quality evaluation method and system

Country Status (1)

Country Link
CN (1) CN114419008A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114842000A (en) * 2022-07-01 2022-08-02 杭州同花顺数据开发有限公司 Endoscope image quality evaluation method and system
CN116578763A (en) * 2023-07-11 2023-08-11 卓谨信息科技(常州)有限公司 Multisource information exhibition system based on generated AI cognitive model
CN116578763B (en) * 2023-07-11 2023-09-15 卓谨信息科技(常州)有限公司 Multisource information exhibition system based on generated AI cognitive model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination