CN113628181A - Image processing method, image processing device, electronic equipment and storage medium - Google Patents

Image processing method, image processing device, electronic equipment and storage medium

Info

Publication number
CN113628181A
Authority
CN
China
Prior art keywords
image
certificate
area
processed
detection result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110880952.7A
Other languages
Chinese (zh)
Inventor
郑利群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd
Priority to CN202110880952.7A
Publication of CN113628181A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/0002 — Inspection of images, e.g. flaw detection
    • G06T 7/10 — Segmentation; Edge detection
    • G06T 7/12 — Edge-based segmentation
    • G06T 7/13 — Edge detection
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 — Special algorithmic details
    • G06T 2207/20048 — Transform domain processing
    • G06T 2207/20064 — Wavelet transform [DWT]
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30 — Subject of image; Context of image processing
    • G06T 2207/30168 — Image quality inspection
    • G06T 2207/30196 — Human being; Person
    • G06T 2207/30201 — Face

Abstract

The application provides an image processing method, an image processing device, an electronic device, a computer readable storage medium and a computer program product; the method comprises the following steps: identifying a face area in an image to be processed, wherein the image to be processed is obtained by carrying out image acquisition on a certificate; determining the certificate area in the image to be processed according to the relative position and proportion between the face area and the certificate area, wherein the certificate area comprises the face area; segmenting the image to be processed according to the certificate area to obtain a certificate image; and acquiring the characteristics of the certificate image, and performing quality detection processing based on the characteristics of the certificate image to obtain a detection result representing whether the certificate image is qualified. By the method and the device, the qualified certificate image can be accurately extracted from the collected image.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present application relates to image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Image recognition technology is an important field of artificial intelligence. It refers to the technique of performing object recognition on an image to identify targets and objects of various different modes. At present, image recognition technology is increasingly mature and is widely applied to faces, other objects, and the like.
In scenarios such as opening a bank account, a certificate photo needs to be scanned and uploaded, and the original document must be photographed. At this time, the mobile phone performs front-end guided detection: whether the certificate is located in the photographing area, whether its size is appropriate, and whether the identity card is complete, so as to obtain a certificate image that meets the requirements.
In the related art, the traditional edge detection algorithm suffers from strong interference from the photo background, an unstable effect, and parameters that are difficult to tune; it cannot identify the certificate type and performs no quality detection such as recapture or integrity checks, so unqualified certificate images are easily collected and efficiency is low. Moreover, high-fidelity images shot by high-refresh-rate, high-resolution imaging equipment are still difficult to distinguish accurately.
Disclosure of Invention
Embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, a computer-readable storage medium, and a computer program product, which can accurately extract a qualified certificate image from a captured image.
The technical scheme of the embodiment of the application is realized as follows:
an embodiment of the present application provides an image processing method, including:
identifying a face area in an image to be processed, wherein the image to be processed is obtained by carrying out image acquisition on a certificate;
determining a certificate area in the image to be processed according to the relative position and proportion between the face area and the certificate area, wherein the certificate area comprises the face area;
segmenting the image to be processed according to the certificate area to obtain a certificate image;
and acquiring the characteristics of the certificate image, and performing quality detection processing based on the characteristics of the certificate image to obtain a detection result representing whether the certificate image is qualified.
An embodiment of the present application provides an image processing apparatus, including:
the image recognition module is used for recognizing a face area in an image to be processed, wherein the image to be processed is obtained by carrying out image acquisition on a certificate;
the image detection module is used for determining a certificate area in the image to be processed according to the relative position and proportion between the face area and a preset certificate area, wherein the certificate area comprises the face area;
the image segmentation module is used for carrying out segmentation processing on the image to be processed according to the certificate area to obtain a certificate image;
and the image quality detection module is used for acquiring the characteristics of the certificate image, and performing quality detection processing based on the characteristics of the certificate image to obtain a detection result representing whether the certificate image is qualified.
In the above solution, the apparatus further comprises: the certificate type identification module is used for carrying out certificate type identification processing on the image to be processed to obtain the type of the certificate in the image to be processed; and querying a mapping table according to the type to obtain the relative position and proportion between the face region and the certificate region in the image to be processed, wherein the mapping table comprises the relative position and proportion between the face region and the certificate region in certificate images of different types.
In the above scheme, the relative position between the face region and the certificate region includes the relative position between a first central point of the face region and a second central point of the certificate region, and the proportion between the face region and the certificate region includes a first ratio of the widths of the face region and the certificate region and a second ratio of their heights; the type recognition module is further configured to move the first central point of the face region to the second central point of the certificate region according to the relative position between the two central points; and, taking the second central point of the certificate region as a reference, extend the width of the face region according to the first ratio and extend the height of the face region according to the second ratio to obtain the certificate region.
In the above scheme, the quality detection includes integrity detection; the image quality detection module is also used for acquiring the content characteristics of the certificate image; carrying out integrity classification processing on the certificate image based on the content characteristics of the certificate image to obtain an integrity classification detection result; when the integrity classification detection result represents that the certificate image is a complete image, determining that the certificate image is a qualified image; and when the integrity classification detection result represents that the certificate image is an incomplete image, determining that the certificate image is an unqualified image.
In the scheme, the content characteristics of the certificate image comprise the number of keywords, position information and the certificate type; the image quality detection module is also used for acquiring a certificate type rule corresponding to the certificate type; and classifying the certificate image according to the certificate type rule and the number and the position information of the obtained keywords to obtain a classification detection result, wherein the preset certificate type rule comprises the number and the position information of the keywords corresponding to different certificate types.
In the above aspect, the quality inspection includes a copy inspection; the image quality detection module is also used for acquiring the texture features of the certificate image; calling a two-classification model based on the texture features to perform copy classification processing on the certificate image to obtain a copy classification detection result; when the copy classification detection result represents that the certificate image is a non-copy image, determining that the certificate image is a qualified image; and when the copy classification detection result represents that the certificate image is a copy image, determining that the certificate image is an unqualified image.
In the above scheme, the quality detection comprises definition detection; the image quality detection module is further configured to acquire visual features of the certificate image, where the visual features include at least one of: color information and brightness level; perform definition detection on the certificate image based on the visual features to obtain a definition detection result; when the definition detection result indicates that the certificate image is a clear image, determine that the certificate image is a qualified image; and when the definition detection result indicates that the certificate image is an unclear image, determine that the certificate image is an unqualified image.
In the above solution, the apparatus further comprises: the forged certificate identification module is used for carrying out image acquisition processing on the certificate at different angles to obtain a plurality of to-be-processed images comprising the certificate; acquiring the image characteristics of the anti-counterfeiting mark in each image to be processed; carrying out counterfeit certificate classification processing on the image to be processed according to the image characteristics of the anti-counterfeiting mark to obtain a counterfeit certificate classification result; and when the counterfeit certificate classification result indicates that the certificate is not a counterfeit certificate, determining that the process of identifying the human face area in any image to be processed is to be executed.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the image processing method provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiment of the application provides a computer-readable storage medium, which stores executable instructions and is used for realizing the image processing method provided by the embodiment of the application when being executed by a processor.
The embodiment of the present application provides a computer program product, which includes a computer program, and the computer program is executed by a processor to implement the image processing method provided by the embodiment of the present application.
The embodiment of the application has the following beneficial effects:
A face area is identified from the image to be processed, and the certificate area is determined efficiently and accurately according to the relative position and proportion between the face area and the certificate area, so that a preliminarily qualified certificate area is identified; the preliminarily qualified certificate area is then further examined in combination with quality detection, so that whether the image to be processed is a qualified certificate image can be judged more accurately and efficiently. Compared with the traditional edge detection algorithm used in the related art, this eliminates the interference of the background in the image to be processed, is more stable, and, by adding quality detection, achieves accurate discrimination of the image to be processed.
Drawings
FIG. 1 is a schematic structural diagram of an image processing system provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
FIG. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating movement of the connection intersection of the diagonals of a face region according to an embodiment of the present application;
FIGS. 6A-6B are schematic flowcharts of image processing methods provided by embodiments of the present application;
FIG. 7 is a process diagram of an image processing method according to an embodiment of the present application;
FIG. 8 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 10 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 11 is a schematic flowchart of an image processing method according to an embodiment of the present application;
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second", and "third" are only used to distinguish similar objects and do not denote a particular order; it should be understood that "first", "second", and "third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the application described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Qualified image: an image to be processed that, after passing quality inspection, is confirmed as qualified in terms of recapture, photocopying, integrity, brightness, and definition.
2) Image segmentation: the technique and process of dividing an image into several specific regions with distinctive properties and extracting the object of interest.
3) Face detection: for any given image, a certain strategy is adopted to search it and determine whether it contains a face; if so, the position, size, and pose of the face are returned.
The applicant has found that, when acquiring a qualified certificate image, the traditional edge detection algorithm is generally strongly interfered with by the photo background, its effect is unstable, its parameters are difficult to tune, it cannot identify the certificate type, and it performs no quality detection such as recapture or integrity checks, so unqualified certificate images are easily collected.
The embodiment of the application provides an image processing method, an image processing device, electronic equipment, a computer readable storage medium and a computer program product, which can accurately identify qualified certificate images.
First, an image processing system provided in an embodiment of the present application is described, referring to fig. 1, fig. 1 is a schematic structural diagram of an image processing system 100 provided in an embodiment of the present application, a terminal 400 is connected to a server 200 through a network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two, and uses a wireless link to implement data transmission.
In some embodiments, the terminal 400 may be, but is not limited to, a laptop, a tablet, a desktop computer, a smart phone, a dedicated messaging device, a portable gaming device, a smart speaker, a smart watch, and the like. The server 200 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like. The network 300 may be a wide area network or a local area network, or a combination of both. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited thereto.
The terminal 400 is configured to send an image identification request carrying an image to be identified to the server 200, so as to request the server 200 to identify whether the image to be identified is a qualified image.
The server 200 is configured to determine a certificate area in the image to be processed according to a relative position and a ratio between a face area and a preset certificate area after the face area in the image to be processed is analyzed from the image recognition request; segmenting the image to be processed according to the certificate area to obtain a certificate image; acquiring the characteristics of the certificate image, performing quality detection processing based on the characteristics of the certificate image to obtain a detection result representing whether the certificate image is qualified or not, and returning the detection result to the terminal 400; the terminal 400 is further configured to output a detection result of whether the image to be identified is a qualified image.
In some embodiments, the terminal 400 is provided with an image recognition client 410. The user selects an image to be recognized in the image recognition client 410 and triggers an image recognition instruction based on the selected image; in response to the instruction, the image recognition client 410 sends an image recognition request carrying the image to be recognized to the server. The server parses the face area in the image to be recognized from the image recognition request, where the image to be recognized is obtained by image acquisition of a certificate; determines the certificate area in the image to be processed according to the relative position and proportion between the face area and the preset certificate area; segments the image to be processed according to the certificate area to obtain a certificate image; acquires the features of the certificate image and performs quality detection based on them to obtain a detection result indicating whether the certificate image is qualified; and returns the detection result to the image recognition client 410, which outputs whether the image to be recognized is a qualified image.
It should be noted that the image processing method provided by the embodiment of the present application may be implemented by a terminal and a server in a combined manner, and may also be implemented by the terminal independently.
Next, an exemplary application of the electronic device implementing the image processing method provided by the embodiment of the present application when the electronic device is a terminal is described.
By way of example, taking the terminal 400 in fig. 1 as an example, an image recognition client 410 is arranged on the terminal 400. The user selects an image to be recognized in the image recognition client 410 and triggers an image recognition instruction based on the selected image. In response to the instruction, the image recognition client 410 identifies the face area in the image to be processed, where the image to be processed is obtained by image acquisition of a certificate; determines the certificate area in the image to be processed according to the relative position and proportion between the face area and the preset certificate area; segments the image to be processed according to the certificate area to obtain a certificate image; acquires the features of the certificate image and performs quality detection based on them to obtain a detection result indicating whether the certificate image is qualified; and outputs whether the image to be recognized is a qualified image.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device 500 provided in the embodiment of the present application, in practical applications, the electronic device 500 may be implemented as the terminal 400 or the server 200 in fig. 1, and an electronic device implementing the image processing method in the embodiment of the present application is described by taking the electronic device as the server 200 shown in fig. 1 as an example. The electronic device 500 shown in fig. 2 includes: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It will be appreciated that the bus system 540 is used to enable communications among the components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 2.
The Processor 510 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating with other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, when the electronic device is implemented as a server, the user interface, the presentation module, and the input processing module described above may be omitted.
In some embodiments, the image processing apparatus provided in the embodiments of the present application may be implemented in software. Fig. 2 shows an image processing apparatus 555 stored in the memory 550, which may be software in the form of programs and plug-ins and includes the following software modules: an image recognition module 5551, an image detection module 5552, an image segmentation module 5553, an image quality detection module 5554, a certificate type recognition module 5555, and a counterfeit certificate recognition module 5556. These modules are logical and thus may be arbitrarily combined or further split according to the functions implemented. The functions of the respective modules are explained below.
In other embodiments, the image processing apparatus provided in the embodiments of the present Application may be implemented in hardware, and for example, the image processing apparatus provided in the embodiments of the present Application may be a processor in the form of a hardware decoding processor, which is programmed to execute the image processing method provided in the embodiments of the present Application, for example, the processor in the form of the hardware decoding processor may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
Referring to fig. 3, fig. 3 is a schematic flowchart of an image processing method provided in an embodiment of the present application, and will be described with reference to the steps shown in fig. 3.
In step 101, a face region in an image to be processed is identified, wherein the image to be processed is obtained by image acquisition of a certificate.
Here, the certificate may be various certificates commonly used in reality, such as an identification card, a passport, a bank card, a driver's license, and the like.
It should be noted that, in the embodiment of the present application, the image to be processed may be an original image obtained by shooting, a specific area cut out of an image, or an image selected by the user from the album of the electronic device. The face area is the area containing a face in the image to be processed, and image acquisition may be performed in real time or non-real time. The face area can be recognized by a face detection algorithm to obtain the coordinates of the face frame.
As an example, the face detection algorithms include those based on an active contour model: the face contour is used as the main feature describing the face, the active contour model can model and extract deformable contour features of arbitrary shape, and a large number of samples need to be collected for training, so that the coordinates of the face frame are obtained.
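For illustration only, the minimal Python sketch below shows how face-frame coordinates might be obtained; it uses OpenCV's bundled Haar cascade as a stand-in for the face detection algorithms mentioned in the embodiments (such as RetinaFace), and the function name and parameters are assumptions rather than part of the disclosed method.

```python
import cv2

def detect_face_box(image_path):
    """Return (x, y, w, h) of the largest detected face, or None."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the largest detection as the portrait printed on the certificate.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return int(x), int(y), int(w), int(h)
```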
In step 102, a certificate area in the image to be processed is determined according to the relative position and proportion between the face area and the certificate area, wherein the certificate area comprises the face area.
In some embodiments, the certificate region in the image to be processed is determined based on the face region in a specific type of certificate, which may be any of an identity card, a passport, a bank card, or a driver's license. For example, when only driver's license images are recognized, in order to determine whether the certificate scanned by the user meets the requirements, the certificate area is determined directly according to the relative position and proportion between the face area and the certificate area in the driver's license image; specifically, the certificate area is determined according to the relative position between the intersection of the diagonals of the face area and the intersection of the diagonals of the certificate area, together with the width and length ratios of the face area to the certificate area in the driver's license.
In other embodiments, certificate type identification processing is carried out on the image to be processed to obtain the type of the certificate in the image to be processed; and inquiring a mapping table according to the type to obtain the relative position and proportion between the face area and the certificate area in the image to be processed, wherein the mapping table comprises the relative position and proportion between the face area and the certificate area in the certificate images of different types.
As an example, according to the type of the certificate, the certificate may be an identity card, a passport, a bank card, a driver's license, and the like. Since each type of certificate has a fixed size, there is a fixed relative position and proportion between its face area and certificate area; accordingly, the positions and proportions corresponding to the face area and the certificate area are preset for different certificates according to empirical information and stored in the type query mapping table. For example, when the certificate type is an identity card, a point is determined in the face area, whose coordinate P1 represents the position of the face area; a point is determined in the corresponding certificate area, whose coordinate P2 represents the position of the certificate area; and the coordinates P1 and P2 are stored in the type query mapping table. The area of the face region is denoted R1 and the area of the certificate region R2, and R1 and R2 are also stored in the type query mapping table. Representing the relative positional relationship between the face area and the certificate area of different certificate types by point coordinates, and storing the area relationship between the face area and the certificate area in the type query mapping table, makes the processing compatible with multiple certificate types.
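A minimal sketch of such a type query mapping table is given below; the certificate types, offsets, and ratios are illustrative assumptions, not values disclosed by the embodiment.

```python
# Hypothetical mapping table: certificate type -> relative position and proportions
# between the face area and the certificate area (values are illustrative only).
DOC_LAYOUT = {
    "id_card":  {"center_offset": (-0.9, 0.3),   # certificate center minus face center,
                 "width_ratio": 3.3,             # expressed in multiples of face width/height
                 "height_ratio": 1.7},
    "passport": {"center_offset": (-1.1, 0.2),
                 "width_ratio": 3.8,
                 "height_ratio": 2.4},
}

def lookup_layout(cert_type):
    # Query the mapping table according to the recognized certificate type.
    return DOC_LAYOUT[cert_type]
```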
In some embodiments, the type of the document in the image to be processed is obtained through image recognition, for example, a document type classification model is called, or the document type is manually indicated by a user, or the document type to be processed in a service scene is read according to a specific service scene (such as real name authentication), so as to determine the document type.
It should be noted that the certificate type classification model may be various types of neural network models, such as a convolutional neural network, a deep convolutional neural network, a fully-connected neural network, and the like, and the labels of the training samples in the certificate type classification model may be different types of certificate labels. The server may obtain the training image labeled with the label from the database, may also obtain the training image labeled with the label in a manual labeling manner, and may also obtain the training image labeled with the label in other manners, which is not limited herein in the embodiments of the present application.
As an example, taking a certificate type classification model as an image classification model based on a convolutional neural network as an example, a training sample of the convolutional neural network model includes a normal certificate image, annotation data of the training sample includes certificate size and keyword information in the normal certificate image, the certificate size and the keyword information are extracted from the normal certificate image, image features of each image in a first training set are respectively extracted directly through a convolutional layer of the convolutional neural network model, and the extracted image features are sequentially input into a full connection layer (used for connecting the image features into a vector) and a pooling layer (used for average pooling or maximum pooling) of the convolutional neural network model to determine a prediction result corresponding to each image respectively, and the prediction result is optimized through a loss function.
It should be noted that the loss function takes the error between the prediction result of the image type and the label corresponding to the image type as the difference factor and minimizes this factor. The types of loss function may include the mean square error loss function (MSE), the hinge loss function (HLF), the cross entropy loss function (Cross Entropy), and the like.
In some embodiments, referring to fig. 4, which is a schematic flowchart provided in an embodiment of the present application, step 102 may be implemented by steps 1021 to 1022 in fig. 4.
In step 1021, the first center point of the face area is moved to the second center point of the certificate area according to the relative position between the first center point of the face area and the second center point of the certificate area.
In some embodiments, fig. 5 exemplarily shows the relative position and proportion between the face area and the certificate area when the certificate is an identity card. In fig. 5, point A is the intersection of the diagonals of the certificate area and point B is the intersection of the diagonals of the face area; with point A as the origin of the coordinate system, the coordinates of point B are (x, y) and those of point A are (0, 0), so the relative positional relationship between the face area and the certificate area can be represented by the coordinates of points determined in the two areas. It should be noted that, for example, the identity card area is 85.6 mm long and 54 mm wide and the face area is 26 mm long and 37 mm wide, which determines the ratio of the lengths and the ratio of the widths of the identity card area and the face area; therefore, once the certificate type is determined, the ratio of the length of the certificate area to that of the face area is fixed, and the ratio of their widths is also fixed.
As an example, the first central point of the face region may be the intersection of the diagonals of the face region, and the second central point of the certificate region may be the intersection of the diagonals of the certificate region; the former is moved to the latter according to the relative position (the coordinate relationship) between the two points. Fig. 6A and 6B schematically show the movement of the intersection of the diagonals of the face region according to an embodiment of the present application: fig. 6A shows the situation before the movement, and fig. 6B the situation after the movement. With the face region and the certificate region determined, 601 is the face region before the movement, and the intersection point A is marked from the determined face region and its diagonals; the intersection point B of the diagonals of the certificate region is determined from the certificate region, the coordinates of A and B are obtained, and the intersection point A of the diagonals of the face region is moved to the intersection point B of the diagonals of the certificate region according to the coordinate information of points A and B.
In step 1022, the second central point of the certificate area is taken as a reference, the width of the face area is extended according to the first proportion, and the height of the face area is extended according to the second proportion, so as to obtain the certificate area.
As an example, the moved face region 602 is obtained, as shown in fig. 6B; then, taking the moved point A as the reference, the width and length of the face region are enlarged according to the width and length ratios of the face region to the certificate region, so as to obtain the certificate region 603.
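Under the assumptions of the table sketched above, steps 1021 to 1022 may be expressed as the following illustrative computation; it is a sketch only, not the embodiment's exact formulas.

```python
def face_box_to_cert_box(face_box, layout):
    """Derive the certificate rectangle from the face rectangle (illustrative sketch)."""
    fx, fy, fw, fh = face_box                       # face frame: x, y, width, height
    face_cx, face_cy = fx + fw / 2, fy + fh / 2     # first central point
    dx, dy = layout["center_offset"]                # assumed to be in face-size units
    cert_cx = face_cx + dx * fw                     # second central point
    cert_cy = face_cy + dy * fh
    cert_w = fw * layout["width_ratio"]             # extend width by the first ratio
    cert_h = fh * layout["height_ratio"]            # extend height by the second ratio
    return cert_cx - cert_w / 2, cert_cy - cert_h / 2, cert_w, cert_h
```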
In step 103, the image to be processed is segmented according to the certificate area to obtain a certificate image.
In some embodiments, the image to be processed is segmented according to the certificate area obtained from the face area, so as to obtain the certificate image.
As an example, the segmentation process may be implemented by a semantic segmentation model. Here, the semantic segmentation model may be a U-type Convolutional neural network U-net model, a full Convolutional neural network (FCN) model, a deep neural network model, or the like, which is not limited in the embodiment of the present application.
In practical implementation, taking a full convolution neural network FCN model as an example, after semantic feature extraction and downsampling processing are performed through a plurality of stacked convolution and pooling layers, a semantic feature segmentation result graph corresponding To an original image can be obtained, and the length and width of the semantic feature segmentation result graph are upsampled To the size of the original image by using methods such as bilinear interpolation and the like, so that pixel-level End-To-End (End-To-End) semantic segmentation is realized, a corresponding semantic segmentation graph is obtained, and the semantic segmentation graph includes different regions representing different types.
For example, through the semantic segmentation model in the embodiment of the present application, semantic features are extracted from an image to be processed, semantic recognition is performed on the image to be processed, so as to obtain a semantic segmentation map smaller than the original image in size, where the semantic segmentation map includes at least one semantic region and a certificate region, and the at least one semantic region obtained through the semantic recognition processing is used as the certificate region. Therefore, the problems that the traditional edge detection algorithm is greatly interfered by the background of the photo and has unstable effect are avoided, the instability problem of the traditional edge detection algorithm is solved based on the multi-target detection algorithm, and the accuracy and the stability of the detection of the certificate frame are improved.
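Whatever model produces the certificate area, the final segmentation of the certificate image can be as simple as cropping the image to be processed by the certificate box; the sketch below is an illustrative assumption and also flags boxes that run past the photo boundary, which the application scenario below treats as not meeting the requirement.

```python
def crop_certificate(img, cert_box):
    """img: numpy array from cv2.imread; cert_box: (x, y, w, h). Returns the certificate image."""
    x, y, w, h = (int(round(v)) for v in cert_box)
    height, width = img.shape[:2]
    if x < 0 or y < 0 or x + w > width or y + h > height:
        return None                      # certificate frame exceeds the photo boundary
    return img[y:y + h, x:x + w].copy()  # segmented certificate image
```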
In step 104, the characteristics of the certificate image are acquired, and quality detection processing is performed based on the characteristics of the certificate image to obtain a detection result representing whether the certificate image is qualified.
In practical applications, different processing methods may be adopted to perform the quality detection combination processing on the image to be recognized, which will be described below.
In some embodiments, referring to fig. 7, fig. 7 is a schematic flowchart of a process of quality detection including integrity detection provided by an embodiment of the present application, and step 104 may be implemented by step 1041A-step 1044A in fig. 7. Quality testing includes integrity testing; correspondingly, the features of the certificate image are acquired in step 104, quality detection processing is performed based on the features of the certificate image, and a detection result representing whether the certificate image is qualified is obtained, which can be realized by the following technical scheme: in step 1041A, acquiring content features of the certificate image; in step 1042A, performing integrity classification processing on the certificate image based on the content characteristics of the certificate image to obtain an integrity classification detection result; in step 1043A, when the integrity classification detection result indicates that the certificate image is an intact image, determining that the certificate image is a qualified image; in step 1044A, when the integrity classification detection result indicates that the document image is an incomplete image, the document image is determined to be an unsatisfactory image.
By way of example, the content features can be the number of keywords or the position information of the keywords. The certificate type rule is used to determine the integrity of the certificate image and can include the number of keywords contained in different types of certificates and their position information. The certificate image is classified for integrity by comparing the number and position information of its keywords against the certificate type rule, so as to obtain the integrity classification detection result. For example, when the detected certificate is a driver's license, the keywords may be name, gender, nationality, and date of birth, and the keyword position information may be the specific arrangement of the keywords on the driver's license, such as the interval between name and gender. When a keyword is missing, or the number or position information of the keywords differs from the certificate type rule, the certificate can be judged to be incomplete.
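A hedged sketch of such an integrity check follows; the required keywords per certificate type are assumptions for illustration, not the patent's actual certificate type rules.

```python
# Hypothetical certificate type rules: required keywords per certificate type.
REQUIRED_KEYWORDS = {
    "id_card": ["name", "gender", "nationality", "birth", "address", "id_number"],
    "driver_license": ["name", "gender", "nationality", "date_of_birth"],
}

def integrity_check(cert_type, detected_keywords):
    """detected_keywords: dict keyword -> (x, y) position returned by keyword detection."""
    required = REQUIRED_KEYWORDS[cert_type]
    missing = [k for k in required if k not in detected_keywords]
    # The image is judged incomplete when required keywords are absent or occluded.
    return (len(missing) == 0), missing
```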
In some embodiments, referring to fig. 8, fig. 8 is a schematic flowchart of the quality detection including integrity detection provided in the embodiment of the present application, and step 104 may be implemented by steps 1041B to 1042B in fig. 8. In step 1041B, acquiring a certificate type rule corresponding to the certificate type; in step 1042B, classification processing is performed on the certificate image according to the certificate type rule and the number and the position information of the obtained keywords, so as to obtain a classification detection result, where the preset certificate type rule includes the number and the position information of the keywords corresponding to different certificate types.
As an example, a corresponding certificate type rule is determined according to the obtained certificate type, and keyword detection is performed on the obtained certificate image to obtain a position relationship of a plurality of keywords in the certificate image. Specifically, multi-keyword detection is performed on the certificate image through the multi-target detection model, and the position relationship of a plurality of keywords of the certificate image is obtained, wherein the position relationship of the keywords may include coordinates of the keywords. In image processing, the positional relationship of a keyword represents a combination or context relationship of a plurality of keywords within a range in a fixed region, wherein the positional relationship represents the combination relationship of the keyword context and the surrounding neighborhood. And classifying the certificate images according to the position relation and the number of the detected keywords to obtain a classification detection result.
It should be noted that the multi-target detection model may be any of various types of neural network models, such as a convolutional neural network, a deep convolutional neural network, a fully connected neural network, and the like, which is not limited in the embodiments of the present application.
In practical implementation, taking the fully convolutional network as an example, two fully connected layers are replaced with convolutional layers, and four additional convolutional layers are appended. The outputs of five different convolutional layers are each convolved with two 3 x 3 convolution kernels: one output gives the confidences for classification, with each default box generating 21 confidences; the other output gives the regression localization, with each default box generating 4 coordinate values (x, y, w, h). In addition, these five convolutional layers also pass through the priorBox layer to generate default boxes (the generated coordinates), and the number of default boxes per layer is specified. Finally, the three computation results are combined, and the recognized target points are selected on this basis, achieving both high multi-target detection speed and high detection accuracy.
In some embodiments, referring to fig. 9, fig. 9 is a schematic flowchart of the quality detection including copy detection provided in the embodiment of the present application, and step 104 may be implemented by steps 1041C to 1044C in fig. 9. In step 1041C, texture features of the certificate image are acquired; in step 1042C, a binary classification model is called based on the texture features to perform copy classification processing on the certificate image, so as to obtain a copy classification detection result; in step 1043C, when the copy classification detection result indicates that the certificate image is a non-copy image, the certificate image is determined to be a qualified image; in step 1044C, when the copy classification detection result indicates that the certificate image is a copy image, the certificate image is determined to be an unqualified image.
As an example, the corresponding texture features are extracted from the certificate image by a copy-classification convolutional neural network model, and the probability that the certificate image is a copy is determined by a two-class classification model based on the obtained texture features. The two-class model can be a fully connected layer structure: performing full connection processing on the texture features through this structure outputs the probabilities corresponding to the two classification results, and the class with the higher probability is taken as the copy classification detection result.
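The two-class head can be sketched as a single fully connected layer followed by softmax over the texture feature vector; the sketch below is illustrative, with weights that would come from training rather than being specified here.

```python
import numpy as np

def copy_classifier(texture_feat, weights, bias):
    """texture_feat: 1-D feature vector; weights: (d, 2) matrix; bias: (2,) vector."""
    logits = texture_feat @ weights + bias          # fully connected layer
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                            # softmax over {non-copy, copy}
    label = "copy" if probs[1] > probs[0] else "non-copy"
    return label, probs
```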
In some embodiments, visual features of the certificate image are acquired, where the visual features include at least one of: color information and brightness level; definition detection is performed on the certificate image based on the visual features to obtain a definition detection result; when the definition detection result indicates that the certificate image is a clear image, the certificate image is determined to be a qualified image; and when the definition detection result indicates that the certificate image is an unclear image, the certificate image is determined to be an unqualified image.
As an example, histogram equalization is performed on the certificate image; the three-primary-color (RGB) channels of the certificate image are converted into HSV channels; the V channel is taken for brightness processing to obtain a new V channel; the new V channel is combined with the H and S channels and converted back to the RGB channels, and the mean value of the certificate image brightness is calculated; the image definition is then calculated with the Laplacian operator based on the brightness mean, and the certificate image is judged against a preset definition threshold.
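A minimal sketch of the brightness and definition check is given below, assuming the brightness is the mean of the HSV V channel and the definition is the variance of the Laplacian; the thresholds are illustrative and not taken from the embodiment.

```python
import cv2

def brightness_sharpness_ok(cert_img, min_brightness=60, min_sharpness=100.0):
    gray = cv2.cvtColor(cert_img, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                         # histogram equalization
    hsv = cv2.cvtColor(cert_img, cv2.COLOR_BGR2HSV)
    brightness = float(hsv[:, :, 2].mean())               # mean of the V (brightness) channel
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()     # Laplacian-based definition measure
    return brightness >= min_brightness and sharpness >= min_sharpness
```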
It should be noted that, in the embodiment of the present application, multiple quality detection modes may be combined for the certificate image in order to screen the image to be processed for a qualified certificate image; for example, after the integrity classification processing, recapture detection and brightness and definition detection may be performed on the certificate image in sequence, and the image is qualified only when all of them pass, as sketched below.
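The sequential combination described above can be expressed as a simple pipeline in which the certificate image is qualified only if every enabled check passes; the helper and the check names below are assumptions for illustration.

```python
def quality_pipeline(cert_img, checks):
    """checks: list of callables, each returning True when the certificate image passes."""
    return all(check(cert_img) for check in checks)

# Hypothetical usage: qualified only if every enabled detection passes.
# qualified = quality_pipeline(cert_img, [integrity_ok, not_recaptured, not_copy,
#                                         brightness_sharpness_ok])
```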
In addition, it should be noted that the combination of quality detections performed on different types of images is determined by the specific application scenario: the higher the safety factor required by the scenario, the more types of detection are performed, the safety factor being positively correlated with the number of detection types. For example, when an identity card image is acquired, since the identity card information needs to be highly accurate and complete, the image to be processed must pass integrity classification processing, recapture detection, copy detection, and brightness and definition detection, and is qualified only after passing all of them.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described. The image processing method provided by the embodiment is implemented by the mobile terminal.
Referring to fig. 10, fig. 10 is a schematic flowchart of an image processing method according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 10.
Step 401, performing anti-counterfeiting detection on the original identity card.
Before inputting the image to be recognized into the trained self-coding model, the mobile terminal needs to use a detection model to detect the anti-counterfeiting points on the certificate, such as the laser Great Wall pattern on a Chinese identity card or the laser-varying marks on a driving license, to judge whether the certificate is an original. For the laser-varying marks on the certificate, a short video is acquired at a varying pitch angle, that is, certificate images are acquired continuously from multiple angles; features are extracted through a multi-dimensional CNN network, and whether the certificate is an original is judged by combining the multiple images, since a non-original does not exhibit the laser variation.
Compared with a conventional convolutional network in the related art, whose input is a single image and whose classification result is obtained by extracting features through the CNN and then connecting a two-class classifier, the input of the multi-dimensional CNN is a plurality of certificate images at different angles; features are extracted through the multi-dimensional CNN network and then connected to a classifier to determine whether the certificate image is an original.
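For illustration, the multi-angle input can be assembled by sampling frames from the short video and stacking them into one tensor for the multi-dimensional CNN; the sketch below shows only this assembly step, and the network itself is not specified here.

```python
import cv2
import numpy as np

def sample_multi_angle_frames(video_path, num_frames=8, size=(224, 224)):
    """Sample frames evenly from the pitch-angle video and stack them as one sample."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in np.linspace(0, max(total - 1, 0), num_frames).astype(int):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.resize(frame, size))
    cap.release()
    # Shape (num_frames, H, W, 3): one multi-angle certificate sample for the classifier.
    return np.stack(frames) if frames else None
```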
And 402, receiving an image acquired by the terminal equipment through a camera, wherein the identity card is placed in a photographing area.
Step 403, detecting the face on the certificate based on a face detection algorithm (e.g. retinaFace), and obtaining coordinates of the face frame.
And step 404, zooming to obtain the certificate frame according to the face frame, and the relative positions and size ratios of the face area and the certificate area.
And 405, segmenting the certificate image according to the detected certificate frame.
And judging whether the certificate scanned by the user meets the requirement or not according to the position and the occupied proportion of the certificate frame in the picture frame of the photo, and if the certificate image segmented according to the certificate frame exceeds the boundary of the photo, indicating that the scanned certificate does not meet the requirement.
Step 406, performing quality detection on the segmented certificate image.
Referring to fig. 11, fig. 11 is a schematic flowchart of the quality detection process, including integrity detection, provided by the embodiment of the present application; step 406 may be implemented by steps 4061 to 4064 in fig. 11.
Step 4061, reproduction detection. Moiré-pattern features are extracted from the segmented certificate image using a wavelet transform, and a CNN classifier model is used to judge whether the image is a recaptured (reproduced) image.
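One way to realise the moiré-feature extraction of step 4061 is sketched below using PyWavelets; the subband energy statistics and the three-level Haar decomposition are illustrative choices, and the downstream classifier (a CNN in the embodiment) is not shown.

import cv2
import numpy as np
import pywt

def moire_features(cert_img_bgr):
    """Wavelet-based moiré descriptor: energy statistics of the high-frequency subbands,
    where screen-recaptured images typically show abnormal periodic energy."""
    gray = cv2.cvtColor(cert_img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    feats = []
    coeffs = gray
    for _ in range(3):  # 3-level 2D discrete wavelet transform
        coeffs, (cH, cV, cD) = pywt.dwt2(coeffs, "haar")
        for band in (cH, cV, cD):
            feats.extend([np.mean(np.abs(band)), np.std(band)])
    return np.array(feats, dtype=np.float32)

# The feature vector would then be fed to the reproduction classifier
# (a CNN in the embodiment; any binary classifier can consume it here).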
Step 4062, copy detection. Features of the segmented certificate image are extracted using a convolutional neural network model, and a binary classification model is used to judge whether the image is a photocopy.
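The embodiment only states that extracted features are fed to a binary classification model; as a hedged stand-in for the CNN features, the sketch below uses a local-binary-pattern texture histogram (scikit-image) with a logistic-regression classifier (scikit-learn) to illustrate the feature-plus-binary-classifier pattern.

import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import LogisticRegression

def texture_histogram(cert_img_bgr, points=8, radius=1):
    """Uniform-LBP histogram as a simple texture descriptor of the certificate image."""
    gray = cv2.cvtColor(cert_img_bgr, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# Training on labelled samples (1 = photocopy, 0 = live capture), then predicting:
# clf = LogisticRegression().fit(np.stack([texture_histogram(im) for im in train_imgs]), labels)
# is_copy = clf.predict(texture_histogram(test_img).reshape(1, -1))[0] == 1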
Step 4063, integrity detection. A multi-target detection model is applied to the segmented certificate image to obtain the coordinates of each key point, and whether the image is complete is judged according to the preset rules and conditions of the corresponding certificate.
In some embodiments, whether the certificate is complete and clear is judged from the number and positions of the keywords detected by the target detection model together with the preset rules and conditions of the corresponding certificate; if key points are blurred or occluded, they will not be detected, or will be detected only partially.
For example, the certificate frame, the certificate type, and the key points on the certificate (the key points that make up the keywords, which amounts to checking whether the preset keywords are present) are detected simultaneously by a multi-target detection model such as YOLO, SSD or Mask R-CNN, so as to obtain the coordinates of each key point. The ID-card region, the portrait, and the keywords of name, sex, date of birth, nationality, address and citizen ID number are detected as targets, and whether the image is complete is judged according to the preset rules and conditions of the corresponding certificate.
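A minimal sketch of the rule check that can follow the multi-target detector; the required keyword set and the detector output format are illustrative assumptions for the front side of an ID card.

# Illustrative preset rule for the ID-card front: every keyword below must be detected
# and must lie inside the certificate frame for the image to count as complete.
REQUIRED_KEYWORDS = {"portrait", "name", "sex", "nationality",
                     "date_of_birth", "address", "citizen_id_number"}

def inside(box, frame):
    x, y, w, h = box
    fx, fy, fw, fh = frame
    return fx <= x and fy <= y and x + w <= fx + fw and y + h <= fy + fh

def is_complete(detections, cert_frame):
    """detections: {label: (x, y, w, h)} produced by the multi-target detection model."""
    found = {label for label, box in detections.items() if inside(box, cert_frame)}
    return REQUIRED_KEYWORDS <= found  # complete only if every preset keyword is present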
Step 4064, brightness and definition detection. For the segmented certificate image, the mean brightness is calculated in the HSV color space; the image definition (sharpness) is calculated with the Laplacian operator, and whether the conditions are met is judged against a preset image definition threshold.
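Step 4064 maps onto standard OpenCV operations; in the sketch below, the brightness and definition thresholds are illustrative placeholders rather than the embodiment's preset values.

import cv2
import numpy as np

def brightness_and_sharpness_ok(cert_img_bgr, min_brightness=60, max_brightness=220,
                                min_sharpness=100.0):
    # Mean brightness taken from the V channel of the HSV colour space.
    hsv = cv2.cvtColor(cert_img_bgr, cv2.COLOR_BGR2HSV)
    brightness = float(np.mean(hsv[:, :, 2]))
    # Definition (sharpness) as the variance of the Laplacian response.
    gray = cv2.cvtColor(cert_img_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = float(cv2.Laplacian(gray, cv2.CV_64F).var())
    return min_brightness <= brightness <= max_brightness and sharpness >= min_sharpness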
Step 407, if the photo meets the preset certificate rules and conditions, the photo is transmitted to the back end for image collection and further processing, and the flow ends.
In the embodiment of the present application, the frame of the target certificate is detected by a multi-target detection algorithm and the certificate image is segmented out; reproduction detection, copy detection, integrity detection, and brightness and definition detection are then performed, and whether the image meets the requirements is judged against preset conditions and rules. In the quality detection of the image, the multi-target detection algorithm overcomes the instability of the traditional edge-detection algorithm, greatly improving the accuracy and stability of offline certificate frame detection; judging the definition and integrity of the certificate image by means of keywords is more reliable than traditional definition and brightness detection algorithms. This ensures that images passing the reproduction, copy, integrity and brightness detections meet the requirements, greatly improves the quality of collected certificate images, and avoids unqualified photos.
Continuing with the exemplary structure of the image processing apparatus 555 provided by the embodiments of the present application implemented as software modules, in some embodiments, as shown in fig. 2, the software modules of the image processing apparatus 555 stored in the memory 550 may include: the image recognition module 5551, configured to recognize a face region in an image to be processed, where the image to be processed is obtained by image acquisition of a certificate; the image detection module 5552, configured to determine a certificate region in the image to be processed according to the relative position and ratio between the face region and the certificate region, where the certificate region includes the face region; the image segmentation module 5553, configured to perform segmentation processing on the image to be processed according to the certificate region to obtain a certificate image; and the image quality detection module 5554, configured to acquire the features of the certificate image and perform quality detection processing based on these features to obtain a detection result representing whether the certificate image is qualified.
in some embodiments, the apparatus further comprises: the certificate type identification module 5555 is used for carrying out certificate type identification processing on the image to be processed to obtain the type of the certificate in the image to be processed; and inquiring a mapping table according to the type to obtain the relative position and proportion between the face area and the certificate area in the image to be processed, wherein the mapping table comprises the relative position and proportion between the face area and the certificate area in the certificate images of different types.
In some embodiments, the relative position between the face region and the certificate region comprises a relative position between a first center point of the face region and a second center point of the certificate region, and the ratio between the face region and the certificate region comprises a first ratio of the width of the face region to that of the certificate region and a second ratio of the height of the face region to that of the certificate region; the certificate type identification module 5555 is further configured to move the first center point of the face region to the second center point of the certificate region according to the relative position between the two center points; and, taking the second center point of the certificate region as a reference, extend the width of the face region according to the first ratio and the height of the face region according to the second ratio, to obtain the certificate region.
In some embodiments, the quality detection comprises integrity detection; the image quality detection module 5554 is further used for acquiring the content features of the certificate image; carrying out integrity classification processing on the certificate image based on the content features of the certificate image to obtain an integrity classification detection result; when the integrity classification detection result represents that the certificate image is a complete image, determining that the certificate image is a qualified image; and when the integrity classification detection result represents that the certificate image is an incomplete image, determining that the certificate image is an unqualified image.
In some embodiments, the content features of the document image include the number of keywords, location information, and document type; the image quality detection module 5554 is further used for acquiring a certificate type rule corresponding to the certificate type; and classifying the certificate image according to the certificate type rule and the number and the position information of the obtained keywords to obtain a classification detection result, wherein the preset certificate type rule comprises the number and the position information of the keywords corresponding to different certificate types.
In some embodiments, the quality detection comprises copy detection; the image quality detection module 5554 is further used for acquiring texture features of the certificate image; calling a two-classification model based on the texture features to perform copy classification processing on the certificate image to obtain a copy classification detection result; when the copy classification detection result represents that the certificate image is a non-copy image, determining that the certificate image is a qualified image; and when the copy classification detection result represents that the certificate image is a copy image, determining that the certificate image is an unqualified image.
In some embodiments, the quality detection comprises definition detection; the image quality detection module 5554 is further configured to obtain visual features of the certificate image, where the visual features include at least one of: color information, brightness level; perform definition detection on the certificate image based on the visual features to obtain a definition detection result; when the definition detection result represents that the certificate image is a clear image, determine that the certificate image is a qualified image; and when the definition detection result represents that the certificate image is an unclear image, determine that the certificate image is an unqualified image.
In some embodiments, the apparatus further comprises: the forged certificate identification module 5556 is used for carrying out image acquisition processing on the certificate at different angles to obtain a plurality of images to be processed including the certificate; acquiring the image characteristics of the anti-counterfeiting mark in each image to be processed; carrying out counterfeit certificate classification processing on the image to be processed according to the image characteristics of the anti-counterfeiting mark to obtain a counterfeit certificate classification result; when the counterfeit certificate classification result indicates that the certificate is not a counterfeit certificate, it is determined that a process of recognizing a face area in any image to be processed is to be performed.
It should be noted that the description of the apparatus in the embodiment of the present application is similar to the description of the method embodiment, and has similar beneficial effects to the method embodiment, and therefore, the description is not repeated.
The embodiment of the present application provides a computer program product, which includes a computer program that, when executed by a processor, implements the image processing method provided by the embodiment of the present application.
The embodiment of the application provides a computer-readable storage medium which stores executable instructions, and when the executable instructions are executed by a processor, the executable instructions cause the processor to execute the image processing method provided by the embodiment of the application.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example, in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, the technical solutions of the embodiments of the present application have the following beneficial effects:
1) Based on the multi-target detection algorithm, the instability of the traditional edge-detection algorithm is overcome, and the accuracy and stability of offline certificate frame detection on the mobile terminal are greatly improved.
2) Definition and integrity are judged from the keywords (key points), which is more reliable than traditional definition and brightness detection algorithms and ensures that images meeting the requirements are obtained.
3) Reproduction, copy, integrity and brightness/definition detections are performed, greatly improving the quality of certificate photo collection and avoiding non-compliant photos.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (12)

1. An image processing method, characterized in that the method comprises:
identifying a face area in an image to be processed, wherein the image to be processed is obtained by carrying out image acquisition on a certificate;
determining the certificate area in the image to be processed according to the relative position and proportion between the face area and the certificate area, wherein the certificate area comprises the face area;
segmenting the image to be processed according to the certificate area to obtain a certificate image;
and acquiring the characteristics of the certificate image, and performing quality detection processing based on the characteristics of the certificate image to obtain a detection result representing whether the certificate image is qualified.
2. The method of claim 1, wherein before determining the document region in the image to be processed based on the relative position and ratio between the face region and the document region, the method further comprises:
carrying out certificate type identification processing on the image to be processed to obtain the type of the certificate in the image to be processed;
and querying a mapping table according to the type to obtain the relative position and proportion between the face region and the certificate region in the image to be processed, wherein the mapping table comprises the relative position and proportion between the face region and the certificate region in certificate images of different types.
3. The method of claim 2, wherein the relative position between the face region and the document region comprises a relative position between a first center point of the face region and a second center point of the document region, and wherein the ratio between the face region and the document region comprises a first ratio of the width of the face region to the document region and a second ratio of the height of the face region to the document region;
determining the certificate area in the image to be processed according to the relative position and proportion between the face area and the certificate area, wherein the determining comprises the following steps:
moving the first central point of the face area to the second central point of the certificate area according to the relative position between the first central point of the face area and the second central point of the certificate area;
and taking a second central point of the certificate area as a reference, prolonging the width of the face area according to the first proportion, and prolonging the height of the face area according to the second proportion to obtain the certificate area.
4. The method of claim 1, wherein the quality check comprises an integrity check;
the acquiring the characteristics of the certificate image, and performing quality detection processing based on the characteristics of the certificate image to obtain a detection result representing whether the certificate image is qualified or not includes:
acquiring content characteristics of the certificate image;
carrying out integrity classification processing on the certificate image based on the content characteristics of the certificate image to obtain an integrity classification detection result;
when the integrity classification detection result represents that the certificate image is a complete image, determining that the certificate image is a qualified image;
and when the integrity classification detection result represents that the certificate image is an incomplete image, determining that the certificate image is an unqualified image.
5. The method of claim 4, wherein the content features of the document image comprise the number of keywords, location information, and document type;
the integrity classification processing is carried out on the certificate image based on the content characteristics to obtain a classification detection result, and the method comprises the following steps:
acquiring a certificate type rule corresponding to the certificate type;
and classifying the certificate image according to the certificate type rule and the number and the position information of the obtained keywords to obtain a classification detection result, wherein the preset certificate type rule comprises the number and the position information of the keywords corresponding to different certificate types.
6. The method of claim 1, wherein the quality detection comprises copy detection;
the acquiring the characteristics of the certificate image, and performing quality detection processing based on the characteristics of the certificate image to obtain a detection result representing whether the certificate image is qualified or not includes:
acquiring texture features of the certificate image;
calling a two-classification model based on the texture features to perform copy classification processing on the certificate image to obtain a copy classification detection result;
when the copy classification detection result represents that the certificate image is a non-copy image, determining that the certificate image is a qualified image;
and when the copy classification detection result represents that the certificate image is a copy image, determining that the certificate image is an unqualified image.
7. The method of claim 1, wherein the quality detection comprises sharpness detection;
the acquiring the characteristics of the certificate image, and performing quality detection processing based on the characteristics of the certificate image to obtain a detection result representing whether the certificate image is qualified or not includes:
acquiring visual features of the document image, wherein the visual features include at least one of: color information, brightness level;
determining a certificate image based on the visual features to perform definition detection to obtain a definition detection result;
when the definition detection result represents that the certificate image is a clear image, determining that the certificate image is a qualified image;
and when the definition detection result represents that the certificate image is an unsharp image, determining that the certificate image is an unqualified image.
8. The method of claim 1, wherein prior to identifying the face region in the image to be processed, the method further comprises:
carrying out image acquisition processing on the certificate at different angles to obtain a plurality of to-be-processed images comprising the certificate;
acquiring the image characteristics of the anti-counterfeiting mark in each image to be processed;
carrying out counterfeit certificate classification processing on the image to be processed according to the image characteristics of the anti-counterfeiting mark to obtain a counterfeit certificate classification result;
and when the counterfeit certificate classification result indicates that the certificate is not a counterfeit certificate, determining that the process of identifying the human face area in any image to be processed is to be executed.
9. An image processing apparatus, characterized in that the apparatus comprises:
the image recognition module is used for recognizing a face area in an image to be processed, wherein the image to be processed is obtained by carrying out image acquisition on a certificate;
the image detection module is used for determining the certificate area in the image to be processed according to the relative position and proportion between the face area and the certificate area, wherein the certificate area comprises the face area;
the image segmentation module is used for carrying out segmentation processing on the image to be processed according to the certificate area to obtain a certificate image;
and the image quality detection module is used for acquiring the characteristics of the certificate image, and performing quality detection processing based on the characteristics of the certificate image to obtain a detection result representing whether the certificate image is qualified.
10. An electronic device, characterized in that the electronic device comprises:
a memory for storing executable instructions;
a processor for implementing the image processing method of any one of claims 1 to 8 when executing executable instructions stored in the memory.
11. A computer-readable storage medium storing executable instructions for implementing the image processing method of any one of claims 1 to 8 when executed by a processor.
12. A computer program product comprising a computer program, characterized in that the computer program realizes the image processing method of any one of claims 1 to 8 when executed by a processor.
CN202110880952.7A 2021-08-02 2021-08-02 Image processing method, image processing device, electronic equipment and storage medium Pending CN113628181A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110880952.7A CN113628181A (en) 2021-08-02 2021-08-02 Image processing method, image processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110880952.7A CN113628181A (en) 2021-08-02 2021-08-02 Image processing method, image processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113628181A true CN113628181A (en) 2021-11-09

Family

ID=78382173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110880952.7A Pending CN113628181A (en) 2021-08-02 2021-08-02 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113628181A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565967A (en) * 2022-04-28 2022-05-31 广州丰石科技有限公司 Worker card face detection method, terminal and storage medium
CN115375998A (en) * 2022-10-24 2022-11-22 成都新希望金融信息有限公司 Certificate identification method and device, electronic equipment and storage medium
CN115375998B (en) * 2022-10-24 2023-03-17 成都新希望金融信息有限公司 Certificate identification method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
Walia et al. Digital image forgery detection: a systematic scrutiny
Qureshi et al. A bibliography of pixel-based blind image forgery detection techniques
Tokuda et al. Computer generated images vs. digital photographs: A synergetic feature and classifier combination approach
JP5775225B2 (en) Text detection using multi-layer connected components with histograms
JP5050075B2 (en) Image discrimination method
US20240112316A1 (en) Systems and methods for image data processing to correct document deformations using machine learning system
US20180089533A1 (en) Automated methods and systems for locating document subimages in images to facilitate extraction of information from the located document subimages
JP2010262648A (en) Automated method for alignment of document object
Liu et al. Smooth filtering identification based on convolutional neural networks
CN112036395A (en) Text classification identification method and device based on target detection
JP7419080B2 (en) computer systems and programs
CN113628181A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114092938B (en) Image recognition processing method and device, electronic equipment and storage medium
Jwaid et al. Study and analysis of copy-move & splicing image forgery detection techniques
US20220414393A1 (en) Methods and Systems for Generating Composite Image Descriptors
Cyganek Hybrid ensemble of classifiers for logo and trademark symbols recognition
CN114359912B (en) Software page key information extraction method and system based on graph neural network
JP6377214B2 (en) Text detection method and apparatus
Chagnon-Forget et al. Enhanced visual-attention model for perceptually improved 3D object modeling in virtual environments
Mehta et al. Near-duplicate image detection based on wavelet decomposition with modified deep learning model
Abraham Digital image forgery detection approaches: A review and analysis
CN114445916A (en) Living body detection method, terminal device and storage medium
CN113869419A (en) Method, device and equipment for identifying forged image and storage medium
KR102047936B1 (en) Apparatus and method for classifying images stored in a digital device
WO2021098861A1 (en) Text recognition method, apparatus, recognition device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication