CN114549817A - Seal detection method and device, computer equipment and storage medium - Google Patents

Seal detection method and device, computer equipment and storage medium

Info

Publication number
CN114549817A
CN114549817A (application number CN202210168420.5A)
Authority
CN
China
Prior art keywords
image
sample image
seal
negative sample
enhanced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210168420.5A
Other languages
Chinese (zh)
Inventor
冷绵绵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN202210168420.5A
Publication of CN114549817A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Abstract

The embodiments of this application belong to the technical field of image processing in artificial intelligence, and relate to a seal detection method and apparatus based on a deep convolutional network, a computer device and a storage medium. The application also relates to blockchain technology: the user's image to be detected and seal detection result may be stored in a blockchain. This application builds a trained recognition model based on the deep-learning object detection algorithm YOLOv3, and uses the trained recognition model to detect whether a target seal is present in the image to be detected. Because YOLOv3 is itself a one-stage OD (object detection) algorithm, the accuracy of seal recognition is ensured. At the same time, seal detection automates the document approval workflow, greatly reducing the manpower needed to manually review documents, lowering labor costs and improving system efficiency.

Description

Seal detection method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing techniques in artificial intelligence, and in particular, to a seal detection method and apparatus based on a deep convolutional network, a computer device, and a storage medium.
Background
In recent years, AI technology has developed rapidly and its application fields have become increasingly wide, for example robotics, speech recognition, image recognition, computer vision, automatic driving, and the like. In the application scenario of bill recognition, how to effectively recognize whether a seal is stamped in a document image has become an extremely important link in bill recognition.
The existing seal identification method judges the seal area by specifying the color of the seal and using a tolerance approach, and finally extracts the seal by means of a Hough transform. Extracting the seal region by the tolerance method means first specifying a reference color for the seal and then judging the seal region through a set tolerance range: if the difference between a given color and the reference color is within the tolerance range, that color is considered a seal color. After the judgment of the seal area is completed, the seal is extracted through the Hough transform.
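As a rough illustration of the tolerance approach described above (a minimal sketch; the reference color and tolerance value are illustrative assumptions, since the patent fixes neither), a pixel counts as seal-colored when every channel stays within the tolerance of the reference color:

```python
def within_tolerance(pixel, reference=(200, 30, 30), tolerance=40):
    """Tolerance-based seal-color test: True when each RGB channel of
    `pixel` differs from the assumed reference seal color by at most
    `tolerance`. Both defaults are illustrative, not from the patent."""
    return all(abs(p - r) <= tolerance for p, r in zip(pixel, reference))
```

Pixels passing this test would be grouped into the candidate seal region before the Hough-transform extraction step.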
However, the applicant has found that the conventional seal identification method, which uses a tolerance approach to extract the specified color region, is easily affected by conditions such as illumination and has poor stability; moreover, the tolerance setting has a large influence on the final result, which is not conducive to accurate judgment of the seal region. The traditional seal identification method therefore suffers from low identification accuracy.
Disclosure of Invention
The embodiment of the application aims to provide a seal detection method, a seal detection device, computer equipment and a storage medium based on a deep convolutional network, so as to solve the problem that the traditional seal identification method is low in identification accuracy.
In order to solve the above technical problem, an embodiment of the present application provides a seal detection method based on a deep convolutional network, which adopts the following technical scheme:
receiving a model training request carrying an original seal image;
carrying out positive sample processing operation on the original seal image to obtain a positive sample image;
carrying out negative sample generation operation on the original seal image to obtain a negative sample image;
inputting the positive sample image and the negative sample image into a DarkNet53 network for feature extraction operation to obtain seal feature data;
performing a prediction operation on the seal feature data to obtain initial prediction result data;
detecting and identifying the prediction result data based on a k-means algorithm to obtain a final prediction result and the loss data between it and the labeling result;
optimizing the loss data based on a stochastic gradient descent algorithm to obtain a trained recognition model;
acquiring an image to be detected;
and inputting the image to be detected into the trained recognition model to perform seal detection operation, so as to obtain a seal detection result.
In order to solve the above technical problem, an embodiment of the present application further provides a seal detection apparatus based on a deep convolutional network, which adopts the following technical scheme:
the request receiving module is used for receiving a model training request carrying an original seal image;
the positive sample processing module is used for carrying out positive sample processing operation on the original seal image to obtain a positive sample image;
the negative sample generating module is used for carrying out negative sample generating operation on the original seal image to obtain a negative sample image;
the feature extraction module is used for inputting the positive sample image and the negative sample image into a DarkNet53 network for feature extraction operation to obtain seal feature data;
the prediction module is used for performing a prediction operation on the seal feature data to obtain initial prediction result data;
the detection and identification module is used for performing a detection and identification operation on the prediction result data based on a k-means algorithm to obtain a final prediction result and the loss data between it and the labeling result;
the optimization module is used for performing an optimization operation on the loss data based on a stochastic gradient descent algorithm to obtain a trained recognition model;
the image acquisition module to be detected is used for acquiring an image to be detected;
and the seal detection module is used for inputting the image to be detected to the trained recognition model to perform seal detection operation so as to obtain a seal detection result.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
the seal detection method based on the deep convolutional network comprises a memory and a processor, wherein computer readable instructions are stored in the memory, and the processor realizes the steps of the seal detection method based on the deep convolutional network when executing the computer readable instructions.
In order to solve the foregoing technical problem, an embodiment of the present application further provides a computer-readable storage medium, which adopts the following technical solutions:
the computer readable storage medium has stored thereon computer readable instructions which, when executed by a processor, implement the steps of the deep convolutional network-based stamp detection method as described above.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
the application provides a seal detection method based on a deep convolutional network, which comprises the following steps: receiving a model training request carrying an original seal image; carrying out positive sample processing operation on the original seal image to obtain a positive sample image; carrying out negative sample generation operation on the original seal image to obtain a negative sample image; inputting the positive sample image and the negative sample image into a DarkNet53 network for feature extraction operation to obtain seal feature data; performing prediction operation on the seal characteristic data to obtain initial prediction result data; detecting and identifying the prediction result data based on a k-means algorithm to obtain final prediction results and loss data of the labeling results; optimizing the loss data based on a random gradient descent algorithm to obtain a trained recognition model; acquiring an image to be detected; and inputting the image to be detected into the trained recognition model to perform seal detection operation, so as to obtain a seal detection result. This application is through constructing the recognition model based on deep learning object detection Yolov3 algorithm trained, and whether there is the target seal in waiting to detect the image according to this recognition model detection trained, because Yolov3 itself can regard as one-stage's od (object detection) algorithm, thereby guarantee the precision of seal discernment, and simultaneously, seal detection makes the flow automation of approving the file, make the manpower that needs to examine and verify the file that has made a waste in the business reduce greatly, the human cost is reduced, system efficiency is improved.
Drawings
In order to more clearly illustrate the solution of the present application, a brief description will be given below of the drawings required for use in the description of the embodiments of the present application, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
fig. 2 is a flowchart illustrating an implementation of a seal detection method based on a deep convolutional network according to an embodiment of the present disclosure;
FIG. 3 is a flow diagram illustrating one embodiment of a DarkNet53 network according to one embodiment of the present application;
FIG. 4 is a flowchart of one embodiment of a feature extraction operation provided in an embodiment of the present application;
FIG. 5 is a flowchart of an embodiment of the present application for obtaining stamp feature data;
FIG. 6 is a flowchart of an embodiment of an image enhancement operation provided in an embodiment of the present application;
FIG. 7 is an exemplary diagram of one embodiment of an image enhancement operation provided in an embodiment of the present application;
FIG. 8 is a flowchart of one embodiment of step S502 in FIG. 5;
FIG. 9 is a diagram illustrating an embodiment of a conventional IOU computing method according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a seal detection apparatus based on a deep convolutional network according to a second embodiment of the present application;
fig. 11 is a schematic structural diagram of a specific implementation of obtaining stamp feature data according to a second embodiment of the present application;
FIG. 12 is a block diagram of one embodiment of the feature extraction submodule 2041 of FIG. 11;
FIG. 13 is a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures, are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein may be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used to provide a medium for communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that the seal detection method based on the deep convolutional network provided in the embodiment of the present application is generally executed by a server/terminal device, and accordingly, the seal detection apparatus based on the deep convolutional network is generally disposed in the server/terminal device.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Continuing to refer to fig. 2, a flowchart of an implementation of the seal detection method based on the deep convolutional network according to an embodiment of the present application is shown, and for convenience of description, only a portion related to the present application is shown.
The seal detection method based on the deep convolutional network comprises the following steps: step S201, step S202, step S203, step S204, step S205, step S206, step S207, step S208, and step S209.
Step S201: and receiving a model training request carrying an original seal image.
In this embodiment, the original seal image is a high-definition image carrying a target seal. The target seal may have various contents, which the user may select according to the actual situation, for example a revocation seal. It should be understood that the examples of the original seal image here are for ease of understanding only and are not intended to limit the present application.
Step S202: and carrying out positive sample processing operation on the original stamp image to obtain a positive sample image.
In the embodiment of the application, the positive sample processing operation refers to cutting out existing seal patterns by an image-editing (PS) method, preprocessing the resulting small seal images by rotation and similar transforms, and adding the preprocessed seal images to seal-free document images by image synthesis, thereby increasing the positive sample data.
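The positive-sample synthesis can be sketched in a few lines of numpy. Here `np.rot90` stands in for arbitrary-angle rotation and a hard paste stands in for proper image synthesis (alpha blending etc.); all names, sizes and colors are illustrative assumptions, not from the patent:

```python
import numpy as np

def make_positive_sample(doc_img, stamp, top_left, k_rot=1):
    """Composite a cropped seal onto a seal-free document image:
    rotate the seal crop (90-degree steps as a stand-in for arbitrary
    rotation), then paste it at `top_left` of a copy of the document."""
    stamp = np.rot90(stamp, k_rot)
    out = doc_img.copy()
    y, x = top_left
    h, w = stamp.shape[:2]
    out[y:y + h, x:x + w] = stamp
    return out

doc = np.full((64, 64, 3), 255, dtype=np.uint8)   # blank "document"
seal = np.zeros((16, 16, 3), dtype=np.uint8)
seal[..., 0] = 200                                # red-ish square "seal"
positive = make_positive_sample(doc, seal, (10, 20))
```

The negative-sample step of the next paragraph is the mirror image: erase or truncate the seal region instead of pasting one in.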
Step S203: and carrying out negative sample generation operation on the original seal image to obtain a negative sample image.
In the embodiment of the application, the negative sample generation operation refers to cutting out the existing seal pattern by an image-editing (PS) method to obtain document images with no seal or only an incomplete seal, thereby increasing the negative sample data.
Step S204: and inputting the positive sample image and the negative sample image into a DarkNet53 network for feature extraction operation to obtain the seal feature data.
In the embodiment of the present application, the DarkNet53 network is a classic deep network structure, as shown in fig. 3 below. It incorporates the residual connections of ResNet to ensure strong feature expressiveness while avoiding the gradient problems caused by an overly deep network.
In the embodiment of the present application, as shown in fig. 4, training image data of size 512 × 512 × 3 passes through the feature extraction network DarkNet53 to give a matrix of size (512/32) × (512/32) × 1024 = 16 × 16 × 1024. Three rounds of feature recognition then yield three groups of features, namely the final three feature matrices y1, y2 and y3:
y1: (512/32) × (512/32) × (3 × (1 + 4)) = 16 × 16 × 15;
y2: (512/32 × 2) × (512/32 × 2) × (3 × (1 + 4)) = 32 × 32 × 15;
y3: (512/32 × 4) × (512/32 × 4) × (3 × (1 + 4)) = 64 × 64 × 15.
Here 3 is the number of anchor boxes per feature map (9 in total, in three groups of 3, corresponding to the three feature maps), 1 represents the number of categories (the single seal category), and 4 represents the coordinates of the prediction frame (i.e. the seal detection frame): the center-point coordinates x and y and the width w and height h of the prediction frame.
In the embodiment of the present application, in terms of operations, the feature maps y2 and y3 mainly differ from y1 by one more residual connection operation (i.e. the feature map obtained from the previous layer is upsampled (upsample) and then concatenated (concat) with the corresponding block of the DarkNet53 network).
In the embodiment of the present application, the y1, y2, and y3 are stamp feature data.
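The shape arithmetic above can be reproduced directly. This sketch follows the text's 3 × (1 + 4) channel count; note that standard YOLOv3 heads usually predict an additional objectness score per anchor, which the text does not count:

```python
def yolov3_head_shapes(input_size=512, num_anchors=3, num_classes=1):
    """Output sizes of the three detection heads y1, y2, y3 for a square
    input: strides 32, 16 and 8 give the spatial grids, and each cell
    carries num_anchors * (num_classes + 4) channels."""
    channels = num_anchors * (num_classes + 4)
    return [(input_size // s, input_size // s, channels) for s in (32, 16, 8)]

shapes = yolov3_head_shapes()   # [y1, y2, y3] for a 512 x 512 input
```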
Step S205: and performing prediction operation on the seal characteristic data to obtain initial prediction result data.
Step S206: detecting and identifying the prediction result data based on a k-means algorithm to obtain a final prediction result and the loss data between it and the labeling result.
Step S207: performing an optimization operation on the loss data based on a stochastic gradient descent algorithm to obtain a trained recognition model.
In the embodiment of the application, seal detection and recognition are carried out on the three feature maps y1, y2 and y3 of different scales using the 9 anchor boxes obtained in advance by k-means clustering. On each feature map, the coordinates, category (i.e. two classes: seal or not) and DIOU of 3 different anchor boxes are predicted. Finally, with loss minimization as the optimization target, the model parameters are iteratively updated using the stochastic gradient descent algorithm until the model converges (i.e. model training is completed).
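A compact sketch of the anchor step, clustering labelled box (w, h) pairs into 9 anchors with the commonly used 1 - IoU distance (the distance metric and initialization are assumptions; the patent only states that the 9 anchor boxes come from k-means clustering):

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=30, seed=0):
    """k-means over (width, height) pairs using a 1 - IoU distance,
    where each box and centroid are compared as if sharing a corner.
    Returns k anchors sorted by area."""
    rng = np.random.default_rng(seed)
    wh = np.asarray(wh, dtype=float)
    centroids = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        inter = (np.minimum(wh[:, None, 0], centroids[None, :, 0])
                 * np.minimum(wh[:, None, 1], centroids[None, :, 1]))
        union = wh.prod(1)[:, None] + centroids.prod(1)[None, :] - inter
        assign = (1.0 - inter / union).argmin(1)       # 1 - IoU distance
        for j in range(k):
            if (assign == j).any():
                centroids[j] = wh[assign == j].mean(0)
    return centroids[np.argsort(centroids.prod(1))]

# toy boxes in three rough size groups, clustered into 3 anchors
boxes = np.array([[12, 10], [11, 9], [52, 40], [48, 44],
                  [100, 90], [95, 100], [13, 11], [50, 42]], dtype=float)
anchors = kmeans_anchors(boxes, k=3)
```

In the method above, the 9 anchors would then be split across y1, y2 and y3 in groups of 3, largest anchors on the coarsest grid.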
Step S208: and acquiring an image to be detected.
In the embodiment of the present application, the image to be detected may be acquired in real time through an image acquisition terminal, or it may be obtained from data carrying the image to be detected sent by a user terminal.
Step S209: and inputting the image to be detected into the trained recognition model to perform seal detection operation, so as to obtain a seal detection result.
In an embodiment of the present application, a seal detection method based on a deep convolutional network is provided, including: receiving a model training request carrying an original seal image; performing a positive sample processing operation on the original seal image to obtain a positive sample image; performing a negative sample generation operation on the original seal image to obtain a negative sample image; inputting the positive sample image and the negative sample image into a DarkNet53 network for a feature extraction operation to obtain seal feature data; performing a prediction operation on the seal feature data to obtain initial prediction result data; detecting and identifying the prediction result data based on a k-means algorithm to obtain a final prediction result and the loss data between it and the labeling result; optimizing the loss data based on a stochastic gradient descent algorithm to obtain a trained recognition model; acquiring an image to be detected; and inputting the image to be detected into the trained recognition model to perform a seal detection operation to obtain a seal detection result. This application builds a trained recognition model based on the deep-learning object detection algorithm YOLOv3, and uses the trained recognition model to detect whether a target seal is present in the image to be detected. Because YOLOv3 is itself a one-stage OD (object detection) algorithm, the accuracy of seal recognition is ensured. At the same time, seal detection automates the document approval workflow, greatly reducing the manpower needed to manually review documents, lowering labor costs and improving system efficiency.
Continuing to refer to fig. 5, a flowchart of a specific implementation of obtaining stamp feature data according to an embodiment of the present application is shown, and for convenience of description, only the portions related to the present application are shown.
In some optional implementations of this embodiment, after step S203 and before step S204, the method further includes: step S501; step S204 includes: step S502.
Step S501: and respectively carrying out image enhancement operation on the positive sample image and the negative sample image to obtain an enhanced positive sample image and an enhanced negative sample image.
In the embodiment of the present application, as shown in fig. 6 below, a specific implementation of the image enhancement operation is: randomly crop 4 pictures, numbered 1, 2, 3 and 4 (random_cut); randomly splice and combine the 4 cropped pictures (random_combine) to obtain 1 mixed picture; and finally resize the mixed picture to a fixed size of 512 × 512 as the input image of the final model.
In practical application, as shown in fig. 7 below, assume the 4 pictures to be enhanced are image1, image2, image3 and image4 in fig. 7. The 4 pictures are first randomly cropped (i.e. the position and size of the crop are random) to obtain 4 new small pictures (the 4 red region pictures in fig. 7). The 4 thumbnails are then randomly combined (i.e. the splicing order of the 4 thumbnails is random; for example, the splicing order in fig. 7 is 1->2->4->3, but it could equally be 1->2->3->4, 1->3->4->2, 3->4->2->1 or another clockwise splicing order), and the combined image (image in fig. 7) is resized to the fixed size 512 × 512 as the input of the model.
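The Mosaic procedure just described (random_cut, then random_combine, then resize to 512 × 512) can be sketched with numpy. The crop ranges and the nearest-neighbour resize are implementation assumptions; the patent only fixes the 4-picture mix and the 512 × 512 output:

```python
import numpy as np

def mosaic(images, out_size=512, seed=0):
    """Mosaic-style augmentation for exactly 4 images: random-crop each,
    resample every crop to one quarter-size grid cell, tile the cells in
    a random 2x2 order, and return the out_size x out_size mix."""
    rng = np.random.default_rng(seed)
    half = out_size // 2
    cells = []
    for img in images:
        h, w = img.shape[:2]
        ch = int(rng.integers(h // 2, h + 1))    # random crop size
        cw = int(rng.integers(w // 2, w + 1))
        y = int(rng.integers(0, h - ch + 1))     # random crop position
        x = int(rng.integers(0, w - cw + 1))
        crop = img[y:y + ch, x:x + cw]
        yi = np.arange(half) * ch // half        # nearest-neighbour resize
        xi = np.arange(half) * cw // half
        cells.append(crop[yi][:, xi])
    order = rng.permutation(4)                   # random splicing order
    top = np.concatenate([cells[order[0]], cells[order[1]]], axis=1)
    bottom = np.concatenate([cells[order[2]], cells[order[3]]], axis=1)
    return np.concatenate([top, bottom], axis=0)

imgs = [np.full((300 + 20 * i, 400 + 10 * i, 3), 60 * i, dtype=np.uint8)
        for i in range(4)]
mixed = mosaic(imgs)
```

A real training pipeline would also remap the seal bounding-box labels of the four source pictures into the mixed picture's coordinates.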
Step S502: and inputting the enhanced positive sample image and the enhanced negative sample image into a DarkNet53 network for feature extraction operation to obtain the seal feature data.
In the embodiment of the application, introducing the image enhancement algorithm Mosaic increases the feature content of the images input to the model and widens the training breadth of the model, which effectively reduces cases where a non-seal is mistakenly recognized as a seal. At the same time, it increases recall on images to be detected captured in difficult scenes such as harsh lighting, improving the overall accuracy and recall rate of seal recognition.
Continuing to refer to fig. 8, a flowchart of one embodiment of step S502 in fig. 5 is shown, and for ease of illustration, only the portions relevant to the present application are shown.
In some optional implementation manners of this embodiment, step S502 specifically includes: step S801, step S802, step S803, and step S804.
Step S801: and judging whether the enhanced positive sample image and the enhanced negative sample image meet the preset image condition.
In the embodiment of the present application, the preset image condition is mainly used for defining the size of the image input to the DarkNet53 network.
Step S802: and if the enhanced positive sample image and the enhanced negative sample image meet the preset image condition, executing the feature extraction operation to obtain the seal feature data.
Step S803: and if the enhanced positive sample image and the enhanced negative sample image do not meet the preset image condition, preprocessing the enhanced positive sample image and the enhanced negative sample image to obtain a standard positive sample image and a standard negative sample image.
Step S804: and inputting the standard positive sample image and the standard negative sample image into a DarkNet53 network for feature extraction operation to obtain the seal feature data.
In the embodiment of the application, non-uniform image data sizes would affect the processing efficiency of feature extraction in the DarkNet53 network. Preprocessing the training image data that do not meet the preset image condition therefore lays a foundation for the subsequent feature extraction operation of the DarkNet53 network and effectively improves the processing efficiency of seal recognition.
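Steps S801 to S804 amount to a size check with a fallback resample. A minimal sketch follows; nearest-neighbour resampling to 512 × 512 is an assumption, since the patent leaves the concrete preprocessing unspecified:

```python
import numpy as np

def standardize(img, size=512):
    """Pass images that already meet the preset size condition straight
    through (S801/S802); otherwise resample to size x size with a
    nearest-neighbour index map (stand-in for S803's preprocessing)."""
    h, w = img.shape[:2]
    if (h, w) == (size, size):
        return img                        # condition met: no preprocessing
    yi = np.arange(size) * h // size      # condition not met: resample
    xi = np.arange(size) * w // size
    return img[yi][:, xi]

small = np.zeros((300, 400, 3), dtype=np.uint8)   # fails the condition
ok = np.zeros((512, 512, 3), dtype=np.uint8)      # meets the condition
```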
In some alternative implementations of this embodiment, the loss function of the recognition model is represented as:
loss = Σ (loss_xy + loss_wh + loss_class + loss_diou)
where loss_xy represents the coordinate loss of the center point of the prediction frame; loss_wh represents the width-height loss of the prediction frame; loss_class represents the category loss; loss_diou represents the DIOU loss.
In the embodiment of the present application, the coordinate loss of the center point of the prediction frame (i.e. the sum of the cross-entropy losses of coordinate x and coordinate y):
loss_xy = -(x_true · log x_predict + (1 - x_true) · log(1 - x_predict)) - (y_true · log y_predict + (1 - y_true) · log(1 - y_predict))
Width-height loss of the prediction frame (i.e. the sum of the mean-square losses of width w and height h):
loss_wh = (w_true - w_predict)^2 + (h_true - h_predict)^2
Class loss (i.e. cross-entropy loss of the category):
loss_class = -(y_true_class · log y_predict_class + (1 - y_true_class) · log(1 - y_predict_class))
DIOU loss (i.e. the loss between the prediction frame and the annotation frame):
loss_diou = 1 - IOU + d^2(box_predict, box_true) / c^2
the DIOU is an improvement of the traditional IOU, and can perform better regression on anchor boxes with different proportions, areas and directions.
The conventional IOU has the formula:
IOU = I / U
wherein I represents the intersection area of the model prediction frame and the real marking frame; and U represents the union area of the model prediction frame and the real labeling frame.
In the embodiment of the present application, as shown in fig. 9 below, d(box_predict, box_true) represents the distance between the center points of the model prediction frame and the real annotation frame; c represents the diagonal length of the minimum enclosing rectangle that contains both the model prediction frame and the real annotation frame. The geometric meanings of d and c are shown in fig. 9, where the red rectangle represents the model prediction frame, the green rectangle represents the real annotation frame, and the largest blue rectangle represents the minimum enclosing rectangle surrounding both.
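Putting the IOU and DIOU definitions above together, a short sketch over axis-aligned boxes given as (x1, y1, x2, y2):

```python
def iou(a, b):
    """Conventional IOU = I / U for two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def diou_loss(pred, true):
    """DIOU loss = 1 - IOU + d^2 / c^2, with d the distance between box
    centers and c the diagonal of the minimum enclosing rectangle."""
    cx = lambda r: ((r[0] + r[2]) / 2, (r[1] + r[3]) / 2)
    (px, py), (tx, ty) = cx(pred), cx(true)
    d2 = (px - tx) ** 2 + (py - ty) ** 2
    ex1, ey1 = min(pred[0], true[0]), min(pred[1], true[1])
    ex2, ey2 = max(pred[2], true[2]), max(pred[3], true[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return 1.0 - iou(pred, true) + d2 / c2
```

For identical boxes the loss is 0; for disjoint boxes it exceeds 1, so DIOU still provides a useful regression signal where the plain IOU is flat at 0.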
In summary, the present application provides a seal detection method based on a deep convolutional network, including: receiving a model training request carrying an original seal image; performing a positive sample processing operation on the original seal image to obtain a positive sample image; performing a negative sample generation operation on the original seal image to obtain a negative sample image; inputting the positive sample image and the negative sample image into a DarkNet53 network for a feature extraction operation to obtain seal feature data; performing a prediction operation on the seal feature data to obtain initial prediction result data; detecting and identifying the prediction result data based on a k-means algorithm to obtain a final prediction result and the loss data between it and the labeling result; performing an optimization operation on the loss data based on a stochastic gradient descent algorithm to obtain a trained recognition model; acquiring an image to be detected; and inputting the image to be detected into the trained recognition model to perform a seal detection operation to obtain a seal detection result. This application builds a trained recognition model based on the deep-learning object detection algorithm YOLOv3, and uses the trained recognition model to detect whether a target seal is present in the image to be detected. Because YOLOv3 is itself a one-stage OD (object detection) algorithm, the accuracy of seal recognition is ensured. At the same time, seal detection automates the document approval workflow, greatly reducing the manpower needed to manually review documents, lowering labor costs and improving system efficiency.
Furthermore, the image enhancement algorithm Mosaic is introduced, which increases the feature content of the images input to the model and broadens the model's training distribution, effectively reducing cases in which a non-seal is mistakenly identified as a seal, while increasing the recall of images to be detected shot in difficult scenes such as poor lighting, thereby improving the overall accuracy and recall rate of seal recognition. In addition, because image data of non-uniform size affects the processing efficiency of feature extraction by the DarkNet53 network, a preprocessing operation is performed on training image data that do not meet the preset image condition, laying a foundation for the subsequent feature extraction operation of the DarkNet53 network and effectively improving the processing efficiency of seal recognition.
It should be emphasized that, in order to further ensure the privacy and security of the image to be detected and the seal detection result, the image to be detected and the seal detection result may also be stored in a node of a block chain.
The blockchain referred to in this application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated with each other using cryptographic methods, where each data block contains the information of a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by instructing the relevant hardware through computer-readable instructions, which may be stored in a computer-readable storage medium; when executed, the instructions may include the processes of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
Example two
With further reference to fig. 10, as an implementation of the method shown in fig. 2, the present application provides an embodiment of a deep convolutional network-based seal detection apparatus, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be applied to various electronic devices in particular.
As shown in fig. 10, the stamp detecting apparatus 200 based on the deep convolutional network of the present embodiment includes: a request receiving module 201, a positive sample processing module 202, a negative sample generating module 203, a feature extracting module 204, a predicting module 205, a detecting and identifying module 206, an optimizing module 207, an image to be detected acquiring module 208 and a stamp detecting module 209. Wherein:
a request receiving module 201, configured to receive a model training request carrying an original stamp image;
a positive sample processing module 202, configured to perform a positive sample processing operation on the original stamp image to obtain a positive sample image;
the negative sample generation module 203 is used for performing negative sample generation operation on the original seal image to obtain a negative sample image;
the feature extraction module 204 is configured to input the positive sample image and the negative sample image to a DarkNet53 network for feature extraction operation, so as to obtain seal feature data;
the prediction module 205 is configured to perform a prediction operation on the seal characteristic data to obtain initial prediction result data;
the detection and identification module 206 is configured to perform detection and identification operations on the prediction result data based on a k-means algorithm to obtain a final prediction result and loss data between the prediction result and the labeling result;
the optimization module 207 is configured to perform an optimization operation on the loss data based on a stochastic gradient descent algorithm to obtain a trained recognition model;
an image to be detected acquisition module 208, configured to acquire an image to be detected;
and the seal detection module 209 is used for inputting the image to be detected into the trained recognition model to perform seal detection operation, so as to obtain a seal detection result.
In this embodiment, the original seal image is a high-definition image carrying a target seal. The target seal may have various contents, which the user may select according to the actual situation, for example a revocation seal. It should be understood that the examples of the original seal image here are for convenience of understanding only and are not intended to limit the present application.
In the embodiment of the present application, the positive sample processing operation refers to cutting out existing seal styles with a Photoshop (PS) method, preprocessing the resulting small seal images by rotation and similar transformations, and adding the preprocessed small seal images to document images without seals by image synthesis, thereby increasing the positive sample data.
In the embodiment of the present application, the negative sample generation operation refers to cutting out the existing seal styles with the PS method to obtain document images without a seal or with only an incomplete seal, thereby increasing the negative sample data.
In the embodiment of the present application, the DarkNet53 network is a classic deep network structure, as shown in fig. 3 below; it incorporates the residual connections of ResNet to ensure strong feature expression while avoiding the gradient problems caused by an overly deep network.
In the embodiment of the present application, as shown in fig. 4, training image data with a size of 512 × 512 × 3 passes through the feature extraction network DarkNet53 to obtain a matrix of size (512/32) × (512/32) × 1024 = 16 × 16 × 1024, and then three rounds of feature recognition are performed to obtain feature maps y1, y2 and y3, whose matrix sizes are respectively (512/32) × (512/32) × (3 × (1+4)) = 16 × 16 × 15, (512/16) × (512/16) × (3 × (1+4)) = 32 × 32 × 15, and (512/8) × (512/8) × (3 × (1+4)) = 64 × 64 × 15. In the channel dimension, 3 represents the number of anchor boxes on each feature map (9 anchor boxes in total, divided into three groups of 3, corresponding to the three feature maps respectively), 1 represents the number of categories (i.e. the seal category), and 4 represents the coordinates of the prediction frame (i.e. the seal detection frame), namely the center point coordinates x and y and the width w and height h of the prediction frame.
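The shape arithmetic described above can be sketched in a few lines of Python. This is a hedged illustration: the function name and parameters are ours, not the patent's; it only reproduces the stride-32/16/8 size computation for a 512 × 512 × 3 input with 3 anchors per scale, 1 category, and 4 box coordinates:

```python
def yolo_head_shapes(input_size=512, num_anchors=3, num_classes=1, num_coords=4):
    """Return the (H, W, C) shape of each detection head y1, y2, y3."""
    channels = num_anchors * (num_classes + num_coords)  # 3 * (1 + 4) = 15
    shapes = []
    for stride in (32, 16, 8):  # y1 (coarsest) -> y3 (finest)
        side = input_size // stride
        shapes.append((side, side, channels))
    return shapes

print(yolo_head_shapes())  # [(16, 16, 15), (32, 32, 15), (64, 64, 15)]
```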
In the embodiment of the present application, compared with y1, the feature maps y2 and y3 mainly involve one additional residual join operation (i.e. the feature map obtained from the previous layer is upsampled (upsample) and then concatenated (concat) with the corresponding block in the DarkNet53 network).
In the embodiment of the present application, the y1, y2, and y3 are stamp feature data.
In the embodiment of the present application, seal detection and identification are performed on the three feature maps y1, y2 and y3 of different scales by using 9 anchor boxes obtained in advance by clustering with the k-means algorithm; on each feature map, the coordinates, the category (i.e. two classes: seal or not) and the DIOU of 3 different anchor boxes are respectively predicted. Finally, with minimizing the loss as the optimization target, the model parameters are iteratively updated by a stochastic gradient descent algorithm until the model converges (i.e. the model training is completed).
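As a rough illustration of how the 9 anchor boxes might be obtained, the sketch below clusters (w, h) box sizes with k-means using the common 1 − IOU distance. This is written under our own assumptions — the patent does not give the exact clustering procedure, and all names here are illustrative:

```python
import random

def kmeans_anchors(wh_pairs, k=9, iters=50, seed=0):
    """Cluster (w, h) box sizes into k anchor boxes, assigning each box to the
    anchor with the highest corner-aligned IoU (i.e. the smallest 1 - IoU)."""
    rng = random.Random(seed)
    anchors = rng.sample(wh_pairs, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in wh_pairs:
            # IoU of two boxes sharing a corner depends only on their (w, h)
            ious = [min(w, aw) * min(h, ah) /
                    (w * h + aw * ah - min(w, aw) * min(h, ah))
                    for aw, ah in anchors]
            clusters[max(range(k), key=lambda i: ious[i])].append((w, h))
        # recompute each anchor as the mean of its cluster (keep it if empty)
        anchors = [(sum(w for w, _ in c) / len(c), sum(h for _, h in c) / len(c))
                   if c else anchors[i]
                   for i, c in enumerate(clusters)]
    return sorted(anchors, key=lambda a: a[0] * a[1])

# toy example: cluster synthetic box sizes into 9 anchors
boxes = [(20 + i % 40, 30 + i % 25) for i in range(200)]
print(kmeans_anchors(boxes)[:3])
```

In practice the 9 anchors would then be split into three groups of 3 by area, one group per feature-map scale.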
In the embodiment of the present application, the image to be detected may be acquired in real time through an image acquisition terminal, or obtained from data carrying the image to be detected sent by a user terminal.
In the embodiment of the present application, a seal detection apparatus 200 based on a deep convolutional network is provided, including: a request receiving module 201, configured to receive a model training request carrying an original seal image; a positive sample processing module 202, configured to perform a positive sample processing operation on the original seal image to obtain a positive sample image; a negative sample generation module 203, configured to perform a negative sample generation operation on the original seal image to obtain a negative sample image; a feature extraction module 204, configured to input the positive sample image and the negative sample image into a DarkNet53 network for a feature extraction operation to obtain seal feature data; a prediction module 205, configured to perform a prediction operation on the seal feature data to obtain initial prediction result data; a detection and identification module 206, configured to perform detection and identification operations on the prediction result data based on a k-means algorithm to obtain a final prediction result and loss data between the prediction result and the labeling result; an optimization module 207, configured to perform an optimization operation on the loss data based on a stochastic gradient descent algorithm to obtain a trained recognition model; an image to be detected acquisition module 208, configured to acquire an image to be detected; and a seal detection module 209, configured to input the image to be detected into the trained recognition model to perform a seal detection operation, so as to obtain a seal detection result.
The present application constructs a trained recognition model based on the deep learning object detection algorithm Yolov3 and detects, according to the trained recognition model, whether a target seal exists in the image to be detected. Because Yolov3 is itself a one-stage object detection (OD) algorithm, the precision of seal recognition is ensured; at the same time, seal detection automates the document approval flow, greatly reducing the manpower required in the business to audit voided documents, lowering labor cost and improving system efficiency.
Continuing to refer to fig. 11, a schematic structural diagram of a specific implementation of obtaining stamp feature data according to the second embodiment of the present application is shown, and for convenience of description, only the portions related to the present application are shown.
In some optional implementations of this embodiment, the seal detection apparatus 200 based on a deep convolutional network further includes an image enhancement module 210, and the feature extraction module 204 includes a feature extraction submodule 2041, wherein:
the image enhancement module 210 is configured to perform image enhancement operations on the positive sample image and the negative sample image respectively to obtain an enhanced positive sample image and an enhanced negative sample image;
the feature extraction sub-module 2041 is configured to input the enhanced positive sample image and the enhanced negative sample image to a DarkNet53 network for feature extraction operation, so as to obtain stamp feature data.
In the embodiment of the present application, as shown in fig. 6 below, a specific implementation of the image enhancement operation is as follows: 4 pictures, namely 1, 2, 3 and 4, are each randomly cropped (random_cut); the 4 cropped pictures are then randomly spliced and combined (random_combine) into 1 mixed picture; finally, the mixed picture is resized to a fixed size of 512 × 512 as the input image of the final model.
In practical application, as shown in fig. 7 below, assume that the 4 pictures to be enhanced are image1, image2, image3 and image4 in fig. 7. The 4 pictures are first randomly cropped (i.e. the position and size of the crop are random) to obtain 4 new small pictures (i.e. the 4 red region pictures in fig. 7). The 4 small pictures are then randomly combined (i.e. their splicing order is random; for example, the clockwise splicing order in fig. 7 is 1->2->4->3, but it could also be 1->2->3->4, 1->3->4->2, 3->4->2->1, or another order), and the combined image (i.e. the image in fig. 7) is resized to a fixed size of 512 × 512 as the input of the model.
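The Mosaic steps above (random crop of 4 pictures, random 2 × 2 combination, resize to a fixed 512 × 512 input) can be sketched as follows. This is an illustrative NumPy implementation with hypothetical names, using nearest-neighbour index sampling as the resize to stay dependency-free; it is not the exact augmentation pipeline of the patent:

```python
import numpy as np

def mosaic(images, out_size=512, rng=np.random.default_rng(0)):
    """Randomly crop 4 images, tile them into one 2x2 mixed picture of size
    out_size x out_size (each crop fills one quadrant)."""
    half = out_size // 2
    crops = []
    for img in images:
        h, w = img.shape[:2]
        ch = int(rng.integers(max(1, h // 2), h + 1))  # random crop height
        cw = int(rng.integers(max(1, w // 2), w + 1))  # random crop width
        y = int(rng.integers(0, h - ch + 1))           # random crop position
        x = int(rng.integers(0, w - cw + 1))
        crop = img[y:y + ch, x:x + cw]
        yi = np.arange(half) * ch // half              # nearest-neighbour rows
        xi = np.arange(half) * cw // half              # nearest-neighbour cols
        crops.append(crop[yi][:, xi])
    order = rng.permutation(4)                         # random splicing order
    top = np.concatenate([crops[order[0]], crops[order[1]]], axis=1)
    bottom = np.concatenate([crops[order[2]], crops[order[3]]], axis=1)
    return np.concatenate([top, bottom], axis=0)
```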
In the embodiment of the present application, the image enhancement algorithm Mosaic is introduced, which increases the feature content of the images input to the model and broadens the model's training distribution, effectively reducing cases in which a non-seal is mistakenly identified as a seal, while increasing the recall of images to be detected shot in difficult scenes such as poor lighting, thereby improving the overall accuracy and recall rate of seal recognition.
Continuing to refer to fig. 12, a schematic diagram of one specific implementation of the feature extraction sub-module 2041 of fig. 11 is shown, and for convenience of illustration, only the relevant portions of the present application are shown.
In some optional implementations of the present embodiment, the feature extraction sub-module 2041 includes: a condition determining unit 20411, a first result unit 20412, a second result unit 20413, and a feature extracting subunit 20414, where:
a condition determining unit 20411, configured to determine whether the enhanced positive sample image and the enhanced negative sample image satisfy a preset image condition;
a first result unit 20412, configured to execute a feature extraction operation if the enhanced positive sample image and the enhanced negative sample image satisfy a preset image condition, to obtain stamp feature data;
a second result unit 20413, configured to perform a preprocessing operation on the enhanced positive sample image and the enhanced negative sample image if they do not satisfy the preset image condition, so as to obtain a standard positive sample image and a standard negative sample image;
and a feature extraction subunit 20414, configured to input the standard positive sample image and the standard negative sample image to a DarkNet53 network for feature extraction operation, so as to obtain stamp feature data.
In the embodiment of the present application, the preset image condition is mainly used for defining the size of the image input to the DarkNet53 network.
In the embodiment of the present application, because the sizes of the image data are not uniform, which affects the processing efficiency of feature extraction by the DarkNet53 network, the training image data that do not meet the preset image condition are preprocessed, laying a foundation for the subsequent feature extraction operation of the DarkNet53 network and effectively improving the processing efficiency of seal recognition.
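A minimal sketch of this size check and preprocessing, assuming the preset image condition is simply "the input equals 512 × 512" and the preprocessing is pad-to-square plus nearest-neighbour resize — both assumptions are ours, not stated in the patent:

```python
import numpy as np

def preprocess_if_needed(image, target=512):
    """If the sample already matches the expected input size, pass it through;
    otherwise pad to square (preserving aspect ratio) and resize to target."""
    h, w = image.shape[:2]
    if (h, w) == (target, target):       # preset image condition satisfied
        return image
    side = max(h, w)                     # pad to square with zeros
    padded = np.zeros((side, side, image.shape[2]), dtype=image.dtype)
    padded[:h, :w] = image
    idx = np.arange(target) * side // target
    return padded[idx][:, idx]           # standard (normalized-size) image
```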
In some alternative implementations of this embodiment, the loss function of the recognition model is expressed as:

loss = Σ(loss_xy + loss_wh + loss_class + loss_diou)

wherein loss_xy represents the coordinate loss of the center point of the prediction frame; loss_wh represents the width and height loss of the prediction frame; loss_class represents the category loss; and loss_diou represents the DIOU loss.
In the embodiment of the present application, the coordinate loss of the center point of the prediction frame (i.e. the sum of the cross-entropy losses of the coordinate x and the coordinate y) is:

loss_xy = -(x_true·log(x_predict) + (1 - x_true)·log(1 - x_predict)) - (y_true·log(y_predict) + (1 - y_true)·log(1 - y_predict))
The width and height loss of the prediction frame (i.e. the sum of the mean square losses of the width w and the height h) is:

loss_wh = (w_true - w_predict)² + (h_true - h_predict)²
The category loss (i.e. the cross-entropy loss of the category) is:

loss_class = -(y_true_class·log(y_predict_class) + (1 - y_true_class)·log(1 - y_predict_class))
The DIOU loss (i.e. the loss between the prediction frame and the annotation frame) is:

loss_diou = 1 - IOU + d²(box_predict, box_true) / c²
the DIOU is an improvement of the traditional IOU, and can perform better regression on anchor boxes with different proportions, areas and directions.
The formula of the conventional IOU is:

IOU = I / U

wherein I represents the intersection area of the model prediction frame and the real annotation frame, and U represents the union area of the model prediction frame and the real annotation frame.
In the embodiment of the present application, as shown in fig. 9 below, d(box_predict, box_true) represents the distance between the center points of the model prediction frame and the real annotation frame, and c represents the diagonal length of the minimum bounding rectangle that encloses both the model prediction frame and the real annotation frame. The geometric meanings of d and c are shown in fig. 9, in which the red rectangle represents the model prediction frame, the green rectangle represents the real annotation frame, and the outermost blue rectangle represents the minimum bounding rectangle enclosing both frames at the same time.
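Putting the DIOU pieces together — intersection I, union U, center distance d, and the diagonal c of the minimum bounding rectangle — the loss can be sketched for (x, y, w, h) center-format boxes. Function and variable names here are illustrative:

```python
def diou_loss(box_pred, box_true):
    """DIOU loss as defined above: 1 - IOU + d^2 / c^2, where d is the
    distance between box centers and c is the diagonal of the minimum
    bounding rectangle enclosing both boxes."""
    def corners(b):
        x, y, w, h = b
        return x - w / 2, y - h / 2, x + w / 2, y + h / 2
    px1, py1, px2, py2 = corners(box_pred)
    tx1, ty1, tx2, ty2 = corners(box_true)
    # intersection area I and union area U
    iw = max(0.0, min(px2, tx2) - max(px1, tx1))
    ih = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = iw * ih
    union = box_pred[2] * box_pred[3] + box_true[2] * box_true[3] - inter
    iou = inter / union
    # squared center distance d^2 and enclosing-rectangle diagonal c^2
    d2 = (box_pred[0] - box_true[0]) ** 2 + (box_pred[1] - box_true[1]) ** 2
    cx1, cy1 = min(px1, tx1), min(py1, ty1)
    cx2, cy2 = max(px2, tx2), max(py2, ty2)
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2
    return 1.0 - iou + d2 / c2

# identical boxes: IOU = 1 and d = 0, so the loss vanishes
print(diou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # 0.0
```

Unlike the plain 1 − IOU loss, the d²/c² term still produces a gradient when the boxes do not overlap, which is why DIOU regresses anchor boxes of different proportions, areas and positions better.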
In summary, the present application provides a seal detection apparatus 200 based on a deep convolutional network, including: a request receiving module 201, configured to receive a model training request carrying an original seal image; a positive sample processing module 202, configured to perform a positive sample processing operation on the original seal image to obtain a positive sample image; a negative sample generation module 203, configured to perform a negative sample generation operation on the original seal image to obtain a negative sample image; a feature extraction module 204, configured to input the positive sample image and the negative sample image into a DarkNet53 network for a feature extraction operation to obtain seal feature data; a prediction module 205, configured to perform a prediction operation on the seal feature data to obtain initial prediction result data; a detection and identification module 206, configured to perform detection and identification operations on the prediction result data based on a k-means algorithm to obtain a final prediction result and loss data between the prediction result and the labeling result; an optimization module 207, configured to perform an optimization operation on the loss data based on a stochastic gradient descent algorithm to obtain a trained recognition model; an image to be detected acquisition module 208, configured to acquire an image to be detected; and a seal detection module 209, configured to input the image to be detected into the trained recognition model to perform a seal detection operation, so as to obtain a seal detection result.
The present application constructs a trained recognition model based on the deep learning object detection algorithm Yolov3 and detects, according to the trained recognition model, whether a target seal exists in the image to be detected. Because Yolov3 is itself a one-stage object detection (OD) algorithm, the precision of seal recognition is ensured; at the same time, seal detection automates the document approval flow, greatly reducing the manpower required in the business to audit voided documents, lowering labor cost and improving system efficiency. Furthermore, the image enhancement algorithm Mosaic is introduced, which increases the feature content of the images input to the model and broadens the model's training distribution, effectively reducing cases in which a non-seal is mistakenly identified as a seal, while increasing the recall of images to be detected shot in difficult scenes such as poor lighting, thereby improving the overall accuracy and recall rate of seal recognition. In addition, because image data of non-uniform size affects the processing efficiency of feature extraction by the DarkNet53 network, a preprocessing operation is performed on training image data that do not meet the preset image condition, laying a foundation for the subsequent feature extraction operation of the DarkNet53 network and effectively improving the processing efficiency of seal recognition.
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 13, fig. 13 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 300 includes a memory 310, a processor 320, and a network interface 330 communicatively coupled to each other via a system bus. It is noted that only a computer device 300 having components 310-330 is shown, but it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 310 includes at least one type of readable storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the storage 310 may be an internal storage unit of the computer device 300, such as a hard disk or a memory of the computer device 300. In other embodiments, the memory 310 may also be an external storage device of the computer device 300, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the computer device 300. Of course, the memory 310 may also include both internal and external storage devices of the computer device 300. In this embodiment, the memory 310 is generally used for storing an operating system installed in the computer device 300 and various application software, such as computer readable instructions of a seal detection method based on a deep convolutional network. The memory 310 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 320 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 320 is generally operative to control overall operation of the computer device 300. In this embodiment, the processor 320 is configured to execute the computer readable instructions or processing data stored in the memory 310, for example, execute the computer readable instructions of the seal detection method based on the deep convolutional network.
The network interface 330 may include a wireless network interface or a wired network interface, and the network interface 330 is generally used to establish a communication connection between the computer device 300 and other electronic devices.
The present application provides a computer device that constructs a trained recognition model based on the deep learning object detection algorithm Yolov3 and detects, according to the trained recognition model, whether a target seal exists in the image to be detected. Because Yolov3 is itself a one-stage object detection (OD) algorithm, the precision of seal recognition is ensured; at the same time, seal detection automates the document approval flow, greatly reducing the manpower required in the business to audit voided documents, lowering labor cost and improving system efficiency.
The present application further provides another embodiment, which is to provide a computer-readable storage medium storing computer-readable instructions executable by at least one processor to cause the at least one processor to perform the steps of the deep convolutional network-based stamp detecting method as described above.
Through the computer-readable storage medium provided by the present application, a trained recognition model is constructed based on the deep learning object detection algorithm Yolov3, and whether a target seal exists in the image to be detected is detected according to the trained recognition model. Because Yolov3 is itself a one-stage object detection (OD) algorithm, the precision of seal recognition is ensured; at the same time, seal detection automates the document approval flow, greatly reducing the manpower required in the business to audit voided documents, lowering labor cost and improving system efficiency.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, but not all, embodiments of the present application; the appended drawings illustrate preferred embodiments and do not limit the scope of the application. This application may be embodied in many different forms, and the embodiments are provided so that the disclosure of the application will be thorough. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of the features therein. All equivalent structures made by using the contents of the specification and the drawings of the present application, applied directly or indirectly in other related technical fields, fall within the protection scope of the present application.

Claims (10)

1. A seal detection method based on a deep convolutional network is characterized by comprising the following steps:
receiving a model training request carrying an original seal image;
carrying out positive sample processing operation on the original seal image to obtain a positive sample image;
carrying out negative sample generation operation on the original seal image to obtain a negative sample image;
inputting the positive sample image and the negative sample image into a DarkNet53 network for feature extraction operation to obtain seal feature data;
performing prediction operation on the seal characteristic data to obtain initial prediction result data;
detecting and identifying the prediction result data based on a k-means algorithm to obtain loss data of a final prediction result and a labeling result;
optimizing the loss data based on a random gradient descent algorithm to obtain a trained recognition model;
acquiring an image to be detected;
and inputting the image to be detected into the trained recognition model to perform seal detection operation, so as to obtain a seal detection result.
2. The method according to claim 1, wherein after the step of performing a negative sample generation operation on the original stamp image to obtain a negative sample image, and before the step of inputting the positive sample image and the negative sample image into a DarkNet53 network to perform a feature extraction operation to obtain stamp feature data, the method further comprises the following steps:
respectively carrying out image enhancement operation on the positive sample image and the negative sample image to obtain an enhanced positive sample image and an enhanced negative sample image;
the step of inputting the positive sample image and the negative sample image to a DarkNet53 network for feature extraction operation to obtain the seal feature data specifically comprises the following steps:
and inputting the enhanced positive sample image and the enhanced negative sample image to the DarkNet53 network for carrying out the feature extraction operation to obtain the seal feature data.
3. The seal detection method based on the deep convolutional network of claim 2, wherein the step of inputting the enhanced positive sample image and the enhanced negative sample image to the DarkNet53 network for the feature extraction operation to obtain the seal feature data specifically comprises the following steps:
judging whether the enhanced positive sample image and the enhanced negative sample image satisfy a preset image condition;
if the enhanced positive sample image and the enhanced negative sample image satisfy the preset image condition, executing the feature extraction operation to obtain the seal feature data;
if the enhanced positive sample image and the enhanced negative sample image do not satisfy the preset image condition, performing a preprocessing operation on the enhanced positive sample image and the enhanced negative sample image to obtain a standard positive sample image and a standard negative sample image; and
inputting the standard positive sample image and the standard negative sample image into the DarkNet53 network to perform the feature extraction operation, so as to obtain the seal feature data.
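The "preset image condition" and the preprocessing operation of claim 3 are left abstract. One plausible reading, sketched below, checks for a square three-channel image at the network input size and otherwise pads to square and resizes; the 416x416 target is an assumption based on common DarkNet53/YOLOv3 configurations, not stated in the patent:

```python
import numpy as np

TARGET = 416  # assumed DarkNet53/YOLO input size

def meets_condition(img):
    # Assumed condition: three-channel, square, already at TARGET size.
    return img.ndim == 3 and img.shape[:2] == (TARGET, TARGET) and img.shape[2] == 3

def preprocess(img):
    # Pad to a square canvas (grey fill), then nearest-neighbour
    # resize to TARGET x TARGET to produce a standard sample image.
    h, w = img.shape[:2]
    side = max(h, w)
    canvas = np.full((side, side, 3), 128, dtype=img.dtype)
    canvas[:h, :w] = img
    ys = np.arange(TARGET) * side // TARGET
    xs = np.arange(TARGET) * side // TARGET
    return canvas[ys][:, xs]
```

An image that already satisfies `meets_condition` would bypass `preprocess` and go straight to feature extraction, mirroring the branch in the claim.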
4. The seal detection method based on the deep convolutional network of claim 1, wherein the loss function of the recognition model is expressed as:
loss = Σ(loss_xy + loss_wh + loss_class + loss_diou)
wherein loss_xy represents the coordinate loss of the center point of the prediction box; loss_wh represents the width and height loss of the prediction box; loss_class represents the class loss; and loss_diou represents the DIoU loss.
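The loss_diou term in claim 4 matches the standard DIoU box-regression loss, 1 − IoU + ρ²(b, b_gt)/c², where ρ is the distance between the two box centers and c is the diagonal of the smallest box enclosing both. A minimal single-box sketch, assuming corner-format (x1, y1, x2, y2) boxes:

```python
def diou_loss(box_p, box_g):
    # box_p: predicted box, box_g: ground-truth box, both (x1, y1, x2, y2).
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter)
    # Squared distance between the two box centers (the rho^2 term).
    d2 = ((box_p[0] + box_p[2]) / 2 - (box_g[0] + box_g[2]) / 2) ** 2 + \
         ((box_p[1] + box_p[3]) / 2 - (box_g[1] + box_g[3]) / 2) ** 2
    # Squared diagonal of the smallest enclosing box (the c^2 term).
    cx1, cy1 = min(box_p[0], box_g[0]), min(box_p[1], box_g[1])
    cx2, cy2 = max(box_p[2], box_g[2]), max(box_p[3], box_g[3])
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2
    return 1.0 - (iou - d2 / c2)
```

The loss is 0 for identical boxes and exceeds 1 for disjoint ones, which is what lets DIoU keep pushing non-overlapping predictions toward the target where plain IoU loss saturates.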
5. The seal detection method based on the deep convolutional network of claim 1, wherein after the step of inputting the image to be detected into the trained recognition model to perform the seal detection operation to obtain the seal detection result, the method further comprises the following step:
storing the image to be detected and the seal detection result in a blockchain.
6. A seal detection device based on a deep convolutional network, characterized by comprising:
the request receiving module is used for receiving a model training request carrying an original seal image;
the positive sample processing module is used for carrying out positive sample processing operation on the original seal image to obtain a positive sample image;
the negative sample generating module is used for carrying out negative sample generating operation on the original seal image to obtain a negative sample image;
the feature extraction module is used for inputting the positive sample image and the negative sample image into a DarkNet53 network for feature extraction operation to obtain seal feature data;
the prediction module is used for performing prediction operation on the seal characteristic data to obtain initial prediction result data;
the detection and recognition module is used for performing a detection and recognition operation on the prediction result data based on a k-means algorithm to obtain loss data between the final prediction result and the annotation result;
the optimization module is used for performing an optimization operation on the loss data based on a stochastic gradient descent algorithm to obtain a trained recognition model;
the image acquisition module to be detected is used for acquiring an image to be detected;
and the seal detection module is used for inputting the image to be detected to the trained recognition model to perform seal detection operation so as to obtain a seal detection result.
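The optimization module above applies stochastic gradient descent to the loss data. A minimal sketch of a single SGD update with momentum; the momentum term and the list-of-scalars parameterization are illustrative assumptions, since the patent names only the algorithm:

```python
def sgd_step(params, grads, lr=0.01, momentum=0.9, velocity=None):
    # One stochastic-gradient-descent update with classical momentum:
    # v <- momentum * v - lr * g;  p <- p + v
    if velocity is None:
        velocity = [0.0] * len(params)
    new_params, new_velocity = [], []
    for p, g, v in zip(params, grads, velocity):
        v = momentum * v - lr * g
        new_params.append(p + v)
        new_velocity.append(v)
    return new_params, new_velocity
```

Repeating this step over mini-batch gradients of the claim 4 loss is what produces the trained recognition model the device hands to the seal detection module.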
7. The seal detection device based on the deep convolutional network of claim 6, further comprising an image enhancement module, wherein the feature extraction module comprises a feature extraction submodule, wherein:
the image enhancement module is used for respectively carrying out image enhancement operation on the positive sample image and the negative sample image to obtain an enhanced positive sample image and an enhanced negative sample image;
the feature extraction submodule is configured to input the enhanced positive sample image and the enhanced negative sample image into the DarkNet53 network to perform the feature extraction operation, so as to obtain the seal feature data.
8. The seal detection device based on the deep convolutional network of claim 7, wherein the feature extraction submodule comprises:
a condition judging unit, configured to judge whether the enhanced positive sample image and the enhanced negative sample image satisfy a preset image condition;
a first result unit, configured to execute the feature extraction operation to obtain the seal feature data if the enhanced positive sample image and the enhanced negative sample image satisfy the preset image condition;
a second result unit, configured to perform a preprocessing operation on the enhanced positive sample image and the enhanced negative sample image to obtain a standard positive sample image and a standard negative sample image if the enhanced positive sample image and the enhanced negative sample image do not satisfy the preset image condition; and
a feature extraction subunit, configured to input the standard positive sample image and the standard negative sample image into the DarkNet53 network to perform the feature extraction operation, so as to obtain the seal feature data.
9. A computer device, comprising a memory having computer readable instructions stored therein and a processor which, when executing the computer readable instructions, implements the steps of the seal detection method based on a deep convolutional network according to any one of claims 1 to 5.
10. A computer readable storage medium having computer readable instructions stored thereon, wherein the computer readable instructions, when executed by a processor, implement the steps of the seal detection method based on a deep convolutional network according to any one of claims 1 to 5.
CN202210168420.5A 2022-02-23 2022-02-23 Seal detection method and device, computer equipment and storage medium Pending CN114549817A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210168420.5A CN114549817A (en) 2022-02-23 2022-02-23 Seal detection method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114549817A true CN114549817A (en) 2022-05-27

Family

ID=81677482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210168420.5A Pending CN114549817A (en) 2022-02-23 2022-02-23 Seal detection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114549817A (en)

Similar Documents

Publication Publication Date Title
CN110675940A (en) Pathological image labeling method and device, computer equipment and storage medium
US20210357710A1 (en) Text recognition method and device, and electronic device
CN112016510A (en) Signal lamp identification method and device based on deep learning, equipment and storage medium
CN110232131B (en) Creative material searching method and device based on creative tag
CN112650875A (en) House image verification method and device, computer equipment and storage medium
CN112330331A (en) Identity verification method, device and equipment based on face recognition and storage medium
CN114550051A (en) Vehicle loss detection method and device, computer equipment and storage medium
CN112686243A (en) Method and device for intelligently identifying picture characters, computer equipment and storage medium
CN112016502A (en) Safety belt detection method and device, computer equipment and storage medium
CN114299366A (en) Image detection method and device, electronic equipment and storage medium
CN114022891A (en) Method, device and equipment for extracting key information of scanned text and storage medium
EP3564833B1 (en) Method and device for identifying main picture in web page
CN112581344A (en) Image processing method and device, computer equipment and storage medium
CN112182157A (en) Training method of online sequence labeling model, online labeling method and related equipment
CN115810132A (en) Crack orientation identification method, device, equipment and storage medium
WO2022105120A1 (en) Text detection method and apparatus from image, computer device and storage medium
CN112016503B (en) Pavement detection method, device, computer equipment and storage medium
CN112395834B (en) Brain graph generation method, device and equipment based on picture input and storage medium
CN114549817A (en) Seal detection method and device, computer equipment and storage medium
CN114330240A (en) PDF document analysis method and device, computer equipment and storage medium
CN114461833A (en) Picture evidence obtaining method and device, computer equipment and storage medium
CN113742485A (en) Method and device for processing text
CN113791426A (en) Radar P display interface generation method and device, computer equipment and storage medium
CN113989618A (en) Recyclable article classification and identification method
CN112036501A (en) Image similarity detection method based on convolutional neural network and related equipment thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination