CN113033635A - Coin invisible image-text detection method and device


Info

Publication number
CN113033635A
CN113033635A (application CN202110270942.1A)
Authority
CN
China
Prior art keywords
image
defect
coin
text
invisible
Prior art date
Legal status
Pending
Application number
CN202110270942.1A
Other languages
Chinese (zh)
Inventor
王刚
吴天序
鞠健
庞文浩
杜梁缘
Current Assignee
Zhong Chao Great Wall Financial Equipment Holding Co ltd
Original Assignee
Zhong Chao Great Wall Financial Equipment Holding Co ltd
Priority date
Filing date
Publication date
Application filed by Zhong Chao Great Wall Financial Equipment Holding Co ltd
Priority to CN202110270942.1A
Publication of CN113033635A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/2321 Non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/253 Fusion techniques of extracted features
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Abstract

The invention discloses a method and a device for detecting the invisible image-text of coins. The method comprises the following steps: acquiring an image of the surface of a single coin to be detected on which the invisible image-text is located, and preprocessing the image; inputting the preprocessed image into a trained coin invisible image-text defect detection model to obtain the probability that each pixel position in the image belongs to a preset defect category, and retaining the defect categories whose probability exceeds the preset defect probability to generate a defect category map; and preprocessing the defect category map and performing Blob analysis to determine whether the invisible image-text of the coin to be detected is qualified. The image acquired by the method is unaffected by changes in ambient brightness, the detection result is stable, single-shot imaging detection of the coin's invisible image-text features is realized, and the requirement of detecting different invisible image-texts that appear only at particular viewing angles is met.

Description

Coin invisible image-text detection method and device
Technical Field
The invention relates to a method for detecting the invisible image-text of a coin, and to a corresponding coin invisible image-text detection device, and belongs to the technical field of coin quality detection.
Background
With the rapid development of the economy, a large amount of currency circulates in the market, and the share of coins in daily consumption keeps growing. Unlike banknotes, however, coins carry relatively few anti-counterfeiting features, and counterfeit coins that pass themselves off as genuine keep appearing, which is a problem that urgently needs to be solved.
In order to effectively prevent counterfeit money from entering market circulation, the anti-counterfeiting features of genuine currency must be improved. Adding invisible patterns and characters (image-text for short) to coins has been applied as one such anti-counterfeiting technology. Its main characteristic is that special image-text symbols are hidden on the coin surface using a die developed with a special process; the hidden image-text is usually hard to see with the naked eye in a frontal view and becomes visible only when the coin is viewed obliquely at a specific angle. Because the hidden image-text is finely crafted and counterfeit coins can hardly reproduce the effect of genuine ones, it distinguishes counterfeit coins very effectively.
Producing the invisible image-text of a coin is difficult, so its manufacturing precision must be detected in the production process to ensure the correctness and consistency of the invisible image-text on outgoing coins. In the field of coin circulation, the invisible image-text of coins also needs to be detected to judge their authenticity.
Existing coin invisible image-text detection methods generally take one of the following forms: first, shooting and imaging multiple times with light sources at different angles; second, setting up multiple stations, each using light sources at several different angles together with a camera for imaging; third, setting up a single station that controls the illumination angle of the light source and images with multiple camera exposures. Imaging in the first way increases the size of the equipment; imaging in the second and third ways places higher demands on parameters such as the light-source configuration and the camera frame rate.
Disclosure of Invention
The invention aims to provide a method for detecting invisible images and texts of coins.
The invention also aims to provide a coin invisible image-text detection device.
In order to achieve the purpose, the invention adopts the following technical scheme:
According to a first aspect of the embodiments of the present invention, there is provided a method for detecting the invisible image-text of a coin, including the following steps:
acquiring an image of the surface of a single coin to be detected on which the invisible image-text is located, and preprocessing the image;
inputting the preprocessed image into a trained coin invisible image-text defect detection model to obtain the probability that each pixel position in the image belongs to a preset defect category, and retaining the defect categories whose probability exceeds the preset defect probability to generate a defect category map;
and preprocessing the defect category map and performing Blob analysis to determine whether the invisible image-text of the coin to be detected is qualified.
Preferably, the image of the surface on which the invisible image-text of the coin to be detected is located is acquired with an area-array camera; the shooting plane of the area-array camera is parallel to that surface, a light source for illuminating the coin surface is arranged between the area-array camera and the surface, and the light emitted by the light source is perpendicular to the area-array camera.
Preferably, the coin invisible image-text defect detection model is obtained by the following steps:
establishing an image data set of the surface where the invisible image-text of the coin is located, selecting images with a preset proportion as training set data, and taking the rest images as test set data;
jointly training a plurality of convolutional neural networks by adopting the training set data to obtain a coin invisible image-text defect detection model;
and testing the precision of the coin invisible image-text defect detection model by adopting the test set data.
Preferably, when the training set data is adopted to jointly train a plurality of convolutional neural networks, the convolutional neural networks specifically comprise a first convolutional neural network, a second convolutional neural network and a third convolutional neural network;
the first convolution neural network is used for extracting preset amounts of shallow layer, middle layer and high layer information from the image of the face where the input invisible image-text of the coin is located, and respectively expressing the shallow layer, the middle layer and the high layer information by four-dimensional feature vectors;
the second convolutional neural network is used for further extracting each layer information from a preset number of four-dimensional feature vectors representing different layers of information under different scales, and unifying each layer information into a four-dimensional feature vector corresponding to the high layer information for output;
and the third convolutional neural network is used for labeling the severity label of each pixel position in the invisible coin image-text image input to the first convolutional neural network after performing feature fusion on the four-dimensional feature vector output by the second convolutional neural network, calculating the probability that each pixel position belongs to the preset defect category, and then retaining the defect types with the probability greater than the preset defect probability to form a defect category map.
Preferably, when the four-dimensional feature vectors output by the second convolutional neural network are subjected to feature fusion, the floating point numbers of corresponding coordinate positions in all the four-dimensional feature vectors are respectively averaged or maximized.
Preferably, when the defect category map is preprocessed, it is first converted into a binary map, and the binary map is then converted into a corresponding gray map according to the different defect categories; the gray values presented by the gray map for the different defect categories are 20, 40, …, 20n, where n denotes the nth defect category.
Preferably, the Blob analysis of the gray scale map comprises the following steps:
dividing pixels with the interval smaller than a first distance threshold value among the defective pixels in the small area of the gray scale image into the same defective area, and counting the number of the defective pixels in each defective area;
calculating the energy sum of all the defective pixels in each defective area as the energy value of the corresponding defective area;
clustering each defect area according to a second distance threshold value spaced among each defect area;
and judging the defect area after clustering.
Preferably, when the clustered defect regions are judged, it is determined into which specific region, of a plurality of regions preset in a standard sample template, the position of a clustered defect region falls; if the energy value and area value of the defect region are greater than those of the corresponding region in the standard sample template, the invisible image-text of the coin to be detected is considered unqualified.
Preferably, when the clustered defect regions are judged, after determining into which specific region, of a plurality of regions preset in a standard sample template, the position of a clustered defect region falls, the Euclidean distance between the energy and area values of the defect region and the energy and area values of the corresponding region in the standard sample template is calculated; if it is greater than the standard Euclidean distance, the invisible image-text of the coin to be detected is considered unqualified.
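A minimal sketch of this Euclidean-distance judgment (the function name, parameters and the scalar threshold are illustrative assumptions, not from the patent):

```python
import math

def region_unqualified(energy, area, template_energy, template_area,
                       standard_distance):
    """Judge one clustered defect region: compute the Euclidean distance
    between its (energy, area) pair and the template region's values;
    exceeding the standard Euclidean distance means the coin's invisible
    image-text is considered unqualified."""
    distance = math.hypot(energy - template_energy, area - template_area)
    return distance > standard_distance
```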
According to a second aspect of the embodiments of the present invention, there is provided a coin invisible image-text detection device comprising a processor and a memory, wherein the processor reads the computer program or instructions in the memory and is configured to perform the following operations:
acquiring an image of the surface of a single coin to be detected on which the invisible image-text is located, and preprocessing the image;
inputting the preprocessed image into a trained coin invisible image-text defect detection model to obtain the probability that each pixel position in the image belongs to a preset defect category, and retaining the defect categories whose probability exceeds the preset defect probability to generate a defect category map;
and preprocessing the defect category map and performing Blob analysis to determine whether the invisible image-text of the coin to be detected is qualified.
The coin invisible image-text detection method and device first acquire the image of the surface of a single coin to be detected on which the invisible image-text is located and preprocess it; then input the preprocessed image into a trained coin invisible image-text defect detection model to obtain the probability that each pixel position in the image belongs to a preset defect category, retaining the defect categories whose probability exceeds the preset defect probability to generate a defect category map; and finally preprocess the defect category map and perform Blob analysis to determine whether the invisible image-text of the coin to be detected is qualified. The image acquired in the method is unaffected by changes in ambient brightness, the detection result is stable, single-shot imaging detection of the coin invisible image-text features is realized, and the requirement of detecting different invisible image-texts that appear only at particular viewing angles is met.
Drawings
Fig. 1 is a flowchart of a coin invisible image-text detection method provided by an embodiment of the invention;
fig. 2 is a detection schematic diagram of a coin invisible image-text detection method provided by an embodiment of the invention;
fig. 3 is a schematic diagram of an image of a surface on which invisible images and texts of a single coin to be detected are obtained in the coin invisible image and text detection method provided by the embodiment of the invention;
fig. 4 is a defect detection flowchart of a coin invisible image-text defect detection model in the coin invisible image-text detection method according to the embodiment of the present invention;
fig. 5 is a schematic diagram illustrating a division of small defect areas when Blob analysis is performed on a gray scale image obtained after preprocessing in the invisible image-text detection method for coins according to the embodiment of the present invention;
fig. 6A and 6B are diagrams illustrating an example of clustering divided small defect regions in the invisible coin image-text detection method according to the embodiment of the present invention;
fig. 7 is a schematic structural diagram of a coin invisible image-text detection device provided by an embodiment of the invention.
Detailed Description
The technical contents of the present invention will be further described in detail with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, the method for detecting invisible images and texts of a coin provided by the embodiment of the invention comprises the following steps:
and step S1, obtaining the image of the surface of the invisible image-text of the single coin to be detected, and preprocessing the image.
The shooting plane of the area-array camera is parallel to the surface on which the invisible image-text of the coin to be detected is located; a light source for illuminating the coin surface is arranged between the area-array camera and that surface, and the light it emits is perpendicular to the area-array camera (similar to axial light, so there is no need to control the illumination angle stroboscopically or to adjust it). The area-array camera performs high-resolution single-shot image acquisition of the surface on which the invisible image-text of the coin to be detected is located, which avoids the image distortion caused by line-scan acquisition. As shown in fig. 2, taking as an example a coin to be detected that is conveyed on a belt and carries invisible image-text on both its front and back: the shooting planes of two area-array cameras are parallel to the surfaces on which the invisible image-text of the coin lies, and the light emitted by the light sources paired with the cameras for single-shot acquisition is perpendicular to the cameras. After the corresponding light source illuminates the coin surface, the two area-array cameras capture, coin by coin, the images of the surfaces on which the invisible image-text of the single coin is located, that is, the front and back images of each coin to be detected, such as the invisible image-text of the pound coin shown in fig. 3.
Images acquired by the area-array camera inevitably suffer from noise, uneven lighting and similar problems, so the acquired image of the surface on which the invisible image-text of the single coin to be detected is located must be preprocessed and normalized. The preprocessing mainly consists of noise reduction and correction of the image. An image filter is used to reduce the influence of noise on the subsequent invisible image-text detection; when correcting the image, its rotation is corrected to a uniform angle; and a ray-correction algorithm is used to make the lighting of the processed image as uniform as possible.
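As a minimal illustration of this preprocessing, the NumPy sketch below denoises with a simple box filter and normalizes brightness to [0, 1]; the patent does not disclose its exact filtering, rotation-correction or ray-correction algorithms, so the function name, filter size and normalization are assumptions:

```python
import numpy as np

def preprocess(image: np.ndarray, k: int = 3) -> np.ndarray:
    """Hypothetical preprocessing: k x k box-filter denoising followed by
    normalization of the result to [0, 1] to even out brightness."""
    pad = k // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float64)
    for dy in range(k):          # accumulate the k x k neighbourhood
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    out /= k * k                 # box-filter average
    lo, hi = out.min(), out.max()
    return (out - lo) / (hi - lo) if hi > lo else np.zeros_like(out)
```

A real implementation would add the rotation and illumination corrections described above.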
And step S2, inputting the preprocessed image into the trained coin invisible image-text defect detection model to obtain the probability that each pixel position in the image belongs to a preset defect category, and retaining the defect categories whose probability exceeds the preset defect probability to generate a defect category map.
The invisible image-text defect detection model of the coin is obtained by the following steps:
and step S21, establishing an image data set of the surface on which the invisible images and texts of the coin are positioned, selecting images with a preset proportion from the image data set as training set data, and using the rest images as test set data.
Images of the surfaces on which the coin invisible image-text is located are acquired with the area-array camera and preprocessed with the image preprocessing method of step S1 to obtain the image data set; images in a preset proportion are selected from the data set as training set data, and the remaining images are used as test set data. For example, if both the front and the back of the coin carry invisible image-text and the area-array camera acquires 500 images of each side, preprocessing these 1000 front and back images yields the image data set of the surfaces on which the coin invisible image-text is located; 800 images are selected from it as training set data, and the remaining 200 serve as test set data.
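The 800/200 split in the example can be sketched as follows (an illustrative helper; the patent does not specify how the preset proportion is applied):

```python
import random

def split_dataset(image_paths, train_ratio=0.8, seed=0):
    """Shuffle the image data set and split it into training and test
    sets according to a preset proportion (e.g. 800 of 1000 images
    for training, the remaining 200 for testing)."""
    rng = random.Random(seed)
    shuffled = list(image_paths)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```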
And step S22, performing combined training on the plurality of convolutional neural networks by adopting training set data to obtain a coin invisible image-text defect detection model.
In the invention, 3 convolutional neural networks are jointly trained by adopting training set data to obtain a coin invisible image-text defect detection model. The 3 convolutional neural networks are respectively a first convolutional neural network, a second convolutional neural network and a third convolutional neural network.
The first and second convolutional neural networks contain the same number of network blocks. Each network block in the first convolutional neural network consists of a convolutional layer with a fixed convolution kernel size, a batch normalization layer and an activation function layer. Each network block in the second convolutional neural network consists of convolutional layers with several different convolution kernel sizes, a batch normalization layer and an activation function layer, and each of its network blocks is connected to the corresponding network block in the first convolutional neural network. The third convolutional neural network is implemented as a fully convolutional neural network.
The images in the training set data are randomly divided into a plurality of groups, and the iteration count (e.g. 500 iterations), the learning rate (e.g. 10⁻⁴) and the input image size for the synchronized joint training of the first, second and third convolutional neural networks are set. Each time a group of training-set images containing the coin invisible image-text is input into the first convolutional neural network, the parameters of every network block of the first, second and third convolutional neural networks are synchronously updated once; after the network parameters have been iteratively updated the preset number of times, the coin invisible image-text defect detection model is obtained.
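The random grouping of training images into batches for the synchronized joint training might be sketched as follows (illustrative only; the group size and helper name are assumptions):

```python
import random

def make_training_groups(images, group_size, seed=0):
    """Randomly divide the training-set images into groups; each group
    fed into the first network triggers one synchronous parameter
    update of all three networks."""
    rng = random.Random(seed)
    order = list(images)
    rng.shuffle(order)
    return [order[i:i + group_size] for i in range(0, len(order), group_size)]
```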
In order to facilitate understanding of the process of jointly training each convolutional neural network, the following describes the process of jointly training each convolutional neural network once in detail by taking an image randomly selected from the training set data as an example.
An image is randomly selected from the training set data and input into the first convolutional neural network, which extracts preset amounts of shallow-layer, middle-layer and high-layer information from it. Shallow information focuses more on image texture features, while high-layer information focuses more on image shape and category features. The shallow, middle and high-layer information extracted by the first convolutional neural network is expressed as four-dimensional feature vectors that serve as the input of the second convolutional neural network. A four-dimensional feature vector is specifically expressed as input image batch (the number of images input to the network at once) × width of the input image × height of the input image × number of image channels (image dimension). For example, as shown in fig. 4, when the first convolutional neural network extracts one piece each of shallow, middle and high-layer information from an input image of size 480 × 480, with image dimensions of 16, 128 and 512 respectively, the extracted shallow-layer information has dimensions 1 × 480 × 480 × 16, the middle-layer information 1 × 480 × 480 × 128, and the high-layer information 1 × 480 × 480 × 512.
As shown in fig. 4, the four-dimensional feature vectors output by the first convolutional neural network to represent the different levels of information are input into the second convolutional neural network. The convolutional layers with several convolution kernel sizes in the corresponding network blocks of the second convolutional neural network further extract image texture and shape-category features at different scales, so that features occupying different numbers of pixels in the image can be extracted more comprehensively and accurately, giving better adaptability. After the second convolutional neural network further extracts image texture and shape-category features at different scales from the four-dimensional feature vectors output by the first convolutional neural network, it ensures that the shallow, middle and high-layer information is each expressed by the four-dimensional feature vector corresponding to the feature-extracted high-layer information. As shown in fig. 4, the second convolutional neural network outputs 3 such four-dimensional feature vectors, representing the image texture and shape-category features corresponding to the shallow, middle and high-layer information. For example, after feature extraction at different scales by the second convolutional neural network, the shallow, middle and high-layer information is each represented by a four-dimensional feature vector of dimensions 1 × 480 × 480 × 2048 corresponding to the feature-extracted high-layer information.
It should be emphasized that, to ensure that the shallow-layer and middle-layer information is represented by four-dimensional feature vectors matching the feature-extracted high-layer information after the second convolutional neural network extracts image texture and shape-category features at different scales, either the number of network blocks used to extract shallow and middle-layer information in the first convolutional neural network can be adjusted, or the size and number of convolution kernels in the corresponding network blocks of the second convolutional neural network can be adjusted. The four-dimensional feature vector corresponding to the feature-extracted high-layer information is formed by concatenating the four-dimensional feature vectors output by the individual convolution kernels along the channel direction.
The several identically shaped four-dimensional feature vectors corresponding to the shallow, middle and high-layer information output by the second convolutional neural network are input into the third convolutional neural network, which first performs feature fusion to extract the key information for identifying invisible image-text defects in the image, i.e. it fuses those vectors into one new four-dimensional feature vector. The fusion can be realized by taking, at each corresponding coordinate position across all the four-dimensional feature vectors, either the average or the maximum of the floating-point values. Taking one coordinate of the 3 four-dimensional feature vectors as an example: the floating-point numbers at that coordinate in the 3 vectors can be added and divided by 3 to obtain the value at that coordinate of the new four-dimensional feature vector; alternatively, the maximum of those 3 floating-point numbers can be used as the value at that coordinate.
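The fusion step above, averaging or maximizing the floating-point values at each coordinate across the identically shaped four-dimensional feature vectors, can be sketched in NumPy (an illustrative helper, not the patent's code):

```python
import numpy as np

def fuse_features(feature_vectors, mode="mean"):
    """Fuse several identically shaped 4-D feature tensors into one new
    tensor by taking, at each coordinate position, either the average
    or the maximum of the floating-point values."""
    stacked = np.stack(feature_vectors, axis=0)
    return stacked.mean(axis=0) if mode == "mean" else stacked.max(axis=0)
```

For three tensors, the "mean" mode adds the three values at each coordinate and divides by 3, exactly as in the worked example above.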
After the third convolutional neural network converts the fused new four-dimensional feature vector into an intermediate feature vector, it labels each pixel position in the coin invisible image-text image input to the first convolutional neural network with a severity label for the preset defect categories according to the preset coin invisible image-text defect severity standard, calculates the probability that each pixel position belongs to each preset defect category, and retains the defect categories whose probability exceeds the preset defect probability to form a defect category map. For example, when the coin invisible image-text defect detection model is required to detect whether a scratch defect exists in the coin invisible image-text, taking the labeling of one pixel position as an example: if the third convolutional neural network identifies a scratch of length 1-2 mm at that pixel position, while the severity standard sets scratches longer than 5 mm as serious, the pixel position can be labeled with the corresponding severity, and the probability that the pixel position belongs to the scratch defect is calculated as 1.
Taking a preset defect probability of 0.5 as an example: from the calculated probabilities that each pixel position in the coin invisible image-text image input to the first convolutional neural network belongs to the preset defect categories, the defect categories whose probability exceeds 0.5 are retained, and the corresponding defect category map is generated.
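Retaining, per pixel, only defect categories whose probability exceeds the 0.5 threshold might look like this (an illustrative NumPy sketch; the 1-based category encoding with 0 for "no defect" is an assumption):

```python
import numpy as np

def defect_category_map(probs, threshold=0.5):
    """probs: H x W x C per-pixel probabilities for C preset defect
    categories. Keep at each pixel the most probable category if its
    probability exceeds the threshold; 0 marks 'no defect'."""
    best = probs.argmax(axis=-1)      # most probable category index
    confidence = probs.max(axis=-1)   # its probability
    return np.where(confidence > threshold, best + 1, 0)
```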
And step S23, testing the precision of the invisible image-text defect detection model of the coin by adopting the test set data.
The images in the test set data are input into the trained coin invisible image-text defect detection model to verify its precision. Testing the precision of the coin invisible image-text defect detection model with the test set data is existing mature technology and is not described again here.
And step S3, preprocessing the defect category map, and performing Blob analysis to determine whether the invisible image-text of the coin to be detected is qualified.
When the defect category map is preprocessed, it is first converted into a binary map, and the binary map is then converted into a corresponding gray-scale map according to the different defect types. The gray values presented in the gray-scale map for the different defect types are 20, 40, …, 20n, where n denotes the nth defect type.
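The binarisation and gray-level mapping just described can be sketched as follows (the defect category map is a made-up example; only the 20n mapping comes from the text above):

```python
import numpy as np

# Defect category map: 0 = no defect, n = nth defect type.
defect_map = np.array([[0, 1, 1],
                       [0, 2, 0],
                       [3, 0, 0]])

binary = (defect_map > 0).astype(np.uint8)   # 1 wherever any defect exists
gray = (defect_map * 20).astype(np.uint8)    # category n -> gray value 20n

print(binary)
print(gray)
```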
In computer vision, a Blob refers to a connected region in an image, and Blob analysis extracts and labels the connected regions of a binary image after foreground/background separation. Each labeled Blob represents a foreground target, and relevant features of the Blob can then be computed, such as geometric features (area, centroid, circumscribed rectangle, etc.) and color and texture features; these features can serve as the basis for subsequent tracking.
Performing Blob analysis on the preprocessed gray-scale map comprises the following steps:
Step S31: dividing defective pixels in a small area of the gray-scale map whose spacing is smaller than a first distance threshold into the same defect region, and counting the number of defective pixels in each defect region.
As shown in fig. 5, in an embodiment of the present invention, 6 defective pixels in a small area of the gray-scale map are divided into defect regions according to a spacing of less than 1 pixel, yielding two defect regions: the first defect region contains 5 defective pixels whose mutual spacing is less than 1 pixel, and the second defect region contains 1 defective pixel.
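Step S31 can be sketched with a simple union-find over pixel coordinates (the coordinates, the Chebyshev distance metric, and the threshold are illustrative assumptions; the patent does not prescribe a concrete algorithm):

```python
from itertools import combinations

def group_pixels(pixels, dist_thresh):
    """Merge defective pixels whose spacing is within dist_thresh
    into the same defect region, using union-find."""
    parent = list(range(len(pixels)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in combinations(range(len(pixels)), 2):
        (x1, y1), (x2, y2) = pixels[i], pixels[j]
        # Chebyshev distance: 8-connected neighbours are 1 apart,
        # which matches an edge-to-edge gap of "less than 1 pixel".
        if max(abs(x1 - x2), abs(y1 - y2)) <= dist_thresh:
            parent[find(i)] = find(j)

    regions = {}
    for i, p in enumerate(pixels):
        regions.setdefault(find(i), []).append(p)
    return list(regions.values())

# Six defective pixels as in the fig. 5 example: five adjacent ones
# and one isolated pixel yield two defect regions.
pixels = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 1), (8, 8)]
regions = group_pixels(pixels, dist_thresh=1)
print([len(r) for r in sorted(regions, key=len, reverse=True)])  # [5, 1]
```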
Step S32: calculating the sum of the energies of all defective pixels in each defect region as the energy value of the corresponding defect region.
The energy of each defective pixel is obtained by subtracting from its gray value the base parameter value of the pixel at the corresponding position in the actual image template, taking the absolute value of the result, and multiplying by a corresponding scale value. It should be emphasized that the energy of each defective pixel can be obtained by, but is not limited to, this method.
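A minimal sketch of this energy computation (the gray values, template base values, and scale are made-up numbers, not values from the patent):

```python
import numpy as np

gray = np.array([40, 45, 60], dtype=float)       # defective pixel grays
template = np.array([20, 20, 20], dtype=float)   # template base values
scale = 1.5                                      # illustrative scale value

# Energy of each defective pixel: |gray - template base| * scale;
# the region's energy value is the sum over its pixels (step S32).
pixel_energy = np.abs(gray - template) * scale
region_energy = pixel_energy.sum()
print(region_energy)  # 127.5
```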
Step S33: clustering the defect regions according to a second distance threshold on the spacing between them.
Clustering exploits the similarity or correlation of adjacent regions under certain conditions: a small region may have a larger number of defective pixels, a high overall intensity relative to its neighbours, or a defective-pixel density close to that of its neighbours, so the defective pixels of weak regions are clustered into those of strong regions. As shown in figs. 6A and 6B, fig. 6A shows the defect regions before clustering, and fig. 6B shows the larger defect region after clustering.
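One possible reading of this clustering step, sketched with centroid distances (the merge criterion, threshold, and coordinates are illustrative assumptions; the patent does not fix a concrete algorithm):

```python
def centroid(region):
    """Centroid of a region given as a list of (x, y) pixels."""
    xs, ys = zip(*region)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def cluster_regions(regions, dist_thresh):
    """Fold each weaker (smaller) region into a stronger one whose
    centroid lies within the second distance threshold."""
    regions = sorted(regions, key=len, reverse=True)  # strong regions first
    merged = []
    for r in regions:
        cx, cy = centroid(r)
        for m in merged:
            mx, my = centroid(m)
            if ((cx - mx) ** 2 + (cy - my) ** 2) ** 0.5 <= dist_thresh:
                m.extend(r)          # weak region absorbed by strong one
                break
        else:
            merged.append(list(r))   # too far from everything: new cluster
    return merged

regions = [[(0, 0), (0, 1), (1, 0)], [(2, 2)], [(10, 10)]]
clusters = cluster_regions(regions, dist_thresh=4)
print(len(clusters))  # 2
```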
Step S34: performing rejection judgment on the clustered defect regions.
The Blob rejection method determines the sensitivity and detection performance of the image detection system. In image detection, the contrast difference between the detected product and the image model can be evaluated on two levels. The first level is the number of contrast-difference pixels: when contrast differences appear in clusters, the differences are significant, so a large cluster of pixels each carrying a small rejection value can be judged a reject; this is called regional rejection. The second level is the contrast-difference value: if the contrast-difference (rejection) value of one or a few adjacent pixels is very high, the product can be judged a reject on that basis alone; this is called singular rejection.
The present invention simulates the principle of human-eye inspection: a residual detection method combining intra-block differences with the overall difference. The overall difference is determined mainly by comparison with a tolerance range obtained by averaging multiple standard samples. Specifically, the clustered defect regions can be judged defective by either of the following two methods, and a suitable rejection method is chosen according to actual requirements to decide whether the invisible image-text of the coin to be detected is qualified. The rejection result of the invisible image-text can be stored in a database so that workers can retrieve it at any time for analysis; in addition, during production, the coins to be detected can be sorted into bins according to the rejection result of their invisible image-text.
Specifically, the first rejection method determines that the position of a clustered defect region falls into a specific one of a plurality of regions preset in a standard sample template, and compares the energy value and area value of the defect region with those of the corresponding region in the template; if both are greater, the invisible image-text of the coin to be detected is judged unqualified. The second rejection method likewise determines that the position of the clustered defect region falls into a specific preset region, calculates the Euclidean distance between the energy value and area value of the defect region and those of the corresponding region in the template, and compares it with a standard Euclidean distance; if it is greater than the standard Euclidean distance, the invisible image-text of the coin to be detected is judged unqualified.
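The two rejection methods can be sketched as follows (the reference energy/area values and the standard Euclidean distance are illustrative assumptions, not values from the patent):

```python
def reject_by_comparison(energy, area, ref_energy, ref_area):
    # Method 1: reject if both the energy value and the area value
    # exceed those of the corresponding template region.
    return energy > ref_energy and area > ref_area

def reject_by_euclidean(energy, area, ref_energy, ref_area, std_dist):
    # Method 2: reject if the Euclidean distance between the
    # (energy, area) pair and the template's exceeds a standard value.
    d = ((energy - ref_energy) ** 2 + (area - ref_area) ** 2) ** 0.5
    return d > std_dist

print(reject_by_comparison(120.0, 30, 100.0, 25))       # True
print(reject_by_euclidean(120.0, 30, 100.0, 25, 10.0))  # True
```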
In addition, as shown in fig. 7, an embodiment of the present invention further provides a coin invisible image-text detection device, which includes a processor 32 and a memory 31, and may further include a communication component, a sensor component, a power component, a multimedia component, and an input/output interface according to actual needs. The memory, communication component, sensor component, power component, multimedia component, and input/output interface are all connected to the processor 32. The memory 31 may be a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic memory, a flash memory, etc.; the processor 32 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Field-Programmable Gate Array (FPGA), an Application-Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP) chip, or the like. The communication, sensor, power, and multimedia components can be implemented with common components found in existing smartphones and are not described in detail here.
In addition, the coin invisible image-text detection device provided by the embodiment of the present invention includes a processor 32 and a memory 31, wherein the processor 32 reads a computer program or an instruction in the memory 31 to execute the following operations:
Acquiring an image of the surface on which the invisible image-text of a single coin to be detected is located, and preprocessing the image.
Inputting the preprocessed image into a trained coin invisible image-text defect detection model to obtain the probability that each pixel position in the image belongs to a preset defect, and retaining the defect types whose probability is greater than the preset defect probability to form a defect category map.
Preprocessing the defect category map and performing Blob analysis to determine whether the invisible image-text of the coin to be detected is qualified.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where instructions are stored on the computer-readable storage medium, and when the instructions are run on a computer, the computer is enabled to execute the coin invisible image-text detection method described in fig. 1, and details of a specific implementation of the method are not described herein again.
In addition, an embodiment of the present invention further provides a computer program product including instructions, which when run on a computer, causes the computer to execute the method for detecting invisible images and texts of a coin as described in fig. 1, and details of a specific implementation thereof are not repeated here.
The coin invisible image-text detection method and device provided by the invention first acquire an image of the surface on which the invisible image-text of the coin to be detected is located and preprocess it; the preprocessed image is then input into a trained coin invisible image-text defect detection model to obtain the probability that each pixel position in the image belongs to a preset defect, and the defect types whose probability is greater than the preset defect probability are retained to generate a defect category map; finally, the defect category map is preprocessed and Blob analysis is performed to determine whether the invisible image-text of the coin to be detected is qualified. The acquired image is not affected by changes in ambient brightness, the detection result is stable, single-shot imaging detection of the coin's invisible image-text features is realized, and the requirement of detecting invisible images-texts that appear differently at various angles is met.
The coin invisible image-text detection method and device provided by the invention have been described in detail above. It will be apparent to those skilled in the art that various modifications can be made without departing from the spirit of the invention.

Claims (10)

1. A coin invisible image-text detection method is characterized by comprising the following steps:
acquiring an image of the surface of a single coin to be detected on which invisible pictures and texts are located, and preprocessing the image;
inputting the preprocessed image into a trained coin invisible image-text defect detection model to obtain the probability that each pixel position in the image belongs to a preset defect, and retaining the defect types whose probability is greater than the preset defect probability to generate a defect category map;
and preprocessing the defect category map and performing Blob analysis to determine whether the invisible image-text of the coin to be detected is qualified.
2. The invisible image-text detection method for coins of claim 1, wherein:
when the image of the surface on which the invisible image-text of the coin to be detected is located is acquired, an area-array camera is used; the shooting plane of the area-array camera is parallel to the surface on which the invisible image-text is located, a light source illuminating the coin surface is arranged between the area-array camera and that surface, and the light emitted by the light source is perpendicular to the area-array camera.
3. The invisible image-text detection method for coins of claim 1, wherein:
the invisible image-text defect detection model of the coin is obtained through the following steps:
establishing an image data set of the surface where the invisible image-text of the coin is located, selecting images with a preset proportion as training set data, and taking the rest images as test set data;
jointly training a plurality of convolutional neural networks by adopting the training set data to obtain a coin invisible image-text defect detection model;
and testing the precision of the coin invisible image-text defect detection model by adopting the test set data.
4. The invisible image-text detection method of the coin according to claim 3, characterized in that:
when the training set data is adopted to jointly train a plurality of convolutional neural networks, the convolutional neural networks specifically comprise a first convolutional neural network, a second convolutional neural network and a third convolutional neural network;
the first convolution neural network is used for extracting preset amounts of shallow layer, middle layer and high layer information from the image of the face where the input invisible image-text of the coin is located, and respectively expressing the shallow layer, the middle layer and the high layer information by four-dimensional feature vectors;
the second convolutional neural network is used for further extracting each layer information from a preset number of four-dimensional feature vectors representing different layers of information under different scales, and unifying each layer information into a four-dimensional feature vector corresponding to the high layer information for output;
and the third convolutional neural network is used for labeling the severity label of each pixel position in the invisible coin image-text image input to the first convolutional neural network after performing feature fusion on the four-dimensional feature vector output by the second convolutional neural network, calculating the probability that each pixel position belongs to the preset defect category, and then retaining the defect types with the probability greater than the preset defect probability to form a defect category map.
5. The invisible image-text detection method for coins of claim 4, wherein:
and when the four-dimensional feature vectors output by the second convolutional neural network are subjected to feature fusion, averaging or maximizing floating point numbers of corresponding coordinate positions in all the four-dimensional feature vectors respectively.
6. The invisible image-text detection method for coins of claim 1, wherein:
when the defect category map is preprocessed, the defect category map is converted into a binary map, and the binary map is then converted into a corresponding gray-scale map according to the different defect types; the gray values presented in the gray-scale map for the different defect types are 20, 40, …, 20n, where n denotes the nth defect type.
7. The invisible image-text detection method of the coin according to claim 6, characterized in that:
performing Blob analysis on the gray-scale map, comprising the following steps:
dividing pixels with the interval smaller than a first distance threshold value among the defective pixels in the small area of the gray scale image into the same defective area, and counting the number of the defective pixels in each defective area;
calculating the energy sum of all the defective pixels in each defective area as the energy value of the corresponding defective area;
clustering each defect area according to a second distance threshold value spaced among each defect area;
and judging the defect area after clustering.
8. The invisible image-text detection method of the coin according to claim 7, characterized in that:
when rejection judgment is performed on the clustered defect regions, the position of a clustered defect region is determined to fall into a specific one of a plurality of regions preset in a standard sample template; if the energy value and the area value of the defect region are greater than those of the corresponding region in the standard sample template, the invisible image-text of the coin to be detected is judged unqualified.
9. The invisible image-text detection method of the coin according to claim 7, characterized in that:
when rejection judgment is performed on the clustered defect regions, the position of a clustered defect region is determined to fall into a specific one of a plurality of regions preset in a standard sample template; the Euclidean distance between the energy value and area value of the defect region and those of the corresponding region in the standard sample template is calculated and compared with a standard Euclidean distance; if it is greater than the standard Euclidean distance, the invisible image-text of the coin to be detected is judged unqualified.
10. A coin invisible image-text detection device, characterized by comprising a processor and a memory, wherein the processor reads a computer program or instructions in the memory to execute the following operations:
acquiring an image of the surface of a single coin to be detected on which invisible pictures and texts are located, and preprocessing the image;
inputting the preprocessed image into a trained coin invisible image-text defect detection model to obtain the probability that each pixel position in the image belongs to a preset defect, and retaining the defect types whose probability is greater than the preset defect probability to generate a defect category map;
and preprocessing the defect category map and performing Blob analysis to determine whether the invisible image-text of the coin to be detected is qualified.
CN202110270942.1A 2021-03-12 2021-03-12 Coin invisible image-text detection method and device Pending CN113033635A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110270942.1A CN113033635A (en) 2021-03-12 2021-03-12 Coin invisible image-text detection method and device

Publications (1)

Publication Number Publication Date
CN113033635A true CN113033635A (en) 2021-06-25

Family

ID=76470396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110270942.1A Pending CN113033635A (en) 2021-03-12 2021-03-12 Coin invisible image-text detection method and device

Country Status (1)

Country Link
CN (1) CN113033635A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113554633A (en) * 2021-07-30 2021-10-26 深圳中科飞测科技股份有限公司 Defect clustering method and device, detection device and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107966447A (en) * 2017-11-14 2018-04-27 浙江大学 A kind of Surface Flaw Detection method based on convolutional neural networks
CN109389615A (en) * 2018-09-29 2019-02-26 佳都新太科技股份有限公司 Coin discriminating method and processing terminal based on deep learning convolutional neural networks
WO2019104767A1 (en) * 2017-11-28 2019-06-06 河海大学常州校区 Fabric defect detection method based on deep convolutional neural network and visual saliency
CN110415425A (en) * 2019-07-16 2019-11-05 广州广电运通金融电子股份有限公司 Detecting of coin and recognition methods, system and storage medium based on image
CN110689011A (en) * 2019-09-29 2020-01-14 河北工业大学 Solar cell panel defect detection method of multi-scale combined convolution neural network
CN111582294A (en) * 2019-03-05 2020-08-25 慧泉智能科技(苏州)有限公司 Method for constructing convolutional neural network model for surface defect detection and application thereof
CN111583502A (en) * 2020-05-08 2020-08-25 辽宁科技大学 Renminbi (RMB) crown word number multi-label identification method based on deep convolutional neural network
CN111612763A (en) * 2020-05-20 2020-09-01 重庆邮电大学 Mobile phone screen defect detection method, device and system, computer equipment and medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Jianwei et al., "Research on a morphology-based defect detection algorithm for coin mirror regions", Journal of Chengdu University (Natural Science Edition), vol. 35, no. 3, pages 245-247 *


Similar Documents

Publication Publication Date Title
CN107808358B (en) Automatic detection method for image watermark
CN110060237B (en) Fault detection method, device, equipment and system
CN107123111B (en) Deep residual error network construction method for mobile phone screen defect detection
CN110473179B (en) Method, system and equipment for detecting surface defects of thin film based on deep learning
CN115082683A (en) Injection molding defect detection method based on image processing
CN108596166A (en) A kind of container number identification method based on convolutional neural networks classification
CN106875381A (en) A kind of phone housing defect inspection method based on deep learning
CN104992496A (en) Paper money face identification method and apparatus
CN111325717B (en) Mobile phone defect position identification method and equipment
CN107677679A (en) Sorting technique and device the defects of L0 pictures in a kind of AOI detection
CN115147409A (en) Mobile phone shell production quality detection method based on machine vision
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN113435407B (en) Small target identification method and device for power transmission system
CN112257709A (en) Signboard photo auditing method and device, electronic equipment and readable storage medium
CN114581432A (en) Tongue appearance tongue image segmentation method based on deep learning
CN114612418A (en) Method, device and system for detecting surface defects of mouse shell and electronic equipment
CN113033635A (en) Coin invisible image-text detection method and device
CN108806058A (en) A kind of paper currency detecting method and device
CN107301718B (en) A kind of image matching method and device
CN107024480A (en) A kind of stereoscopic image acquisition device
CN116681677A (en) Lithium battery defect detection method, device and system
CN116958960A (en) Egg dark spot detection method based on machine learning random forest algorithm
CN110910497A (en) Method and system for realizing augmented reality map
CN116228659A (en) Visual detection method for oil leakage of EMS trolley
CN115239663A (en) Method and system for detecting defects of contact lens, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination