CN110633733A - Intelligent image matching method and device and computer readable storage medium - Google Patents

Intelligent image matching method and device and computer readable storage medium

Info

Publication number
CN110633733A
CN110633733A (application CN201910762047.4A)
Authority
CN
China
Prior art keywords
original image
scale
feature set
image
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910762047.4A
Other languages
Chinese (zh)
Other versions
CN110633733B (en)
Inventor
王博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd filed Critical Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN201910762047.4A priority Critical patent/CN110633733B/en
Publication of CN110633733A publication Critical patent/CN110633733A/en
Application granted granted Critical
Publication of CN110633733B publication Critical patent/CN110633733B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F 18/22 — Physics; Computing; Electric digital data processing; Pattern recognition; Analysing; Matching criteria, e.g. proximity measures
    • G06F 18/23 — Pattern recognition; Analysing; Clustering techniques
    • G06F 18/24137 — Pattern recognition; Classification techniques relating to the classification model; approaches based on distances to training or reference patterns; distances to cluster centroids
    • G06V 10/20 — Image or video recognition or understanding; Image preprocessing
    • G06V 10/30 — Image preprocessing; Noise filtering
    • G06V 10/462 — Extraction of image or video features; Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an artificial intelligence technology, and discloses an intelligent image matching method, which comprises the following steps: receiving an original image set, and carrying out preprocessing operations including contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set; performing scale invariant feature transformation on the preprocessed image set to obtain a scale invariant feature set, and performing clustering operation on the scale invariant feature set to obtain an optimized feature set; inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key-value pair relation between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversal result, and outputting a matching result of the original image according to the key-value pair relation and the traversal result. The invention also provides an image intelligent matching device and a computer readable storage medium. The invention can realize rapid and accurate image matching.

Description

Intelligent image matching method and device and computer readable storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a method and a device for obtaining an image matching result based on intelligent image similarity calculation and a computer readable storage medium.
Background
Image matching starts from an acquired image set: a matching algorithm finds similarity and consistency of texture, features, structure and the like among the images, and similar images are thereby identified. Image matching is a hot topic in the image field, and high-precision image matching benefits follow-up work such as image stitching, target tracking and target recognition. Current image matching algorithms can be divided into 3 major categories: grayscale-based, transform-domain-based and feature-based image matching methods. The feature-based matching method has good noise resistance and good robustness to rotation and occlusion of objects in the image. However, when the feature-based matching method is used, searching the images is time-consuming and labor-intensive, and the matching accuracy needs to be further improved.
Disclosure of Invention
The invention provides an intelligent image matching method, an intelligent image matching device and a computer-readable storage medium, and mainly aims to present a quick and accurate image matching result to a user when the user inputs an image set.
In order to achieve the above object, the present invention provides an intelligent image matching method, which includes:
receiving an original image set, and carrying out preprocessing operations including contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set;
performing scale invariant feature transformation on the preprocessed image set to obtain a scale invariant feature set, and performing clustering operation on the scale invariant feature set to obtain an optimized feature set;
inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key-value pair relation between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversal result, and outputting a matching result of the original image according to the key-value pair relation and the traversal result.
Optionally, the contrast enhancement method is:
D_b = f(D_a) = a * D_a + b

wherein D_a represents the gray value of the input image, D_b represents the gray value of the output image, a is the linear slope and b is the intercept; if a is greater than or equal to 1, the contrast of D_b is enhanced compared with D_a, and if a is less than 1, the contrast of D_b is weakened compared with D_a;
the noise reduction method is:

g(x, y) = η(x, y) + f(x, y)

f(x, y) = g(x, y) - (σ_η² / σ_L²) * [g(x, y) - m_L]

wherein (x, y) represents the coordinates of a pixel point in the original image set, f(x, y) is the output data after the noise reduction processing, η(x, y) is the noise, g(x, y) is the original image set, σ_η² is the total noise variance of the original image set, m_L is the gray-level mean of the neighborhood L of (x, y), σ_L² is the gray-level variance of the neighborhood L of (x, y), and L denotes the neighborhood of the current pixel point.
Optionally, the performing scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set, including:
establishing a spatial function from the set of pre-processed images I (x, y), the spatial function L (x, y, σ) being:
L(x,y,σ)=G(x,y,σ)*I(x,y)
wherein (x, y) represents coordinates of pixel points in the original image set, σ represents a scale parameter, and G (x, y, σ) is a Gaussian function of the preprocessed image set I (x, y);
establishing a Gaussian difference function according to the space function, and solving each extreme point based on the Gaussian difference function, wherein a set formed by each extreme point is called a scale invariant feature set, and the Gaussian difference function D (x, y, sigma) is as follows:
D(x,y,σ)=[G(x,y,nσ)-G(x,y,σ)]*I(x,y)=L(x,y,nσ)-L(x,y,σ)
where n is the multiplicative factor between adjacent scale layers, so that nσ denotes the scale layer adjacent to σ; G(x, y, nσ) represents the Gaussian function at the scale layer adjacent to G(x, y, σ), and L(x, y, nσ) represents the spatial function at the scale layer adjacent to L(x, y, σ).
Optionally, the clustering operation comprises randomizing the class center positions and optimizing the class center positions;
the randomization of the class center position comprises determining the number of the class centers and randomly generating the coordinate position of the class center;
the class center positions are optimized according to the distance dist(x_i, x_j):

dist(x_i, x_j) = sqrt( Σ_{d=1}^{D} (x_{i,d} - x_{j,d})² )

wherein x_i, x_j are data of the scale-invariant feature set, dist(x_i, x_j) is the positional distance between data of the scale-invariant feature set, D is the number of class centers, and x_{i,d}, x_{j,d} represent the data of the scale-invariant feature set under each class center.
Optionally, establishing a key-value pair relationship between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversal result, and outputting a matching result for the original image according to the key-value pair relationship and the traversal result, where the method includes:
dividing the optimization feature set into a plurality of small feature sets by a MapReduce distributed programming model in the Hadoop big data processing library, and distributing the plurality of small feature sets to each subtask in a cluster in the Hadoop big data processing library;
each subtask finds a corresponding original image according to the received small feature set to complete the establishment of the key-value pair relation;
and converting each feature in the optimized feature set into a fixed-length hash set according to a hash algorithm, traversing the hash set to obtain traversal similarities, and outputting the matching result of the original image set according to the traversal similarities in combination with the key-value pair relation.
In addition, in order to achieve the above object, the present invention further provides an intelligent image matching device, which includes a memory and a processor, wherein the memory stores an intelligent image matching program operable on the processor, and the intelligent image matching program implements the following steps when executed by the processor:
receiving an original image set, and carrying out preprocessing operations including contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set;
performing scale invariant feature transformation on the preprocessed image set to obtain a scale invariant feature set, and performing clustering operation on the scale invariant feature set to obtain an optimized feature set;
inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key-value pair relation between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversal result, and outputting a matching result of the original image according to the key-value pair relation and the traversal result.
Optionally, the contrast enhancement method is:
D_b = f(D_a) = a * D_a + b

wherein D_a represents the gray value of the input image, D_b represents the gray value of the output image, a is the linear slope and b is the intercept; if a is greater than or equal to 1, the contrast of D_b is enhanced compared with D_a, and if a is less than 1, the contrast of D_b is weakened compared with D_a;
the noise reduction method is:

g(x, y) = η(x, y) + f(x, y)

f(x, y) = g(x, y) - (σ_η² / σ_L²) * [g(x, y) - m_L]

wherein (x, y) represents the coordinates of a pixel point in the original image set, f(x, y) is the output data after the noise reduction processing, η(x, y) is the noise, g(x, y) is the original image set, σ_η² is the total noise variance of the original image set, m_L is the gray-level mean of the neighborhood L of (x, y), σ_L² is the gray-level variance of the neighborhood L of (x, y), and L denotes the neighborhood of the current pixel point.
Optionally, the performing scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set, including:
establishing a spatial function from the set of pre-processed images I (x, y), the spatial function L (x, y, σ) being:
L(x,y,σ)=G(x,y,σ)*I(x,y)
wherein (x, y) represents coordinates of pixel points in the original image set, σ represents a scale parameter, and G (x, y, σ) is a Gaussian function of the preprocessed image set I (x, y);
establishing a Gaussian difference function according to the space function, and solving each extreme point based on the Gaussian difference function, wherein a set formed by each extreme point is called a scale invariant feature set, and the Gaussian difference function D (x, y, sigma) is as follows:
D(x,y,σ)=[G(x,y,nσ)-G(x,y,σ)]*I(x,y)=L(x,y,nσ)-L(x,y,σ)
where n is the multiplicative factor between adjacent scale layers, so that nσ denotes the scale layer adjacent to σ; G(x, y, nσ) represents the Gaussian function at the scale layer adjacent to G(x, y, σ), and L(x, y, nσ) represents the spatial function at the scale layer adjacent to L(x, y, σ).
Optionally, the clustering operation comprises randomizing the class center positions and optimizing the class center positions;
the randomization of the class center position comprises determining the number of the class centers and randomly generating the coordinate position of the class center;
the class center positions are optimized according to the distance dist(x_i, x_j):

dist(x_i, x_j) = sqrt( Σ_{d=1}^{D} (x_{i,d} - x_{j,d})² )

wherein x_i, x_j are data of the scale-invariant feature set, dist(x_i, x_j) is the positional distance between data of the scale-invariant feature set, D is the number of class centers, and x_{i,d}, x_{j,d} represent the data of the scale-invariant feature set under each class center.
In addition, to achieve the above object, the present invention also provides a computer readable storage medium, which stores thereon an intelligent image matching program, which can be executed by one or more processors to implement the steps of the intelligent image matching method as described above.
According to the intelligent image matching method, the intelligent image matching device and the computer readable storage medium, establishing the key-value pair relation allows the original image set to be accessed directly, the features are traversed faster in the Hadoop big data processing library, and the locating capability can be fully utilized for data positioning. Ordinary image binarization loses much useful information in the original image; compared with it, the local dynamic threshold binarization operation retains more useful information, thereby improving the accuracy of image matching. The clustering operation maintains flexibility and efficiency and converges to a local minimum, realizing efficient clustering of the data and further improving the matching precision. Therefore, the invention can realize fast and accurate image matching for users.
Drawings
Fig. 1 is a schematic flowchart of an intelligent image matching method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an internal structure of an intelligent image matching device according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of an intelligent image matching program in the intelligent image matching device according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides an intelligent image matching method. Fig. 1 is a schematic flow chart of an image intelligent matching method according to an embodiment of the present invention. The method may be performed by an apparatus, which may be implemented by software and/or hardware.
In this embodiment, the image intelligent matching method includes:
s1, receiving an original image set, and carrying out preprocessing operations including contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set.
In the preferred embodiment of the present invention, the original image set includes different types of pictures, such as animal images, face images and landscape images, and the final objective of the present invention is to find the picture set with the highest matching degree from the original image set. For example, if a golden retriever appears in 3 pictures of the original image set, the output result includes the 3 best-matching golden retriever pictures.
In a preferred embodiment of the present invention, the method for enhancing contrast ratio comprises:
D_b = f(D_a) = a * D_a + b

wherein D_a represents the gray value of the input image, D_b represents the gray value of the output image, a is the linear slope and b is the intercept. If a is greater than or equal to 1, the contrast of D_b is enhanced compared with D_a; if a is less than 1, the contrast of D_b is weakened compared with D_a.
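As a minimal sketch (not part of the claimed method), the linear contrast transform above can be written as follows; the values of a and b are illustrative:

```python
import numpy as np

def linear_contrast(img, a=1.5, b=10.0):
    """Linear point transform D_b = a * D_a + b.

    a >= 1 enhances contrast, a < 1 weakens it; the result is clipped
    back to the 8-bit gray range [0, 255].
    """
    out = a * img.astype(np.float64) + b
    return np.clip(out, 0, 255).astype(np.uint8)

gray = np.array([[10, 100], [150, 250]], dtype=np.uint8)
enhanced = linear_contrast(gray, a=1.5, b=10.0)
```

With a = 1.5 and b = 10, the gray value 10 maps to 25, while 250 saturates at 255 after clipping.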
In a preferred embodiment of the present invention, the denoising process is:
g(x, y) = η(x, y) + f(x, y)

f(x, y) = g(x, y) - (σ_η² / σ_L²) * [g(x, y) - m_L]

wherein (x, y) represents the coordinates of a pixel point in the original image set, f(x, y) is the output data after the noise reduction processing, η(x, y) is the noise, g(x, y) is the original image set, σ_η² is the total noise variance of the original image set, m_L is the gray-level mean of the neighborhood L of (x, y), σ_L² is the gray-level variance of the neighborhood L of (x, y), and L denotes the neighborhood of the current pixel point.
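The adaptive noise-reduction formula above can be sketched in plain Python; the 3 x 3 neighborhood size and the assumption that the noise variance σ_η² is known in advance are illustrative choices, not fixed by the description:

```python
import numpy as np

def adaptive_denoise(g, noise_var, ksize=3):
    """Adaptive local noise reduction:
    f(x, y) = g(x, y) - (sigma_eta^2 / sigma_L^2) * (g(x, y) - m_L),
    with m_L and sigma_L^2 the mean and variance of the ksize x ksize
    neighborhood L around (x, y)."""
    g = g.astype(np.float64)
    pad = ksize // 2
    padded = np.pad(g, pad, mode="reflect")
    out = np.empty_like(g)
    h, w = g.shape
    for y in range(h):
        for x in range(w):
            win = padded[y:y + ksize, x:x + ksize]
            m_L = win.mean()
            var_L = win.var()
            # Cap the ratio at 1 so the correction never overshoots the mean.
            ratio = min(noise_var / var_L, 1.0) if var_L > 0 else 1.0
            out[y, x] = g[y, x] - ratio * (g[y, x] - m_L)
    return out

flat = np.full((5, 5), 7.0)
denoised = adaptive_denoise(flat, noise_var=4.0)
```

In flat regions (σ_L² at or below σ_η²) each pixel is pulled to the local mean, while strong edges (σ_L² much larger than σ_η²) are left nearly untouched.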
In a preferred embodiment of the present invention, the local threshold binarization determines whether the pixel difference between neighboring pixels in the original image set is greater than a preset binarization threshold; if it is, the pixel values are changed to 0 or 1. For example, if two pixels A and B in a picture of the original image set are neighbors and the pixel difference between A and B is greater than the preset binarization threshold, the pixel values of A and B are both changed to 0 or 1, completing the local threshold binarization processing.
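A possible single-image sketch of the local dynamic threshold binarization: since the description does not fix the exact rule, this version compares each pixel against the mean of its neighborhood (the 3 x 3 window and zero offset are assumptions):

```python
import numpy as np

def local_threshold_binarize(img, ksize=3, offset=0.0):
    """Local (adaptive) threshold binarization: each pixel is set to 1
    if it exceeds the mean of its ksize x ksize neighborhood plus an
    offset, otherwise 0.  This is one common reading of a local dynamic
    threshold; the exact rule in the description may differ."""
    img = img.astype(np.float64)
    pad = ksize // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + ksize, x:x + ksize].mean()
            out[y, x] = 1 if img[y, x] > local_mean + offset else 0
    return out

img = np.zeros((3, 3))
img[1, 1] = 255.0
mask = local_threshold_binarize(img)
```

A bright pixel in a dark neighborhood is set to 1; the surrounding dark pixels, which fall below their local means, stay 0.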
S2, performing scale invariant feature transformation on the preprocessed image set to obtain a scale invariant feature set, and performing clustering operation on the scale invariant feature set to obtain an optimized feature set.
The preferred embodiment of the present invention establishes a spatial function L (x, y, σ) from the set of preprocessed images I (x, y):
L(x,y,σ)=G(x,y,σ)*I(x,y)
wherein, (x, y) represents the coordinates of image pixel points, σ represents a scale parameter, and G (x, y, σ) is a Gaussian function of the pre-processing image set I (x, y).
Preferably, in a preferred embodiment of the present invention, the gaussian function of the preprocessed image set I (x, y) is:
G(x, y, σ) = (1 / (2πσ²)) * exp( -(x² + y²) / (2σ²) )
further, in the preferred embodiment of the present invention, a gaussian difference function is established according to the spatial function, and each extreme point is solved based on the gaussian difference function, wherein a set formed by each extreme point is referred to as a scale invariant feature set. The gaussian difference function D (x, y, σ) is:
D(x,y,σ)=[G(x,y,nσ)-G(x,y,σ)]*I(x,y)=L(x,y,nσ)-L(x,y,σ)
where n is the multiplicative factor between adjacent scale layers, so that nσ denotes the scale layer adjacent to σ; G(x, y, nσ) represents the Gaussian function at the scale layer adjacent to G(x, y, σ), and L(x, y, nσ) represents the spatial function at the scale layer adjacent to L(x, y, σ).
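The scale space and difference-of-Gaussians above can be sketched with plain NumPy; the separable blur implementation and the factor n = 1.6 between adjacent layers are illustrative choices:

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel for G(x, y, sigma), normalized to sum to 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable convolution: L(x, y, sigma) = G(x, y, sigma) * I(x, y)."""
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def difference_of_gaussians(img, sigma, n=1.6):
    """D(x, y, sigma) = L(x, y, n*sigma) - L(x, y, sigma)."""
    return gaussian_blur(img, n * sigma) - gaussian_blur(img, sigma)

# A single bright point: the DoG response at its center is negative,
# because the wider blur spreads the peak more than the narrow one.
blob = np.zeros((21, 21))
blob[10, 10] = 1.0
dog = difference_of_gaussians(blob, sigma=1.0)
```

Extreme points of D(x, y, σ) across position and scale would then be collected into the scale-invariant feature set.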
Further, in the preferred embodiment of the present invention, the clustering operation is performed on the scale-invariant feature set to obtain an optimized feature set.
In a preferred embodiment of the present invention, the clustering includes randomizing the class center locations and optimizing the class center locations. The randomizing the category center position comprises determining the number of category centers and randomly generating the coordinate position of the category center, wherein the number of the category centers is the sum of the category of the basic data set and the category of the scene data set;
the class center positions are optimized according to the distance dist(x_i, x_j):

dist(x_i, x_j) = sqrt( Σ_{d=1}^{D} (x_{i,d} - x_{j,d})² )

wherein x_i, x_j are data of the scale-invariant feature set, dist(x_i, x_j) is the positional distance between data of the scale-invariant feature set, D is the number of class centers, and x_{i,d}, x_{j,d} represent the data of the scale-invariant feature set under each class center.
The preferred embodiment of the present invention obtains the optimized feature set J by using the sum of squared errors criterion according to the optimized class center position. The sum of squared errors criterion is:
J = Σ_{k=1}^{K} Σ_{x_i ∈ C_k} dist(x_i, μ_k)²

wherein K is the number of class centers, C_k is the set of scale-invariant features assigned to the k-th class center, and μ_k is the position of the k-th class center.
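A minimal k-means sketch of the clustering step (random class centers, Euclidean distance, sum-of-squared-errors J); the toy 2-D points stand in for real scale-invariant features:

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Minimal k-means: random class centers, then alternate assignment
    by Euclidean distance and center updates; returns centers, labels
    and the sum-of-squared-errors J."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # dist(x_i, c_k) = sqrt(sum_d (x_{i,d} - c_{k,d})^2)
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    sse = sum(((features[labels == j] - centers[j]) ** 2).sum() for j in range(k))
    return centers, labels, sse

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers, labels, sse = kmeans(pts, 2)
```

After convergence the two tight point pairs end up in separate clusters and J is small, which is the criterion the optimized feature set is selected by.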
S3, inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key-value pair relation between the optimized feature set and the original image set by the Hadoop big data processing library, traversing the optimized feature set based on a Hash lookup method to obtain a traversal result, and outputting a matching result of the original image according to the key-value pair relation and the traversal result.
In the preferred embodiment of the present invention, a key-value pair relationship is established for the optimized feature set and the original image set through a MapReduce distributed programming model and a cluster in the Hadoop big data processing library, where the expression form of the key-value pair relationship is < key, value >, where key is each image in the original image set, and value is an optimized feature corresponding to each image in the original image set.
The MapReduce distributed programming model divides the optimization feature set into a plurality of small feature sets, then distributes the small feature sets to each subtask in the cluster, and each subtask finds a corresponding original image according to the received small feature sets to complete the establishment of the key-value pair relationship. Due to the Hadoop big data processing library, even if the data volume of the original image set is huge, the establishment speed of the key-value pair relation is not greatly influenced.
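On a single machine, the <key, value> relation built by the MapReduce step can be imitated with a plain dictionary; the image identifiers and feature values below are hypothetical, and a real deployment would distribute the small feature sets across Hadoop subtasks instead:

```python
def build_key_value_pairs(image_ids, optimized_features):
    """Map each original image (key) to its optimized feature (value),
    i.e. the <key, value> form described above."""
    return {img_id: feat for img_id, feat in zip(image_ids, optimized_features)}

# Hypothetical identifiers and features, for illustration only.
pairs = build_key_value_pairs(["img_001", "img_002"], [[0.1, 0.2], [0.3, 0.4]])
```

Once the relation exists, any feature found during traversal leads directly back to its original image.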
Since the data lengths of the features in the optimized feature set differ, which makes matching comparison difficult, they need to be unified to a fixed length. In the preferred embodiment of the invention, the data length of each feature in the optimized feature set is converted into a fixed-length hash set according to a hash algorithm; the hash set is traversed to obtain traversal similarities, and the matching result is output according to the traversal similarities.
Preferably, the hash algorithm employs the MD5 Message-Digest Algorithm, which converts each optimized feature into hexadecimal data and then, by padding the data, reaches the fixed-length hash set. The padding is performed as follows:

original hexadecimal data + padding (1-512 bits) = fixed-length hash set
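A sketch of the fixed-length conversion with Python's standard hashlib: MD5 internally pads its input to a multiple of 512 bits and always emits a 128-bit (32 hexadecimal character) digest, regardless of the feature's original length. The string serialization of the feature vector is an illustrative choice:

```python
import hashlib

def feature_digest(feature):
    """Hash a variable-length feature vector to a fixed-length hex string.

    MD5 pads the message to a multiple of 512 bits internally and always
    returns 32 hexadecimal characters, so features of different lengths
    become directly comparable."""
    raw = ",".join(f"{v:.6f}" for v in feature).encode("utf-8")
    return hashlib.md5(raw).hexdigest()

d1 = feature_digest([0.1, 0.2, 0.3])
d2 = feature_digest([0.1, 0.2, 0.3, 0.4, 0.5])
```

Both digests have the same length even though the input features do not, which is exactly what the traversal step requires.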
The data form of the hash set in the preferred embodiment of the present invention is as follows:
1、e695606fc5e31b2ff9038a48a3d363f4c21a3d86;
2、f58da9a820e3fd9d84ab2ca2f1b467ac265038f93;
3、a0c641e92b10d8bcca1ed1bf84ca80340fdefee6。
The preferred embodiment of the present invention traverses the hash data in the hash set according to positional relationships: for example, the rightmost values of hash data 1 and hash data 3 are both 6, so one is added to the traversal score. All hash data are sorted by traversal score from large to small to obtain a score set, and the entries greater than a preset score threshold are extracted to obtain the final traversal result set. Each traversal result in the traversal result set corresponds to a piece of hash data, each piece of hash data is derived from transforming a feature of the optimized feature set to a fixed length, and each feature of the optimized feature set has a key-value pair relation with the original image set in the Hadoop big data processing library; therefore, the matching result of the original image can be obtained and output according to the key-value pair relation and the traversal result. For example, if 3 values in score set A are 34, 38 and 39 and the preset score threshold A is 33, the traversal result set A extracts these 3 values; they correspond to features in optimized feature set A, and those features have a key-value pair relation with original image set A in the Hadoop big data processing library, so three original images can be extracted from original image set A to complete the intelligent matching.
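The position-wise traversal scoring and threshold filtering described above can be sketched as follows; the toy hash strings and the threshold value are illustrative, not real digests:

```python
def traversal_score(h1, h2):
    """Position-wise similarity of two equal-length hash strings:
    one point for every position where the characters agree
    (the '+1 to the traversal score' rule above)."""
    return sum(c1 == c2 for c1, c2 in zip(h1, h2))

def match_above_threshold(query_hash, hash_set, threshold):
    """Score every stored hash against the query, sort scores from large
    to small, and keep only entries above the preset score threshold.
    The names here are illustrative, not from the description."""
    scored = [(key, traversal_score(query_hash, h)) for key, h in hash_set.items()]
    scored.sort(key=lambda kv: kv[1], reverse=True)
    return [(key, s) for key, s in scored if s > threshold]

hs = {"img_1": "abcd12", "img_2": "abzz12"}
res = match_above_threshold("abcd12", hs, 4)
```

The surviving keys are then mapped back to original images through the <key, value> relation to produce the final matching result.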
The invention also provides an intelligent image matching device. Fig. 2 is a schematic diagram of an internal structure of an intelligent image matching device according to an embodiment of the present invention.
In the present embodiment, the image intelligent matching device 1 may be a PC (Personal Computer), a terminal device such as a smart phone, a tablet Computer, or a mobile Computer, or may be a server. The image intelligent matching device 1 at least comprises a memory 11, a processor 12, a communication bus 13 and a network interface 14.
The memory 11 includes at least one type of readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the image intelligent matching apparatus 1, for example a hard disk of the image intelligent matching apparatus 1. The memory 11 may also be an external storage device of the image intelligent matching apparatus 1 in other embodiments, such as a plug-in hard disk provided on the image intelligent matching apparatus 1, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 11 may also include both an internal storage unit and an external storage device of the image smart matching apparatus 1. The memory 11 may be used not only to store application software installed in the intelligent image matching apparatus 1 and various types of data, such as a code of the intelligent image matching program 01, but also to temporarily store data that has been output or is to be output.
Processor 12, which in some embodiments may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data Processing chip, is configured to execute program code or process data stored in memory 11, such as executing image intelligent matching program 01.
The communication bus 13 is used to realize connection communication between these components.
The network interface 14 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), typically used to establish a communication link between the apparatus 1 and other electronic devices.
Optionally, the apparatus 1 may further comprise a user interface, which may comprise a Display (Display), an input unit such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-emitting diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the image intelligent matching apparatus 1 and for displaying a visual user interface.
Fig. 2 shows only the intelligent image matching apparatus 1 with the components 11-14 and the intelligent image matching program 01. It will be understood by those skilled in the art that the structure shown in fig. 2 does not constitute a limitation of the intelligent image matching apparatus 1, which may include fewer or more components than those shown, combine some components, or arrange the components differently.
In the embodiment of the apparatus 1 shown in fig. 2, the memory 11 stores an image intelligent matching program 01; the processor 12, when executing the image intelligent matching program 01 stored in the memory 11, implements the following steps:
the method comprises the steps of receiving an original image set, and carrying out preprocessing operations including contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set.
In the preferred embodiment of the present invention, the original image set includes different types of pictures, such as animal images, face images and landscape images. The final objective of the present invention is to find the picture set with the highest matching degree from the original image set; for example, if the original image set contains 3 pictures of a golden retriever, the output result includes those 3 golden retriever pictures with the highest matching degree.
In a preferred embodiment of the present invention, the contrast enhancement method is:
Db=f(Da)=a*Da+b
wherein Da represents the gray value of the input image, Db represents the gray value of the output image, a is the linear slope and b is the intercept. If a is greater than or equal to 1, the contrast of Db is enhanced compared with Da; if a is less than 1, the contrast of Db is weakened compared with Da.
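As an illustration (not part of the patent text), the linear transform above can be sketched in NumPy; clipping to the 8-bit gray range [0, 255] is an added assumption needed to keep the output a valid gray image:

```python
import numpy as np

def enhance_contrast(image: np.ndarray, a: float, b: float) -> np.ndarray:
    """Db = a * Da + b, clipped to the 8-bit gray range [0, 255]."""
    out = a * image.astype(np.float64) + b
    return np.clip(out, 0, 255).astype(np.uint8)

gray = np.array([[10, 100], [150, 240]], dtype=np.uint8)
enhanced = enhance_contrast(gray, a=1.5, b=10.0)
# a >= 1 stretches contrast: [[25, 160], [235, 255]] (240 clips to 255)
```

With a = 1.5 the gray-level differences widen (contrast enhancement); a value a < 1 would compress them instead.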
In a preferred embodiment of the present invention, the denoising process is:
g(x,y)=η(x,y)+f(x,y)
f(x, y) = g(x, y) − (ση²/σL²)·[g(x, y) − μL]

wherein (x, y) represents the coordinates of pixel points in the original image set, f(x, y) is the output data after the noise reduction processing, η(x, y) is the noise, g(x, y) is the original image set, ση² is the total noise variance of the original image set, μL is the pixel gray mean of (x, y), σL² is the pixel gray variance of (x, y), and L represents the coordinates of the current pixel point.
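The definitions above (total noise variance, local gray mean, local gray variance) correspond to the standard adaptive local noise-reduction filter; the following NumPy sketch assumes that reading, with a 3×3 neighborhood and a variance ratio capped at 1 as illustrative choices:

```python
import numpy as np

def adaptive_denoise(g: np.ndarray, noise_var: float, win: int = 3) -> np.ndarray:
    """f(x, y) = g(x, y) - (sigma_eta^2 / sigma_L^2) * [g(x, y) - mu_L],
    with mu_L and sigma_L^2 taken over a win x win neighborhood L."""
    pad = win // 2
    gp = np.pad(g.astype(np.float64), pad, mode="edge")
    out = np.empty(g.shape, dtype=np.float64)
    for y in range(g.shape[0]):
        for x in range(g.shape[1]):
            local = gp[y:y + win, x:x + win]
            mu, var = local.mean(), local.var()
            # Cap the ratio at 1 so the filter never overshoots when
            # the noise variance exceeds the local variance
            ratio = 1.0 if var == 0 else min(noise_var / var, 1.0)
            out[y, x] = g[y, x] - ratio * (g[y, x] - mu)
    return out

flat = np.full((4, 4), 7.0)
# On a constant image the local mean equals each pixel, so the output is unchanged
```

In flat regions the ratio reaches 1 and the pixel is replaced by its neighborhood mean; near strong edges the local variance dominates and the pixel is left mostly untouched.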
In a preferred embodiment of the present invention, the local threshold binarization determines whether the pixel difference between neighboring pixels in the original image set is greater than a preset binarization threshold, and if so, changes the pixel values to 0 or 1. For example, if two pixels A and B in a picture of the original image set are in a neighborhood relationship, and the pixel difference between A and B is greater than the preset binarization threshold, the pixel values of A and B are both changed to 0 or 1, completing the local threshold binarization processing.
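A minimal sketch of one plausible reading of this neighborhood-difference binarization (the choice of the right-hand neighbor, and of emitting 1 for large differences, are assumptions not fixed by the text):

```python
import numpy as np

def local_threshold_binarize(img: np.ndarray, thresh: int) -> np.ndarray:
    """A pixel becomes 1 when it differs from its right-hand neighbor by
    more than `thresh`, else 0 (the last column is padded with itself)."""
    x = img.astype(np.int32)
    d = np.abs(np.diff(x, axis=1, append=x[:, -1:]))
    return (d > thresh).astype(np.uint8)

mask = local_threshold_binarize(np.array([[10, 200, 205]]), thresh=20)
# mask == [[1, 0, 0]]: only the 10 -> 200 jump exceeds the threshold
```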
And secondly, performing scale invariant feature transformation on the preprocessed image set to obtain a scale invariant feature set, and performing clustering operation on the scale invariant feature set to obtain an optimized feature set.
The preferred embodiment of the present invention establishes a spatial function L (x, y, σ) from the set of preprocessed images I (x, y):
L(x,y,σ)=G(x,y,σ)*I(x,y)
wherein, (x, y) represents the coordinates of image pixel points, σ represents a scale parameter, and G (x, y, σ) is a Gaussian function of the pre-processing image set I (x, y).
In a preferred embodiment of the present invention, the Gaussian function of the preprocessed image set I(x, y) is:

G(x, y, σ) = (1/(2πσ²)) · e^(−(x²+y²)/(2σ²))
further, in the preferred embodiment of the present invention, a gaussian difference function is established according to the spatial function, and each extreme point is solved based on the gaussian difference function, wherein a set formed by each extreme point is referred to as a scale invariant feature set. The gaussian difference function D (x, y, σ) is:
D(x,y,σ)=[G(x,y,nσ)-G(x,y,σ)]*I(x,y)=L(x,y,nσ)-L(x,y,σ)
where n is the number of pixels in the neighborhood of (x, y), nσ is a scale layer, G(x, y, nσ) represents the Gaussian function at scale layer nσ, and L(x, y, nσ) represents the spatial function at scale layer nσ.
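The difference-of-Gaussians construction can be illustrated at the kernel level: since the sampled G(nσ) and G(σ) each sum to approximately 1, their difference sums to roughly zero, which is why D(x, y, σ) responds to structure rather than to flat regions. The factor n = 1.6 and the kernel radius below are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def gaussian_kernel(sigma: float, radius: int) -> np.ndarray:
    """Sampled G(x, y, sigma) = exp(-(x^2 + y^2) / (2 sigma^2)) / (2 pi sigma^2)."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)

def dog_kernel(sigma: float, n: float = 1.6, radius: int = 10) -> np.ndarray:
    """G(x, y, n*sigma) - G(x, y, sigma): convolving I with this kernel gives
    D(x, y, sigma) = L(x, y, n*sigma) - L(x, y, sigma)."""
    return gaussian_kernel(n * sigma, radius) - gaussian_kernel(sigma, radius)

k = dog_kernel(1.0)
# Negative at the center, near-zero total sum: a band-pass operator
```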
Further, in the preferred embodiment of the present invention, the clustering operation is performed on the scale-invariant feature set to obtain an optimized feature set.
In a preferred embodiment of the present invention, the clustering includes randomizing the class center locations and optimizing the class center locations. The randomizing the category center position comprises determining the number of category centers and randomly generating the coordinate position of the category center, wherein the number of the category centers is the sum of the category of the basic data set and the category of the scene data set;
the class center positions are optimized according to the distance dist(xi, xj):

dist(xi, xj) = √( Σd=1..D (xi,d − xj,d)² )

wherein xi and xj are data of the scale-invariant feature set, dist(xi, xj) is the position distance between the data, D is the number of the class centers, and xi,d and xj,d represent the data of the scale-invariant feature set under each class center.
The preferred embodiment of the present invention obtains the optimized feature set J from the optimized class center positions by using the sum of squared errors criterion. The sum of squared errors criterion is:

J = Σk=1..K Σi=1..Ck (xi − μk)²

wherein K is the number of the scale-invariant feature sets, Ck is the number of scale-invariant features included in the k-th class center, and μk is the position of the k-th class center.
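The two clustering stages described above (random class centers, then refinement under the Euclidean distance with the sum-of-squared-errors criterion) amount to k-means; a self-contained sketch, with the iteration count and seeding as assumptions:

```python
import numpy as np

def kmeans(X: np.ndarray, k: int, iters: int = 20, seed: int = 0):
    """Randomize class centers, then iteratively optimize them under the
    Euclidean distance dist(xi, xj) = sqrt(sum_d (xi_d - xj_d)^2)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(np.float64)
    for _ in range(iters):
        # Assign every feature to its nearest class center
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Move each center to the mean of its assigned features
        centers = np.array([X[labels == c].mean(axis=0) if (labels == c).any()
                            else centers[c] for c in range(k)])
    # Sum-of-squared-errors criterion J over the final assignment
    sse = sum(float(np.sum((X[labels == c] - centers[c]) ** 2)) for c in range(k))
    return labels, centers, sse
```

On two well-separated groups of features the algorithm recovers one class per group and drives the sum of squared errors to zero.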
Inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key-value pair relation between the optimized feature set and the original image set by the Hadoop big data processing library, traversing the optimized feature set based on a Hash lookup method to obtain a traversal result, and outputting a matching result of the original image according to the key-value pair relation and the traversal result.
In the preferred embodiment of the present invention, a key-value pair relationship is established for the optimized feature set and the original image set through a MapReduce distributed programming model and a cluster in the Hadoop big data processing library, where the expression form of the key-value pair relationship is < key, value >, where key is each image in the original image set, and value is an optimized feature corresponding to each image in the original image set.
The MapReduce distributed programming model divides the optimized feature set into a plurality of small feature sets and distributes them to the subtasks in the cluster; each subtask finds the corresponding original images according to the small feature set it receives, completing the establishment of the key-value pair relation. Thanks to the Hadoop big data processing library, even if the data volume of the original image set is huge, the speed of establishing the key-value pair relation is not greatly affected.
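The split-distribute-group flow can be mimicked in-process; this toy stand-in for the Hadoop MapReduce pipeline assumes the features are already tagged with their image identifier (the chunk layout and names below are illustrative, not the patent's implementation):

```python
from collections import defaultdict

def map_phase(small_feature_sets):
    """Each subtask emits (image_id, feature) pairs for its chunk."""
    for chunk in small_feature_sets:
        for image_id, feature in chunk:
            yield image_id, feature

def reduce_phase(pairs):
    """Group features by image, forming the <key, value> relation
    (key = image, value = its optimized features)."""
    kv = defaultdict(list)
    for image_id, feature in pairs:
        kv[image_id].append(feature)
    return dict(kv)

# Two "small feature sets" as produced by the split step
chunks = [[("img_1", "f_a"), ("img_2", "f_b")], [("img_1", "f_c")]]
kv = reduce_phase(map_phase(chunks))
# kv == {"img_1": ["f_a", "f_c"], "img_2": ["f_b"]}
```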
Since the data lengths of the features in the optimized feature set are different and it is difficult to perform matching comparison, the data lengths of the features in the optimized feature set need to be unified to be a fixed length. In the preferred embodiment of the invention, the data length of each feature in the optimized feature set is converted into the hash set with fixed length according to the hash algorithm, the hash set is traversed to obtain the traversal similarity, and the matching result is output according to the traversal similarity.
Preferably, the hashing algorithm employs the MD5 Message-Digest Algorithm (Message-Digest Algorithm 5), which converts the optimized feature set into an original hexadecimal data set and then, by padding data, obtains the fixed-length hash set. The padding mode is as follows:

original hexadecimal data set + padding (1-512 bits) = hash set
The data form of the hash set in the preferred embodiment of the present invention is as follows:
1、e695606fc5e31b2ff9038a48a3d363f4c21a3d86;
2、f58da9a820e3fd9d84ab2ca2f1b467ac265038f93;
3、a0c641e92b10d8bcca1ed1bf84ca80340fdefee6。
The preferred embodiment of the present invention traverses the hash data in the hash set according to positional relationships. For example, since the rightmost values of hash data 1 and hash data 3 are both 6, one is added to the traversal score. All hash data are then sorted by traversal score from large to small to obtain a score set, and the entries greater than a preset score threshold are extracted to obtain the final traversal result set. Each traversal result in the traversal result set corresponds to a piece of hash data, each piece of hash data is derived by transforming a feature of the optimized feature set to a fixed length, and each feature of the optimized feature set has a key-value pair relation with the original image set in the Hadoop big data processing library; therefore, the matching result of the original images can be obtained and output according to the key-value pair relation and the traversal result. For example, if 3 numerical values in score set A are 34, 38 and 39 respectively, and the preset score threshold A is 33, traversal result set A extracts these 3 numerical values; they all correspond to features in optimized feature set A, and those features have a key-value pair relation with original image set A in the Hadoop big data processing library, so three original images can be extracted from original image set A to complete the intelligent matching.
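The fixed-length hashing and positional scoring can be sketched as follows, using Python's hashlib MD5 (whose 32-hex-digit digest plays the role of the fixed-length hash data); the digit-by-digit right-to-left scoring rule is an illustrative reading of the traversal score, not a definitive implementation:

```python
import hashlib

def to_fixed_length(feature: bytes) -> str:
    """Map a variable-length feature to a fixed 32-hex-digit MD5 digest
    (MD5's internal padding extends the input to a multiple of 512 bits)."""
    return hashlib.md5(feature).hexdigest()

def traversal_score(h1: str, h2: str) -> int:
    """Count positions, compared from the rightmost digit, where the
    two hash strings carry the same hex digit."""
    return sum(1 for a, b in zip(reversed(h1), reversed(h2)) if a == b)

h1 = to_fixed_length(b"optimized-feature-1")
h2 = to_fixed_length(b"optimized-feature-1")
assert traversal_score(h1, h2) == 32  # identical features score a full match
```

Scores would then be sorted in descending order and those above the preset threshold mapped back to original images through the key-value pair relation.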
Alternatively, in other embodiments, the image intelligent matching program may be further divided into one or more modules, and the one or more modules are stored in the memory 11 and executed by one or more processors (in this embodiment, the processor 12) to implement the present invention, where the module referred to in the present invention refers to a series of computer program instruction segments capable of performing a specific function for describing the execution process of the image intelligent matching program in the image intelligent matching apparatus.
For example, referring to fig. 3, a schematic diagram of program modules of an intelligent image matching program in an embodiment of the intelligent image matching device of the present invention is shown, in this embodiment, the intelligent image matching program may be divided into a data receiving module 10, a feature extracting module 20, and a matching result outputting module 30, which exemplarily:
The data receiving module 10 is configured to: receive an original image set, and perform preprocessing operations including contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set.
The feature extraction module 20 is configured to: perform scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set, and perform a clustering operation on the scale-invariant feature set to obtain an optimized feature set.
The matching result output module 30 is configured to: inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key-value pair relation between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversal result, and outputting a matching result of the original image according to the key-value pair relation and the traversal result.
The functions or operation steps implemented by the program modules such as the data receiving module 10, the feature extracting module 20, and the matching result outputting module 30 when executed are substantially the same as those of the above embodiments, and are not repeated herein.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where an image intelligent matching program is stored on the computer-readable storage medium, where the image intelligent matching program is executable by one or more processors to implement the following operations:
receiving an original image set, and carrying out preprocessing operations including contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set;
performing scale invariant feature transformation on the preprocessed image set to obtain a scale invariant feature set, and performing clustering operation on the scale invariant feature set to obtain an optimized feature set;
inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key-value pair relation between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversal result, and outputting a matching result of the original image according to the key-value pair relation and the traversal result.
The embodiment of the computer-readable storage medium of the present invention is substantially the same as the embodiments of the image intelligent matching apparatus and method, and will not be described herein again.
It should be noted that the above-mentioned numbers of the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments. And the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An intelligent image matching method, characterized in that the method comprises:
receiving an original image set, and carrying out preprocessing operations including contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set;
performing scale invariant feature transformation on the preprocessed image set to obtain a scale invariant feature set, and performing clustering operation on the scale invariant feature set to obtain an optimized feature set;
inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key-value pair relation between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversal result, and outputting a matching result of the original image according to the key-value pair relation and the traversal result.
2. The intelligent image matching method according to claim 1, wherein the contrast enhancement method is as follows:
Db=f(Da)=a*Da+b
wherein Da represents the gray value of the input image, Db represents the gray value of the output image, a is the linear slope, and b is the intercept; if a is greater than or equal to 1, the contrast of Db is enhanced compared with Da, and if a is less than 1, the contrast of Db is weakened compared with Da;
the method for denoising comprises the following steps:
g(x,y)=η(x,y)+f(x,y)
f(x, y) = g(x, y) − (ση²/σL²)·[g(x, y) − μL]

wherein (x, y) represents the coordinates of pixel points in the original image set, f(x, y) is the output data after the noise reduction processing, η(x, y) is the noise, g(x, y) is the original image set, ση² is the total noise variance of the original image set, μL is the pixel gray mean of (x, y), σL² is the pixel gray variance of (x, y), and L represents the coordinates of the current pixel point.
3. The intelligent image matching method according to claim 1 or 2, wherein the performing of scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set comprises:
establishing a spatial function from the set of pre-processed images I (x, y), the spatial function L (x, y, σ) being:
L(x,y,σ)=G(x,y,σ)*I(x,y)
wherein (x, y) represents coordinates of pixel points in the original image set, σ represents a scale parameter, and G (x, y, σ) is a Gaussian function of the preprocessed image set I (x, y);
establishing a Gaussian difference function according to the space function, and solving each extreme point based on the Gaussian difference function, wherein a set formed by each extreme point is called a scale invariant feature set, and the Gaussian difference function D (x, y, sigma) is as follows:
D(x,y,σ)=[G(x,y,nσ)-G(x,y,σ)]*I(x,y)=L(x,y,nσ)-L(x,y,σ)
where n is the number of pixels in the neighborhood of (x, y), nσ is a scale layer, G(x, y, nσ) represents the Gaussian function at scale layer nσ, and L(x, y, nσ) represents the spatial function at scale layer nσ.
4. The intelligent image matching method of claim 3, wherein the clustering operation includes randomizing the class center positions and optimizing the class center positions;
the randomization of the class center position comprises determining the number of the class centers and randomly generating the coordinate position of the class center;
the class center positions are optimized according to the distance dist(xi, xj):

dist(xi, xj) = √( Σd=1..D (xi,d − xj,d)² )

wherein xi and xj are data of the scale-invariant feature set, dist(xi, xj) is the position distance between the data, D is the number of the class centers, and xi,d and xj,d represent the data of the scale-invariant feature set under each class center.
5. The intelligent image matching method according to claim 1, wherein the establishing of a key-value pair relationship between the optimized feature set and the original image set and the traversal of the optimized feature set to obtain a traversal result, and the outputting of the matching result for the original image according to the key-value pair relationship and the traversal result comprise:
dividing the optimization feature set into a plurality of small feature sets by a MapReduce distributed programming model in the Hadoop big data processing library, and distributing the plurality of small feature sets to each subtask in a cluster in the Hadoop big data processing library;
each subtask finds a corresponding original image according to the received small feature set to complete the establishment of the key-value pair relation;
and according to a hash algorithm, converting the data length of each feature in the optimized feature set into a hash set with a fixed length, traversing the hash set to obtain traversal similarity, and outputting a matching result of the original image set according to the traversal similarity and by combining the key-value pair relation.
6. An intelligent image matching device, comprising a memory and a processor, wherein the memory stores an intelligent image matching program operable on the processor, and the intelligent image matching program, when executed by the processor, implements the following steps:
receiving an original image set, and carrying out preprocessing operations including contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set;
performing scale invariant feature transformation on the preprocessed image set to obtain a scale invariant feature set, and performing clustering operation on the scale invariant feature set to obtain an optimized feature set;
inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key-value pair relation between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversal result, and outputting a matching result of the original image according to the key-value pair relation and the traversal result.
7. The intelligent image matching device as claimed in claim 6, wherein the contrast enhancement method is as follows:
Db=f(Da)=a*Da+b
wherein Da represents the gray value of the input image, Db represents the gray value of the output image, a is the linear slope, and b is the intercept; if a is greater than or equal to 1, the contrast of Db is enhanced compared with Da, and if a is less than 1, the contrast of Db is weakened compared with Da;
the method for denoising comprises the following steps:
g(x,y)=η(x,y)+f(x,y)
f(x, y) = g(x, y) − (ση²/σL²)·[g(x, y) − μL]

wherein (x, y) represents the coordinates of pixel points in the original image set, f(x, y) is the output data after the noise reduction processing, η(x, y) is the noise, g(x, y) is the original image set, ση² is the total noise variance of the original image set, μL is the pixel gray mean of (x, y), σL² is the pixel gray variance of (x, y), and L represents the coordinates of the current pixel point.
8. The intelligent image matching device according to claim 6 or 7, wherein the performing of scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set includes:
establishing a spatial function from the set of pre-processed images I (x, y), the spatial function L (x, y, σ) being:
L(x,y,σ)=G(x,y,σ)*I(x,y)
wherein (x, y) represents coordinates of pixel points in the original image set, σ represents a scale parameter, and G (x, y, σ) is a Gaussian function of the preprocessed image set I (x, y);
establishing a Gaussian difference function according to the space function, and solving each extreme point based on the Gaussian difference function, wherein a set formed by each extreme point is called a scale invariant feature set, and the Gaussian difference function D (x, y, sigma) is as follows:
D(x,y,σ)=[G(x,y,nσ)-G(x,y,σ)]*I(x,y)=L(x,y,nσ)-L(x,y,σ)
where n is the number of pixels in the neighborhood of (x, y), nσ is a scale layer, G(x, y, nσ) represents the Gaussian function at scale layer nσ, and L(x, y, nσ) represents the spatial function at scale layer nσ.
9. The apparatus for image intelligent matching as claimed in claim 8, wherein the clustering operation includes randomizing the class center position and optimizing the class center position;
the randomization of the class center position comprises determining the number of the class centers and randomly generating the coordinate position of the class center;
the optimized class center position dist (x)i,xj) Comprises the following steps:
Figure FDA0002167079720000041
wherein x isi,xjIs the data of the scale-invariant feature set, dist (x)i,xj) The position distance between the data of the scale-invariant feature set is shown, D is the number of the class centers, xi,d,xj,dData representing a set of scale-invariant features under the center of each category.
10. A computer-readable storage medium, having stored thereon an intelligent image matching program, the intelligent image matching program being executable by one or more processors to implement the steps of the intelligent image matching method as claimed in any one of claims 1 to 5.
CN201910762047.4A 2019-08-14 2019-08-14 Image intelligent matching method, device and computer readable storage medium Active CN110633733B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910762047.4A CN110633733B (en) 2019-08-14 2019-08-14 Image intelligent matching method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110633733A true CN110633733A (en) 2019-12-31
CN110633733B CN110633733B (en) 2024-05-03

Family

ID=68970506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910762047.4A Active CN110633733B (en) 2019-08-14 2019-08-14 Image intelligent matching method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110633733B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1347414A1 (en) * 2002-02-22 2003-09-24 Agfa-Gevaert Method for enhancing the contrast of an image.
CN102194133A (en) * 2011-07-05 2011-09-21 北京航空航天大学 Data-clustering-based adaptive image SIFT (Scale Invariant Feature Transform) feature matching method
CN102496033A (en) * 2011-12-05 2012-06-13 西安电子科技大学 Image SIFT feature matching method based on MR computation framework
US20170154056A1 (en) * 2014-06-24 2017-06-01 Beijing Qihoo Technology Company Limited Matching image searching method, image searching method and devices
CN109101867A (en) * 2018-06-11 2018-12-28 平安科技(深圳)有限公司 A kind of image matching method, device, computer equipment and storage medium
CN109711284A (en) * 2018-12-11 2019-05-03 江苏博墨教育科技有限公司 A kind of test answer sheet system intelligent recognition analysis method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860661A (en) * 2020-07-24 2020-10-30 中国平安财产保险股份有限公司 Data analysis method and device based on user behavior, electronic equipment and medium
CN111860661B (en) * 2020-07-24 2024-04-30 中国平安财产保险股份有限公司 Data analysis method and device based on user behaviors, electronic equipment and medium
CN112416890A (en) * 2020-11-24 2021-02-26 杭州电子科技大学 Insect robot mass image data parallel processing platform
CN114338957A (en) * 2022-03-14 2022-04-12 杭州雄迈集成电路技术股份有限公司 Video denoising method and system
CN114338957B (en) * 2022-03-14 2022-07-29 杭州雄迈集成电路技术股份有限公司 Video denoising method and system

Also Published As

Publication number Publication date
CN110633733B (en) 2024-05-03

Similar Documents

Publication Publication Date Title
JP6774137B2 (en) Systems and methods for verifying the authenticity of ID photos
WO2020199478A1 (en) Method for training image generation model, image generation method, device and apparatus, and storage medium
JP5261501B2 (en) Permanent visual scene and object recognition
CN112651438A (en) Multi-class image classification method and device, terminal equipment and storage medium
CN110633733B (en) Image intelligent matching method, device and computer readable storage medium
CN109948397A (en) A kind of face image correcting method, system and terminal device
US11822595B2 (en) Incremental agglomerative clustering of digital images
US11714921B2 (en) Image processing method with ash code on local feature vectors, image processing device and storage medium
CN113704531A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
WO2020125100A1 (en) Image search method, apparatus, and device
JP2015036939A (en) Feature extraction program and information processing apparatus
Huang et al. Fine-art painting classification via two-channel deep residual network
JP5430636B2 (en) Data acquisition apparatus, method and program
CN111488479B (en) Hypergraph construction method and device, computer system and medium
CN110147460B (en) Three-dimensional model retrieval method and device based on convolutional neural network and multi-view map
JP6699048B2 (en) Feature selecting device, tag related area extracting device, method, and program
CN108536769B (en) Image analysis method, search method and device, computer device and storage medium
Ramos-Arredondo et al. PhotoId-Whale: Blue whale dorsal fin classification for mobile devices
CN110765917A (en) Active learning method, device, terminal and medium suitable for face recognition model training
CN114821140A (en) Image clustering method based on Manhattan distance, terminal device and storage medium
CN113591969B (en) Face similarity evaluation method, device, equipment and storage medium
CN111695441B (en) Image document processing method, device and computer readable storage medium
CN106469437B (en) Image processing method and image processing apparatus
CN112036501A (en) Image similarity detection method based on convolutional neural network and related equipment thereof
CN113762059A (en) Image processing method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant