CN110633733B - Image intelligent matching method, device and computer readable storage medium - Google Patents


Info

Publication number
CN110633733B
Authority
CN
China
Prior art keywords
original image
scale
image
feature set
image set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910762047.4A
Other languages
Chinese (zh)
Other versions
CN110633733A (en)
Inventor
王博 (Wang Bo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN201910762047.4A
Publication of CN110633733A
Application granted
Publication of CN110633733B
Legal status: Active
Anticipated expiration: legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/23 Clustering techniques
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24137 Distances to cluster centroids
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to artificial intelligence technology and discloses an intelligent image matching method comprising the following steps: receiving an original image set and performing preprocessing operations, including contrast enhancement, noise reduction and local threshold binarization, on the original image set to obtain a preprocessed image set; performing a scale-invariant feature transform on the preprocessed image set to obtain a scale-invariant feature set, and performing a clustering operation on the scale-invariant feature set to obtain an optimized feature set; and inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key-value pair relation between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversal result, and outputting a matching result for the original images according to the key-value pair relation and the traversal result. The invention also provides an intelligent image matching device and a computer-readable storage medium. The invention can realize rapid and accurate image matching.

Description

Image intelligent matching method, device and computer readable storage medium
Technical Field
The present invention relates to the field of artificial intelligence, and in particular, to a method, an apparatus, and a computer readable storage medium for obtaining an image matching result based on intelligent image similarity calculation.
Background
Image matching takes an acquired image set and, through a matching algorithm, finds the similarity and consistency of textures, features, structures and the like among the images, so that similar images can be identified. Image matching is a hot topic in the image field, and high-precision matching benefits follow-up work such as image stitching, target tracking and recognition. Current image matching algorithms can be divided into 3 major classes: gray-scale-based methods, transform-domain-based methods and feature-based methods. Feature-based matching resists noise well and is more robust to rotation and occlusion of objects in the image. However, when images are matched with a feature-based method, searching the images is time-consuming and labor-intensive, and the matching precision still needs to be improved.
Disclosure of Invention
The invention provides an intelligent image matching method, an intelligent image matching device and a computer readable storage medium, which mainly aim to present a quick and accurate image matching result to a user when the user inputs an image set.
In order to achieve the above object, the present invention provides an image intelligent matching method, comprising:
receiving an original image set, and performing preprocessing operations comprising contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set;
performing scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set, and performing clustering operation on the scale-invariant feature set to obtain an optimized feature set;
Inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key value pair relation between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversing result, and outputting a matching result of the original image according to the key value pair relation and the traversing result.
Optionally, the method for enhancing contrast is as follows:
Db=f(Da)=a*Da+b
wherein Da represents the input image gray value, Db represents the output image gray value, a is the linear slope and b is the intercept; if a is greater than or equal to 1, the contrast of Db relative to Da is enhanced, and if a is less than 1, it is attenuated;
the noise reduction processing method comprises the following steps:
g(x, y) = η(x, y) + f(x, y)
f(x, y) = g(x, y) - (σ_η² / σ_L²)·[g(x, y) - m_L]
wherein (x, y) represents the coordinates of a pixel point in the original image set, f(x, y) is the output data after the noise reduction processing, η(x, y) is the noise, g(x, y) is the original image set, σ_η² is the overall noise variance of the original image set, m_L is the mean gray level of the pixels in the neighborhood L of (x, y), σ_L² is the gray-level variance in L, and L represents the neighborhood of the current pixel point;
Optionally, the performing scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set includes:
establishing a space function according to the preprocessed image set I (x, y), wherein the space function L (x, y, sigma) is as follows:
L(x,y,σ)=G(x,y,σ)*I(x,y)
wherein (x, y) represents pixel point coordinates within the original image set, σ represents a scale parameter, and G (x, y, σ) represents a gaussian function of the preprocessed image set I (x, y);
Establishing a Gaussian difference function according to the space function, and solving each extreme point based on the Gaussian difference function, wherein a set formed by each extreme point is called a scale invariant feature set, and the Gaussian difference function D (x, y, sigma) is as follows:
D(x,y,σ)=[G(x,y,nσ)-G(x,y,σ)]*I(x,y)=L(x,y,nσ)-L(x,y,σ)
wherein n is the number of pixels in the neighborhood of (x, y), nσ is a scale layer, G(x, y, nσ) represents the Gaussian function at scale layer nσ, and L(x, y, nσ) represents the spatial function at scale layer nσ.
Optionally, the clustering operation includes randomizing a category center position and optimizing a category center position;
The randomized category center position comprises determining the number of category centers and randomly generating the coordinate positions of the category centers;
the optimized class center position dist(x_i, x_j) is:
dist(x_i, x_j) = √( Σ_{d=1}^{D} (x_{i,d} - x_{j,d})² )
wherein x_i, x_j are data of the scale-invariant feature set, dist(x_i, x_j) is the positional distance between the data of the scale-invariant feature set, D is the number of class centers, and x_{i,d}, x_{j,d} are the data of the scale-invariant feature set under each class center.
Optionally, establishing a key value pair relationship between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversing result, and outputting a matching result for the original image according to the key value pair relationship and the traversing result, including:
The MapReduce distributed programming model in the Hadoop big data processing library divides the optimized feature set into a plurality of small feature sets, and distributes the small feature sets to each subtask in a cluster in the Hadoop big data processing library;
Each subtask finds out a corresponding original image to complete establishment of the key value pair relation according to the received small feature set;
according to a hash algorithm, transforming each feature in the optimized feature set into a fixed-length entry of a hash set, traversing the hash set to obtain a traversal similarity, and outputting the matching result for the original image set according to the traversal similarity in combination with the key-value pair relation.
In addition, in order to achieve the above object, the present invention also provides an image intelligent matching apparatus, which includes a memory and a processor, wherein the memory stores an image intelligent matching program that can be executed on the processor, and the image intelligent matching program when executed by the processor implements the steps of:
receiving an original image set, and performing preprocessing operations comprising contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set;
performing scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set, and performing clustering operation on the scale-invariant feature set to obtain an optimized feature set;
Inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key value pair relation between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversing result, and outputting a matching result of the original image according to the key value pair relation and the traversing result.
Optionally, the method for enhancing contrast is as follows:
Db=f(Da)=a*Da+b
wherein Da represents the input image gray value, Db represents the output image gray value, a is the linear slope and b is the intercept; if a is greater than or equal to 1, the contrast of Db relative to Da is enhanced, and if a is less than 1, it is attenuated;
the noise reduction processing method comprises the following steps:
g(x, y) = η(x, y) + f(x, y)
f(x, y) = g(x, y) - (σ_η² / σ_L²)·[g(x, y) - m_L]
wherein (x, y) represents the coordinates of a pixel point in the original image set, f(x, y) is the output data after the noise reduction processing, η(x, y) is the noise, g(x, y) is the original image set, σ_η² is the overall noise variance of the original image set, m_L is the mean gray level of the pixels in the neighborhood L of (x, y), σ_L² is the gray-level variance in L, and L represents the neighborhood of the current pixel point;
Optionally, the performing scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set includes:
establishing a space function according to the preprocessed image set I (x, y), wherein the space function L (x, y, sigma) is as follows:
L(x,y,σ)=G(x,y,σ)*I(x,y)
Wherein (x, y) represents pixel point coordinates within the original image set, σ represents a scale parameter, and G (x, y, σ) represents a gaussian function of the preprocessed image set I (x, y);
Establishing a Gaussian difference function according to the space function, and solving each extreme point based on the Gaussian difference function, wherein a set formed by each extreme point is called a scale invariant feature set, and the Gaussian difference function D (x, y, sigma) is as follows:
D(x,y,σ)=[G(x,y,nσ)-G(x,y,σ)]*I(x,y)=L(x,y,nσ)-L(x,y,σ)
wherein n is the number of pixels in the neighborhood of (x, y), nσ is a scale layer, G(x, y, nσ) represents the Gaussian function at scale layer nσ, and L(x, y, nσ) represents the spatial function at scale layer nσ.
Optionally, the clustering operation includes randomizing a category center position and optimizing a category center position;
The randomized category center position comprises determining the number of category centers and randomly generating the coordinate positions of the category centers;
the optimized class center position dist(x_i, x_j) is:
dist(x_i, x_j) = √( Σ_{d=1}^{D} (x_{i,d} - x_{j,d})² )
wherein x_i, x_j are data of the scale-invariant feature set, dist(x_i, x_j) is the positional distance between the data of the scale-invariant feature set, D is the number of class centers, and x_{i,d}, x_{j,d} are the data of the scale-invariant feature set under each class center.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon an image intelligent matching program executable by one or more processors to implement the steps of the image intelligent matching method as described above.
According to the intelligent image matching method, device and computer-readable storage medium, because a key-value pair relation is established, the original image set can be accessed directly, and since features are traversed faster in the Hadoop big data processing library, data can be located efficiently. Ordinary global image binarization loses much useful information in the original image; compared with it, the local threshold binarization operation retains more useful information and thereby improves the accuracy of image matching. The clustering operation remains scalable and efficient and converges to a local minimum, achieving efficient clustering of the data and further improving the matching precision. The invention can therefore realize rapid and accurate image matching for the user.
Drawings
FIG. 1 is a flow chart of an intelligent image matching method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of an internal structure of an intelligent image matching device according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of an image intelligent matching program in the image intelligent matching device according to an embodiment of the invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The invention provides an intelligent image matching method. Referring to fig. 1, a flow chart of an intelligent image matching method according to an embodiment of the invention is shown. The method may be performed by an apparatus, which may be implemented in software and/or hardware.
In this embodiment, the image intelligent matching method includes:
S1, receiving an original image set, and performing preprocessing operations comprising contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set.
In the preferred embodiment of the present invention, the original image set includes different types of pictures, such as animal images, face images and scenery pictures. The final objective of the invention is to find the picture set with the highest matching degree from the original image set; for example, if the original image set contains 3 pictures of the same golden retriever, the output result includes those 3 golden retriever pictures with the highest matching degree.
In a preferred embodiment of the present invention, the method for enhancing contrast is as follows:
Db=f(Da)=a*Da+b
where Da represents the input image gray value, Db represents the output image gray value, a is the linear slope and b is the intercept. If a is greater than or equal to 1, the contrast of Db relative to Da is enhanced; if a is less than 1, it is attenuated.
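As a hedged illustration, the linear gray-level transform above can be sketched in a few lines of numpy; the gain a and intercept b below are illustrative values, not ones prescribed by the patent:

```python
import numpy as np

def linear_contrast(image, a=1.5, b=0.0):
    """Linear gray-level transform Db = a * Da + b.

    a >= 1 stretches contrast, a < 1 compresses it; the result is
    clipped back into the valid 8-bit range."""
    out = a * image.astype(np.float64) + b
    return np.clip(out, 0, 255).astype(np.uint8)

gray = np.array([[50, 100], [150, 200]], dtype=np.uint8)
enhanced = linear_contrast(gray, a=1.5, b=-20)
```

Note that clipping is needed because a slope above 1 can push bright pixels past the representable maximum (here 200 maps to 280, saturating at 255).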
In a preferred embodiment of the present invention, the noise reduction process is:
g(x, y) = η(x, y) + f(x, y)
f(x, y) = g(x, y) - (σ_η² / σ_L²)·[g(x, y) - m_L]
where (x, y) represents the coordinates of a pixel point in the original image set, f(x, y) is the output data after the noise reduction processing, η(x, y) is the noise, g(x, y) is the original image set, σ_η² is the overall noise variance of the original image set, m_L is the mean gray level of the pixels in the neighborhood L of (x, y), σ_L² is the gray-level variance in L, and L represents the neighborhood of the current pixel point.
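A minimal sketch of adaptive local noise reduction consistent with the quantities named above (overall noise variance, local gray-level mean and variance). The 3x3 window and the capping of the variance ratio at 1 are assumptions for illustration, not details fixed by the patent:

```python
import numpy as np

def adaptive_noise_filter(g, noise_var, win=3):
    """Adaptive local noise reduction:
    f(x, y) = g(x, y) - (noise_var / local_var) * (g(x, y) - local_mean),
    with local mean/variance taken over a win x win neighbourhood.
    Where local_var <= noise_var the ratio is capped at 1, so the
    output falls back to the local mean."""
    g = g.astype(np.float64)
    pad = win // 2
    padded = np.pad(g, pad, mode="reflect")
    out = np.empty_like(g)
    h, w = g.shape
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + win, x:x + win]
            m_l, var_l = patch.mean(), patch.var()
            ratio = 1.0 if var_l <= noise_var else noise_var / var_l
            out[y, x] = g[y, x] - ratio * (g[y, x] - m_l)
    return out

# A perfectly flat region is left unchanged (output equals the local mean).
flat = adaptive_noise_filter(np.full((5, 5), 100.0), noise_var=25.0)
```

In smooth regions the local variance approaches the noise variance and the filter averages aggressively; near edges the local variance is large, the ratio shrinks, and detail is preserved.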
In a preferred embodiment of the present invention, the local threshold binarization determines whether the difference between pixels within a neighborhood in the original image set is greater than a preset binarization threshold, and if so, changes the pixel values in that neighborhood to 0 or 1. For example, if an image in the original image set has two pixels A and B in a neighborhood and the difference between A and B is greater than the preset binarization threshold, the values of pixels A and B are changed to 0 or 1, completing the local threshold binarization processing.
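The patent compares pixel differences within a neighborhood against a preset threshold. The sketch below uses the local mean as the comparison reference, which is one common way to realize local-threshold binarization; the window size and threshold delta are illustrative assumptions:

```python
import numpy as np

def local_threshold_binarize(image, win=3, delta=10):
    """Local (adaptive) threshold binarization sketch.

    Each pixel is compared with the mean of its win x win neighbourhood;
    if it exceeds that local mean by more than `delta` it becomes 1,
    otherwise 0."""
    pad = win // 2
    padded = np.pad(image.astype(np.float64), pad, mode="reflect")
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + win, x:x + win].mean()
            out[y, x] = 1 if image[y, x] - local_mean > delta else 0
    return out

# A single bright pixel on a dark background survives binarization.
img = np.zeros((3, 3))
img[1, 1] = 100.0
mask = local_threshold_binarize(img, win=3, delta=10)
```

Unlike a single global threshold, the local reference adapts to illumination changes across the image, which is why the patent argues it retains more useful information.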
S2, performing scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set, and performing clustering operation on the scale-invariant feature set to obtain an optimized feature set.
The preferred embodiment of the present invention builds a spatial function L (x, y, σ) from the preprocessed image set I (x, y):
L(x,y,σ)=G(x,y,σ)*I(x,y)
Where (x, y) represents the image pixel coordinates, σ represents the scale parameter, and G (x, y, σ) is a gaussian function of the preprocessed image set I (x, y).
Preferably, the Gaussian function of the preprocessed image set I(x, y) in the preferred embodiment of the present invention is:
G(x, y, σ) = (1 / (2πσ²)) · e^(-(x² + y²) / (2σ²))
Further, in the preferred embodiment of the present invention, a Gaussian difference function is established according to the spatial function, and each extreme point is solved based on the Gaussian difference function, where the set formed by the extreme points is called the scale-invariant feature set. The Gaussian difference function D(x, y, σ) is:
D(x,y,σ)=[G(x,y,nσ)-G(x,y,σ)]*I(x,y)=L(x,y,nσ)-L(x,y,σ)
wherein n is the number of pixels in the neighborhood of (x, y), nσ is a scale layer, G(x, y, nσ) represents the Gaussian function at scale layer nσ, and L(x, y, nσ) represents the spatial function at scale layer nσ.
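A hedged numpy/scipy sketch of the difference-of-Gaussians construction described above: blur the image at scales σ and nσ, subtract to get D, and keep local extrema of the response as candidate scale-invariant features. The 3x3 extremum window and the response threshold are assumptions for illustration (full SIFT also checks across neighboring scale layers):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_extrema(image, sigma=1.6, n=2, threshold=1.0):
    """Difference-of-Gaussians keypoint sketch.

    D(x, y, sigma) = L(x, y, n*sigma) - L(x, y, sigma), where
    L = G * I is the Gaussian-blurred image. Pixels whose |D|
    exceeds `threshold` and is the maximum of |D| in a 3x3
    neighbourhood are kept as candidate features."""
    L1 = gaussian_filter(image.astype(np.float64), sigma)
    L2 = gaussian_filter(image.astype(np.float64), n * sigma)
    mag = np.abs(L2 - L1)
    keypoints = []
    h, w = mag.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = mag[y - 1:y + 2, x - 1:x + 2]
            if mag[y, x] >= threshold and mag[y, x] == patch.max():
                keypoints.append((x, y))
    return keypoints

# A bright isolated blob produces a DoG extremum at its centre.
img = np.zeros((15, 15))
img[7, 7] = 255.0
kps = dog_extrema(img)
```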
Further, the preferred embodiment of the present invention performs a clustering operation on the scale-invariant feature set to obtain an optimized feature set.
In a preferred embodiment of the present invention, the clustering includes randomizing the class center position and optimizing the class center position. The randomized class center position comprises determining the number of class centers and randomly generating the coordinate positions of the class centers, wherein the number of the class centers is the sum of the types of the basic data set and the scene data set;
The optimized class center position is:
dist(x_i, x_j) = √( Σ_{d=1}^{D} (x_{i,d} - x_{j,d})² )
wherein x_i, x_j are data of the scale-invariant feature set, dist(x_i, x_j) is the positional distance between them, D is the number of class centers, and x_{i,d}, x_{j,d} are the data of the scale-invariant feature set under each class center.
The preferred embodiment of the invention obtains the optimized feature set J by adopting an error sum-of-squares criterion based on the optimized class center positions. The error sum-of-squares criterion is:
J = Σ_{k=1}^{K} Σ_{x ∈ C_k} dist(x, c_k)²
wherein K is the number of class centers, C_k is the set of scale-invariant features contained in the k-th class center, and c_k is the position of the k-th class center.
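The clustering steps above (randomized class centers, distance-based optimization, an error sum-of-squares criterion) read like standard k-means. A minimal sketch under that reading, with illustrative parameter values:

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """k-means sketch: randomize class centres, then alternately
    assign points by Euclidean distance and re-estimate the centres;
    finally report the sum-of-squared-error criterion J."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # dist(x_i, c_j) = sqrt(sum_d (x_{i,d} - c_{j,d})^2)
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    j_sse = sum(((features[labels == j] - centers[j]) ** 2).sum()
                for j in range(k))
    return labels, centers, j_sse

# Two well-separated clusters should be recovered exactly.
pts = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels, centers, j_sse = kmeans(pts, k=2)
```

As the patent notes, this procedure converges to a local minimum of J, so the quality of the randomized initial centers matters in practice.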
S3, inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key value pair relation between the optimized feature set and the original image set by the Hadoop big data processing library, traversing the optimized feature set based on a Hash search method to obtain a traversing result, and outputting a matching result of the original image according to the key value pair relation and the traversing result.
In the preferred embodiment of the invention, a key value pair relation is established between the optimized feature set and the original image set through a MapReduce distributed programming model and clusters in the Hadoop big data processing library, the representation form of the key value pair relation is < key, value >, wherein key is each image in the original image set, and value is the optimized feature corresponding to each image in the original image set.
The MapReduce distributed programming model divides the optimized feature set into a plurality of small feature sets, then distributes the small feature sets into each subtask in the cluster, and each subtask finds a corresponding original image according to the received small feature sets to complete establishment of the key value pair relation. Because of the Hadoop big data processing library, the establishment speed of the key value pair relation is not greatly influenced even if the data volume of the original image set is huge.
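As a single-process illustration of the split/distribute/group pattern described above (the real system runs Hadoop MapReduce across a cluster; the image ids and feature values below are made up for the sketch):

```python
from collections import defaultdict

def map_phase(feature_chunks):
    """Map step: each subtask emits (image_id, feature) pairs for the
    small feature set it received."""
    for chunk in feature_chunks:
        for image_id, feature in chunk:
            yield image_id, feature

def reduce_phase(pairs):
    """Reduce step: group features by image, building the <key, value>
    relation image -> optimized features."""
    kv = defaultdict(list)
    for image_id, feature in pairs:
        kv[image_id].append(feature)
    return dict(kv)

# Two "subtasks", each holding a small slice of the optimized feature set.
chunks = [
    [("img_1", [0.1, 0.9]), ("img_2", [0.4, 0.2])],
    [("img_1", [0.3, 0.7])],
]
kv = reduce_phase(map_phase(chunks))
```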
Because the features in the optimized feature set have different data lengths and are therefore difficult to compare for matching, their data lengths need to be unified to a fixed length. According to the preferred embodiment of the invention, each feature in the optimized feature set is transformed into a fixed-length entry of a hash set according to a hash algorithm; the hash set is traversed to obtain a traversal similarity, and the matching result is output according to the traversal similarity.
Preferably, the hashing algorithm employs the MD5 Message-Digest Algorithm, which converts the optimized feature set into an original hexadecimal data set and then reaches the fixed-length hash set by padding the data. The padding has the form:
original hexadecimal dataset + pad (1-512 bits) =hashed set
The data form of the hash set in the preferred embodiment of the present invention is as follows:
1、e695606fc5e31b2ff9038a48a3d363f4c21a3d86;
2、f58da9a820e3fd9d84ab2ca2f1b467ac265038f93;
3、a0c641e92b10d8bcca1ed1bf84ca80340fdefee6。
According to the preferred embodiment of the invention, the hash data in the hash set are traversed according to positional relation: for example, if the rightmost digit of hash data 1 and the rightmost digit of hash data 3 are both 6, the traversal score is incremented by one. After all hash data have been traversed, the scores are sorted in descending order to give a score set, and the entries greater than a preset score threshold are extracted as the final traversal result set. Each traversal result in the traversal result set corresponds to one item of hash data; each item of hash data is the fixed-length transform of one feature in the optimized feature set; and each such feature has a key-value pair relation with the original image set in the Hadoop big data processing library, so the matching result for the original images can be obtained and output according to the key-value pair relation and the traversal result. For example, if score set A contains the 3 values 34, 38 and 39 and the preset score threshold A is 33, those 3 values are extracted into traversal result set A; they correspond to features in optimized feature set A, whose key-value pair relations with original image set A were established in the Hadoop big data processing library, so the three corresponding original images can be extracted from original image set A, completing the intelligent matching.
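The matching step above (fixed-length digests, a position-by-position traversal score counted from the right, a score threshold) can be sketched as follows; the helper names, the use of `repr` to serialize a feature, and the threshold value are illustrative assumptions:

```python
import hashlib

def feature_hash(feature):
    """Hash an optimized feature to a fixed-length hex digest (the
    patent mentions MD5; any fixed-length digest works for the sketch)."""
    return hashlib.md5(repr(feature).encode("utf-8")).hexdigest()

def traversal_score(h1, h2):
    """Count hex characters that match at the same position, scanning
    from the rightmost position leftwards."""
    return sum(1 for a, b in zip(reversed(h1), reversed(h2)) if a == b)

def match(query_feature, kv, score_threshold):
    """Return image ids whose stored features score above the preset
    threshold against the query feature's hash."""
    qh = feature_hash(query_feature)
    results = []
    for image_id, features in kv.items():
        best = max(traversal_score(qh, feature_hash(f)) for f in features)
        if best > score_threshold:
            results.append(image_id)
    return results

# An identical feature matches all 32 digest positions; an unrelated
# feature matches only a handful by chance.
kv = {"img_1": [[1, 2]], "img_2": [[3, 4]]}
hits = match([1, 2], kv, score_threshold=31)
```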
The invention also provides an intelligent image matching device. Referring to fig. 2, an internal structure diagram of an intelligent image matching device according to an embodiment of the invention is shown.
In this embodiment, the image intelligent matching apparatus 1 may be a PC (Personal Computer), a terminal device such as a smart phone, tablet computer or portable computer, or a server. The image intelligent matching apparatus 1 comprises at least a memory 11, a processor 12, a communication bus 13 and a network interface 14.
The memory 11 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card memories (e.g., SD or DX memory), magnetic memories, magnetic disks, optical disks, etc. The memory 11 may in some embodiments be an internal storage unit of the image intelligent matching apparatus 1, for example a hard disk of the image intelligent matching apparatus 1. In other embodiments the memory 11 may be an external storage device of the image intelligent matching apparatus 1, such as a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card or Flash Card provided on the image intelligent matching apparatus 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the image intelligent matching apparatus 1. The memory 11 may be used not only for storing application software installed in the image intelligent matching apparatus 1 and various types of data, such as the code of the image intelligent matching program 01, but also for temporarily storing data that has been output or is to be output.
Processor 12 may in some embodiments be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor or other data processing chip for running program code or processing data stored in memory 11, such as executing image intelligent matching program 01, etc.
The communication bus 13 is used to enable connection communication between these components.
The network interface 14 may optionally comprise a standard wired interface, a wireless interface (e.g. WI-FI interface), typically used to establish a communication connection between the apparatus 1 and other electronic devices.
Optionally, the device 1 may further comprise a user interface, which may comprise a Display (Display), an input unit such as a Keyboard (Keyboard), and a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-emitting diode) touch, or the like. The display may also be referred to as a display screen or a display unit, as appropriate, for displaying information processed in the image intelligent matching apparatus 1 and for displaying a visual user interface.
Fig. 2 shows only the image intelligent matching apparatus 1 with components 11-14 and the image intelligent matching program 01. It will be understood by those skilled in the art that the structure shown in fig. 2 does not constitute a limitation of the image intelligent matching apparatus 1, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
In the embodiment of the apparatus 1 shown in fig. 2, the memory 11 stores an image intelligent matching program 01; the processor 12 performs the following steps when executing the image intelligent matching program 01 stored in the memory 11:
Step one, receiving an original image set, and performing preprocessing operations comprising contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set.
In the preferred embodiment of the present invention, the original image set includes different types of pictures, such as animal images, face images and scenery pictures. The final objective of the invention is to find the picture set with the highest matching degree from the original image set; for example, if the original image set contains 3 pictures of the same golden retriever, the output result includes those 3 golden retriever pictures with the highest matching degree.
In a preferred embodiment of the present invention, the method for enhancing contrast is as follows:
Db=f(Da)=a*Da+b
where Da represents the input image gray value, Db represents the output image gray value, a is the linear slope and b is the intercept. If a is greater than or equal to 1, the contrast of Db relative to Da is enhanced; if a is less than 1, it is attenuated.
In a preferred embodiment of the present invention, the noise reduction process is:
g(x,y)=η(x,y)+f(x,y)
Wherein (x, y) represents the coordinates of a pixel point in the original image set, f(x, y) is the output data after the noise reduction processing, η(x, y) is the noise, g(x, y) is the original image set, σ_η² is the total noise variance of the original image set, m_L is the pixel gray-level mean in the neighborhood of (x, y), σ_L² is the pixel gray-level variance in the neighborhood of (x, y), and L represents the neighborhood of the current pixel point.
In a preferred embodiment of the present invention, the local threshold binarization determines whether the pixel difference between pixels in a neighborhood of the original image set is greater than a preset binarization threshold; if so, the pixel values in the neighborhood are changed to 0 or 1. For example, if an image in the original image set contains two pixels A and B, and the pixel difference between A and B is greater than the preset binarization threshold, the pixel values of A and B are changed to 0 or 1, completing the local threshold binarization processing.
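A minimal sketch of the pairwise rule described above. The convention that the darker pixel maps to 0 and the brighter one to 1 is an assumption, since the text only says the pixels are "changed to 0 or 1":

```python
def binarize_pair(a, b, threshold):
    """Local threshold binarization for one neighboring pixel pair (A, B).

    If |A - B| exceeds the threshold, the pair is binarized; mapping the
    darker pixel to 0 and the brighter one to 1 is an assumed convention.
    """
    if abs(a - b) > threshold:
        return (0, 1) if a < b else (1, 0)
    return (a, b)  # difference too small: leave the pair unchanged

strong_edge = binarize_pair(30, 200, threshold=64)   # pair gets binarized
flat_region = binarize_pair(90, 100, threshold=64)   # pair stays unchanged
```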
Step two, performing scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set, and performing clustering operation on the scale-invariant feature set to obtain an optimized feature set.
The preferred embodiment of the present invention builds a spatial function L (x, y, σ) from the preprocessed image set I (x, y):
L(x,y,σ)=G(x,y,σ)*I(x,y)
Where (x, y) represents the image pixel coordinates, σ represents the scale parameter, and G (x, y, σ) is a gaussian function of the preprocessed image set I (x, y).
Preferably, the Gaussian function of the preprocessed image set I (x, y) according to the preferred embodiment of the present invention is:

G(x, y, σ) = (1/(2πσ²)) exp(-(x² + y²)/(2σ²))
Further, in the preferred embodiment of the present invention, a gaussian difference function is established according to the spatial function, and each extreme point is solved based on the gaussian difference function, where a set formed by each extreme point is called a scale invariant feature set. The gaussian difference function D (x, y, σ) is:
D(x,y,σ)=[G(x,y,nσ)-G(x,y,σ)]*I(x,y)=L(x,y,nσ)-L(x,y,σ)
Wherein n is the number of pixels in the neighborhood of (x, y), nσ is a scale layer, G(x, y, nσ) represents the Gaussian function at the scale layer nσ, and L(x, y, nσ) represents the spatial function at the scale layer nσ.
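The scale-space and difference-of-Gaussians construction above can be sketched with plain NumPy. The kernel radius of 3σ and the edge padding are common implementation choices, not details taken from the patent:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """2-D kernel G(x, y, sigma) = exp(-(x^2+y^2)/(2 sigma^2)) / (2 pi sigma^2)."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return g / g.sum()  # normalize so the truncated kernel sums to 1

def blur(image, sigma):
    """L(x, y, sigma) = G(x, y, sigma) * I(x, y) via direct 2-D convolution."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma) + 1)
    r = k.shape[0] // 2
    padded = np.pad(image.astype(np.float64), r, mode="edge")
    out = np.zeros(image.shape, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def dog(image, sigma, n):
    """D(x, y, sigma) = L(x, y, n*sigma) - L(x, y, sigma)."""
    return blur(image, n * sigma) - blur(image, sigma)
```

On a constant image the two blurred layers coincide, so the difference of Gaussians is zero everywhere; extrema of D over space and scale are the candidate scale-invariant features.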
Further, the preferred embodiment of the present invention performs a clustering operation on the scale-invariant feature set to obtain an optimized feature set.
In a preferred embodiment of the present invention, the clustering comprises randomizing the class center positions and optimizing the class center positions. Randomizing the class center positions comprises determining the number of class centers and randomly generating their coordinate positions, where the number of class centers is the sum of the numbers of types in the basic data set and the scene data set.
The optimized class center position is determined by the distance:

dist(x_i, x_j) = √( Σ_{d=1}^{D} (x_{i,d} - x_{j,d})² )

Wherein x_i, x_j are data of the scale-invariant feature set, dist(x_i, x_j) is the position distance between the data of the scale-invariant feature set, D is the number of class centers, and x_{i,d}, x_{j,d} are the data of the scale-invariant feature set under each class center.
The preferred embodiment of the invention obtains the optimized feature set J by applying the sum-of-squared-error criterion to the optimized class center positions. The criterion is:

J = Σ_{k=1}^{K} Σ_{x_i ∈ C_k} dist(x_i, c_k)²

Wherein K is the number of the scale-invariant feature sets, C_k is the set of scale-invariant features included in the k-th class center, and c_k is the position of that class center.
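The two clustering quantities can be computed as follows in NumPy. The explicit label assignment is an assumed representation; the patent describes the criterion, not a data layout:

```python
import numpy as np

def dist(xi, xj):
    """Euclidean distance over the D components: sqrt(sum_d (x_i,d - x_j,d)^2)."""
    return float(np.sqrt(np.sum((np.asarray(xi) - np.asarray(xj)) ** 2)))

def sse(features, labels, centers):
    """Sum-of-squared-error criterion J: for every class C_k, accumulate the
    squared distance of its member features to the class center."""
    return sum(dist(f, centers[k]) ** 2 for f, k in zip(features, labels))

features = [[0.0, 0.0], [2.0, 0.0], [5.0, 5.0]]  # toy scale-invariant features
centers = [[1.0, 0.0], [5.0, 5.0]]               # two optimized class centers
labels = [0, 0, 1]                               # class membership of each feature
j = sse(features, labels, centers)
```

Minimizing J over the center positions is exactly the k-means objective, which is why the class centers can be iteratively refined after the random initialization.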
Step three, inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing, by the Hadoop big data processing library, a key-value pair relation between the optimized feature set and the original image set, traversing the optimized feature set based on a hash search method to obtain a traversal result, and outputting a matching result of the original image according to the key-value pair relation and the traversal result.
In the preferred embodiment of the invention, a key-value pair relation between the optimized feature set and the original image set is established through the MapReduce distributed programming model and the clusters in the Hadoop big data processing library. The relation takes the form <key, value>, where key is an image in the original image set and value is the optimized feature corresponding to that image.
The MapReduce distributed programming model divides the optimized feature set into a plurality of small feature sets and distributes them to the subtasks in the cluster; each subtask finds the corresponding original images according to the small feature set it receives, completing the establishment of the key-value pair relation. Thanks to the Hadoop big data processing library, the key-value pair relation can be established quickly even when the data volume of the original image set is huge.
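A toy, single-process stand-in for this partitioning step can make the data flow concrete. Real Hadoop/MapReduce distributes the chunks across machines; the `feature_to_image` lookup table is an assumed helper, since the patent does not specify how a subtask resolves a feature back to its source image:

```python
from collections import defaultdict

def build_key_value_pairs(features, feature_to_image, n_subtasks=4):
    """Split the optimized feature set into small chunks, hand each chunk to
    a 'subtask', and have every subtask map its features back to their
    source images as <image, feature> pairs."""
    chunks = [features[i::n_subtasks] for i in range(n_subtasks)]
    pairs = defaultdict(list)
    for chunk in chunks:                      # each chunk plays one subtask
        for feat in chunk:
            image = pairs[feature_to_image[feat]]  # resolve the source image
            image.append(feat)
    return dict(pairs)

pairs = build_key_value_pairs(
    ["f1", "f2", "f3"],
    {"f1": "img1", "f2": "img1", "f3": "img2"},
)
```

Because each chunk is independent, the same logic parallelizes naturally, which is the property the Hadoop deployment exploits at scale.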
Because the features in the optimized feature set have different data lengths, they are difficult to match and compare directly, so their data lengths need to be unified to a fixed length. According to the preferred embodiment of the invention, each feature in the optimized feature set is transformed, according to a hash algorithm, into a fixed-length member of a hash set; the hash set is traversed to obtain the traversal similarity, and the matching result is output according to the traversal similarity.
Preferably, the hash algorithm employs the MD5 Message-Digest Algorithm (Message-Digest Algorithm 5), which converts the optimized feature set into an original hexadecimal data set and then reaches the fixed-length hash set by padding the data. The padding scheme is:
original hexadecimal data set + padding (1-512 bits) = hash set
The data form of the hash set in the preferred embodiment of the present invention is as follows:
1、e695606fc5e31b2ff9038a48a3d363f4c21a3d86;
2、f58da9a820e3fd9d84ab2ca2f1b467ac265038f93;
3、a0c641e92b10d8bcca1ed1bf84ca80340fdefee6。
According to the preferred embodiment of the invention, the hash data in the hash set are traversed according to their positional relation: for example, if the rightmost value of both hash data 1 and hash data 3 is 6, the traversal score is increased by one. After all the hash data have been traversed, the scores are sorted from large to small to obtain a score set, and the scores greater than a preset score threshold are extracted to obtain the final traversal result set. Each traversal result in the traversal result set corresponds to one hash datum, each hash datum was obtained by transforming a feature of the optimized feature set to a fixed length, and each feature of the optimized feature set has a key-value pair relation with the original image set in the Hadoop big data processing library, so the matching result of the original image can be obtained and output according to the key-value pair relation and the traversal result. For example, if score set A contains the 3 values 34, 38 and 39 and the preset score threshold A is 33, the 3 values are extracted into traversal result set A; they correspond to features in optimized feature set A, whose key-value pair relation with original image set A is established in the Hadoop big data processing library, so three original images can be extracted from original image set A, completing the intelligent matching.
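The hashing and positional scoring can be sketched with Python's standard `hashlib`. MD5 yields a 32-character hex digest, and the right-to-left character comparison is a simplified reading of the traversal score, since the text describes it only by example:

```python
import hashlib

def to_fixed_length(feature):
    """Transform a variable-length feature string into a fixed-length
    hash (the 32-character MD5 hex digest)."""
    return hashlib.md5(feature.encode("utf-8")).hexdigest()

def traversal_score(h1, h2):
    """Add one to the score for every position, scanned from the right,
    where the two hash strings carry the same character."""
    return sum(1 for a, b in zip(reversed(h1), reversed(h2)) if a == b)

h = to_fixed_length("some image feature")  # always 32 hex characters
```

Sorting candidate pairs by `traversal_score` and keeping those above a threshold mirrors the score-set filtering described above.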
Alternatively, in other embodiments, the image intelligent matching program may be divided into one or more modules, where the one or more modules are stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to implement the present invention. The modules referred to herein are series of computer program instruction segments capable of performing specific functions, used to describe the execution of the image intelligent matching program in the image intelligent matching apparatus.
For example, referring to fig. 3, a schematic program module diagram of an image intelligent matching program in an embodiment of the image intelligent matching apparatus of the present invention is shown. The image intelligent matching program may be divided into a data receiving module 10, a feature extraction module 20 and a matching result output module 30, exemplified as follows:
the data receiving module 10 is configured to: receive an original image set, and perform preprocessing operations comprising contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set.
The feature extraction module 20 is configured to: perform scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set, and perform a clustering operation on the scale-invariant feature set to obtain an optimized feature set.
The matching result output module 30 is configured to: inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key value pair relation between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversing result, and outputting a matching result of the original image according to the key value pair relation and the traversing result.
The functions or operation steps implemented when the program modules such as the data receiving module 10, the feature extracting module 20, the matching result outputting module 30 and the like are executed are substantially the same as those of the foregoing embodiments, and will not be described herein again.
In addition, an embodiment of the present invention further proposes a computer-readable storage medium, on which an image intelligent matching program is stored, the image intelligent matching program being executable by one or more processors to implement the following operations:
receiving an original image set, and performing preprocessing operations comprising contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set;
performing scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set, and performing clustering operation on the scale-invariant feature set to obtain an optimized feature set;
Inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key value pair relation between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversing result, and outputting a matching result of the original image according to the key value pair relation and the traversing result.
The computer-readable storage medium of the present invention is substantially the same as the above-described embodiments of the image intelligent matching apparatus and method, and will not be described in detail herein.
It should be noted that, the foregoing reference numerals of the embodiments of the present invention are merely for describing the embodiments, and do not represent the advantages and disadvantages of the embodiments. And the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, apparatus, article, or method that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (9)

1. An intelligent image matching method is characterized by comprising the following steps:
receiving an original image set, and performing preprocessing operations comprising contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set;
performing scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set, and performing clustering operation on the scale-invariant feature set to obtain an optimized feature set;
Inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key value pair relation between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversing result, and outputting a matching result of the original image according to the key value pair relation and the traversing result, wherein the method comprises the following steps: the MapReduce distributed programming model in the Hadoop big data processing library divides the optimized feature set into a plurality of small feature sets, distributes the plurality of small feature sets into each subtask in a cluster in the Hadoop big data processing library, each subtask finds out a corresponding original image to complete establishment of the key value pair relation according to the received small feature set, transforms the data length of each feature in the optimized feature set into a hash set with fixed length according to a hash algorithm, and traverses the hash set to obtain traversal similarity, and outputs a matching result of the original image set according to the traversal similarity and in combination with the key value pair relation, wherein each image in the original image set is a key of the key value pair relation, and the optimized feature corresponding to each image in the original image set is a value of the key value pair relation.
2. The intelligent image matching method according to claim 1, wherein the contrast enhancement method is as follows:
Db=f(Da)=a*Da+b
Wherein Da represents an input image gray value, Db represents an output image gray value, a is a linear slope, and b is an intercept; if a is greater than or equal to 1, the contrast of Db relative to Da is enhanced, and if a is less than 1, the contrast of Db relative to Da is attenuated;
the noise reduction processing method comprises the following steps:
g(x,y)=η(x,y)+f(x,y)
wherein (x, y) represents the coordinates of a pixel point in the original image set, f(x, y) is the output data after the noise reduction processing, η(x, y) is the noise, g(x, y) is the original image set, σ_η² is the total noise variance of the original image set, m_L is the pixel gray-level mean in the neighborhood of (x, y), σ_L² is the pixel gray-level variance in the neighborhood of (x, y), and L represents the neighborhood of the current pixel point.
3. The method for intelligently matching images according to claim 1 or 2, wherein the performing scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set includes:
establishing a space function according to the preprocessed image set I (x, y), wherein the space function L (x, y, sigma) is as follows:
L(x,y,σ)=G(x,y,σ)*I(x,y)
Wherein (x, y) represents pixel point coordinates within the original image set, σ represents a scale parameter, and G (x, y, σ) represents a gaussian function of the preprocessed image set I (x, y);
Establishing a Gaussian difference function according to the space function, and solving each extreme point based on the Gaussian difference function, wherein a set formed by each extreme point is called a scale invariant feature set, and the Gaussian difference function D (x, y, sigma) is as follows:
D(x,y,σ)=[G(x,y,nσ)-G(x,y,σ)]*I(x,y)
=L(x,y,nσ)-L(x,y,σ)
wherein n is the number of pixels in the neighborhood of (x, y), nσ is a scale layer, G(x, y, nσ) represents the Gaussian function at the scale layer nσ, and L(x, y, nσ) represents the spatial function at the scale layer nσ.
4. The intelligent image matching method according to claim 3, wherein the clustering operation includes randomizing a category center position and optimizing a category center position;
The randomized category center position comprises determining the number of category centers and randomly generating the coordinate positions of the category centers;
the optimized class center position is determined by the distance dist(x_i, x_j):

dist(x_i, x_j) = √( Σ_{d=1}^{D} (x_{i,d} - x_{j,d})² )

Wherein x_i, x_j are data of the scale-invariant feature set, dist(x_i, x_j) is the position distance between the data of the scale-invariant feature set, D is the number of class centers, and x_{i,d}, x_{j,d} are the data of the scale-invariant feature set under each class center.
5. An image intelligent matching device, characterized in that the device comprises a memory and a processor, wherein the memory stores an image intelligent matching program capable of running on the processor, and the image intelligent matching program realizes the following steps when being executed by the processor:
receiving an original image set, and performing preprocessing operations comprising contrast enhancement, noise reduction and local threshold binarization on the original image set to obtain a preprocessed image set;
performing scale-invariant feature transformation on the preprocessed image set to obtain a scale-invariant feature set, and performing clustering operation on the scale-invariant feature set to obtain an optimized feature set;
Inputting the optimized feature set and the original image set into a Hadoop big data processing library, establishing a key value pair relation between the optimized feature set and the original image set, traversing the optimized feature set to obtain a traversing result, and outputting a matching result of the original image according to the key value pair relation and the traversing result, wherein the method comprises the following steps: the MapReduce distributed programming model in the Hadoop big data processing library divides the optimized feature set into a plurality of small feature sets, distributes the plurality of small feature sets into each subtask in a cluster in the Hadoop big data processing library, each subtask finds out a corresponding original image to complete establishment of the key value pair relation according to the received small feature set, transforms the data length of each feature in the optimized feature set into a hash set with fixed length according to a hash algorithm, and traverses the hash set to obtain traversal similarity, and outputs a matching result of the original image set according to the traversal similarity and in combination with the key value pair relation, wherein each image in the original image set is a key of the key value pair relation, and the optimized feature corresponding to each image in the original image set is a value of the key value pair relation.
6. The intelligent image matching apparatus according to claim 5, wherein the contrast enhancement method is as follows:
Db=f(Da)=a*Da+b
Wherein Da represents an input image gray value, Db represents an output image gray value, a is a linear slope, and b is an intercept; if a is greater than or equal to 1, the contrast of Db relative to Da is enhanced, and if a is less than 1, the contrast of Db relative to Da is attenuated;
the noise reduction processing method comprises the following steps:
g(x,y)=η(x,y)+f(x,y)
wherein (x, y) represents the coordinates of a pixel point in the original image set, f(x, y) is the output data after the noise reduction processing, η(x, y) is the noise, g(x, y) is the original image set, σ_η² is the total noise variance of the original image set, m_L is the pixel gray-level mean in the neighborhood of (x, y), σ_L² is the pixel gray-level variance in the neighborhood of (x, y), and L represents the neighborhood of the current pixel point.
7. The intelligent image matching apparatus according to claim 5 or 6, wherein said performing scale-invariant feature transformation on said preprocessed image set to obtain a scale-invariant feature set comprises:
establishing a space function according to the preprocessed image set I (x, y), wherein the space function L (x, y, sigma) is as follows:
L(x,y,σ)=G(x,y,σ)*I(x,y)
Wherein (x, y) represents pixel point coordinates within the original image set, σ represents a scale parameter, and G (x, y, σ) represents a gaussian function of the preprocessed image set I (x, y);
Establishing a Gaussian difference function according to the space function, and solving each extreme point based on the Gaussian difference function, wherein a set formed by each extreme point is called a scale invariant feature set, and the Gaussian difference function D (x, y, sigma) is as follows:
D(x,y,σ)=[G(x,y,nσ)-G(x,y,σ)]*I(x,y)
=L(x,y,nσ)-L(x,y,σ)
wherein n is the number of pixels in the neighborhood of (x, y), nσ is a scale layer, G(x, y, nσ) represents the Gaussian function at the scale layer nσ, and L(x, y, nσ) represents the spatial function at the scale layer nσ.
8. The image intelligent matching apparatus of claim 7, wherein said clustering operations include randomizing class center positions and optimizing class center positions;
The randomized category center position comprises determining the number of category centers and randomly generating the coordinate positions of the category centers;
the optimized class center position is determined by the distance dist(x_i, x_j):

dist(x_i, x_j) = √( Σ_{d=1}^{D} (x_{i,d} - x_{j,d})² )

Wherein x_i, x_j are data of the scale-invariant feature set, dist(x_i, x_j) is the position distance between the data of the scale-invariant feature set, D is the number of class centers, and x_{i,d}, x_{j,d} are the data of the scale-invariant feature set under each class center.
9. A computer-readable storage medium having stored thereon an image intelligent matching program executable by one or more processors to implement the steps of the image intelligent matching method of any of claims 1 to 4.
CN201910762047.4A 2019-08-14 2019-08-14 Image intelligent matching method, device and computer readable storage medium Active CN110633733B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910762047.4A CN110633733B (en) 2019-08-14 2019-08-14 Image intelligent matching method, device and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN110633733A CN110633733A (en) 2019-12-31
CN110633733B true CN110633733B (en) 2024-05-03

Family

ID=68970506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910762047.4A Active CN110633733B (en) 2019-08-14 2019-08-14 Image intelligent matching method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110633733B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860661B (en) * 2020-07-24 2024-04-30 中国平安财产保险股份有限公司 Data analysis method and device based on user behaviors, electronic equipment and medium
CN112416890B (en) * 2020-11-24 2022-10-25 杭州电子科技大学 Insect robot mass image data parallel processing platform
CN114338957B (en) * 2022-03-14 2022-07-29 杭州雄迈集成电路技术股份有限公司 Video denoising method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1347414A1 (en) * 2002-02-22 2003-09-24 Agfa-Gevaert Method for enhancing the contrast of an image.
CN102194133A (en) * 2011-07-05 2011-09-21 北京航空航天大学 Data-clustering-based adaptive image SIFT (Scale Invariant Feature Transform) feature matching method
CN102496033A (en) * 2011-12-05 2012-06-13 西安电子科技大学 Image SIFT feature matching method based on MR computation framework
CN109101867A (en) * 2018-06-11 2018-12-28 平安科技(深圳)有限公司 A kind of image matching method, device, computer equipment and storage medium
CN109711284A (en) * 2018-12-11 2019-05-03 江苏博墨教育科技有限公司 A kind of test answer sheet system intelligent recognition analysis method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015196964A1 (en) * 2014-06-24 2015-12-30 北京奇虎科技有限公司 Matching picture search method, picture search method and apparatuses




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant