CN115033721A - Image retrieval method based on big data - Google Patents

Image retrieval method based on big data

Info

Publication number
CN115033721A
Authority
CN
China
Prior art keywords: image, rgb image, retrieved, sub, rgb
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210754888.2A
Other languages
Chinese (zh)
Inventor
朱玉龙
张化杰
Current Assignee
Nanjing Fenghao Qiyu Network Technology Co ltd
Original Assignee
Nanjing Fenghao Qiyu Network Technology Co ltd
Priority date
Application filed by Nanjing Fenghao Qiyu Network Technology Co ltd filed Critical Nanjing Fenghao Qiyu Network Technology Co ltd
Priority: CN202210754888.2A
Publication of CN115033721A
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

Abstract

The invention relates to the technical field of electronic digital data processing, and in particular to an image retrieval method based on big data, comprising the following steps: acquiring a standard target area image corresponding to an RGB image to be retrieved, determining the definition corresponding to the standard target area image, and from it an adaptive threshold corresponding to the RGB image to be retrieved; determining a one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and of each RGB image to be matched, and from these the similarity between the RGB image to be retrieved and each RGB image to be matched; and determining a retrieval image corresponding to the RGB image to be retrieved according to the sub-block images of each RGB image to be matched and of the RGB image to be retrieved, the similarity, and the adaptive threshold corresponding to the RGB image to be retrieved. The invention effectively improves the retrieval accuracy and retrieval speed of images by using electronic digital data processing technology.

Description

Image retrieval method based on big data
Technical Field
The invention relates to the technical field of electronic digital data processing, in particular to an image retrieval method based on big data.
Background
The development of the mobile internet, smartphones and social networks has brought massive amounts of information, and images, as an important information carrier, are intuitive and rich in content. If an image cannot be effectively described, it is buried in the sea of information and cannot be retrieved when needed. Research on image retrieval has therefore become a focus in the field of information retrieval.
Image retrieval is a hot topic in the internet field. Most traditional image matching retrieval methods obtain the final retrieval result from global variables, for example by extracting the shape, color and texture of an image to establish an index; however, obtaining such features is often complex and is limited by factors such as subject position and object posture, so the retrieval result is not accurate. Image matching retrieval methods based on local variables commonly use corner detection, which can solve problems that global variables cannot, but the amount of calculation is large, many feature descriptors must be generated, and fast and accurate retrieval under massive data is difficult to achieve.
Disclosure of Invention
In order to solve the problem of poor accuracy of the conventional image matching retrieval, the invention aims to provide an image retrieval method based on big data.
The invention provides an image retrieval method based on big data, which comprises the following steps:
acquiring an RGB image to be retrieved, further acquiring an attention degree heat map corresponding to the RGB image to be retrieved, and determining a standard target area image corresponding to the RGB image to be retrieved according to the attention degree heat map corresponding to the RGB image to be retrieved;
determining the definition corresponding to the standard target area image according to the gray value of each pixel point in the standard target area image, and further determining the adaptive threshold corresponding to the RGB image to be retrieved;
determining a blocking coefficient corresponding to the standard target area image according to the definition and the size corresponding to the standard target area image, and further determining each sub-block image of the RGB image to be retrieved;
acquiring each RGB image to be matched in the RGB image library, and obtaining each sub-block image of each RGB image to be matched in the RGB image library according to the process of determining each sub-block image of the RGB image to be retrieved;
acquiring neighborhood pixel points of each pixel point in each sub-block image of the RGB image to be retrieved and each RGB image to be matched, and performing binarization processing on each sub-block image according to the gray values of each pixel point and the neighborhood pixel point in each sub-block image to obtain a binary feature matrix corresponding to each sub-block image, so as to obtain a one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and each RGB image to be matched;
determining the similarity degree between the RGB image to be retrieved and each RGB image to be matched according to the RGB image to be retrieved and the one-dimensional characteristic sequence corresponding to each sub-block image of each RGB image to be matched;
and determining a retrieval image corresponding to the RGB image to be retrieved according to each sub-block image of the RGB image to be retrieved and of each RGB image to be matched, the adaptive threshold corresponding to the RGB image to be retrieved, and the similarity between the RGB image to be retrieved and each RGB image to be matched.
Further, the step of determining a standard target area image corresponding to the RGB image to be retrieved includes:
judging whether the attention degree value of each pixel point in the attention degree heat map is smaller than an attention degree threshold value or not according to the attention degree value of each pixel point in the attention degree heat map corresponding to the RGB image to be retrieved, if the attention degree value of the pixel point is not smaller than the attention degree threshold value, judging the pixel point as a foreground pixel point in the attention degree heat map, and if not, judging the pixel point as a background pixel point in the attention degree heat map, thereby obtaining a binary mask map corresponding to the attention degree heat map;
acquiring a gray scale image corresponding to the RGB image to be retrieved, and multiplying a binary mask image corresponding to the attention degree heat image by the gray scale image corresponding to the RGB image to be retrieved so as to obtain a target area connected domain in the attention degree heat image;
and carrying out standardization processing on the target area connected domain so as to obtain a target area connected domain in a two-dimensional matrix form corresponding to the attention degree heat map, and taking the target area connected domain in the two-dimensional matrix form as a standard target area image corresponding to the RGB image to be retrieved.
Further, the calculation formula for determining the definition corresponding to the standard target area image is:
D(f) = Σ_y Σ_x ( |f(x, y) − f(x+1, y)| × |f(x, y) − f(x, y+1)| )
where D(f) is the definition corresponding to the standard target area image, f(x, y) is the gray value of the pixel point in the x-th row and y-th column of the standard target area image, f(x+1, y) is the gray value of the pixel point in the (x+1)-th row and y-th column, and f(x, y+1) is the gray value of the pixel point in the x-th row and (y+1)-th column.
Further, the calculation formula for determining the adaptive threshold corresponding to the RGB image to be retrieved is:
[formula rendered as an image in the original publication and not reproduced in this text]
where T′ is the adaptive threshold corresponding to the RGB image to be retrieved, T is the conventional threshold, t is the lowest empirical threshold, D(f)_max is the standard maximum definition corresponding to the standard target area image, D(f)_min is the standard minimum definition corresponding to the standard target area image, and D(f) is the definition corresponding to the standard target area image.
Further, the calculation formula for determining the block coefficient corresponding to the standard target area image is:
[formula rendered as an image in the original publication and not reproduced in this text]
where N′ is the block coefficient corresponding to the standard target area image, D(f)_max is the standard maximum definition corresponding to the standard target area image, D(f) is the definition corresponding to the standard target area image, D(f)_min is the standard minimum definition corresponding to the standard target area image, N is the number of sub-block types corresponding to the standard target area image, and ⌊·⌋ denotes rounding down.
Further, a calculation formula for performing binarization processing on each sub-block image is as follows:
[formula rendered as an image in the original publication and not reproduced in this text]
where K_{X,Y} is the gray value, after binarization, of the pixel point in the X-th row and Y-th column of each sub-block image, G_{X,Y} is the gray value of the pixel point in the X-th row and Y-th column of each sub-block image before binarization, m is the number of neighborhood pixel points of the pixel point in the X-th row and Y-th column of each sub-block image before binarization, and G^a_{X,Y} is the gray value of the a-th neighborhood pixel point of the pixel point in the X-th row and Y-th column of each sub-block image before binarization.
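The binarization rule itself is rendered as an image in the original; a plausible reading consistent with the variable list above, in which each pixel is compared with the mean gray value of its neighborhood pixels, can be sketched as follows (the comparison rule and function name are assumptions, not the patented formula):

```python
import numpy as np

def binarize_subblock(block: np.ndarray) -> np.ndarray:
    """Binarize a sub-block by comparing each pixel with the mean gray
    value of its 8-neighborhood (edge pixels use the neighbors that
    exist, so m varies at the border). Assumed reading of the patent's
    rule, whose exact formula is an image in the original."""
    h, w = block.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for x in range(h):
        for y in range(w):
            # gather the m neighborhood pixels of (x, y)
            neigh = [block[i, j]
                     for i in range(max(0, x - 1), min(h, x + 2))
                     for j in range(max(0, y - 1), min(w, y + 2))
                     if (i, j) != (x, y)]
            out[x, y] = 1 if block[x, y] >= np.mean(neigh) else 0
    return out
```

Applied to every sub-block, this yields the binary feature matrix that is then flattened into the one-dimensional feature sequence.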
Further, the step of determining the degree of similarity between the RGB image to be retrieved and each of the RGB images to be matched includes:
determining the similarity between the one-dimensional characteristic sequence corresponding to each sub-block image of the RGB image to be retrieved and the one-dimensional characteristic sequence corresponding to each sub-block image of each RGB image to be matched according to the RGB image to be retrieved and the one-dimensional characteristic sequence corresponding to each sub-block image of each RGB image to be matched;
according to the similarity between the one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to each sub-block image of each RGB image to be matched, determining a similarity mean value between the one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to each sub-block image of each RGB image to be matched, and taking the similarity mean value as the similarity degree between the RGB image to be retrieved and the corresponding RGB image to be matched.
Further, a calculation formula for determining the similarity between the one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to each sub-block image of each RGB image to be matched is as follows:
[formulas rendered as images in the original publication and not reproduced in this text]
where Q_o is the similarity between the one-dimensional feature sequence corresponding to the o-th sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to the o-th sub-block image of each RGB image to be matched, a² is the number of elements in the one-dimensional feature sequence corresponding to the o-th sub-block image of the RGB image to be retrieved, and the remaining symbols (rendered as images in the original) are, respectively, the matching degree of the q-th pair of elements in the two one-dimensional feature sequences, the q-th element of the sequence corresponding to the o-th sub-block image of the RGB image to be retrieved, and the q-th element of the sequence corresponding to the o-th sub-block image of each RGB image to be matched.
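Because the similarity formulas are rendered as images, the sketch below assumes the simplest consistent reading: the matching degree of the q-th pair of elements is 1 when they agree and 0 otherwise, Q_o is the average over all element pairs, and the image-level similarity is the mean of the per-sub-block similarities, as the text states. The function names are illustrative:

```python
import numpy as np

def sequence_similarity(seq_a, seq_b) -> float:
    """Q_o between two equal-length one-dimensional binary feature
    sequences: fraction of positions where the q-th elements agree
    (assumed matching-degree rule; the patent's formula is an image)."""
    a, b = np.asarray(seq_a), np.asarray(seq_b)
    assert a.shape == b.shape, "sequences must have equal length"
    return float(np.mean(a == b))

def image_similarity(blocks_a, blocks_b) -> float:
    """Similarity between two images: mean of the per-sub-block
    sequence similarities, per the step described in the text."""
    return float(np.mean([sequence_similarity(x, y)
                          for x, y in zip(blocks_a, blocks_b)]))
```

The equal-length requirement is what the normalization to a two-dimensional matrix (step (1-2-3)) is said to guarantee.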
Further, the step of determining the retrieval image corresponding to the RGB image to be retrieved includes:
screening each RGB image to be matched in the RGB image library according to the number of sub-block images of the RGB image to be retrieved and of each RGB image to be matched in the RGB image library, so as to obtain each first candidate retrieval RGB image in the RGB image library;
selecting a plurality of second candidate retrieval RGB images with the similarity not less than the adaptive threshold from each first candidate retrieval RGB image according to the adaptive threshold corresponding to the RGB image to be retrieved and the similarity between the RGB image to be retrieved and each first candidate retrieval RGB image;
and taking the second candidate retrieval RGB image with the maximum similarity as the retrieval image of the RGB image to be retrieved.
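The three-stage selection above can be sketched as follows, assuming the library stores, per image, the list of sub-block feature sequences (all names here are illustrative, not from the patent):

```python
def retrieve(query_blocks, library, similarity_fn, adaptive_threshold):
    """Three-stage selection: (1) keep library images whose sub-block
    count matches the query's, (2) keep those whose similarity to the
    query is at least the adaptive threshold, (3) return the image id
    with the highest similarity, or None if none qualifies.
    `library` maps image ids to lists of sub-block feature sequences."""
    # stage 1: same number of sub-block images as the query
    first = {k: v for k, v in library.items()
             if len(v) == len(query_blocks)}
    # stage 2: similarity not less than the adaptive threshold
    scored = {k: similarity_fn(query_blocks, v) for k, v in first.items()}
    second = {k: s for k, s in scored.items() if s >= adaptive_threshold}
    # stage 3: maximum similarity among the second candidates
    return max(second, key=second.get) if second else None
```

Returning None when no candidate passes the threshold is an assumption; the patent does not state the behavior for an empty second-candidate set.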
The invention has the following beneficial effects:
the method determines the standard target area image in the attention degree heat map corresponding to the RGB image to be retrieved through the RGB image to be retrieved. And determining the definition and the self-adaptive threshold value corresponding to the standard target area image, and further determining the one-dimensional characteristic sequence corresponding to the RGB image to be retrieved and each sub-block image of each RGB image to be matched, so as to determine the similarity between the RGB image to be retrieved and each RGB image to be matched. And determining a retrieval image corresponding to the RGB image to be retrieved according to the RGB image to be retrieved and each sub-block image of each RGB image to be matched in the RGB image library, the self-adaptive threshold value corresponding to the RGB image to be retrieved and the similarity between the RGB image to be retrieved and each RGB image to be matched.
According to the invention, the standard target area image corresponding to the RGB image to be retrieved is obtained through an electronic digital data processing technology, the image characteristic information of the standard target area image can represent the image characteristic information of the RGB image to be retrieved, the whole image is converted into the target area image of the image, the influence of external factors on the image characteristic information can be effectively avoided, and the accuracy of image retrieval is improved. The definition corresponding to the standard target area image is obtained by calculating data related to the pixel gray value in the target area image, and the self-adaptive threshold value corresponding to the RGB image to be retrieved is determined according to the definition corresponding to the standard target area image. Therefore, the adaptive threshold value corresponding to the RGB image to be retrieved can better determine the retrieval image corresponding to the RGB image to be retrieved. The method comprises the steps of determining a RGB image to be retrieved and a one-dimensional feature sequence corresponding to each sub-block image of each RGB image to be matched, calculating the similarity between the RGB image to be retrieved and each RGB image to be matched through the one-dimensional feature sequence corresponding to each sub-block image, wherein the one-dimensional feature sequence corresponding to each sub-block image can accurately represent key feature information in the image, and the accuracy of the similarity between the RGB image to be retrieved and each RGB image to be matched is improved. 
According to the method, the retrieval image corresponding to the RGB image to be retrieved is determined according to the RGB image to be retrieved, each sub-block image of each RGB image to be matched, the self-adaptive threshold value corresponding to the RGB image to be retrieved, and the similarity between the RGB image to be retrieved and each RGB image to be matched.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a big data based image retrieval method of the present invention;
FIG. 2 is a schematic diagram of a target area connected domain in a two-dimensional matrix form in an embodiment of the present invention;
fig. 3 shows distribution positions of neighboring pixel points of pixel points located at an edge of a sliding window according to an embodiment of the present invention.
Detailed Description
To further explain the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description of the embodiments, structures, features and effects of the technical solutions according to the present invention will be given with reference to the accompanying drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
First, image styles vary widely: colors, gradients, shapes and so on change from image to image, and when an image is affected by illumination, its colors and gradients change greatly, which makes matching retrieval of the same image difficult. However, an external influence such as illumination is approximately uniform within a local area of the image, in terms of the change of that area's color and gradient. Therefore, in this embodiment, the one-dimensional feature sequence of an image is generated by converting the image into a simple, easy-to-process binary feature map. To ensure intra-class aggregation and inter-class separation of the one-dimensional feature sequences, a blocking rule is obtained adaptively; the degree of similarity between images is obtained by calculating the similarity between the one-dimensional feature sequences corresponding to the image blocks; and the retrieval image corresponding to an image is obtained from the adaptively obtained threshold and the degree of similarity between images, thereby improving the accuracy of image retrieval.
Based on the above analysis, the present embodiment provides an image retrieval method based on big data, as shown in fig. 1, the method includes the following steps:
(1) Obtaining an RGB image to be retrieved, further obtaining the attention degree heat map corresponding to the RGB image to be retrieved, and determining the standard target area image corresponding to the RGB image to be retrieved according to that attention degree heat map, by the following steps:
and (1-1) acquiring the RGB image to be retrieved, and further acquiring a attention degree heat map corresponding to the RGB image to be retrieved.
An RGB (Red, Green, Blue) image on which the image retrieval operation needs to be performed is obtained and called the RGB image to be retrieved; an RGB image is a visible light image. To facilitate subsequent image matching retrieval, this embodiment performs a preprocessing operation on the RGB image to be retrieved, after which the attention degree heat map corresponding to the RGB image to be retrieved is obtained. The larger the attention degree value of a pixel point in the attention degree heat map, the greater the attention paid to that pixel point.
The preprocessing of the RGB image to be retrieved is as follows: the RGB image to be retrieved is input into a pre-constructed and trained DNN (Deep Neural Network); the encoder first obtains the image feature information corresponding to the RGB image to be retrieved using convolution and pooling operations, the decoder then reconstructs this image feature information using deconvolution and unpooling operations to obtain the attention degree heat map, and the attention degree heat map corresponding to the RGB image to be retrieved is output.
It should be noted that the model structure of the DNN neural network is in an Encoder-Decoder form, the training data of the network is a plurality of RGB images, the label data of the network is an attention heat map with gradual change characteristics corresponding to the plurality of RGB images, and the loss function of the network is a mean square error loss function. The construction and training process of the DNN neural network is prior art and is not within the scope of the present invention, and will not be described in detail herein.
And (1-2) determining a standard target area image corresponding to the RGB image to be retrieved according to the attention degree heat map corresponding to the RGB image to be retrieved.
In conventional image retrieval, the whole image is used as the input for search matching, which places high requirements on image quality. In fact, the key feature information in an image plays the main role in the retrieval process, while the background feature information only affects retrieval accuracy to a certain extent. Therefore, in order to reduce the influence of background feature information on retrieval and to improve the retrieval accuracy and retrieval speed, this embodiment performs segmentation processing on the attention degree heat map corresponding to the RGB image to be retrieved to obtain the standard target area image corresponding to the RGB image to be retrieved, with the following steps:
(1-2-1) judging whether the attention degree value of each pixel point in the attention degree heat map is smaller than an attention degree threshold value or not according to the attention degree value of each pixel point in the attention degree heat map corresponding to the RGB image to be retrieved, if the attention degree value of the pixel point is not smaller than the attention degree threshold value, judging the pixel point to be a foreground pixel point in the attention degree heat map, and if not, judging the pixel point to be a background pixel point in the attention degree heat map, thereby obtaining a binary mask map corresponding to the attention degree heat map.
In order subsequently to obtain the binary mask map corresponding to the attention degree heat map, this embodiment presets the attention degree threshold and uses it to binarize the attention degree heat map: according to the attention degree value of each pixel point in the attention degree heat map corresponding to the RGB image to be retrieved and the preset attention degree threshold, it is judged whether each pixel point's attention degree value is smaller than the threshold.
And when the attention degree value of any pixel is not less than the attention degree threshold, judging that the pixel is a pixel in the foreground characteristic in the RGB image to be retrieved, wherein the pixel in the foreground characteristic is also a foreground pixel, and otherwise, judging that the pixel is a pixel in the background characteristic in the RGB image to be retrieved, and the pixel in the background characteristic is also a background pixel. The foreground feature is composed of pixels of which the attention degree values are larger than or equal to the attention degree threshold, the background feature is composed of pixels of which the attention degree values are smaller than the attention degree threshold, the pixels in the foreground feature are marked as 1, the pixels in the background feature are marked as 0, and therefore a binary mask image corresponding to the attention degree heat map is generated.
(1-2-2) acquiring a gray scale image corresponding to the RGB image to be retrieved, and multiplying the binary mask image corresponding to the attention degree heat image and the gray scale image corresponding to the RGB image to be retrieved to obtain a target area connected domain in the attention degree heat image.
In this embodiment, the RGB image to be retrieved is subjected to graying processing, so as to obtain a grayscale image corresponding to the RGB image to be retrieved. The graying process is prior art and is not within the scope of the present invention, and will not be described in detail herein. And then, multiplying the binary mask image corresponding to the attention degree heat map and the gray scale image corresponding to the RGB image to be retrieved to obtain a multiplied image, wherein the multiplied image is used as a target area connected domain in the attention degree heat map. Thus, the image segmentation processing of the RGB image to be retrieved is completed.
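Steps (1-2-1) and (1-2-2) can be sketched together as follows (the function name and array layout are illustrative assumptions, not part of the patent):

```python
import numpy as np

def target_region(heat: np.ndarray, gray: np.ndarray,
                  attention_threshold: float) -> np.ndarray:
    """Binarize the attention degree heat map against the attention
    threshold (foreground = 1 where attention >= threshold, background
    = 0) and multiply the resulting mask with the grayscale image to
    obtain the target area connected domain."""
    mask = (heat >= attention_threshold).astype(gray.dtype)
    return mask * gray
```

Background pixels are zeroed out, so only the gray values of the foreground (high-attention) region survive the multiplication.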
(1-2-3) carrying out standardization processing on the target area connected domain so as to obtain a target area connected domain in a two-dimensional matrix form corresponding to the attention degree heat map, and taking the target area connected domain in the two-dimensional matrix form as a standard target area image corresponding to the RGB image to be retrieved.
Because the target area connected domain obtained by image segmentation is irregular, the image is difficult to process and difficult to normalize. In this embodiment, in order to improve the subsequent image retrieval matching speed, the target area connected domain is normalized: specifically, a zero padding operation is performed on the irregular target area connected domain to generate a target area connected domain in two-dimensional matrix form, as shown in fig. 2, and this target area connected domain in two-dimensional matrix form is used as the normalized target area image corresponding to the RGB image to be retrieved.
It should be noted that obtaining the target area connected domain in two-dimensional matrix form reduces the amount of calculation when the similarity of the one-dimensional feature sequences is subsequently determined, and that the normalized target area image is more conducive to generating sequence data of equal length, which improves the retrieval matching speed.
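The normalization step can be sketched as below; since fig. 2 is not reproduced here, cropping to the bounding box of the non-zero pixels (with the zeros inside the box serving as the padding) is an assumed reading of the zero padding operation:

```python
import numpy as np

def normalize_region(region: np.ndarray) -> np.ndarray:
    """Turn an irregular target area connected domain into a
    rectangular two-dimensional matrix: crop to the bounding box of
    the non-zero pixels; zeros remaining inside the box act as the
    zero padding of the irregular boundary."""
    ys, xs = np.nonzero(region)
    if ys.size == 0:
        # empty region: nothing to normalize
        return np.zeros((0, 0), dtype=region.dtype)
    return region[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

The resulting rectangular matrix is what later steps treat as the normalized target area image.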
(2) Determining the definition corresponding to the standard target area image according to the gray value of each pixel point in the standard target area image, and further determining the self-adaptive threshold corresponding to the RGB image to be retrieved, wherein the method comprises the following steps of:
and (2-1) determining the definition corresponding to the standard target area image according to the gray value of each pixel point in the standard target area image.
It should be noted that an image with higher definition has more high-frequency components, so the pixel difference between an abrupt-change pixel and its adjacent pixels is larger, while the opposite holds for a more blurred image. For an image with higher definition, subsequently choosing smaller sub-block images lets the one-dimensional feature sequence achieve better inter-class separation; for a blurred image, because the pixel difference between an abrupt-change pixel and its adjacent pixels is small, features acquired from sub-blocks of different sizes have a high repetition rate, and larger sub-block images must be selected to obtain more distinguishable feature points. Therefore, this embodiment needs to determine the definition corresponding to the normalized target area image.
In this embodiment, the gray difference between each pixel point in the normalized target area image and its right-side neighboring pixel point is calculated, together with the gray difference between each pixel point and its lower-side neighboring pixel point; these two differences are multiplied pixel by pixel and accumulated, and the larger the result, the greater the definition of the normalized target area image. The calculation formula for the definition corresponding to the normalized target area image is:
D(f) = Σ_y Σ_x ( |f(x, y) − f(x+1, y)| × |f(x, y) − f(x, y+1)| )
where D(f) is the definition corresponding to the normalized target area image, f(x, y) is the gray value of the pixel point in the x-th row and y-th column of the normalized target area image, f(x+1, y) is the gray value of the pixel point in the (x+1)-th row and y-th column, and f(x, y+1) is the gray value of the pixel point in the x-th row and (y+1)-th column.
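The definition formula translates directly into code; this sketch assumes the image is given as a NumPy array of gray values:

```python
import numpy as np

def definition(f: np.ndarray) -> float:
    """D(f) = sum over pixels of |f(x,y)-f(x+1,y)| * |f(x,y)-f(x,y+1)|:
    the absolute vertical and horizontal gray differences at each pixel
    are multiplied and accumulated over the image."""
    f = f.astype(np.int64)              # avoid uint8 overflow in differences
    dx = np.abs(f[:-1, :] - f[1:, :])   # |f(x,y) - f(x+1,y)|, next row
    dy = np.abs(f[:, :-1] - f[:, 1:])   # |f(x,y) - f(x,y+1)|, next column
    # multiply on the common region where both differences exist
    return float(np.sum(dx[:, :-1] * dy[:-1, :]))
```

A sharper image yields larger products of adjacent differences and hence a larger D(f), matching the discussion above.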
And (2-2) determining an adaptive threshold value corresponding to the RGB image to be retrieved according to the definition corresponding to the standard target area image.
It should be noted that the adaptive threshold corresponding to the normalized target area image is used subsequently to screen the RGB images to be matched in the RGB image library, that is, to judge the degree of similarity between the RGB image to be retrieved and each RGB image to be matched. The calculation formula for the adaptive threshold corresponding to the RGB image to be retrieved is:
T′ = t + (T − t) × (D(f) − D(f)_min) / (D(f)_max − D(f)_min)

where T′ is the adaptive threshold corresponding to the RGB image to be retrieved, T is the conventional threshold, t is the lowest empirical threshold, D(f)_max is the standard maximum definition corresponding to the normalized target area image, D(f)_min is the standard minimum definition corresponding to the normalized target area image, and D(f) is the definition corresponding to the normalized target area image.
It should be noted that images of different definition may yield different degrees of similarity, so the threshold needs to be adjusted adaptively according to the definition. The clearer the normalized target area image corresponding to the RGB image to be retrieved, the stricter its threshold constraint should be, that is, the larger its corresponding adaptive threshold. The less clear the normalized target area image, the more the image pixels have been changed by external interference, so the threshold constraint may be relaxed slightly, that is, the smaller its corresponding adaptive threshold.
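The threshold formula itself survives only as an image in the source; the sketch below assumes a linear interpolation between the lowest empirical threshold t and the conventional threshold T, which matches the stated monotonicity (clearer image, stricter threshold) but may differ from the patent's exact expression. The default values are invented for illustration:

```python
def adaptive_threshold(d, d_min, d_max, t_low=0.6, t_conv=0.9):
    """Interpolate the matching threshold between the lowest empirical
    threshold t_low and the conventional threshold t_conv: a clearer
    image (larger definition d) gets a stricter (larger) threshold.
    The linear form and the default values are illustrative assumptions."""
    ratio = (d - d_min) / (d_max - d_min)
    return t_low + (t_conv - t_low) * ratio
```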
(3) And determining a blocking coefficient corresponding to the standard target area image according to the definition and the size corresponding to the standard target area image, and further determining each sub-block image of the RGB image to be retrieved.
And (3-1) determining a blocking coefficient corresponding to the standard target area image according to the definition and the size corresponding to the standard target area image.
It should be noted that the size of each sub-block image in the normalized target area image is obtained adaptively according to the definition and size of the normalized target area image, and the size types of the sub-block images are usually 8×8, 16×16, 32×32, …, (8×2^n)×(8×2^n). The higher the definition of the image, the smaller its corresponding sub-block image, down to a minimum size of 8×8; the lower the definition of the image, the larger its corresponding sub-block image, up to an (8×2^n)×(8×2^n) sub-block image whose size is close to that of the original image.
First, the selectable sub-block size types corresponding to the normalized target area image are obtained from its size; a selectable sub-block type cannot be larger than the normalized target area image itself. For example, when the size of the normalized target area image is 128×128, there are 5 corresponding sub-block types: 8×8, 16×16, 32×32, 64×64, and 128×128. Then, the standard minimum definition and standard maximum definition corresponding to the normalized target area image are obtained; these refer to the definition of the normalized target area image under extreme conditions. The blocking coefficient corresponding to the normalized target area image is determined from the selectable sub-block size types, the definition, the standard minimum definition, and the standard maximum definition, and the calculation formula is:
N′ = N − ⌊(N − 1)(D(f) − D(f)_min) / (D(f)_max − D(f)_min)⌋

where N′ is the blocking coefficient corresponding to the normalized target area image, D(f)_max is the standard maximum definition corresponding to the normalized target area image, D(f) is the definition corresponding to the normalized target area image, D(f)_min is the standard minimum definition corresponding to the normalized target area image, N is the number of sub-block types corresponding to the normalized target area image, and ⌊·⌋ denotes rounding down.
When the definition corresponding to the standard target area image is larger, the blocking coefficient N 'corresponding to the standard target area image is smaller, and when the definition corresponding to the standard target area image is smaller, the blocking coefficient N' corresponding to the standard target area image is larger.
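The blocking-coefficient formula also survives only as an image in the source; the sketch below assumes N′ falls linearly from N (least clear) to 1 (clearest) with rounding down, which reproduces the stated behavior but is not guaranteed to be the patent's exact expression:

```python
import math

def blocking_coefficient(d, d_min, d_max, n_types):
    """Map definition d onto a blocking coefficient in [1, n_types]:
    a clearer image gets a smaller coefficient (smaller sub-blocks).
    The linear interpolation is an illustrative assumption."""
    ratio = (d - d_min) / (d_max - d_min)
    return n_types - math.floor((n_types - 1) * ratio)
```

For a 128×128 normalized target area image (5 sub-block types), the clearest image maps to coefficient 1 (8×8 sub-blocks) and the least clear to coefficient 5 (128×128).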
And (3-2) determining each sub-block image of the RGB image to be retrieved according to the block coefficient corresponding to the standard target area image.
It should be noted that, because image sizes differ, the number of elements in the numerical sequence generated from the feature points differs as well; when this number is large, the computational complexity is high and the response speed is low. Partitioning the images into blocks reduces the computational complexity, but because each sub-block image contains fewer image feature points, the collision rate easily rises, that is, when the feature-sequence difference is used to measure the similarity of two different images, the different images may be regarded as the same image. Therefore, sub-block images of different sizes need to be obtained adaptively according to the image resolution, which reduces the computational difficulty while preserving intra-class aggregation and inter-class separation of the feature sequences.
In this embodiment, each sub-block image of the RGB image to be retrieved is determined by the block coefficient N' corresponding to the normative target area image obtained in step (3-1), and a calculation formula thereof is:
a × a = (8 × 2^(N′−1)) × (8 × 2^(N′−1))
where, a × a is the size of each sub-block image of the RGB image to be retrieved, and N' is a blocking coefficient corresponding to the normative target area image.
When the remaining region of the image is not large enough to form an a × a block, a zero-padding operation is performed so that every sub-block image has the same size a × a; at this point, this embodiment has obtained each sub-block image of the RGB image to be retrieved. It should be noted that the larger the blocking coefficient N′ corresponding to the normalized target area image, the larger the size a × a of each sub-block image of the RGB image to be retrieved, and the smaller N′, the smaller a × a.
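The partition-with-zero-padding step can be sketched as follows (the function name is illustrative; `img` is assumed to be a 2-D array):

```python
import numpy as np

def split_into_blocks(img, a):
    """Partition a 2-D image into non-overlapping a-by-a sub-block images,
    zero-padding the right and bottom edges when the image size is not a
    multiple of a, as described in the embodiment."""
    h, w = img.shape
    ph = (a - h % a) % a   # rows of zero padding needed at the bottom
    pw = (a - w % a) % a   # columns of zero padding needed on the right
    padded = np.pad(img, ((0, ph), (0, pw)))
    return [padded[r:r + a, c:c + a]
            for r in range(0, padded.shape[0], a)
            for c in range(0, padded.shape[1], a)]
```

A 10×10 image with a = 8 is padded to 16×16 and yields four 8×8 sub-block images, three of them partially zero-filled.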
(4) And acquiring each RGB image to be matched in the RGB image library, and obtaining each sub-block image of each RGB image to be matched in the RGB image library according to the process of determining each sub-block image of the RGB image to be retrieved.
It should be noted that the sub-block images of each RGB image to be matched in the RGB image library may be used for a primary screening of the library, that is, to discard those RGB images to be matched whose number of sub-block images differs greatly from that of the RGB image to be retrieved.
Each RGB image to be matched in the RGB image library is acquired, and, by referring to the process of determining the sub-block images of the RGB image to be retrieved in steps (1) to (3), the sub-block images of each RGB image to be matched in the RGB image library are determined. Since the procedure for obtaining the sub-block images of each RGB image to be matched is analogous to that for the RGB image to be retrieved, it is not elaborated here.
(5) Acquiring neighborhood pixel points of each pixel point in each sub-block image of the RGB image to be retrieved and each RGB image to be matched, carrying out binarization processing on each sub-block image according to the gray values of each pixel point and the neighborhood pixel point in each sub-block image to obtain a binary feature matrix corresponding to each sub-block image, and further obtaining a one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and each RGB image to be matched.
It should be noted that, the value range of the gray value of the pixel point is [0,255], and if the gray value of the pixel point at this time is directly used for image similarity matching, not only the number of times of the similarity matching is increased, but also the calculation amount of the similarity matching is increased, thereby reducing the speed of determining the retrieval image. Based on the above analysis, the present embodiment performs binarization processing on each sub-block image, and includes the steps of:
and (5-1) acquiring the RGB image to be retrieved and the neighborhood pixel point of each pixel point in each sub-block image of each RGB image to be matched.
In this embodiment, a sliding window of size 3 × 3 is established on each sub-block image. According to the sliding-window region corresponding to each pixel, the neighborhood pixel points of each pixel in the sub-block image are obtained within that region. A neighborhood pixel point is a pixel adjacent to the given pixel inside the sliding-window region, and the number of neighborhood pixel points depends on the position of the pixel within the region. For example, if a pixel is located at the center of its sliding-window region, it has 8 neighborhood pixel points; if it is located at the edge of the region, it has fewer than 8. As shown in fig. 3, which illustrates the distribution of the neighborhood pixel points of a pixel located at the edge of the sliding window, the edge pixel is drawn in black and has 5 neighborhood pixel points.
And (5-2) carrying out binarization processing on each sub-block image according to the gray values of each pixel point and the neighborhood pixel points in each sub-block image to obtain a binary characteristic matrix corresponding to each sub-block image, and further obtaining the RGB image to be retrieved and a one-dimensional characteristic sequence corresponding to each sub-block image of each RGB image to be matched.
In this embodiment, according to the gray value of each pixel point and its neighborhood pixel point in each sub-block image, each sub-block image is binarized, and the calculation formula for binarizing each sub-block image is as follows:
K_(X,Y) = 1, if G_(X,Y) ≥ (1/m) ∑_{a=1}^{m} G^a_(X,Y); K_(X,Y) = 0, otherwise

where K_(X,Y) is the gray value, after binarization, of the pixel in the X-th row and Y-th column of each sub-block image, G_(X,Y) is the gray value of that pixel before binarization, m is the number of neighborhood pixel points of the pixel in the X-th row and Y-th column before binarization, and G^a_(X,Y) is the gray value of the a-th neighborhood pixel point of the pixel in the X-th row and Y-th column before binarization.
By carrying out binarization processing on each sub-block image, the gray value of each pixel point in each sub-block image after binarization processing can be obtained. And obtaining a binary characteristic matrix of each sub-block image based on the gray value of each pixel point in each sub-block image after binarization processing, and converting the binary characteristic matrix of each sub-block image into a one-dimensional characteristic sequence so as to obtain the one-dimensional characteristic sequence corresponding to each sub-block image.
The step of converting the binary feature matrix of each sub-block image is as follows: splice the second row of pixel points in the binary feature matrix behind the first row, splice the third row behind the second row, and so on, thereby obtaining the one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and of each RGB image to be matched. In this embodiment, the one-dimensional feature sequence corresponding to the o-th sub-block image of the RGB image to be retrieved is recorded as M_o = {M_o^1, M_o^2, …, M_o^(a²)}, and the one-dimensional feature sequence corresponding to the o-th sub-block image of each RGB image to be matched is recorded as S_o = {S_o^1, S_o^2, …, S_o^(a²)}.
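Steps (5-1) and (5-2) together can be sketched as below, assuming (as the reconstructed formula suggests) that each pixel is compared with the mean gray value of its neighbors inside the 3×3 window; the function name and the ≥ comparison direction are illustrative:

```python
import numpy as np

def binarize_and_flatten(block):
    """Binarize a sub-block image by comparing each pixel with the mean
    gray value of its neighborhood pixels in a 3x3 sliding window (the
    pixel itself excluded), then flatten the binary feature matrix row
    by row into a one-dimensional feature sequence."""
    h, w = block.shape
    g = block.astype(np.float64)
    out = np.zeros((h, w), dtype=np.uint8)
    for x in range(h):
        for y in range(w):
            neigh = g[max(0, x - 1):x + 2, max(0, y - 1):y + 2]
            m = neigh.size - 1                   # number of neighborhood pixels
            mean = (neigh.sum() - g[x, y]) / m   # mean of the m neighbors
            out[x, y] = 1 if g[x, y] >= mean else 0
    return out.ravel()                           # row-wise 1-D feature sequence
```

Edge and corner pixels naturally get m < 8 neighbors, matching the fig. 3 discussion.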
(6) Determining the similarity between the RGB image to be retrieved and each RGB image to be matched according to the RGB image to be retrieved and the one-dimensional characteristic sequence corresponding to each sub-block image of each RGB image to be matched, wherein the steps comprise:
and (6-1) according to the RGB image to be retrieved and the one-dimensional feature sequence corresponding to each sub-block image of each RGB image to be matched, determining the similarity between the one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to each sub-block image of each RGB image to be matched.
In this embodiment, take as an example the determination of the similarity between the one-dimensional feature sequence corresponding to the o-th sub-block image of the RGB image to be retrieved and that corresponding to the o-th sub-block image of each RGB image to be matched. According to the one-dimensional feature sequence M_o = {M_o^1, M_o^2, …, M_o^(a²)} corresponding to the o-th sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence S_o = {S_o^1, S_o^2, …, S_o^(a²)} corresponding to the o-th sub-block image of each RGB image to be matched, the similarity between the two sequences is calculated by the formula:
Q_o = (1/a²) ∑_{q=1}^{a²} F_o^q

F_o^q = 1 if M_o^q = S_o^q, and F_o^q = 0 otherwise

where Q_o is the similarity between the one-dimensional feature sequence corresponding to the o-th sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to the o-th sub-block image of each RGB image to be matched, a² is the number of elements in the one-dimensional feature sequence corresponding to the o-th sub-block image of the RGB image to be retrieved, F_o^q is the matching degree of the q-th pair of elements in the two one-dimensional feature sequences, M_o^q is the q-th element in the one-dimensional feature sequence corresponding to the o-th sub-block image of the RGB image to be retrieved, and S_o^q is the q-th element in the one-dimensional feature sequence corresponding to the o-th sub-block image of each RGB image to be matched.
And referring to a determination process of the similarity between the one-dimensional feature sequence corresponding to the o-th sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to the o-th sub-block image of each RGB image to be matched, so as to obtain the similarity between the one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to each sub-block image of each RGB image to be matched.
(6-2) according to the similarity between the one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to each sub-block image of each RGB image to be matched, determining a similarity mean value between the one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to each sub-block image of each RGB image to be matched, and taking the similarity mean value as the similarity degree between the RGB image to be retrieved and the corresponding RGB image to be matched.
Calculating the similarity mean value between the one-dimensional characteristic sequence corresponding to each sub-block image of the RGB image to be retrieved and the one-dimensional characteristic sequence corresponding to each sub-block image of each RGB image to be matched, wherein the calculation formula is as follows:
Q̄ = (1/O) ∑_{o=1}^{O} Q_o

where Q̄ is the similarity mean value between the one-dimensional feature sequences corresponding to the sub-block images of the RGB image to be retrieved and those of each RGB image to be matched, Q_o is the similarity between the one-dimensional feature sequence corresponding to the o-th sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to the o-th sub-block image of each RGB image to be matched, and O is the number of sub-block images of the RGB image to be retrieved.
It should be noted that the similarity mean value between each sub-block image of the RGB image to be retrieved and each sub-block image of each RGB image to be matched may represent the similarity degree between the RGB image to be retrieved and each RGB image to be matched, and at this time, the similarity degree between the RGB image to be retrieved and each RGB image to be matched is obtained in this embodiment.
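The per-block similarity and its mean over all sub-blocks can be sketched as follows (sequences are assumed to be equal-length lists of 0/1 values; function names are illustrative):

```python
def sequence_similarity(m_seq, s_seq):
    """Q_o: fraction of element pairs that match between two binary
    one-dimensional feature sequences of equal length a*a."""
    assert len(m_seq) == len(s_seq)
    matches = sum(1 for mq, sq in zip(m_seq, s_seq) if mq == sq)
    return matches / len(m_seq)

def image_similarity(m_blocks, s_blocks):
    """Mean of the per-sub-block similarities Q_o over the O sub-blocks:
    the degree of similarity between the two images."""
    return sum(sequence_similarity(m, s)
               for m, s in zip(m_blocks, s_blocks)) / len(m_blocks)
```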
(7) According to the RGB image to be retrieved and each sub-block image of each RGB image to be matched in the RGB image library, the self-adaptive threshold value corresponding to the RGB image to be retrieved and the similarity degree between the RGB image to be retrieved and each RGB image to be matched, the retrieval image corresponding to the RGB image to be retrieved is determined, and the steps comprise:
(7-1) screening each RGB image to be matched in the RGB image library according to the RGB image to be searched and the number of each sub-block image of each RGB image to be matched in the RGB image library, so as to obtain each first candidate search RGB image in the RGB image library.
In this embodiment, according to the number of sub-block images of the RGB image to be retrieved and of each RGB image to be matched in the RGB image library, the RGB images to be matched whose number of sub-block images equals that of the RGB image to be retrieved are selected from the library and taken as the first candidate retrieval RGB images, thereby obtaining each first candidate retrieval RGB image in the RGB image library.
And (7-2) selecting a plurality of second candidate retrieval RGB images with the similarity not less than the adaptive threshold from each first candidate retrieval RGB image according to the adaptive threshold corresponding to the RGB image to be retrieved and the similarity between the RGB image to be retrieved and each first candidate retrieval RGB image.
In this embodiment, the degree of similarity Q̄ between the RGB image to be retrieved and each first candidate retrieval RGB image is judged against the adaptive threshold T′ corresponding to the RGB image to be retrieved. When the degree of similarity Q̄ between the RGB image to be retrieved and any first candidate retrieval RGB image is not less than the adaptive threshold T′, that is, Q̄ ≥ T′, the RGB image to be retrieved and that first candidate retrieval RGB image are considered to match, and that first candidate retrieval RGB image is taken as a second candidate retrieval RGB image. When the degree of similarity Q̄ is less than the adaptive threshold T′, that is, Q̄ < T′, the RGB image to be retrieved is considered not to match the first candidate retrieval RGB image, and that image is discarded. Thus, this embodiment obtains the several second candidate retrieval RGB images, among the first candidate retrieval RGB images, whose degree of similarity is not less than the adaptive threshold.
And (7-3) taking the second candidate retrieval RGB image with the maximum similarity as the retrieval image of the RGB image to be retrieved.
In this embodiment, the second candidate retrieval RGB images are sorted by their degree of similarity Q̄. For example, if the RGB image library contains L second candidate retrieval RGB images, the degrees of similarity between the RGB image to be retrieved and the L second candidate retrieval RGB images are sorted, the second candidate retrieval RGB image with the maximum degree of similarity is output preferentially, and that image is taken as the retrieval image corresponding to the RGB image to be retrieved. In addition, when the degree of similarity Q̄ between the RGB image to be retrieved and a second candidate retrieval RGB image is 1, the sub-block images of the RGB image to be retrieved match the sub-block images of that second candidate retrieval RGB image completely; that second candidate retrieval RGB image is then the best retrieval image corresponding to the RGB image to be retrieved, that is, it is most likely to be the RGB image desired by the user.
Thus, the embodiment obtains the retrieval image of the RGB image to be retrieved.
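The whole of step (7) — primary screening by sub-block count, thresholding, and ranking — can be sketched end to end; the dictionary layout, helper names, and return convention are illustrative assumptions:

```python
def image_similarity(m_blocks, s_blocks):
    """Mean fraction of matching elements over corresponding sub-block
    one-dimensional feature sequences."""
    return sum(sum(1 for a, b in zip(m, s) if a == b) / len(m)
               for m, s in zip(m_blocks, s_blocks)) / len(m_blocks)

def retrieve(query_blocks, t_adaptive, library):
    """Screen and rank the library: `library` maps image ids to lists of
    binary 1-D feature sequences; returns the best-matching id or None."""
    # (7-1) primary screening: keep images with the same number of sub-blocks
    first = {k: v for k, v in library.items() if len(v) == len(query_blocks)}
    # (7-2) keep candidates whose degree of similarity reaches the threshold
    scored = {k: image_similarity(query_blocks, v) for k, v in first.items()}
    second = {k: s for k, s in scored.items() if s >= t_adaptive}
    # (7-3) return the candidate with the maximum degree of similarity
    return max(second, key=second.get) if second else None
```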
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (9)

1. An image retrieval method based on big data is characterized by comprising the following steps:
acquiring an RGB image to be retrieved, further acquiring an attention degree heat map corresponding to the RGB image to be retrieved, and determining a standard target area image corresponding to the RGB image to be retrieved according to the attention degree heat map corresponding to the RGB image to be retrieved;
determining the definition corresponding to the standard target area image according to the gray value of each pixel point in the standard target area image, and further determining the self-adaptive threshold corresponding to the RGB image to be retrieved;
determining a blocking coefficient corresponding to the standard target area image according to the definition and the size corresponding to the standard target area image, and further determining each sub-block image of the RGB image to be retrieved;
acquiring each RGB image to be matched in the RGB image library, and obtaining each sub-block image of each RGB image to be matched in the RGB image library according to the process of determining each sub-block image of the RGB image to be retrieved;
acquiring neighborhood pixel points of each pixel point in each sub-block image of the RGB image to be retrieved and each RGB image to be matched, and performing binarization processing on each sub-block image according to the gray values of each pixel point and the neighborhood pixel point in each sub-block image to obtain a binary feature matrix corresponding to each sub-block image, so as to obtain a one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and each RGB image to be matched;
determining the similarity degree between the RGB image to be retrieved and each RGB image to be matched according to the RGB image to be retrieved and the one-dimensional characteristic sequence corresponding to each sub-block image of each RGB image to be matched;
and determining a retrieval image corresponding to the RGB image to be retrieved according to the RGB image to be retrieved, each sub-block image of each RGB image to be matched, the self-adaptive threshold value corresponding to the RGB image to be retrieved and the similarity between the RGB image to be retrieved and each RGB image to be matched.
2. The image retrieval method based on big data as claimed in claim 1, wherein the step of determining the canonical target area image corresponding to the RGB image to be retrieved comprises:
judging whether the attention degree value of each pixel point in the attention degree heat map is smaller than an attention degree threshold value or not according to the attention degree value of each pixel point in the attention degree heat map corresponding to the RGB image to be retrieved, if the attention degree value of the pixel point is not smaller than the attention degree threshold value, judging the pixel point as a foreground pixel point in the attention degree heat map, and if not, judging the pixel point as a background pixel point in the attention degree heat map, thereby obtaining a binary mask map corresponding to the attention degree heat map;
acquiring a gray scale image corresponding to the RGB image to be retrieved, and multiplying a binary mask image corresponding to the attention degree heat image by the gray scale image corresponding to the RGB image to be retrieved so as to obtain a target area connected domain in the attention degree heat image;
and carrying out standardization processing on the target area connected domain so as to obtain a target area connected domain in a two-dimensional matrix form corresponding to the attention degree heat map, and taking the target area connected domain in the two-dimensional matrix form as a standard target area image corresponding to the RGB image to be retrieved.
3. The image retrieval method based on big data according to claim 1, wherein the calculation formula for determining the definition corresponding to the normative target area image is as follows:
D(f) = ∑_y ∑_x ( |f(x, y) − f(x+1, y)| × |f(x, y) − f(x, y+1)| )

wherein D(f) is the definition corresponding to the normalized target area image, f(x, y) is the gray value of the pixel in the x-th row and y-th column of the normalized target area image, f(x+1, y) is the gray value of the pixel in the (x+1)-th row and y-th column, and f(x, y+1) is the gray value of the pixel in the x-th row and (y+1)-th column.
4. The image retrieval method based on big data as claimed in claim 3, wherein the calculation formula for determining the adaptive threshold corresponding to the RGB image to be retrieved is:
T′ = t + (T − t) × (D(f) − D(f)_min) / (D(f)_max − D(f)_min)

wherein T′ is the adaptive threshold corresponding to the RGB image to be retrieved, T is the conventional threshold, t is the lowest empirical threshold, D(f)_max is the standard maximum definition corresponding to the normalized target area image, D(f)_min is the standard minimum definition corresponding to the normalized target area image, and D(f) is the definition corresponding to the normalized target area image.
5. The image retrieval method based on big data as claimed in claim 1, wherein the calculation formula for determining the blocking coefficient corresponding to the normative target area image is:
N′ = N − ⌊(N − 1)(D(f) − D(f)_min) / (D(f)_max − D(f)_min)⌋

wherein N′ is the blocking coefficient corresponding to the normalized target area image, D(f)_max is the standard maximum definition corresponding to the normalized target area image, D(f) is the definition corresponding to the normalized target area image, D(f)_min is the standard minimum definition corresponding to the normalized target area image, N is the number of sub-block types corresponding to the normalized target area image, and ⌊·⌋ denotes rounding down.
6. The big data-based image retrieval method according to claim 1, wherein a calculation formula for performing binarization processing on each sub-block image is as follows:
K_(X,Y) = 1, if G_(X,Y) ≥ (1/m) ∑_{a=1}^{m} G^a_(X,Y); K_(X,Y) = 0, otherwise

wherein K_(X,Y) is the gray value, after binarization, of the pixel in the X-th row and Y-th column of each sub-block image, G_(X,Y) is the gray value of that pixel before binarization, m is the number of neighborhood pixel points of the pixel in the X-th row and Y-th column before binarization, and G^a_(X,Y) is the gray value of the a-th neighborhood pixel point of the pixel in the X-th row and Y-th column before binarization.
7. The big-data-based image retrieval method according to claim 1, wherein the step of determining the degree of similarity between the RGB image to be retrieved and each of the RGB images to be matched comprises:
determining the similarity between the one-dimensional characteristic sequence corresponding to each sub-block image of the RGB image to be retrieved and the one-dimensional characteristic sequence corresponding to each sub-block image of each RGB image to be matched according to the RGB image to be retrieved and the one-dimensional characteristic sequence corresponding to each sub-block image of each RGB image to be matched;
according to the similarity between the one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to each sub-block image of each RGB image to be matched, determining a similarity mean value between the one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to each sub-block image of each RGB image to be matched, and taking the similarity mean value as the similarity degree between the RGB image to be retrieved and the corresponding RGB image to be matched.
8. The big data-based image retrieval method according to claim 7, wherein the calculation formula for determining the similarity between the one-dimensional feature sequence corresponding to each sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to each sub-block image of each RGB image to be matched is:

$$Q_{o}=\frac{1}{a^{2}}\sum_{q=1}^{a^{2}}P_{q}^{o}, \qquad P_{q}^{o}=\begin{cases}1, & x_{q}^{o}=y_{q}^{o}\\ 0, & x_{q}^{o}\neq y_{q}^{o}\end{cases}$$

where $Q_{o}$ is the similarity between the one-dimensional feature sequence corresponding to the o-th sub-block image of the RGB image to be retrieved and the one-dimensional feature sequence corresponding to the o-th sub-block image of each RGB image to be matched, $a^{2}$ is the number of elements in the one-dimensional feature sequence corresponding to the o-th sub-block image of the RGB image to be retrieved, $P_{q}^{o}$ is the matching degree of the q-th pair of elements in the two sequences, $x_{q}^{o}$ is the q-th element in the one-dimensional feature sequence corresponding to the o-th sub-block image of the RGB image to be retrieved, and $y_{q}^{o}$ is the q-th element in the one-dimensional feature sequence corresponding to the o-th sub-block image of each RGB image to be matched.
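The claim-8 similarity and the claim-7 averaging step can be sketched together, assuming (consistently with the binarization of claim 6) that the one-dimensional feature sequences are binary and that the matching degree of a pair of elements is 1 when the elements are equal and 0 otherwise:

```python
def sequence_similarity(x, y):
    """Similarity Q_o between two equal-length one-dimensional feature
    sequences: the fraction of element pairs whose matching degree is 1,
    where the matching degree is taken to be 1 iff the elements are equal."""
    assert len(x) == len(y), "sequences must pair element-by-element"
    matches = sum(1 for xq, yq in zip(x, y) if xq == yq)
    return matches / len(x)

def image_similarity(query_seqs, candidate_seqs):
    """Claim-7 aggregation: the degree of similarity between two images is
    the mean of the similarities of their corresponding sub-block sequences."""
    sims = [sequence_similarity(x, y) for x, y in zip(query_seqs, candidate_seqs)]
    return sum(sims) / len(sims)
```

For binary sequences this matching degree coincides with `1 - |x - y|`, so either reading of the per-pair formula gives the same result.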
9. The big data-based image retrieval method according to claim 1, wherein the step of determining the retrieval image corresponding to the RGB image to be retrieved comprises:

screening the RGB images to be matched in the RGB image library according to the number of sub-block images of the RGB image to be retrieved and of each RGB image to be matched in the RGB image library, so as to obtain each first candidate retrieval RGB image in the RGB image library;
selecting a plurality of second candidate retrieval RGB images with the similarity not less than the adaptive threshold from each first candidate retrieval RGB image according to the adaptive threshold corresponding to the RGB image to be retrieved and the similarity between the RGB image to be retrieved and each first candidate retrieval RGB image;
and taking the second candidate retrieval RGB image with the maximum similarity as the retrieval image of the RGB image to be retrieved.
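The second and third selection steps of claim 9 can be sketched as follows; the first-stage screening by sub-block count is assumed to have already produced the candidate set, and the names here are illustrative:

```python
def retrieve(candidate_sims, adaptive_threshold):
    """candidate_sims maps candidate image id -> similarity to the image to
    be retrieved. Keep candidates whose similarity is not less than the
    adaptive threshold (second candidates), then return the one with the
    maximum similarity, or None if no candidate clears the threshold."""
    second_stage = {
        img_id: sim
        for img_id, sim in candidate_sims.items()
        if sim >= adaptive_threshold
    }
    if not second_stage:
        return None
    return max(second_stage, key=second_stage.get)
```

How the adaptive threshold itself is computed is defined elsewhere in the patent and is simply passed in here.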
CN202210754888.2A 2022-06-30 2022-06-30 Image retrieval method based on big data Withdrawn CN115033721A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210754888.2A CN115033721A (en) 2022-06-30 2022-06-30 Image retrieval method based on big data


Publications (1)

Publication Number Publication Date
CN115033721A true CN115033721A (en) 2022-09-09

Family

ID=83126909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210754888.2A Withdrawn CN115033721A (en) 2022-06-30 2022-06-30 Image retrieval method based on big data

Country Status (1)

Country Link
CN (1) CN115033721A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117291945A (en) * 2023-11-24 2023-12-26 山东省济宁生态环境监测中心(山东省南四湖东平湖流域生态环境监测中心) Soil corrosion pollution detection and early warning method based on image data
CN117291945B (en) * 2023-11-24 2024-02-13 山东省济宁生态环境监测中心(山东省南四湖东平湖流域生态环境监测中心) Soil corrosion pollution detection and early warning method based on image data

Similar Documents

Publication Publication Date Title
CN109154978B (en) System and method for detecting plant diseases
CN108229526B (en) Network training method, network training device, image processing method, image processing device, storage medium and electronic equipment
CN107680054B (en) Multi-source image fusion method in haze environment
CN109558806B (en) Method for detecting high-resolution remote sensing image change
CN107967482A (en) Icon-based programming method and device
CN107038416B (en) Pedestrian detection method based on binary image improved HOG characteristics
CN108960404B (en) Image-based crowd counting method and device
CN111161222B (en) Printing roller defect detection method based on visual saliency
CN112966691A (en) Multi-scale text detection method and device based on semantic segmentation and electronic equipment
CN110503140B (en) Deep migration learning and neighborhood noise reduction based classification method
Li et al. Example-based image colorization via automatic feature selection and fusion
CN113888536B (en) Printed matter double image detection method and system based on computer vision
WO2021003378A1 (en) Computer vision systems and methods for blind localization of image forgery
CN109978848A (en) Method based on hard exudate in multiple light courcess color constancy model inspection eye fundus image
Srinivas et al. Remote sensing image segmentation using OTSU algorithm
CN113609984A (en) Pointer instrument reading identification method and device and electronic equipment
CN110472632B (en) Character segmentation method and device based on character features and computer storage medium
CN115033721A (en) Image retrieval method based on big data
CN110956184A (en) Abstract diagram direction determination method based on HSI-LBP characteristics
Han et al. Low contrast image enhancement using convolutional neural network with simple reflection model
CN113139544A (en) Saliency target detection method based on multi-scale feature dynamic fusion
CN112329793A (en) Significance detection method based on structure self-adaption and scale self-adaption receptive fields
CN110020986B (en) Single-frame image super-resolution reconstruction method based on Euclidean subspace group double-remapping
CN109299295B (en) Blue printing layout database searching method
CN112070116B (en) Automatic artistic drawing classification system and method based on support vector machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20220909)