CN105426914A - Image similarity detection method for position recognition - Google Patents

Image similarity detection method for position recognition

Info

Publication number
CN105426914A
CN105426914A (application CN201510807729.4A)
Authority
CN
China
Prior art keywords
image
super-pixel
block
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510807729.4A
Other languages
Chinese (zh)
Other versions
CN105426914B (en)
Inventor
李科
李钦
游雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PLA Information Engineering University
Original Assignee
PLA Information Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PLA Information Engineering University filed Critical PLA Information Engineering University
Priority to CN201510807729.4A priority Critical patent/CN105426914B/en
Publication of CN105426914A publication Critical patent/CN105426914A/en
Application granted granted Critical
Publication of CN105426914B publication Critical patent/CN105426914B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758Involving statistics of pixels or of feature values, e.g. histogram matching

Abstract

The invention relates to an image similarity detection method for position recognition, belonging to the technical field of image recognition. The method first segments an image into super-pixels, generates feature maps of the image with a CNN model, and computes a description vector for each super-pixel block; it then divides the image to be detected into uniform image blocks and computes a description vector for each image block from the super-pixel blocks it contains, forming the description matrix of the image; finally, it uses the image block description vectors to compute the similarity between corresponding image blocks of the two images, and the mean of the similarities of corresponding image blocks is the similarity between the two images sought by the invention. The method is highly robust: it recognizes the same scene effectively and accurately even when the scene content has changed, and it can also find the most similar image in an image sequence quickly and accurately.

Description

An image similarity detection method for position recognition
Technical field
The present invention relates to an image similarity detection method for position recognition, and belongs to the technical field of image recognition.
Background art
Image similarity detection is a core step in image matching, image retrieval and pattern recognition. In SLAM (Simultaneous Localization and Mapping) applications, loop-closure detection is required: by detecting the similarity between the first and the last images of a trajectory, the system determines whether it has returned to the same scene. In autonomous robot navigation and positioning, when a robot enters an environment for the second time it needs to determine its own position in that environment; indoors, among tall buildings, in caves and in other special scenes where positioning equipment cannot be used, the robot must rely on its internal sensing, and the image similarity detection method can then be used to find the same scene that the robot observed when it first visited the environment and thereby determine its position.
The key to computing the similarity of two images is to construct for each image a vector or matrix that describes its essential characteristics. Broadly speaking, construction methods fall into two classes. The first class describes the image as a whole, for example with a color histogram, an image aggregation vector or GIST. The image histogram can be regarded as a global feature of the image; because it is easy to obtain and to understand, it is widely used for image description. However, the histogram does not consider the spatial relations between pixels, so different images may have similar histograms. Moreover, describing an image with a histogram lacks robustness: when the image resolution or the ambient lighting changes, or when some objects disappear from the scene or new objects appear, the histogram also changes markedly.
The second class of methods describes the image with local features, such as SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features), which describe the image blocks around detected feature points and thereby describe the image. A typical method uses the BoW (bag-of-words) model: all feature-point description vectors of an image are projected onto a vocabulary, and finally a description vector reflecting which vocabulary words the image contains is built for the image. The BoW model achieves good results in image classification, object recognition and content-based image retrieval (CBIR). FAB-MAP (Fast Appearance-Based Mapping) is a place-recognition and map-building technique widely used for loop-closure detection, in which the BoW model builds a description vector for every frame of the test video: first, the feature points of all frames of the test video are extracted and a description vector is computed for each feature point; then the K-means method clusters all extracted feature vectors to build the vocabulary; finally the feature points of each frame are projected onto the vocabulary to build the frame's description vector. Building frame description vectors with the BoW model in this way generally consumes a large amount of time and memory, and when the number of features used to build the vocabulary is too large, clustering with K-means becomes impractical.
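As a concrete illustration of this prior-art pipeline, the following is a minimal sketch assuming OpenCV's SIFT and scikit-learn's KMeans; FAB-MAP's actual implementation differs, and the vocabulary size here is an arbitrary choice:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

sift = cv2.SIFT_create()

def build_vocabulary(frames, vocab_size=500):
    """Cluster SIFT descriptors from all frames into a visual vocabulary."""
    descs = []
    for frame in frames:
        _, d = sift.detectAndCompute(frame, None)
        if d is not None:
            descs.append(d)
    return KMeans(n_clusters=vocab_size, n_init=10).fit(np.vstack(descs))

def bow_vector(frame, vocab):
    """Normalized histogram of vocabulary-word occurrences for one frame."""
    _, d = sift.detectAndCompute(frame, None)
    hist = np.bincount(vocab.predict(d), minlength=vocab.n_clusters)
    return hist / hist.sum()
```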
Summary of the invention
The object of the invention is to provide an image similarity detection method for position recognition, so as to solve the problems of low robustness and heavy computation in current image similarity detection.
To solve the above technical problems, the invention provides an image similarity detection method for position recognition comprising the following steps:
1) perform super-pixel segmentation on the original images to be detected to obtain super-pixel blocks;
2) generate feature maps of the original images to be detected with a convolutional neural network model, map each super-pixel block onto the feature maps of every layer, and compute the description vector of each super-pixel block;
3) divide each original image to be detected into uniform image blocks, and compute the description vector of each image block from the super-pixel blocks it contains;
4) use the image block description vectors so obtained to compute the similarity between corresponding image blocks of the two images to be detected; the mean of the similarities of corresponding image blocks is the similarity between the images.
In step 2), the description vector of each super-pixel block is computed as follows:
A. apply the convolutional neural network model to the original image to generate a number of intermediate layers, choose all feature maps of M output layers as the feature maps of the original image to be detected, and resize them to the original image size;
B. for each super-pixel block of the original image, compute the information entropy of all pixels in the corresponding region of every lower convolutional output-layer feature map, producing for each super-pixel block a description vector whose dimension equals the number of lower convolutional output-layer feature maps;
C. for each super-pixel block of the original image, compute the mean of all pixels in the corresponding region of every higher convolutional output-layer feature map, producing for each super-pixel block a description vector whose dimension equals the number of higher convolutional output-layer feature maps;
D. the combination of the description vectors obtained in steps B and C is the description vector of each super-pixel block.
In step B, the information entropy H of all pixels in a corresponding region is:

H = -\sum_{i=1}^{bins} p_i \log_2 p_i

p_i = n_i / total

where p_i is the probability of the i-th bin, the bins divide the interval between the maximum and minimum pixel values of the region into equal parts, n_i is the number of the region's pixels falling into the i-th bin, and total is the total number of pixels in the region.
In step 3), the description vector of each image block is:

v = \sum_{i=1}^{num} weight_i \cdot sp_i

where num is the number of super-pixel blocks contained in the image block, weight_i is the weight of the i-th super-pixel block, and sp_i is the description vector of the i-th super-pixel block.
The weight of each super-pixel block is:

weight = sp_num / total_num

where sp_num is the number of the super-pixel block's pixels lying inside the image block region, and total_num is the total number of pixels in the image block region.
In step 4), the similarity pat_simi between corresponding image blocks is:

pat_simi = v_1 \cdot v_2

where v_1 is the normalized description vector of image block 1 and v_2 is the normalized description vector of image block 2.
In step 1), super-pixel segmentation is performed with the simple linear iterative clustering method.
When computing the image block similarities, the image block description vectors of an image can be assembled into a description matrix; the dot product of the description matrix of the first image with the transpose of the description matrix of the second image yields the similarity matrix S, in which the element S_ij in row i and column j is the similarity between the i-th image block of the first image and the j-th image block of the second image, and the diagonal elements of S are the similarities of corresponding image blocks.
The beneficial effects of the invention are as follows. The invention first performs super-pixel segmentation on the image, generates feature maps of the image with a CNN model, and computes the description vector of each super-pixel block; it then divides the image to be detected into uniform image blocks, computes the description vector of each image block from the super-pixel blocks it contains, and assembles the description matrix of the image; finally it uses the image block description vectors to compute the similarity between corresponding image blocks of the two images to be detected, and the mean of the similarities of corresponding image blocks is the similarity between the two images sought by the invention. The invention is highly robust, computationally light and easy to implement: even when the content of a scene has changed it can still recognize the scene effectively and accurately, and at the same time it can find the most similar image in an image sequence quickly and accurately.
Brief description of the drawings
Fig. 1 is the flow chart for computing the super-pixel block description vectors;
Fig. 2-a is image 1# of the same-scene pair in experimental example 1;
Fig. 2-b is image 2# of the same-scene pair in experimental example 1;
Fig. 2-c is a schematic diagram of the similarity matrix of the same-scene image pair in experimental example 1;
Fig. 3-a is image 1# of the different-scene pair in experimental example 1;
Fig. 3-b is image 2# of the different-scene pair in experimental example 1;
Fig. 3-c is a schematic diagram of the similarity matrix of the different-scene image pair in experimental example 1;
Fig. 4 is the test image chosen in experimental example 2;
Fig. 5 is the most similar video frame found with the invention in experimental example 2;
Fig. 6 is the similarity curve obtained in experimental example 2.
Detailed description of embodiments
Specific embodiments of the invention are described further below with reference to the drawings.
The invention first performs super-pixel segmentation on the original images to be detected to obtain super-pixel blocks; it then generates feature maps of the original images with a convolutional neural network model, maps each super-pixel block onto the feature maps of every layer, and computes the description vector of each super-pixel block; each original image is divided into uniform image blocks, and the description vector of each image block is computed from the super-pixel blocks it contains; finally, the image block description vectors are used to compute the similarity between corresponding image blocks of the two images to be detected, and the mean of the similarities of corresponding image blocks is the similarity between the images. The concrete implementation steps of the method are as follows:
1. Perform super-pixel segmentation on the images to be detected
A super-pixel is a small region formed by a series of adjacent pixels with similar color, brightness and texture. These small regions mostly retain the information that is useful for further image segmentation, and generally do not break the boundary information of objects in the image. For an image, a single pixel has no practical meaning; humans obtain information about an image from regions made up of many pixels, so only pixels grouped together by common properties are meaningful. At the same time, because the number of super-pixels is far smaller than the number of pixels, describing the image directly in terms of super-pixels also greatly improves computational efficiency. This embodiment uses the simple linear iterative clustering (SLIC) method for super-pixel segmentation, which produces compact, regular super-pixel blocks that preserve the boundary information of objects.
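A minimal sketch of this segmentation step, assuming scikit-image's SLIC implementation (the patent does not prescribe a particular library, and the segment count here is an illustrative choice):

```python
import numpy as np
from skimage import io
from skimage.segmentation import slic

# Segment the image into compact, regular super-pixel blocks.
image = io.imread("scene.jpg")                        # hypothetical input file
labels = slic(image, n_segments=300, compactness=10)  # H x W map of super-pixel labels
num_superpixels = len(np.unique(labels))

# The pixel region of super-pixel block k is the boolean mask (labels == k).
```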
2. Compute the description vectors of the super-pixel blocks with a convolutional neural network
A convolutional neural network (CNN) is a multi-level network structure model formed by multiple trained stages, each stage usually comprising a convolution operation, a non-linear transformation and a pooling operation. The output of a lower stage is the input of the next higher stage, and the input of the lowest stage is the original image itself; the higher the layer, the more abstract its information and the richer its semantics. Every layer contains a large number of feature maps, and each feature map reflects the image information from a different aspect. A CNN model of L layers can be regarded as a series of linear operations, non-linear operations (such as sigmoid or tanh) and pooling operations (pool); this process can be defined as:

F_l = Pool(\tanh(W_l * F_{l-1} + b_l))   (1)

where F_l is the output of layer l, l ∈ {1, ..., L}, b_l is the bias parameter of layer l, and W_l is the convolution kernel of layer l. The source image can be regarded as F_0.
To obtain the feature maps of every layer, the invention upsamples the feature maps so that the feature maps of every layer have the same size as the source image, and stacks all feature maps into a three-dimensional matrix F ∈ R^{N×H×W}, where H is the image height, W is the image width and N is the number of feature maps. F can be expressed as:

F = [up(F_1), up(F_2), \ldots, up(F_L)]   (2)

where up is the upsampling operation and n_l is the number of feature maps of layer l, so that N = \sum_l n_l. For any pixel of the image, its description can then be expressed as p ∈ R^N.
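A sketch of this feature-map extraction and stacking, assuming a PyTorch VGG-16 backbone with forward hooks; the patent does not name the CNN model, and the tapped layer indices below are placeholders, not the 5th/13th/16th layers of the embodiment:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

cnn = models.vgg16(weights="IMAGENET1K_V1").features.eval()
tap_layers = [4, 12, 15]        # hypothetical indices of chosen conv output layers
captured = []

def hook(_module, _inputs, output):
    captured.append(output)

for idx in tap_layers:
    cnn[idx].register_forward_hook(hook)

image = torch.rand(1, 3, 224, 224)   # stand-in for the source image F_0
with torch.no_grad():
    cnn(image)

# Upsample every captured map to the source size and stack: F in R^{N x H x W} (eq. 2).
H, W = image.shape[2:]
stack = torch.cat([F.interpolate(c, size=(H, W), mode="bilinear", align_corners=False)
                   for c in captured], dim=1)[0]     # shape (N, H, W)
```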
Describing each super-pixel block with the information in these feature maps gives the super-pixel blocks strong expressive power. However, redundant information exists between some feature maps and would reduce computational efficiency, so this embodiment selects only the feature maps of some of the convolutional layers to build the super-pixel block description vectors, which improves computational efficiency while still guaranteeing the quality of the feature description. The construction process of the super-pixel block description vectors is shown in Fig. 1 and proceeds as follows:
A. Apply the convolutional neural network model to the original image to generate a number of intermediate layers, choose all feature maps of M output layers as the feature maps of the original image to be detected, and resize them to the original image size.
A CNN (convolutional neural network) model is applied to the image to generate a number of intermediate layers, and all feature maps of several convolutional output layers are chosen (the 5th, 13th and 16th layers in this embodiment, 64 + 256 + 256 = 576 feature maps in total); the 576 feature maps are resized to the original image size. Feature maps 1-64 belong to the lower convolutional output layer and preserve the boundary information of the image, while feature maps 65-576 belong to the higher convolutional output layers and carry stronger abstract semantic information.
B. For each super-pixel block of the original image, compute the information entropy of all pixels in the corresponding region of every lower convolutional output-layer feature map, producing for each super-pixel block a description vector whose dimension equals the number of lower convolutional output-layer feature maps.
In this embodiment the lower convolutional output layer comprises feature maps 1-64, and the information entropy of all pixels in the corresponding region of each of these maps is computed. Within the region, the maximum and minimum pixel values are found and the interval between them is divided into a number of equal-width bins; the number of the region's pixels falling into each bin, n_i (i = 1, 2, ..., bins), is counted; the probability of each bin is computed as p_i = n_i / total (where total is the total number of pixels in the region); and the information entropy H of the region is computed from these probabilities:

H = -\sum_{i=1}^{bins} p_i \log_2 p_i   (3)

The corresponding region of each super-pixel block of the original image is found on every feature map (since every feature map has been resized to the original image size, each super-pixel block region of the original image maps directly onto the feature map), and the information entropy of all pixels in that region is computed, producing a 64-dimensional description vector for each super-pixel block.
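A sketch of this entropy computation over one super-pixel region, reusing the `labels` map and the feature `stack` from the earlier sketches (and assuming they share the image size); the bin count is an illustrative choice:

```python
import numpy as np

def region_entropy(feature_map, mask, bins=16):
    """Information entropy (eq. 3) of one feature map over one super-pixel region."""
    vals = feature_map[mask]
    hist, _ = np.histogram(vals, bins=bins)   # equal-width bins between min and max
    p = hist / hist.sum()
    p = p[p > 0]                              # 0 * log2(0) is taken as 0
    return -np.sum(p * np.log2(p))

feats = stack.numpy()                         # (N, H, W) stack from the previous sketch
k = 1                                         # example super-pixel label
mask = labels == k                            # region of super-pixel block k

# 64-dimensional lower-layer descriptor of block k (feature maps 1-64).
low_desc = np.array([region_entropy(feats[c], mask) for c in range(64)])
```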
C. For each super-pixel block of the original image, compute the mean of all pixels in the corresponding region of every higher convolutional output-layer feature map, producing for each super-pixel block a description vector whose dimension equals the number of higher convolutional output-layer feature maps.
In this embodiment the higher convolutional output layers comprise feature maps 65-576, which are summarized by region averaging: the mean of all pixels in the corresponding region of each of these maps is computed, producing a 512-dimensional description vector for each super-pixel block.
D. The above computations finally produce one 576-dimensional vector that describes each super-pixel block.
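Continuing the sketch under the same assumptions, the higher-layer means (step C) and the final concatenation (step D) might look like this:

```python
# 512-dimensional higher-layer descriptor: region means of feature maps 65-576.
high_desc = np.array([feats[c][mask].mean() for c in range(64, 576)])

# Step D: concatenate the entropy part and the mean part into one 576-dim descriptor.
sp_desc = np.concatenate([low_desc, high_desc])   # shape (576,)
```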
3. Divide the image into image blocks of uniform size and compute the description vector of each image block from the super-pixel blocks it contains.
This embodiment divides the image into 4 × 4 image blocks of uniform size. The super-pixel blocks contained in each image block are counted, and each super-pixel block is given a weight according to the area it occupies in the image block region, i.e. the proportion of the image block's pixels that belong to the super-pixel block:

weight = sp_num / total_num   (4)

where sp_num is the number of the super-pixel block's pixels lying inside the image block region and total_num is the total number of pixels in the image block region.
The description vector of each image block is then computed from the weights of its super-pixel blocks:

v = \sum_{i=1}^{num} weight_i \cdot sp_i   (5)

where num is the number of super-pixel blocks contained in the image block, weight_i is the weight of the i-th super-pixel block, and sp_i is the description vector of the i-th super-pixel block.
The above steps yield a 576-dimensional description vector for each image block; each image block vector is normalized to obtain the final description of the image block.
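A sketch of this weighted aggregation for one image, assuming `labels` from the segmentation sketch and a dict `sp_descs` mapping each super-pixel label to its 576-dimensional descriptor:

```python
def image_description_matrix(labels, sp_descs, grid=4):
    """16 x 576 description matrix: one L2-normalized row per image block (eqs. 4-5)."""
    H, W = labels.shape
    rows = []
    for r in range(grid):
        for c in range(grid):
            block = labels[r * H // grid:(r + 1) * H // grid,
                           c * W // grid:(c + 1) * W // grid]
            total_num = block.size
            vec = np.zeros(576)
            for sp, sp_num in zip(*np.unique(block, return_counts=True)):
                vec += (sp_num / total_num) * sp_descs[sp]   # weight_i * sp_i
            rows.append(vec / np.linalg.norm(vec))           # normalization
    return np.array(rows)                                    # shape (16, 576)
```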
4. Compute the similarity of corresponding image blocks of the two images from the image block description vectors; the mean of the similarities of corresponding image blocks is the similarity of the two images.
The similarity between two images can be represented by the similarity of their corresponding image blocks, and the degree of similarity between two image blocks is reflected by the cosine (cos) of the angle between their description vectors: the larger the cosine, the more similar the image blocks, and if two image blocks are identical the cosine is 1. Since every image block description vector has been normalized to unit length, the dot product of two image block description vectors is the cosine of their angle:

pat_simi = v_1 \cdot v_2   (6)

In actual computation, the image block description vectors of an image can be assembled directly into a description matrix; the dot product of the description matrix of the first image with the transpose of the description matrix of the second image yields the 16 × 16 similarity matrix S, in which the element S_ij in row i and column j is the similarity between the i-th image block of the first image and the j-th image block of the second image, and the 16 diagonal elements of S are the similarities of corresponding image blocks.
The similarity Simi between the two images is obtained by computing the mean of the similarities of the corresponding image blocks; in this embodiment:

Simi = (1/16) \sum_{i=1}^{16} pat_simi(i)   (7)

The value Simi obtained by the above process is the image similarity sought by the invention.
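A sketch of this final step, assuming two description matrices built as in the previous sketch:

```python
def image_similarity(desc1, desc2):
    """Similarity of two images (eqs. 6-7): mean of the diagonal of S = D1 · D2^T."""
    S = desc1 @ desc2.T                # 16 x 16 similarity matrix of image blocks
    return float(np.mean(np.diag(S)))  # average of corresponding-block similarities
```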
Experimental analysis
Experimental example 1
The purpose of this experimental example is to verify the robustness of the invention. Similarity was computed for an image pair from the same scene whose content has changed locally and for an image pair from different scenes. The two chosen representative image pairs are shown in Fig. 2-a, Fig. 2-b, Fig. 3-a and Fig. 3-b: the images in Fig. 2-a and Fig. 2-b come from the same scene and differ by a local change in content, while the images in Fig. 3-a and Fig. 3-b come from different scenes. Each image was divided into 4 × 4 image blocks with the invention and the similarities between image blocks were computed; the resulting similarity matrices are shown in Fig. 2-c and Fig. 3-c, where the diagonal elements are the similarities of corresponding image blocks. The similarities of the two image pairs computed with formula (7) are 0.9434 and 0.5254 respectively.
According to these results, the similarity obtained for the image pair from the same scene is clearly higher than that for the pair from different scenes. For the same-scene pair, the diagonal elements of the similarity matrix are clearly higher than the off-diagonal elements; the pair has undergone a local change (a box appears in Fig. 2-b), and the diagonal elements of the locally changed image blocks are clearly lower than those of the other image blocks, so the approximate location of the change in a same-scene image pair can be detected from the data of the similarity matrix. For the image pair from different scenes, the diagonal elements are relatively low and show no obvious difference from the off-diagonal elements, and the computed image similarity is also lower.
Experimental example 2
The purpose of this experimental example is to verify the stability and feasibility of the invention in practical applications. The similarity detection method of the invention is used to search a captured video for the frame most similar to a test image, and the result is checked for acceptability. An indoor scene is taken as an example, with the following experimental procedure:
(1) Capture a video of the scene from arbitrary viewpoints around it (in the experiment the captured scene video has 2395 frames).
(2) Preprocess the scene video: compute the image description matrix of every frame, i.e. the 16 × 576 matrix formed by its image block description vectors, and store the matrices (in the experiment this produces a 2395 × 16 × 576 three-dimensional matrix).
(3) Return to the scene and capture one test image from an arbitrary viewpoint, requiring only that the captured content lie within the scene range covered by the video; compute the description matrix of this test image.
(4) Traverse the three-dimensional matrix stored in step (2) and use the present algorithm to find the frame most similar to the test image (a sketch of this traversal follows the list).
(5) Capture further scene images and find their most similar corresponding frames in the same way.
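A sketch of the traversal in step (4), assuming the stored per-frame description matrices are held in one NumPy array:

```python
def most_similar_frame(test_desc, frame_descs):
    """Find the video frame most similar to the test image.

    test_desc:   (16, 576) description matrix of the test image
    frame_descs: (num_frames, 16, 576) matrices precomputed from the video
    """
    # For each frame f, mean of the diagonal of test_desc @ frame_descs[f].T (eq. 7).
    sims = np.einsum('bd,fbd->f', test_desc, frame_descs) / 16.0
    best = int(np.argmax(sims))
    return best, sims[best], sims
```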
One of the test images is shown in Fig. 4; Fig. 5 is the frame found in the video that is most similar to the test image of Fig. 4, and Fig. 6 is the similarity curve between this test image and each of the 2395 video frames. Regarding detection time, the description vectors of the 2395 frames are built in advance and are not counted in the detection time; the time consumed consists mainly of computing the description matrix of the test image and traversing the video frames to find the most similar image, and this process takes 0.75 s (the experimental environment is 64-bit Linux Debian 7.5 with an Intel(R) Core(TM) i7-3632QM CPU @ 2.20 GHz and 4 GB of memory).
The experimental results show that the image found in the video as most similar to the test image is frame 566, and the similarity curve shows that the frames near frame 566 still have high similarity to the test image, because neighboring frames of a video generally share content. Nevertheless, frame 566 has the highest similarity to the test image (0.82), clearly higher than the other values; the detection result is essentially correct and the time consumed is small.
In summary, the invention has high robustness: even when the content of a scene has changed it can still recognize the scene effectively and accurately, and at the same time it can find the most similar image in an image sequence quickly and accurately.

Claims (8)

1. An image similarity detection method for position recognition, characterized in that the detection method comprises the following steps:
1) performing super-pixel segmentation on the original images to be detected to obtain super-pixel blocks;
2) generating feature maps of the original images to be detected with a convolutional neural network model, mapping each super-pixel block onto the feature maps of every layer, and computing the description vector of each super-pixel block;
3) dividing each original image to be detected into uniform image blocks, and computing the description vector of each image block from the super-pixel blocks it contains;
4) using the image block description vectors so obtained to compute the similarity between corresponding image blocks of the two images to be detected, the mean of the similarities of corresponding image blocks being the similarity between the images.
2. The image similarity detection method for position recognition according to claim 1, characterized in that in said step 2) the description vector of each super-pixel block is computed as follows:
A. applying the convolutional neural network model to the original image to generate a number of intermediate layers, choosing all feature maps of M output layers as the feature maps of the original image to be detected, and resizing them to the original image size;
B. computing, for each super-pixel block of the original image, the information entropy of all pixels in the corresponding region of every lower convolutional output-layer feature map, producing for each super-pixel block a description vector whose dimension equals the number of lower convolutional output-layer feature maps;
C. computing, for each super-pixel block of the original image, the mean of all pixels in the corresponding region of every higher convolutional output-layer feature map, producing for each super-pixel block a description vector whose dimension equals the number of higher convolutional output-layer feature maps;
D. combining the description vectors obtained in steps B and C into the description vector of each super-pixel block.
3. The image similarity detection method for position recognition according to claim 2, characterized in that in said step B the information entropy H of all pixels in a corresponding region is:

H = -\sum_{i=1}^{bins} p_i \log_2 p_i

p_i = n_i / total

where p_i is the probability of the i-th bin, the bins divide the interval between the maximum and minimum pixel values of the region into equal parts, n_i is the number of the region's pixels falling into the i-th bin, and total is the total number of pixels in the region.
4. The image similarity detection method for position recognition according to claim 3, characterized in that in said step 3) the description vector of each image block is:

v = \sum_{i=1}^{num} weight_i \cdot sp_i

where num is the number of super-pixel blocks contained in the image block, weight_i is the weight of the i-th super-pixel block, and sp_i is the description vector of the i-th super-pixel block.
5. The image similarity detection method for position recognition according to claim 4, characterized in that the weight of each said super-pixel block is:

weight = sp_num / total_num

where sp_num is the number of the super-pixel block's pixels lying inside the image block region and total_num is the total number of pixels in the image block region.
6. The image similarity detection method for position recognition according to claim 5, characterized in that in said step 4) the similarity pat_simi between corresponding image blocks is:

pat_simi = v_1 \cdot v_2

where v_1 is the normalized description vector of image block 1 and v_2 is the normalized description vector of image block 2.
7. The image similarity detection method for position recognition according to claim 6, characterized in that said step 1) performs super-pixel segmentation with the simple linear iterative clustering method.
8. The image similarity detection method for position recognition according to claim 6, characterized in that when computing the image block similarities, the image block description vectors of an image can be assembled into a description matrix; the dot product of the description matrix of the first image with the transpose of the description matrix of the second image yields the similarity matrix S, in which the element S_ij in row i and column j is the similarity between the i-th image block of the first image and the j-th image block of the second image, and the diagonal elements of S are the similarities of corresponding image blocks.
CN201510807729.4A 2015-11-19 2015-11-19 Image similarity detection method for position recognition Active CN105426914B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510807729.4A CN105426914B (en) 2015-11-19 2015-11-19 Image similarity detection method for position recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510807729.4A CN105426914B (en) 2015-11-19 2015-11-19 Image similarity detection method for position recognition

Publications (2)

Publication Number Publication Date
CN105426914A true CN105426914A (en) 2016-03-23
CN105426914B CN105426914B (en) 2019-03-15

Family

ID=55505112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510807729.4A Active CN105426914B (en) Image similarity detection method for position recognition

Country Status (1)

Country Link
CN (1) CN105426914B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012148619A1 (en) * 2011-04-27 2012-11-01 Sony Corporation Superpixel segmentation methods and systems
CN104408405A (en) * 2014-11-03 2015-03-11 北京畅景立达软件技术有限公司 Face representation and similarity calculation method
CN104504055A (en) * 2014-12-19 2015-04-08 常州飞寻视讯信息科技有限公司 Commodity similarity calculation method and commodity recommending system based on image similarity
CN105005987A (en) * 2015-06-23 2015-10-28 中国人民解放军国防科学技术大学 SAR image superpixel generating method based on general gamma distribution

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956597A (en) * 2016-05-04 2016-09-21 浙江大学 Binocular stereo matching method based on convolution neural network
CN110050243A (en) * 2016-12-21 2019-07-23 英特尔公司 It is returned by using the enhancing nerve of the middle layer feature in autonomous machine and carries out camera repositioning
CN110050243B (en) * 2016-12-21 2022-09-20 英特尔公司 Camera repositioning by enhanced neural regression using mid-layer features in autonomous machines
CN106709462A (en) * 2016-12-29 2017-05-24 天津中科智能识别产业技术研究院有限公司 Indoor positioning method and device
CN107330357A (en) * 2017-05-18 2017-11-07 东北大学 Vision SLAM closed loop detection methods based on deep neural network
CN109214235A (en) * 2017-06-29 2019-01-15 沈阳新松机器人自动化股份有限公司 outdoor scene classification method and system
CN107330127A (en) * 2017-07-21 2017-11-07 湘潭大学 A kind of Similar Text detection method retrieved based on textual image
CN107330127B (en) * 2017-07-21 2020-06-05 湘潭大学 Similar text detection method based on text picture retrieval
CN107992848A (en) * 2017-12-19 2018-05-04 北京小米移动软件有限公司 Obtain the method, apparatus and computer-readable recording medium of depth image
CN107992848B (en) * 2017-12-19 2020-09-25 北京小米移动软件有限公司 Method and device for acquiring depth image and computer readable storage medium
CN110322472A (en) * 2018-03-30 2019-10-11 华为技术有限公司 A kind of multi-object tracking method and terminal device
CN108829826A (en) * 2018-06-14 2018-11-16 清华大学深圳研究生院 A kind of image search method based on deep learning and semantic segmentation
CN108829826B (en) * 2018-06-14 2020-08-07 清华大学深圳研究生院 Image retrieval method based on deep learning and semantic segmentation
CN109271870A (en) * 2018-08-21 2019-01-25 平安科技(深圳)有限公司 Pedestrian recognition methods, device, computer equipment and storage medium again
CN109271870B (en) * 2018-08-21 2023-12-26 平安科技(深圳)有限公司 Pedestrian re-identification method, device, computer equipment and storage medium
CN109409418A (en) * 2018-09-29 2019-03-01 中山大学 A kind of winding detection method based on bag of words
CN109409418B (en) * 2018-09-29 2022-04-15 中山大学 Loop detection method based on bag-of-words model
CN110334226A (en) * 2019-04-25 2019-10-15 吉林大学 The depth image search method of fusion feature Distribution Entropy
CN110334226B (en) * 2019-04-25 2022-04-05 吉林大学 Depth image retrieval method fusing feature distribution entropy
CN110866532A (en) * 2019-11-07 2020-03-06 浙江大华技术股份有限公司 Object matching method and device, storage medium and electronic device
CN110866532B (en) * 2019-11-07 2022-12-30 浙江大华技术股份有限公司 Object matching method and device, storage medium and electronic device
CN112907644B (en) * 2021-02-03 2023-02-03 中国人民解放军战略支援部队信息工程大学 Machine map-oriented visual positioning method
CN112907644A (en) * 2021-02-03 2021-06-04 中国人民解放军战略支援部队信息工程大学 Machine map-oriented visual positioning method
CN113139589A (en) * 2021-04-12 2021-07-20 网易(杭州)网络有限公司 Picture similarity detection method and device, processor and electronic device
CN113139589B (en) * 2021-04-12 2023-02-28 网易(杭州)网络有限公司 Picture similarity detection method and device, processor and electronic device
CN113657415A (en) * 2021-10-21 2021-11-16 西安交通大学城市学院 Object detection method oriented to schematic diagram

Also Published As

Publication number Publication date
CN105426914B (en) 2019-03-15

Similar Documents

Publication Publication Date Title
CN105426914A (en) Image similarity detection method for position recognition
CN110059554B (en) Multi-branch target detection method based on traffic scene
CN107609601B (en) Ship target identification method based on multilayer convolutional neural network
Ding et al. A deeply-recursive convolutional network for crowd counting
CN109840556B (en) Image classification and identification method based on twin network
CN107016357A (en) A kind of video pedestrian detection method based on time-domain convolutional neural networks
CN110533048A (en) The realization method and system of combination semantic hierarchies link model based on panoramic field scene perception
CN109063549A (en) High-resolution based on deep neural network is taken photo by plane video moving object detection method
Yang et al. Visual SLAM based on semantic segmentation and geometric constraints for dynamic indoor environments
Ma et al. Scene invariant crowd counting using multi‐scales head detection in video surveillance
Wang et al. Ship target detection algorithm based on improved YOLOv3
Wang et al. Fusionnet: Coarse-to-fine extrinsic calibration network of lidar and camera with hierarchical point-pixel fusion
Lamba et al. A texture based mani-fold approach for crowd density estimation using Gaussian Markov Random Field
CN112668662B (en) Outdoor mountain forest environment target detection method based on improved YOLOv3 network
CN112906517B (en) Self-supervision power law distribution crowd counting method and device and electronic equipment
Li et al. Real-time crowd density estimation based on convolutional neural networks
Zhu et al. Multi-scale region-based saliency detection using W 2 distance on N-dimensional normal distributions
Hu et al. Convolutional neural networks with hybrid weights for 3D point cloud classification
Li et al. Research on YOLOv3 pedestrian detection algorithm based on channel attention mechanism
CN113139540B (en) Backboard detection method and equipment
Wang Motion recognition based on deep learning and human joint points
Zhang et al. Optimization research of UAV target detection algorithm based on deep learning
CN118229889B (en) Video scene previewing auxiliary method and device
Sahu et al. A deep learning-based classifier for remote sensing images
Norouzi Sefidmazgi et al. Improved background modeling of video sequences using spatio-temporal extension of fuzzy local binary pattern

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant