CN107609565B - Indoor visual positioning method based on image global feature principal component linear regression

Indoor visual positioning method based on image global feature principal component linear regression

Info

Publication number
CN107609565B
CN107609565B
Authority
CN
China
Prior art keywords
image
principal component
algorithm
database
linear regression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710861213.7A
Other languages
Chinese (zh)
Other versions
CN107609565A (en)
Inventor
谭学治
殷锡亮
马琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201710861213.7A priority Critical patent/CN107609565B/en
Publication of CN107609565A publication Critical patent/CN107609565A/en
Application granted granted Critical
Publication of CN107609565B publication Critical patent/CN107609565B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an indoor visual positioning method based on linear regression of the principal components of image global features, and relates to indoor visual positioning. It aims to solve the problems that traditional algorithms suffer from poor matching precision and unstable time consumption in the initial stage and place high demands on the offline database. The method comprises the following steps. Step one: extract feature information from the video database images with an image global feature algorithm. Step two: apply a principal component analysis algorithm to the feature information to extract the principal components of the database image feature information. Step three: apply a linear regression algorithm to the principal components of the database image feature information to map the position information corresponding to each video frame to the extracted feature information, generating a positioning model. Step four: extract feature information from the user positioning image with the GIST algorithm. Step five: input the feature information extracted in step four into the positioning model generated in step three to obtain the user position and the database video frames to be precisely matched. The invention is used in the technical fields of indoor positioning and image processing.

Description

Indoor visual positioning method based on image global feature principal component linear regression
Technical Field
The invention relates to the technical field of indoor positioning and image processing, in particular to an indoor visual positioning method.
Background
In the field of visual positioning, positioning must exploit rich image information, and the online-stage image matching algorithm of any visual indoor positioning method involves a large number of comparison operations, so the whole positioning stage is time-consuming. Matching algorithms built on the two-stage (coarse-then-fine) matching idea disregard the matching precision range in the coarse-matching stage and pursue matching speed excessively, so large positioning errors appear in the fine positioning stage. Moreover, the efficiency of traditional positioning algorithms is proportional to the scale of the data sampled in the offline stage, so traditional algorithms place high demands on the database generated offline, and an additional algorithm must be designed to control the capacity and quality of the offline database.
A typical coarse-matching algorithm is clustering: images with similar features are grouped into one cluster, so the images of the video database are divided into several clusters according to the key features of the positioning scene or the required positioning precision; a representative image is selected from each cluster, and its feature vector serves as the feature description of that cluster. When a user inputs an image to be positioned, the Euclidean distances between its feature vector and the representative feature vector of each cluster are computed and sorted in ascending order, the cluster representative with the smallest distance is output, and fine positioning continues from there. Theoretical analysis shows that for this type of algorithm the positioning accuracy is inversely related to the positioning speed: accuracy is proportional to the number of clusters while speed is inversely proportional to it, so accuracy and speed can only be traded off against each other according to actual needs.
To remedy these defects of the traditional algorithms, the invention adopts a new algorithm that both improves the positioning precision and accelerates the positioning. Moreover, for scenes whose one-dimensional positioning requirement tolerates errors below 2 m, the output of the algorithm can be used directly as the final positioning result, with no fine positioning stage needed.
Disclosure of Invention
The invention aims to solve the problems that traditional algorithms have poor matching precision and unstable time consumption in the initial stage and place high demands on the offline database, and provides an indoor visual positioning method based on linear regression of the principal components of image global features.
An indoor visual positioning method based on image global feature principal component linear regression (PCLR-GIST) comprises the following steps:
step one: extract feature information from the video database images with an image global feature algorithm to obtain a feature information sample matrix;
step two: apply a principal component analysis algorithm to the feature information extracted in step one to extract the principal components of the database image feature information;
step three: apply a linear regression algorithm to the principal components extracted in step two to map the position information corresponding to each video frame to the feature information extracted in step one and generate a positioning model;
step four: extract feature information from the user positioning image with the GIST algorithm;
step five: input the feature information extracted in step four into the positioning model generated in step three to obtain the user position and the database video frames to be precisely matched.
The invention has the beneficial effects that:
When the method is used for indoor visual coarse positioning, it requires less time and achieves higher precision. In the same positioning scene it imposes no requirements on the offline database, whereas the traditional clustering algorithm must select specific frames as cluster representatives according to the scene, which requires careful measurement; under a coarse-then-fine matching scheme the method also positions faster. In the offline stage, the GIST, PCA and LR algorithms generate the positioning model; in the online stage, from the GIST features of the user positioning image, the rough estimate of the user position and the reference database video frames required by the fine positioning stage are output in a single step.
According to the experimental results, in the same positioning scene the mean positioning time of the method is only 0.25 ms, versus 0.35 ms for the traditional clustering algorithm, a reduction of about 30%; the positioning precision is also greatly improved over the traditional clustering algorithm. The mean one-dimensional positioning error is 0.67 m, the minimum positioning error is 0 m, and the maximum positioning error is 1.98 m. The confidence probability of a positioning error within 1 m approaches 80%, versus only 60% for the traditional clustering algorithm.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a graph comparing the elapsed time of the clustering algorithm with the algorithm of the present invention when 1400 images are available in the video database;
FIG. 3 is a graph comparing elapsed time for clustering algorithm versus the algorithm of the present invention for 1200 images in a video database;
FIG. 4 is a graph comparing elapsed time for clustering algorithm and algorithm of the present invention for 800 images in a video database;
FIG. 5 is a graph comparing elapsed time of clustering algorithm and algorithm of the present invention for 600 images in a video database;
FIG. 6 is a comparison graph of the positioning accuracy of the clustering algorithm and the algorithm of the present invention.
Detailed Description
The first embodiment is as follows: as shown in fig. 1, an indoor visual positioning method based on image global feature principal component linear regression includes the following steps:
step one: extract feature information from the video database images with an image global feature algorithm to obtain a feature information sample matrix;
step two: apply a principal component analysis algorithm to the feature information extracted in step one to extract the principal components of the database image feature information;
step three: apply a linear regression algorithm to the principal components extracted in step two to map the position information corresponding to each video frame to the feature information extracted in step one and generate a positioning model;
step four: extract feature information from the user positioning image with the GIST algorithm (by the same method as step one);
step five: input the feature information extracted in step four into the positioning model generated in step three to obtain the user position and the database video frames to be precisely matched.
The GIST algorithm is an image global feature algorithm.
The second embodiment is as follows: the first difference between the present embodiment and the specific embodiment is: in the first step, feature information extraction is performed on the video database image by using an image global feature algorithm, and a specific process of obtaining a feature information sample matrix is as follows:
step one-one: preprocess the video database images:
First the image is preprocessed: when the original input image is not square (i.e., rectangular), take the line connecting the midpoints of its two long sides as the symmetry axis and, on each side of this axis, keep a strip of pixels whose width equals half the short side, which yields a centred square image; the rest is discarded. If the original image is already square, no cropping is performed. The square image is then scaled to 256 × 256 pixels and converted to a grayscale image;
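As a minimal sketch of this preprocessing step (assuming a NumPy uint8 image array; the function name, the nearest-neighbour resampling and the BT.601 grayscale weights are illustrative choices, not taken from the patent):

```python
import numpy as np

def preprocess(img: np.ndarray, out_size: int = 256) -> np.ndarray:
    """Crop to a centred square (half the short side kept on each side of the
    long sides' midline, as in step one-one), resize to out_size x out_size
    and convert to grayscale. `img` is an H x W or H x W x 3 uint8 array."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    sq = img[top:top + s, left:left + s]
    if sq.ndim == 3:
        # RGB -> grayscale with ITU-R BT.601 weights (an illustrative choice)
        sq = sq @ np.array([0.299, 0.587, 0.114])
    # nearest-neighbour resampling keeps the sketch dependency-free
    idx = np.minimum(np.arange(out_size) * s // out_size, s - 1)
    return sq[np.ix_(idx, idx)].astype(np.float64)
```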
step one-two: apply Gabor filtering to each grayscale image obtained in step one-one:
Perform the two-dimensional discrete Fourier transform of the grayscale image, as shown in formula (1):

I(f_X, f_Y) = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} i(x, y)\, h(x, y)\, e^{-j 2\pi (f_X x + f_Y y)/N} \quad (1)

where i(x, y) is the gray-value distribution of the image, x and y are the x-axis and y-axis coordinates of the spatial domain, f_X and f_Y are the frequency variables of the X and Y axes of the frequency domain, h(x, y) is a circular Hamming window function introduced to reduce edge effects, I(f_X, f_Y) is the value of the grayscale image after the two-dimensional discrete Fourier transform, N is the side length of the scaled image in pixels (here N = 256), and j is the imaginary unit;
gabor function calculation is performed by equation (2),
Figure BDA0001415113330000032
where l is the scale of the gray scale image, θiIs the angle of each direction in the l-scale, thetalIs the total number of directions at the scale of the grayscale image, thetai=π(k-1)/θlI is a count value of the direction angle, and generally 16 directions are taken, k is 1,2, …, θl,σ2Is the variance of a gaussian function and is,
Figure BDA0001415113330000041
and
Figure BDA0001415113330000042
is an intermediate variable, calculated by equation (3):
Figure BDA0001415113330000043
Multiply the two-dimensional discrete Fourier transform result I(f_X, f_Y) of the grayscale image by the Gabor transfer function, then apply the two-dimensional inverse Fourier transform to obtain the filtering result i'(x, y) of the grayscale image, as shown in formula (4):

i'(x, y) = \frac{1}{N^2} \sum_{f_X=0}^{N-1} \sum_{f_Y=0}^{N-1} I(f_X, f_Y)\, G(f_X, f_Y)\, e^{j 2\pi (f_X x + f_Y y)/N} \quad (4)
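Since the image of formula (2) is not recoverable, the sketch below substitutes a common frequency-domain Gabor construction: a Gaussian transfer function centred at an assumed radial frequency f0 along each direction θ_i, applied after a circular Hamming window, following formulas (1), (3) and (4). The parameters f0 and sigma and the function names are assumptions, not values from the patent:

```python
import numpy as np

def gabor_bank(n: int = 256, n_orient: int = 32, f0: float = 0.25,
               sigma: float = 0.05) -> list[np.ndarray]:
    """One-scale bank of frequency-domain Gabor transfer functions; f0
    (centre frequency, cycles/pixel) and sigma (Gaussian width) stand in
    for the patent's formula (2)."""
    f = np.fft.fftfreq(n)                     # normalised DFT frequencies
    fx, fy = np.meshgrid(f, f, indexing="xy")
    bank = []
    for i in range(n_orient):
        theta = np.pi * i / n_orient          # theta_i = pi*(i-1)/theta_l, 0-based
        # rotate the frequency coordinates into the filter's frame (formula (3))
        fxr = fx * np.cos(theta) + fy * np.sin(theta)
        fyr = -fx * np.sin(theta) + fy * np.cos(theta)
        bank.append(np.exp(-((fxr - f0) ** 2 + fyr ** 2) / (2 * sigma ** 2)))
    return bank

def gabor_filter(gray: np.ndarray, bank: list[np.ndarray]) -> list[np.ndarray]:
    """DFT -> pointwise product with each transfer function -> inverse DFT
    (formulas (1) and (4)); the magnitude of i'(x, y) is kept."""
    n = gray.shape[0]
    yy, xx = np.indices(gray.shape)
    c = (n - 1) / 2.0
    r = np.hypot(xx - c, yy - c)
    # circular Hamming window h(x, y) to reduce edge effects
    win = np.where(r <= c, 0.54 + 0.46 * np.cos(np.pi * r / c), 0.0)
    I = np.fft.fft2(gray * win)
    return [np.abs(np.fft.ifft2(I * G)) for G in bank]
```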
step one-three: express the global features of the image as a vector from the filtering result i'(x, y) of the grayscale image:
Divide the filtered grayscale image into 16 blocks on a 4 × 4 grid, compute in turn the gray-histogram statistics of the filtering results in each block for the different directions, and express the values as a row vector G, which serves as the global feature descriptor of the image, i.e., the feature information;
The dimension R(G) of the vector G is calculated from equation (5):

R(G) = n^2 \sigma \theta_l \quad (5)

where n² is the number of grid blocks, σ is the number of scale layers, and θ_l is the number of directions at each scale;
Repeat steps one-one to one-three for the M images in the database to obtain M row vectors G_1, …, G_M, which form the feature information sample matrix.
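A sketch of step one-three under the simplifying assumption that the per-block statistic is the mean filter response (the patent counts gray-histogram values per block); the helper names refer to the sketches above:

```python
import numpy as np

def gist_vector(responses: list[np.ndarray], n_grid: int = 4) -> np.ndarray:
    """Row vector G: each filter response is summarised over a 4 x 4 grid of
    blocks and the block statistics are concatenated, giving dimension
    R(G) = n_grid**2 * (number of filter responses)."""
    feats = []
    for resp in responses:
        b = resp.shape[0] // n_grid
        blocks = resp[:b * n_grid, :b * n_grid].reshape(n_grid, b, n_grid, b)
        # mean energy per block stands in for the per-block histogram statistic
        feats.append(blocks.mean(axis=(1, 3)).ravel())
    return np.concatenate(feats)

# Feature information sample matrix: one row vector G per database image, e.g.
# F = np.vstack([gist_vector(gabor_filter(preprocess(im), bank)) for im in db_images])
```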
Other steps and parameters are the same as those in the first embodiment.
The third embodiment: this embodiment differs from the first or second embodiment in that in step two, a principal component analysis algorithm is applied to the feature information extracted in step one; the specific process of extracting the principal components of the database image feature information is as follows:
step two-one: to prevent entries of the GIST vector with larger numerical values from swamping the smaller ones, normalize the data in the feature information sample matrix obtained in step one with the standard-deviation method, obtaining the normalized feature information sample matrix X:

x^* = (x - \mu)/\sigma

where x^* is the normalized data value, x is the initial data value, μ is the mean of the initial data, and σ is the standard deviation of the initial data;
step two-two: compute the correlation matrix R of the normalized feature information sample matrix X:

R = \frac{1}{M} X^{\mathrm{T}} X

where M is the number of database images;
step two-three: compute the eigenvalues λ_1, λ_2, …, λ_k of the correlation matrix R by the Jacobi method, and sort them, together with their eigenvectors, in descending order;
step two-four: calculate the cumulative contribution rate of the eigenvalues; given the threshold ψ, take the smallest w satisfying formula (6) as the number of selected eigenvalues; with the eigenvalues sorted as in step two-three, select the first w eigenvalues and form the matrix E from the corresponding first w eigenvectors;

\sum_{a=1}^{w} \lambda_a \Big/ \sum_{a=1}^{k} \lambda_a \geq \psi \quad (6)
step two-five: obtain each element of the principal component loading matrix L of the database images according to formula (7);

l_{cb} = \sqrt{\lambda_c}\, e_{cb} \quad (7)

where e_{cb} is the element in row c and column b of the matrix E, and λ_c is the c-th of the selected first w eigenvalues;
step two-six: obtain the principal components Z of the feature information according to formula (8);
Z=LX (8)。
other steps and parameters are the same as those in the first or second embodiment.
The fourth embodiment: this embodiment differs from the first through third embodiments in that in step three, a linear regression algorithm is applied to the principal components of the database image feature information extracted in step two to map the position information corresponding to the video frames to the feature information extracted in step one and generate the positioning model; the specific process is:
Map the feature principal component information Z to the video frame number P: perform linear regression modeling with the least-squares algorithm according to mapping formula (9) to obtain the mapping coefficients b_0, b_1, …, b_q;

P = b_0 Z_0 + b_1 Z_1 + \cdots + b_q Z_q \quad (9)

where Z_1, …, Z_q are the first through q-th principal components and Z_0 is a constant with value 1.
Other steps and parameters are the same as those in one of the first to third embodiments.
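A sketch of the least-squares fit of formula (9), assuming the M database frame numbers are supplied as a vector:

```python
import numpy as np

def fit_lr(Z: np.ndarray, frames: np.ndarray) -> np.ndarray:
    """Least squares for P = b0*Z0 + b1*Z1 + ... + bq*Zq with Z0 = 1:
    Z is the M x q principal-component matrix, frames the M frame numbers."""
    A = np.hstack([np.ones((len(Z), 1)), Z])       # prepend the constant Z0 = 1
    b, *_ = np.linalg.lstsq(A, frames, rcond=None)
    return b                                       # [b0, b1, ..., bq]

def predict_frame(b: np.ndarray, z: np.ndarray) -> float:
    """Predicted (fractional) database frame number for one query's PCs z."""
    return float(b[0] + z @ b[1:])
```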
The fifth embodiment: this embodiment differs from the first through fourth embodiments in that in step five, the feature information extracted in step four is input into the positioning model generated in step three to obtain the user position and the database video frames to be precisely matched; specifically:
According to formulas (8) and (9), the matching frame number of the user picture is given, and according to formula (10), the user position estimate is given:

A = \frac{P V}{R} \quad (10)

where R is the frame rate of the video database, V is the linear speed of the recording device along the travel path, and A is the distance travelled by the user along the motion direction.
Using a window function, several matched pictures are taken within a given step length above and below the given video frame number and used as the reference pictures for the fine positioning stage; a coarse position can also be given by an averaging algorithm, which helps to further improve the coarse positioning precision.
Other steps and parameters are the same as in one of the first to fourth embodiments.
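A sketch of the online stage under the reconstruction A = P·V/R of formula (10); the window half-width is an illustrative stand-in for the patent's step length:

```python
def locate(p_hat: float, rate: float, speed: float, n_frames: int,
           half_win: int = 5) -> tuple[float, list[int]]:
    """Map the predicted frame number p_hat to a travelled distance A = P*V/R
    and return the window of reference frames for the fine positioning stage."""
    A = p_hat * speed / rate                    # formula (10), as reconstructed above
    p = int(round(p_hat))
    window = [f for f in range(p - half_win, p + half_win + 1)
              if 0 <= f < n_frames]             # clamp to valid database frames
    return A, window
```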
The sixth embodiment: this embodiment differs from the first through fifth embodiments in that in step one-three, σ takes the value 1, θ_l takes the value 32, and n takes the value 4 (so that R(G) = 4² × 1 × 32 = 512).
Other steps and parameters are the same as those in one of the first to fifth embodiments.
The seventh embodiment: this embodiment differs from the first through sixth embodiments in that in step two-four, ψ takes the value 0.96.
Other steps and parameters are the same as those in one of the first to sixth embodiments.
Example:
1. In area 2A on the 12th floor of the science building of Harbin Institute of Technology, videos of the positioning area were collected with video acquisition equipment.
2. Features were extracted from the collected video file with the GIST algorithm, the principal components of the GIST features were extracted with the PCA algorithm, and the video frame numbers were mapped to the GIST feature principal components with the LR algorithm.
3. The user inputs a picture to be positioned, its GIST features are extracted, and the positioning model gives the predicted position and the matching database frame number.
4. Figs. 2-5 compare the time consumption of the traditional clustering algorithm and the proposed algorithm for different database video frame sampling scales.
5. The cumulative error probability curve obtained from the positioning results of the algorithm is shown in Fig. 6.
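Tying the sketches above together, the offline and online stages of this example might read as follows; preprocess, gabor_bank, gabor_filter, gist_vector, fit_pca, fit_lr and locate are the hypothetical helpers introduced with steps one to five, not functions disclosed by the patent:

```python
import numpy as np

def build_model(db_images, psi: float = 0.96):
    """Offline stage (steps one to three): GIST -> PCA -> LR positioning model."""
    bank = gabor_bank()
    F = np.vstack([gist_vector(gabor_filter(preprocess(im), bank))
                   for im in db_images])
    mu, sd, L, Z = fit_pca(F, psi)
    b = fit_lr(Z, np.arange(len(db_images), dtype=float))  # frame numbers 0..M-1
    return bank, mu, sd, L, b

def locate_user(model, query, rate: float, speed: float, n_frames: int):
    """Online stage (steps four and five): one query image -> position + frames."""
    bank, mu, sd, L, b = model
    g = gist_vector(gabor_filter(preprocess(query), bank))   # step four: GIST
    z = L @ ((g - mu) / sd)                                  # project onto the PCs
    p_hat = float(b[0] + z @ b[1:])                          # step five: frame number
    return locate(p_hat, rate, speed, n_frames)
```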
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (7)

1. An indoor visual positioning method based on image global feature principal component linear regression is characterized in that: the indoor visual positioning method based on principal component linear regression of image global features comprises the following steps:
the method comprises the following steps: extracting characteristic information of the video database image by using an image global characteristic algorithm to obtain a characteristic information sample matrix;
step two: extracting principal components of the database image feature information by using a principal component analysis algorithm for the feature information extracted in the step one;
step three: mapping the position information corresponding to the video frame and the characteristic information extracted in the first step by using a linear regression algorithm for the principal component of the database image characteristic information extracted in the second step to generate a positioning model;
step four: extracting characteristic information of the user positioning image by using a GIST algorithm;
step five: and inputting the feature information extracted in the fourth step into the positioning model generated in the third step to obtain the user position and the database video frame to be accurately matched.
2. The indoor visual positioning method based on image global feature principal component linear regression as claimed in claim 1, wherein in step one, feature information is extracted from the video database images with the image global feature algorithm, and the specific process of obtaining the feature information sample matrix is as follows:
step one-one: preprocessing the video database images:
when the original input image is not square, taking the line connecting the midpoints of its two long sides as the symmetry axis and, on each side of this axis, keeping a strip of pixels whose width equals half the short side, which yields a centred square image, and discarding the rest; if the original image is already square, performing no cropping; then scaling the square image and converting it to a grayscale image;
step one-two: applying Gabor filtering to each grayscale image obtained in step one-one:
performing the two-dimensional discrete Fourier transform of the grayscale image, as shown in formula (1):

I(f_X, f_Y) = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} i(x, y)\, h(x, y)\, e^{-j 2\pi (f_X x + f_Y y)/N} \quad (1)

where i(x, y) is the gray-value distribution of the image, x and y are the x-axis and y-axis coordinates of the image spatial domain, f_X and f_Y are the frequency variables of the X and Y axes of the spatial frequency domain, h(x, y) is the circular Hamming window function, I(f_X, f_Y) is the value of the grayscale image after the two-dimensional discrete Fourier transform, N is the side length of the scaled image in pixels, and j is the imaginary unit;
gabor function calculation is performed by equation (2),
Figure FDA0001415113320000021
where l is the scale of the gray scale image, θiIs the i-th direction angle in the l-scale, θlIs the total number of directions at the scale of the grayscale image, thetai=π(k-1)/θlI is a count value of the direction angle, k is 1,2, …, θl,σ2Is the variance of a gaussian function and is,
Figure FDA0001415113320000024
and
Figure FDA0001415113320000025
is an intermediate variable, calculated by equation (3):
Figure FDA0001415113320000022
multiplying the two-dimensional discrete Fourier transform result I(f_X, f_Y) of the grayscale image by the Gabor transfer function and performing the two-dimensional inverse Fourier transform to obtain the filtering result i'(x, y) of the grayscale image, as shown in formula (4):

i'(x, y) = \frac{1}{N^2} \sum_{f_X=0}^{N-1} \sum_{f_Y=0}^{N-1} I(f_X, f_Y)\, G(f_X, f_Y)\, e^{j 2\pi (f_X x + f_Y y)/N} \quad (4)
step one-three: expressing the global features of the image as a vector from the filtering result i'(x, y) of the grayscale image:
dividing the filtered grayscale image into 16 blocks on a 4 × 4 grid, counting in turn the gray-histogram statistics of the filtering results in each block for the different directions, and expressing the values as a row vector G, which serves as the global feature descriptor of the image, i.e., the feature information;
the dimension R(G) of the vector G is calculated from equation (5):

R(G) = n^2 \sigma \theta_l \quad (5)

where n² is the number of grid blocks, σ is the number of scale layers, and θ_l is the number of directions at each scale;
repeating steps one-one to one-three for the M images in the database to obtain M row vectors G_1, …, G_M, which form the feature information sample matrix.
3. The indoor visual positioning method based on image global feature principal component linear regression as claimed in claim 2, wherein: in the second step, a principal component analysis algorithm is used for the feature information extracted in the first step, and the specific process of extracting the principal component of the database image feature information is as follows:
step two-one: normalizing the data in the feature information sample matrix obtained in step one with the standard-deviation method, obtaining the normalized feature information sample matrix X;
step two-two: computing the correlation matrix R of the normalized feature information sample matrix X;
step two-three: computing the eigenvalues λ_1, λ_2, …, λ_k of the correlation matrix R by the Jacobi method, and sorting them, together with their eigenvectors, in descending order;
step two-four: calculating the cumulative contribution rate of the eigenvalues; given the threshold ψ, taking the smallest w satisfying formula (6) as the number of selected eigenvalues; with the eigenvalues sorted as in step two-three, selecting the first w eigenvalues and forming the matrix E from the corresponding first w eigenvectors;

\sum_{a=1}^{w} \lambda_a \Big/ \sum_{a=1}^{k} \lambda_a \geq \psi \quad (6)
step two-five: obtaining each element of the principal component loading matrix L of the database images according to formula (7);

l_{cb} = \sqrt{\lambda_c}\, e_{cb} \quad (7)

where e_{cb} is the element in row c and column b of the matrix E, and λ_c is the c-th of the selected first w eigenvalues;
step two-six: obtaining the principal components Z of the feature information according to formula (8);
Z=LX (8)。
4. The indoor visual positioning method based on image global feature principal component linear regression as claimed in claim 3, wherein in step three, a linear regression algorithm is applied to the principal components of the database image feature information extracted in step two to map the position information corresponding to the video frames to the feature information extracted in step one and generate the positioning model; the specific process is:
mapping the feature principal component information Z to the video frame number P, and performing linear regression modeling with the least-squares algorithm according to mapping formula (9) to obtain the mapping coefficients b_0, b_1, …, b_q;

P = b_0 Z_0 + b_1 Z_1 + \cdots + b_q Z_q \quad (9)

where Z_1, …, Z_q are the first through q-th principal components and Z_0 is a constant with value 1.
5. The indoor visual positioning method based on image global feature principal component linear regression as claimed in claim 4, wherein in step five, the feature information extracted in step four is input into the positioning model generated in step three to obtain the user position and the database video frames to be precisely matched; specifically:
according to formulas (8) and (9), the matching frame number of the user picture is given, and according to formula (10), the user position estimate is given:

A = \frac{P V}{R} \quad (10)

where R is the frame rate of the video database, V is the linear speed of the recording device along the travel path, and A is the distance travelled by the user along the motion direction.
6. The indoor visual positioning method based on image global feature principal component linear regression as claimed in claim 5, wherein in step one-three, σ takes the value 1, θ_l takes the value 32, and n takes the value 4.
7. The indoor visual positioning method based on image global feature principal component linear regression as claimed in claim 6, wherein in step two-four, ψ takes the value 0.96.
CN201710861213.7A 2017-09-21 2017-09-21 Indoor visual positioning method based on image global feature principal component linear regression Expired - Fee Related CN107609565B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710861213.7A CN107609565B (en) 2017-09-21 2017-09-21 Indoor visual positioning method based on image global feature principal component linear regression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710861213.7A CN107609565B (en) 2017-09-21 2017-09-21 Indoor visual positioning method based on image global feature principal component linear regression

Publications (2)

Publication Number Publication Date
CN107609565A CN107609565A (en) 2018-01-19
CN107609565B 2020-08-11

Family

ID=61061323

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710861213.7A Expired - Fee Related CN107609565B (en) 2017-09-21 2017-09-21 Indoor visual positioning method based on image global feature principal component linear regression

Country Status (1)

Country Link
CN (1) CN107609565B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108981698B (en) * 2018-05-29 2020-07-14 杭州视氪科技有限公司 Visual positioning method based on multi-mode data
CN113467499A (en) * 2018-05-30 2021-10-01 深圳市大疆创新科技有限公司 Flight control method and aircraft
CN110827355B (en) * 2019-11-14 2023-05-09 南京工程学院 Moving target rapid positioning method and system based on video image coordinates

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6188776B1 (en) * 1996-05-21 2001-02-13 Interval Research Corporation Principle component analysis of images for the automatic location of control points
CN104299247A (en) * 2014-10-15 2015-01-21 云南大学 Video object tracking method based on self-adaptive measurement matrix
CN104616035A (en) * 2015-03-12 2015-05-13 哈尔滨工业大学 Visual Map rapid matching method based on global image feature and SURF algorithm
CN106709409A (en) * 2015-11-17 2017-05-24 金华永沐软件研究所有限公司 Novel robot indoor positioning technology


Also Published As

Publication number Publication date
CN107609565A (en) 2018-01-19

Similar Documents

Publication Publication Date Title
EP2948877B1 (en) Content based image retrieval
CN110322453B (en) 3D point cloud semantic segmentation method based on position attention and auxiliary network
CN103077512B (en) Based on the feature extracting and matching method of the digital picture that major component is analysed
CN111462120B (en) Defect detection method, device, medium and equipment based on semantic segmentation model
Dong et al. Multiscale sampling based texture image classification
CN107609565B (en) Indoor visual positioning method based on image global feature principal component linear regression
CN107123130B (en) Kernel correlation filtering target tracking method based on superpixel and hybrid hash
CN107633065B (en) Identification method based on hand-drawn sketch
CN111507357B (en) Defect detection semantic segmentation model modeling method, device, medium and equipment
CN107341505B (en) Scene classification method based on image significance and Object Bank
CN101561865A (en) Synthetic aperture radar image target identification method based on multi-parameter spectrum feature
Liang et al. Automatic defect detection of texture surface with an efficient texture removal network
CN107808391A (en) A kind of feature based selection and the smooth video dynamic object extracting method for representing cluster
CN109241932B (en) Thermal infrared human body action identification method based on motion variance map phase characteristics
CN108491883B (en) Saliency detection optimization method based on conditional random field
CN110083724A (en) A kind of method for retrieving similar images, apparatus and system
CN106373177A (en) Design method used for optimizing image scene illumination estimation
Kong et al. Multi-face detection based on downsampling and modified subtractive clustering for color images
CN113902779A (en) Point cloud registration method based on tensor voting method
CN113313694A (en) Surface defect rapid detection method based on light-weight convolutional neural network
CN111127407B (en) Fourier transform-based style migration forged image detection device and method
Li et al. Research on YOLOv3 pedestrian detection algorithm based on channel attention mechanism
CN108664919A (en) A kind of Activity recognition and detection method based on single sample
CN109829377A (en) A kind of pedestrian's recognition methods again based on depth cosine metric learning
CN114926488A (en) Workpiece positioning method based on generalized Hough model and improved pyramid search acceleration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200811

Termination date: 20210921