CN105844651A - Image analyzing apparatus - Google Patents


Info

Publication number
CN105844651A
CN105844651A (application CN201610230473.XA)
Authority
CN
China
Prior art keywords
image
points
module
point
sub
Prior art date
Legal status
Pending
Application number
CN201610230473.XA
Other languages
Chinese (zh)
Inventor
吴本刚
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201610230473.XA
Publication of CN105844651A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2134 Feature extraction, e.g. by transforming the feature space, based on separation criteria, e.g. independent component analysis
    • G06F18/21342 Feature extraction based on separation criteria using statistical independence, i.e. minimising mutual information or maximising non-gaussianity

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image analysis apparatus comprising an image preprocessing module, an image extreme point detection module, an image feature point positioning module, a principal direction determining module, and a feature extraction module. The image feature point positioning module selects feature points from the extreme points by eliminating noise-sensitive low-contrast points and unstable edge points. The principal direction determining module connects every pair of adjacent peaks in a feature point's histogram of oriented gradients to form sub-line segments; adjacent sub-line segments with similar slopes are merged along the length direction into line segments, and the direction of the optimal line segment is taken as the principal direction of the feature point. The invention achieves high recognition accuracy at high speed.

Description

Image analysis device
Technical Field
The invention relates to the field of image analysis, in particular to an image analysis device.
Background
In the related art, the object detected by an image analysis device is usually a human face image. However, when the broad range of content stored in memory is targeted, it is desirable to handle a wide variety of object images such as vehicles, animals, buildings, graphics, and miscellaneous articles. Moreover, processing large-scale data requires improving the efficiency and accuracy of the analysis.
Disclosure of Invention
In view of the above problems, the present invention provides an image analysis device with high recognition and detection speed and high accuracy.
The purpose of the invention is realized by the following technical scheme:
An image analysis device for recognizing and detecting content in an image is provided, comprising:
(1) the image preprocessing module, which comprises an image conversion sub-module that converts the color image into a gray image and an image filtering sub-module that filters the gray image; the gray conversion formula of the image conversion sub-module is:

I(x, y) = [max(R(x, y), G(x, y), B(x, y)) + min(R(x, y), G(x, y), B(x, y))] / 2 + 2[max(R(x, y), G(x, y), B(x, y)) - min(R(x, y), G(x, y), B(x, y))]

where R(x, y), G(x, y), and B(x, y) are the red, green, and blue intensity values at pixel (x, y), and I(x, y) is the gray value at pixel (x, y);
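As an illustrative sketch (not part of the patent), the gray-conversion formula above can be written in NumPy as follows. The function name `to_gray` is our own, and note that, by construction, the result can exceed 255, so a subsequent normalization step would be needed before treating the output as an 8-bit gray image (the patent indeed normalizes later, in the second positioning sub-module).

```python
import numpy as np

def to_gray(rgb):
    """Gray conversion per the patent's formula (a sketch):
    I = (max(R,G,B) + min(R,G,B)) / 2 + 2 * (max(R,G,B) - min(R,G,B)).
    `rgb` is an H x W x 3 float array."""
    mx = rgb.max(axis=2)   # per-pixel max over the three channels
    mn = rgb.min(axis=2)   # per-pixel min over the three channels
    return (mx + mn) / 2.0 + 2.0 * (mx - mn)

# A single pure-red pixel: (200 + 0)/2 + 2*(200 - 0) = 500
pixel = np.array([[[200.0, 0.0, 0.0]]])
print(to_gray(pixel)[0, 0])  # 500.0
```

For a neutral pixel (R = G = B) the second term vanishes and the formula reduces to the ordinary gray value, which is why the extra term acts as a saturation-dependent boost.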
(2) the image extreme point detection module, which detects the position of each extreme point in a Gaussian-difference scale space built by convolving a difference-of-Gaussian operator with the image: a sampling point is a maximum point when its value is greater than those of its 8 neighbors at the same scale and the 18 points at the two adjacent scales above and below, and a minimum point when its value is smaller than all of those 26 neighbors; the simplified formula of the Gaussian-difference scale space is:

D(x, y, σ) = (G(x, kσ) - G(x, σ)) * I'(x, y) + (G(y, kσ) - G(y, σ)) * I'(x, y)

where

G(x, σ) = (1 / (√(2π) σ)) e^(-x² / (2σ²)),  G(y, σ) = (1 / (√(2π) σ)) e^(-y² / (2σ²))

and D(x, y, σ) is the Gaussian-difference scale-space function, I'(x, y) is the image function output by the image conversion sub-module, * denotes convolution, σ is the scale-space factor, G(x, σ) and G(y, σ) are the scale-variable Gaussian functions defined above, and k is a constant multiplicative factor;
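The construction of the Gaussian-difference scale space and the 26-neighbor extremum test can be sketched as below. This is an illustrative NumPy implementation, not the patent's code: `dog_stack`, `blur`, and `is_extremum` are hypothetical names, and truncating the Gaussian kernel at radius 3σ + 1 is our assumption.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    # 1-D Gaussian G(x, σ) ∝ exp(-x²/2σ²), normalised to sum to 1
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    # separable Gaussian blur: convolve rows, then columns
    # (the kernel must be shorter than the image side for mode="same")
    k = gaussian_kernel(sigma, radius=int(3 * sigma) + 1)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def dog_stack(img, sigma=1.6, k=2**0.5, levels=3):
    # D_i = G(k^(i+1) σ) * I - G(k^i σ) * I, stacked along a scale axis
    blurred = [blur(img, sigma * k**i) for i in range(levels + 1)]
    return np.stack([blurred[i + 1] - blurred[i] for i in range(levels)])

def is_extremum(dog, s, y, x):
    # maximum (minimum): strictly greater (smaller) than the 8 same-scale
    # neighbours plus the 18 points at the two adjacent scales
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    c = dog[s, y, x]
    unique = (cube == c).sum() == 1
    return bool(unique and (c == cube.max() or c == cube.min()))

# A lone spike in an otherwise flat 3x3x3 neighbourhood is an extremum
dog = np.zeros((3, 3, 3))
dog[1, 1, 1] = 5.0
print(is_extremum(dog, 1, 1, 1))  # True
```

Only interior points (with a full scale above and below) can be tested, which is why real detectors build at least three DoG levels per octave.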
(3) the image feature point positioning module, which selects the extreme points that serve as feature points by eliminating noise-sensitive low-contrast points and unstable edge points; it comprises a first, a second, and a third positioning sub-module connected in sequence, where the first positioning sub-module accurately locates the extreme points, the second removes the low-contrast points, and the third removes the unstable edge points:
a. the first positioning sub-module applies a second-order Taylor expansion to the Gaussian-difference scale-space function and obtains the accurate position of each extreme point by differentiation; the scale-space function at the extreme point is:

D(X̂) = D(x, y, σ) + (∂D(x, y, σ)^T / ∂X) X̂

where D(X̂) is the scale-space function value at the extreme point and X̂ is the offset from the sampling point that gives the exact location of the extreme point;
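The effect of the Taylor-expansion refinement is easiest to see in one dimension: fitting a parabola through three samples recovers the sub-sample position and value of the extremum. The sketch below is our own one-dimensional illustration, not the patent's three-variable version over (x, y, σ); the function name and finite-difference scheme are assumptions.

```python
def refine_1d(d_minus, d0, d_plus):
    """1-D illustration of the quadratic (Taylor) refinement applied to D:
    fit a parabola through three samples, return (offset, refined value).
    offset = -D'/D''   and   D(x̂) = D + (1/2) D' * offset,
    with D', D'' estimated by central differences."""
    g = 0.5 * (d_plus - d_minus)       # first derivative estimate
    h = d_plus - 2.0 * d0 + d_minus    # second derivative estimate
    offset = -g / h
    return offset, d0 + 0.5 * g * offset

# Samples of f(x) = -(x - 0.3)^2 at x = -1, 0, 1: the true peak is at 0.3
off, val = refine_1d(-1.69, -0.09, -0.49)
print(off, val)  # ~0.3, ~0.0
```

In the full method the same idea is applied jointly over (x, y, σ) using the 3 × 3 Hessian and gradient of D, and points whose offset exceeds half a sample spacing are re-localized at the neighboring sample.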
b. the second positioning sub-module performs gray-scale enhancement and then normalization on the image output by the image conversion sub-module before eliminating the low-contrast points. [The enhanced gray-value formula is not reproduced in the source.]
The determination formula for a low-contrast point is:

D(X̂) < T1,  T1 ∈ [0.01, 0.06]

where I''(x, y) is the gray-value-enhanced image function, the correction coefficient carries local information, M = 255 is the maximum pixel gray value, m_H is the mean of all pixels whose gray value is above 128, m_L is the mean of all pixels whose gray value is below 128, the filtered image is the output of the image filtering sub-module, and T1 is a set threshold;
c. the third positioning sub-module obtains the principal curvatures of each extreme point from the 2 × 2 Hessian matrix H computed at the extreme point's position and scale, and removes the unstable edge points by eliminating every point whose principal-curvature ratio exceeds a set threshold T2, where T2 ranges over [10, 15] and the principal-curvature ratio is obtained from the ratio between the eigenvalues of H;
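The edge-point test can be sketched as follows. The patent states the test only via the ratio of the eigenvalues of the 2 × 2 Hessian; the sketch below uses the equivalent trace/determinant form tr(H)²/det(H) ≤ (T2 + 1)²/T2, which avoids computing eigenvalues explicitly. That reformulation is an implementation choice of ours, not stated in the patent.

```python
def passes_edge_test(dxx, dyy, dxy, t2=10.0):
    """Reject unstable edge points: the ratio r = λ1/λ2 of the 2x2 Hessian's
    eigenvalues must not exceed T2 ∈ [10, 15]. Since
    tr(H)²/det(H) = (r + 1)²/r is monotone in r for r >= 1, we test
    tr(H)²/det(H) <= (T2 + 1)²/T2 instead of the eigenvalues themselves."""
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:          # curvatures of opposite sign: not a usable extremum
        return False
    return tr * tr / det <= (t2 + 1.0) ** 2 / t2

# isotropic blob (λ1 = λ2): ratio 1, kept; elongated edge (λ1 >> λ2): rejected
print(passes_edge_test(2.0, 2.0, 0.0))   # True
print(passes_edge_test(50.0, 1.0, 0.0))  # False
```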
preferably, the image analysis apparatus further includes:
(1) the principal direction determining module, which comprises a connecting sub-module, a merging sub-module, and a processing sub-module connected in sequence. The connecting sub-module connects every pair of adjacent peaks in the gradient-direction histogram of a feature point to form a number of sub-line segments; the merging sub-module merges sub-line segments that have similar slopes and are adjacent along the length direction into line segments; and the processing sub-module takes the direction of the optimal line segment as the principal direction of the feature point, the optimal line segment being determined by:

L_Y = L_(ḡ_max),  ḡ_max = max(ḡ_(L_n)),  ḡ_(L_n) = (1/k) Σ_{i=1..k} g_i,  L_n ∈ L_υ

where L_Y is the optimal line segment, L_(ḡ_max) is the line segment whose average gradient equals ḡ_max, ḡ_(L_n) is the average gradient of the n-th line segment, g_i is the gradient of the i-th of the k sub-line segments making up the n-th line segment, and L_υ is the set of line segments whose length exceeds the average segment length;
(2) the feature extraction module, which rotates the neighborhood of the feature point to the principal direction and describes the feature point from the rotated neighborhood to generate the feature point's descriptor.
Further, "sub-line segments with similar slopes" are sub-line segments whose slope difference is smaller than a preset threshold T3, where T3 ∈ (0, 0.1].
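The merging of sub-line segments and the selection of the optimal line segment can be sketched with a simplified data model, where each sub-segment is reduced to a (slope, gradient) pair and a segment's length is taken as its number of sub-segments. Both simplifications, and all names and values below, are ours.

```python
def merge_subsegments(subs, t3=0.1):
    # merge adjacent sub-segments whose slope difference is below T3 ∈ (0, 0.1]
    segments, cur = [], [subs[0]]
    for s in subs[1:]:
        if abs(s[0] - cur[-1][0]) < t3:
            cur.append(s)          # similar slope: extend the current segment
        else:
            segments.append(cur)   # slope jump: start a new segment
            cur = [s]
    segments.append(cur)
    return segments

def principal_segment(segments):
    # among segments longer than the average length, pick the one with the
    # largest mean sub-segment gradient (the "optimal line segment")
    avg_len = sum(len(s) for s in segments) / len(segments)
    pool = [s for s in segments if len(s) > avg_len] or segments
    return max(pool, key=lambda s: sum(g for _, g in s) / len(s))

# (slope, gradient) pairs along the length direction -- values are made up
subs = [(0.10, 1.0), (0.15, 2.0), (0.90, 5.0), (0.92, 4.0), (0.93, 6.0)]
segs = merge_subsegments(subs)
print(len(segs))  # 2
```

The first two sub-segments merge (slope gap 0.05 < T3), the remaining three merge among themselves, and the longer, higher-gradient segment is selected as the principal one.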
The beneficial effects of the invention are:
1. the image preprocessing module takes into account human visual habits and the nonlinear relation between color intensity and the eye's sensitivity to different colors, so the image is described more accurately;
2. a simplified formula for the Gaussian-difference scale space reduces the amount of computation, raising the computation speed and thus the speed of image analysis;
3. the image feature point positioning module removes low-contrast points and unstable edge points from the extreme points, guaranteeing the validity of the feature points; enhancing the gray values of the image greatly improves its stability, so low-contrast points are removed more accurately and the accuracy of image analysis is further improved;
4. the principal direction determining module, with its criterion for the optimal line segment, takes as the principal direction of a feature point the direction of the optimal line segment among those formed by connecting adjacent peaks in the feature point's gradient-direction histogram; because a line segment is more stable than a single point, the feature point descriptors of an image are repeatable, the accuracy of the feature descriptors is improved, and images can be recognized and detected faster, more accurately, and more robustly.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
Fig. 1 is a schematic diagram of the connection of modules of the present invention.
Detailed Description
The invention is further described with reference to the following examples.
Example 1
Referring to fig. 1, the image analysis apparatus of the present embodiment includes:
(1) the image preprocessing module, which comprises an image conversion sub-module that converts the color image into a gray image and an image filtering sub-module that filters the gray image; the gray conversion formula of the image conversion sub-module is:

I(x, y) = [max(R(x, y), G(x, y), B(x, y)) + min(R(x, y), G(x, y), B(x, y))] / 2 + 2[max(R(x, y), G(x, y), B(x, y)) - min(R(x, y), G(x, y), B(x, y))]

where R(x, y), G(x, y), and B(x, y) are the red, green, and blue intensity values at pixel (x, y), and I(x, y) is the gray value at pixel (x, y);
(2) the image extreme point detection module, which detects the position of each extreme point in a Gaussian-difference scale space built by convolving a difference-of-Gaussian operator with the image: a sampling point is a maximum point when its value is greater than those of its 8 neighbors at the same scale and the 18 points at the two adjacent scales above and below, and a minimum point when its value is smaller than all of those 26 neighbors; the simplified formula of the Gaussian-difference scale space is:

D(x, y, σ) = (G(x, kσ) - G(x, σ)) * I'(x, y) + (G(y, kσ) - G(y, σ)) * I'(x, y)

where

G(x, σ) = (1 / (√(2π) σ)) e^(-x² / (2σ²)),  G(y, σ) = (1 / (√(2π) σ)) e^(-y² / (2σ²))

and D(x, y, σ) is the Gaussian-difference scale-space function, I'(x, y) is the image function output by the image conversion sub-module, * denotes convolution, σ is the scale-space factor, G(x, σ) and G(y, σ) are the scale-variable Gaussian functions defined above, and k is a constant multiplicative factor;
(3) the image feature point positioning module, which selects the extreme points that serve as feature points by eliminating noise-sensitive low-contrast points and unstable edge points; it comprises a first, a second, and a third positioning sub-module connected in sequence, where the first positioning sub-module accurately locates the extreme points, the second removes the low-contrast points, and the third removes the unstable edge points:
a. the first positioning sub-module applies a second-order Taylor expansion to the Gaussian-difference scale-space function and obtains the accurate position of each extreme point by differentiation; the scale-space function at the extreme point is:

D(X̂) = D(x, y, σ) + (∂D(x, y, σ)^T / ∂X) X̂

where D(X̂) is the scale-space function value at the extreme point and X̂ is the offset from the sampling point that gives the exact location of the extreme point;
b. the second positioning sub-module performs gray-scale enhancement and then normalization on the image output by the image conversion sub-module before eliminating the low-contrast points. [The enhanced gray-value formula is not reproduced in the source.]
The determination formula for a low-contrast point is:

D(X̂) < T1,  T1 ∈ [0.01, 0.06]

where I''(x, y) is the gray-value-enhanced image function, the correction coefficient carries local information, M = 255 is the maximum pixel gray value, m_H is the mean of all pixels whose gray value is above 128, m_L is the mean of all pixels whose gray value is below 128, the filtered image is the output of the image filtering sub-module, and T1 is a set threshold;
c. the third positioning sub-module obtains the principal curvatures of each extreme point from the 2 × 2 Hessian matrix H computed at the extreme point's position and scale, and removes the unstable edge points by eliminating every point whose principal-curvature ratio exceeds a set threshold T2, where T2 ranges over [10, 15] and the principal-curvature ratio is obtained from the ratio between the eigenvalues of H;
preferably, the image analysis apparatus further includes:
(1) the principal direction determining module, which comprises a connecting sub-module, a merging sub-module, and a processing sub-module connected in sequence. The connecting sub-module connects every pair of adjacent peaks in the gradient-direction histogram of a feature point to form a number of sub-line segments; the merging sub-module merges sub-line segments that have similar slopes and are adjacent along the length direction into line segments; and the processing sub-module takes the direction of the optimal line segment as the principal direction of the feature point, the optimal line segment being determined by:

L_Y = L_(ḡ_max),  ḡ_max = max(ḡ_(L_n)),  ḡ_(L_n) = (1/k) Σ_{i=1..k} g_i,  L_n ∈ L_υ

where L_Y is the optimal line segment, L_(ḡ_max) is the line segment whose average gradient equals ḡ_max, ḡ_(L_n) is the average gradient of the n-th line segment, g_i is the gradient of the i-th of the k sub-line segments making up the n-th line segment, and L_υ is the set of line segments whose length exceeds the average segment length;
(2) the feature extraction module, which rotates the neighborhood of the feature point to the principal direction and describes the feature point from the rotated neighborhood to generate the feature point's descriptor.
Further, "sub-line segments with similar slopes" are sub-line segments whose slope difference is smaller than a preset threshold T3, where T3 ∈ (0, 0.1].
The image preprocessing module of this embodiment takes into account human visual habits and the nonlinear relation between color intensity and the eye's sensitivity to different colors, so the image is described more accurately. A simplified formula for the Gaussian-difference scale space reduces the amount of computation, raising the computation speed and thus the speed of image analysis. The image feature point positioning module removes low-contrast points and unstable edge points from the extreme points, guaranteeing the validity of the feature points; enhancing the gray values of the image greatly improves its stability, so low-contrast points are removed more accurately and the accuracy of image analysis is further improved. The principal direction determining module, with its criterion for the optimal line segment, takes as the principal direction of a feature point the direction of the optimal line segment among those formed by connecting adjacent peaks in the feature point's gradient-direction histogram; because a line segment is more stable than a single point, the feature point descriptors of an image are repeatable, the accuracy of the feature descriptors is improved, and images can be recognized and detected faster, more accurately, and more robustly. This embodiment takes thresholds T1 = 0.01, T2 = 10, and T3 = 0.1; the accuracy of image analysis increased by 2% and its speed by 1%.
Example 2
Referring to fig. 1, the image analysis apparatus of the present embodiment includes:
(1) the image preprocessing module, which comprises an image conversion sub-module that converts the color image into a gray image and an image filtering sub-module that filters the gray image; the gray conversion formula of the image conversion sub-module is:

I(x, y) = [max(R(x, y), G(x, y), B(x, y)) + min(R(x, y), G(x, y), B(x, y))] / 2 + 2[max(R(x, y), G(x, y), B(x, y)) - min(R(x, y), G(x, y), B(x, y))]

where R(x, y), G(x, y), and B(x, y) are the red, green, and blue intensity values at pixel (x, y), and I(x, y) is the gray value at pixel (x, y);
(2) the image extreme point detection module, which detects the position of each extreme point in a Gaussian-difference scale space built by convolving a difference-of-Gaussian operator with the image: a sampling point is a maximum point when its value is greater than those of its 8 neighbors at the same scale and the 18 points at the two adjacent scales above and below, and a minimum point when its value is smaller than all of those 26 neighbors; the simplified formula of the Gaussian-difference scale space is:

D(x, y, σ) = (G(x, kσ) - G(x, σ)) * I'(x, y) + (G(y, kσ) - G(y, σ)) * I'(x, y)

where

G(x, σ) = (1 / (√(2π) σ)) e^(-x² / (2σ²)),  G(y, σ) = (1 / (√(2π) σ)) e^(-y² / (2σ²))

and D(x, y, σ) is the Gaussian-difference scale-space function, I'(x, y) is the image function output by the image conversion sub-module, * denotes convolution, σ is the scale-space factor, G(x, σ) and G(y, σ) are the scale-variable Gaussian functions defined above, and k is a constant multiplicative factor;
(3) the image feature point positioning module, which selects the extreme points that serve as feature points by eliminating noise-sensitive low-contrast points and unstable edge points; it comprises a first, a second, and a third positioning sub-module connected in sequence, where the first positioning sub-module accurately locates the extreme points, the second removes the low-contrast points, and the third removes the unstable edge points:
a. the first positioning sub-module applies a second-order Taylor expansion to the Gaussian-difference scale-space function and obtains the accurate position of each extreme point by differentiation; the scale-space function at the extreme point is:

D(X̂) = D(x, y, σ) + (∂D(x, y, σ)^T / ∂X) X̂

where D(X̂) is the scale-space function value at the extreme point and X̂ is the offset from the sampling point that gives the exact location of the extreme point;
b. the second positioning sub-module performs gray-scale enhancement and then normalization on the image output by the image conversion sub-module before eliminating the low-contrast points. [The enhanced gray-value formula is not reproduced in the source.]
The determination formula for a low-contrast point is:

D(X̂) < T1,  T1 ∈ [0.01, 0.06]

where I''(x, y) is the gray-value-enhanced image function, the correction coefficient carries local information, M = 255 is the maximum pixel gray value, m_H is the mean of all pixels whose gray value is above 128, m_L is the mean of all pixels whose gray value is below 128, the filtered image is the output of the image filtering sub-module, and T1 is a set threshold;
c. the third positioning sub-module obtains the principal curvatures of each extreme point from the 2 × 2 Hessian matrix H computed at the extreme point's position and scale, and removes the unstable edge points by eliminating every point whose principal-curvature ratio exceeds a set threshold T2, where T2 ranges over [10, 15] and the principal-curvature ratio is obtained from the ratio between the eigenvalues of H;
preferably, the image analysis apparatus further includes:
(1) the principal direction determining module, which comprises a connecting sub-module, a merging sub-module, and a processing sub-module connected in sequence. The connecting sub-module connects every pair of adjacent peaks in the gradient-direction histogram of a feature point to form a number of sub-line segments; the merging sub-module merges sub-line segments that have similar slopes and are adjacent along the length direction into line segments; and the processing sub-module takes the direction of the optimal line segment as the principal direction of the feature point, the optimal line segment being determined by:

L_Y = L_(ḡ_max),  ḡ_max = max(ḡ_(L_n)),  ḡ_(L_n) = (1/k) Σ_{i=1..k} g_i,  L_n ∈ L_υ

where L_Y is the optimal line segment, L_(ḡ_max) is the line segment whose average gradient equals ḡ_max, ḡ_(L_n) is the average gradient of the n-th line segment, g_i is the gradient of the i-th of the k sub-line segments making up the n-th line segment, and L_υ is the set of line segments whose length exceeds the average segment length;
(2) the feature extraction module, which rotates the neighborhood of the feature point to the principal direction and describes the feature point from the rotated neighborhood to generate the feature point's descriptor.
Further, "sub-line segments with similar slopes" are sub-line segments whose slope difference is smaller than a preset threshold T3, where T3 ∈ (0, 0.1].
The image preprocessing module of this embodiment takes into account human visual habits and the nonlinear relation between color intensity and the eye's sensitivity to different colors, so the image is described more accurately. A simplified formula for the Gaussian-difference scale space reduces the amount of computation, raising the computation speed and thus the speed of image analysis. The image feature point positioning module removes low-contrast points and unstable edge points from the extreme points, guaranteeing the validity of the feature points; enhancing the gray values of the image greatly improves its stability, so low-contrast points are removed more accurately and the accuracy of image analysis is further improved. The principal direction determining module, with its criterion for the optimal line segment, takes as the principal direction of a feature point the direction of the optimal line segment among those formed by connecting adjacent peaks in the feature point's gradient-direction histogram; because a line segment is more stable than a single point, the feature point descriptors of an image are repeatable, the accuracy of the feature descriptors is improved, and images can be recognized and detected faster, more accurately, and more robustly. This embodiment takes thresholds T1 = 0.02, T2 = 11, and T3 = 0.08; the accuracy of image analysis increased by 1% and its speed by 1.5%.
Example 3
Referring to fig. 1, the image analysis apparatus of the present embodiment includes:
(1) the image preprocessing module, which comprises an image conversion sub-module that converts the color image into a gray image and an image filtering sub-module that filters the gray image; the gray conversion formula of the image conversion sub-module is:

I(x, y) = [max(R(x, y), G(x, y), B(x, y)) + min(R(x, y), G(x, y), B(x, y))] / 2 + 2[max(R(x, y), G(x, y), B(x, y)) - min(R(x, y), G(x, y), B(x, y))]

where R(x, y), G(x, y), and B(x, y) are the red, green, and blue intensity values at pixel (x, y), and I(x, y) is the gray value at pixel (x, y);
(2) the image extreme point detection module, which detects the position of each extreme point in a Gaussian-difference scale space built by convolving a difference-of-Gaussian operator with the image: a sampling point is a maximum point when its value is greater than those of its 8 neighbors at the same scale and the 18 points at the two adjacent scales above and below, and a minimum point when its value is smaller than all of those 26 neighbors; the simplified formula of the Gaussian-difference scale space is:

D(x, y, σ) = (G(x, kσ) - G(x, σ)) * I'(x, y) + (G(y, kσ) - G(y, σ)) * I'(x, y)

where

G(x, σ) = (1 / (√(2π) σ)) e^(-x² / (2σ²)),  G(y, σ) = (1 / (√(2π) σ)) e^(-y² / (2σ²))

and D(x, y, σ) is the Gaussian-difference scale-space function, I'(x, y) is the image function output by the image conversion sub-module, * denotes convolution, σ is the scale-space factor, G(x, σ) and G(y, σ) are the scale-variable Gaussian functions defined above, and k is a constant multiplicative factor;
(3) the image feature point positioning module determines extreme points serving as feature points by eliminating low-contrast points sensitive to noise and unstable edge points in the extreme points, and comprises a first positioning submodule, a second positioning submodule and a third positioning submodule, wherein the first positioning submodule, the second positioning submodule and the third positioning submodule are sequentially connected and are used for accurately positioning the extreme points, the second positioning submodule is used for removing the low-contrast points, and the third positioning submodule is used for removing the unstable edge points, wherein:
a. the first positioning submodule applies a second-order Taylor expansion to the Gaussian difference scale space function and obtains the accurate position of an extreme point by differentiation; the scale space function at the extreme point is:
D(X̂) = D(x, y, σ) + (∂D(x, y, σ)^T/∂x) X̂
where D(X̂) denotes the scale space function value at the extreme point, ∂D(x, y, σ)^T/∂x is the first derivative of D at the sample point, and X̂ is the offset giving the exact position of the extreme point;
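A hedged sketch of this sub-pixel localization step over a DoG stack D indexed as (scale, row, column): the 3 × 3 finite-difference Hessian solve and the ½ factor in the fitted value follow common SIFT practice rather than being stated in the text above, and all names are ours.

```python
import numpy as np

def localize(D, s, i, j):
    # Quadratic (Taylor) fit of the DoG stack D around sample (s, i, j)
    # using central finite differences; returns the offset \hat{X} = (x, y, s)
    # and the fitted value D(\hat{X}).
    c = D[s, i, j]
    g = 0.5 * np.array([D[s, i, j+1] - D[s, i, j-1],    # dD/dx
                        D[s, i+1, j] - D[s, i-1, j],    # dD/dy
                        D[s+1, i, j] - D[s-1, i, j]])   # dD/dsigma
    dxx = D[s, i, j+1] - 2*c + D[s, i, j-1]
    dyy = D[s, i+1, j] - 2*c + D[s, i-1, j]
    dss = D[s+1, i, j] - 2*c + D[s-1, i, j]
    dxy = 0.25*(D[s, i+1, j+1] - D[s, i+1, j-1] - D[s, i-1, j+1] + D[s, i-1, j-1])
    dxs = 0.25*(D[s+1, i, j+1] - D[s+1, i, j-1] - D[s-1, i, j+1] + D[s-1, i, j-1])
    dys = 0.25*(D[s+1, i+1, j] - D[s+1, i-1, j] - D[s-1, i+1, j] + D[s-1, i-1, j])
    H = np.array([[dxx, dxy, dxs],
                  [dxy, dyy, dys],
                  [dxs, dys, dss]])
    offset = -np.linalg.solve(H, g)   # \hat{X}, offset from the sample point
    value = c + 0.5 * g @ offset      # fitted D(\hat{X})
    return offset, value
```

On an exactly quadratic stack the finite differences are exact, so the recovered offset matches the true sub-pixel peak.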
b. the second positioning submodule sequentially applies gray enhancement and normalization to the image output by the image conversion submodule and then eliminates the low-contrast points; in the enhancement, I''(x, y) denotes the gray-enhanced image function, the correction coefficient carries local information, M is the maximum pixel gray value, namely 255, mH is the mean of all pixels in the image with gray value above 128, mL is the mean of all pixels with gray value below 128, and the input is the image processed by the image filtering submodule; the determination formula of a low-contrast point is:
D(X̂) < T1, T1 ∈ [0.01, 0.06]
where T1 is the set threshold;
c. the third positioning submodule computes the 2 × 2 Hessian matrix H at the position and scale of the extreme point to obtain the principal curvatures, and eliminates extreme points whose principal curvature ratio exceeds a set threshold T2, thereby removing said unstable edge points, wherein the threshold T2 takes values in [10, 15] and the principal curvature ratio is determined from the ratio between the eigenvalues of H;
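The two rejection tests of submodules b and c can be sketched as follows (assumptions: the default thresholds T1 = 0.03 and T2 = 12 are picked from the stated ranges, the absolute value in the contrast test follows common SIFT practice, and the eigenvalue-ratio test is implemented through the standard trace/determinant form, which avoids computing eigenvalues explicitly):

```python
import numpy as np

def passes_contrast_test(d_hat, t1=0.03):
    # Keep the point only when |D(\hat X)| reaches T1 (T1 in [0.01, 0.06])
    return abs(d_hat) >= t1

def passes_edge_test(D, i, j, t2=12.0):
    # 2x2 Hessian of one DoG layer D at (i, j), central finite differences
    dxx = D[i, j+1] - 2*D[i, j] + D[i, j-1]
    dyy = D[i+1, j] - 2*D[i, j] + D[i-1, j]
    dxy = 0.25*(D[i+1, j+1] - D[i+1, j-1] - D[i-1, j+1] + D[i-1, j-1])
    tr, det = dxx + dyy, dxx*dyy - dxy*dxy
    if det <= 0:            # principal curvatures of opposite sign: reject
        return False
    # tr^2/det < (t2+1)^2/t2  is equivalent to curvature ratio < t2
    return tr*tr / det < (t2 + 1.0)**2 / t2
```

A rotationally symmetric blob passes the edge test, while a straight ridge has a zero (or negative) determinant and is rejected.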
preferably, the image analysis apparatus further includes:
(1) the main direction determining module comprises a connecting sub-module, a merging sub-module and a processing sub-module which are sequentially connected, wherein the connecting sub-module is used for connecting any two adjacent peak values in a gradient direction histogram of the feature points to form a plurality of sub-line segments, the merging sub-module is used for merging the sub-line segments which have similar slopes and are adjacent in the length direction to form a line segment, the processing sub-module is used for taking the direction of the optimal line segment in the line segments as the main direction of the feature points, and the judging formula of the optimal line segment is as follows:
L_Y = L_{ḡmax},  ḡmax = max_n(ḡ_{Ln}),  ḡ_{Ln} = (1/K) Σ_{k=1}^{K} g_k,  Ln ∈ Lυ
where L_Y denotes the optimal line segment, L_{ḡmax} is the line segment whose average gradient value is ḡmax, ḡ_{Ln} is the average gradient value of the n-th line segment, g_k is the gradient value of the k-th of the K sub-line segments making up the n-th line segment, and Lυ is the set of line segments whose length exceeds the average line segment length;
(2) the feature extraction module rotates the neighborhood of the feature point according to the main direction and describes the feature point from the rotated neighborhood, thereby generating the descriptor of the feature point.
Further, the sub-line segments with similar slopes are those whose slope difference is smaller than a preset threshold T3, the threshold T3 taking values in (0, 0.1].
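A small sketch of the merging and optimal-segment rules above (assumptions: each sub-line segment is reduced to a (slope, gradient) pair, segment length is counted in sub-segments, and T3 = 0.05 is picked from the stated range; all names are ours):

```python
import numpy as np

def merge_subsegments(subs, t3=0.05):
    # subs: consecutive sub-line segments as (slope, gradient) pairs;
    # adjacent sub-segments whose slope difference is below T3 merge into one segment.
    segments, cur = [], [subs[0]]
    for prev, nxt in zip(subs, subs[1:]):
        if abs(nxt[0] - prev[0]) < t3:
            cur.append(nxt)
        else:
            segments.append(cur)
            cur = [nxt]
    segments.append(cur)
    return segments

def best_segment(segments):
    # Among segments longer than the average length, pick the one with the
    # largest mean gradient  \bar g_{Ln} = (1/K) sum g_k.
    avg_len = np.mean([len(s) for s in segments])
    cand = [s for s in segments if len(s) > avg_len] or segments
    return max(cand, key=lambda s: np.mean([g for _, g in s]))
```

The direction of the returned segment would then be taken as the feature point's main direction.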
The image preprocessing module of this embodiment takes into account visual habit and the nonlinear relation between the human eye's sensitivity to different colors and the color intensity, so the image is described more accurately; the simplified calculation formula for the Gaussian difference scale space reduces the amount of computation and increases computing speed, which in turn speeds up image analysis; the image feature point positioning module removes low-contrast points and unstable edge points from the extreme points, guaranteeing the validity of the feature points, and the gray enhancement of the image greatly improves stability and makes the removal of low-contrast points more accurate, further improving the accuracy of image analysis; the main direction determining module, with its decision formula for the optimal line segment, takes the direction of the optimal line segment among the segments formed by connecting adjacent peaks in the gradient direction histogram of the feature point as the feature point's main direction; because a line segment is more stable than a single point, the descriptors of corresponding feature points are repeatable, the accuracy of the feature descriptors is improved, and images can be recognized and detected faster and more accurately with strong robustness. This embodiment takes the thresholds T1 = 0.03, T2 = 12 and T3 = 0.06; the accuracy of image analysis increased by 2.5% and the speed increased by 3%.
Example 4
Referring to fig. 1, the image analysis apparatus of the present embodiment includes:
(1) the image preprocessing module comprises an image conversion submodule for converting the color image into a gray image and an image filtering submodule for filtering the gray image, wherein the image gray conversion formula of the image conversion submodule is as follows:
I(x, y) = [max(R(x, y), G(x, y), B(x, y)) + min(R(x, y), G(x, y), B(x, y))] / 2 + 2[max(R(x, y), G(x, y), B(x, y)) - min(R(x, y), G(x, y), B(x, y))]
wherein, R (x, y), G (x, y) and B (x, y) respectively represent the red, green and blue intensity values at the pixel point (x, y), and I (x, y) represents the gray value at the pixel point (x, y);
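The gray conversion formula above can be sketched in NumPy as follows (the function name is ours; note the result can exceed 255, so a normalization step is assumed to follow):

```python
import numpy as np

def to_gray(rgb):
    # I = (max + min)/2 + 2*(max - min), applied per pixel over the last
    # (channel) axis of an H x W x 3 array
    rgb = np.asarray(rgb, dtype=float)
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    return (mx + mn) / 2.0 + 2.0 * (mx - mn)
```

For a pure gray pixel (R = G = B) the chroma term vanishes and the formula returns the common value unchanged.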
(2) the image extreme point detection module detects the position of each extreme point in a Gaussian difference scale space built by convolving a Gaussian difference operator with the image: a sampling point is a maximum point when its value is larger than those of its 8 neighbors at the same scale and the 9 + 9 = 18 corresponding points at the adjacent scales above and below (26 neighbors in total), and a minimum point when its value is smaller than all 26 of them; the simplified calculation formula of the Gaussian difference scale space is as follows:
D(x,y,σ)=(G(x,kσ)-G(x,σ))*I'(x,y)+(G(y,kσ)-G(y,σ))*I'(x,y)
where
G(x, σ) = (1/(√(2π) σ)) e^(-x^2/(2σ^2)),   G(y, σ) = (1/(√(2π) σ)) e^(-y^2/(2σ^2))
where D(x, y, σ) denotes the Gaussian difference scale space function, I'(x, y) is the image function output by the image conversion submodule, * denotes convolution, σ is the scale space factor, G(x, σ) and G(y, σ) are the scale-variable Gaussian functions defined above, and k is a constant multiplicative factor;
(3) the image feature point positioning module determines the extreme points that serve as feature points by eliminating the noise-sensitive low-contrast points and the unstable edge points among the extreme points; it comprises a first positioning submodule, a second positioning submodule and a third positioning submodule connected in sequence, wherein the first positioning submodule accurately positions the extreme points, the second positioning submodule removes the low-contrast points, and the third positioning submodule removes the unstable edge points:
a. the first positioning submodule applies a second-order Taylor expansion to the Gaussian difference scale space function and obtains the accurate position of an extreme point by differentiation; the scale space function at the extreme point is:
D(X̂) = D(x, y, σ) + (∂D(x, y, σ)^T/∂x) X̂
where D(X̂) denotes the scale space function value at the extreme point, ∂D(x, y, σ)^T/∂x is the first derivative of D at the sample point, and X̂ is the offset giving the exact position of the extreme point;
b. the second positioning submodule sequentially applies gray enhancement and normalization to the image output by the image conversion submodule and then eliminates the low-contrast points; in the enhancement, I''(x, y) denotes the gray-enhanced image function, the correction coefficient carries local information, M is the maximum pixel gray value, namely 255, mH is the mean of all pixels in the image with gray value above 128, mL is the mean of all pixels with gray value below 128, and the input is the image processed by the image filtering submodule; the determination formula of a low-contrast point is:
D(X̂) < T1, T1 ∈ [0.01, 0.06]
where T1 is the set threshold;
c. the third positioning submodule computes the 2 × 2 Hessian matrix H at the position and scale of the extreme point to obtain the principal curvatures, and eliminates extreme points whose principal curvature ratio exceeds a set threshold T2, thereby removing said unstable edge points, wherein the threshold T2 takes values in [10, 15] and the principal curvature ratio is determined from the ratio between the eigenvalues of H;
preferably, the image analysis apparatus further includes:
(1) the main direction determining module comprises a connecting sub-module, a merging sub-module and a processing sub-module which are sequentially connected, wherein the connecting sub-module is used for connecting any two adjacent peak values in a gradient direction histogram of the feature points to form a plurality of sub-line segments, the merging sub-module is used for merging the sub-line segments which have similar slopes and are adjacent in the length direction to form a line segment, the processing sub-module is used for taking the direction of the optimal line segment in the line segments as the main direction of the feature points, and the judging formula of the optimal line segment is as follows:
L_Y = L_{ḡmax},  ḡmax = max_n(ḡ_{Ln}),  ḡ_{Ln} = (1/K) Σ_{k=1}^{K} g_k,  Ln ∈ Lυ
where L_Y denotes the optimal line segment, L_{ḡmax} is the line segment whose average gradient value is ḡmax, ḡ_{Ln} is the average gradient value of the n-th line segment, g_k is the gradient value of the k-th of the K sub-line segments making up the n-th line segment, and Lυ is the set of line segments whose length exceeds the average line segment length;
(2) the feature extraction module rotates the neighborhood of the feature point according to the main direction and describes the feature point from the rotated neighborhood, thereby generating the descriptor of the feature point.
Further, the sub-line segments with similar slopes are those whose slope difference is smaller than a preset threshold T3, the threshold T3 taking values in (0, 0.1].
The image preprocessing module of this embodiment takes into account visual habit and the nonlinear relation between the human eye's sensitivity to different colors and the color intensity, so the image is described more accurately; the simplified calculation formula for the Gaussian difference scale space reduces the amount of computation and increases computing speed, which in turn speeds up image analysis; the image feature point positioning module removes low-contrast points and unstable edge points from the extreme points, guaranteeing the validity of the feature points, and the gray enhancement of the image greatly improves stability and makes the removal of low-contrast points more accurate, further improving the accuracy of image analysis; the main direction determining module, with its decision formula for the optimal line segment, takes the direction of the optimal line segment among the segments formed by connecting adjacent peaks in the gradient direction histogram of the feature point as the feature point's main direction; because a line segment is more stable than a single point, the descriptors of corresponding feature points are repeatable, the accuracy of the feature descriptors is improved, and images can be recognized and detected faster and more accurately with strong robustness. This embodiment takes the thresholds T1 = 0.04, T2 = 13 and T3 = 0.04; the accuracy of image analysis increased by 1.5% and the speed increased by 2%.
Example 5
Referring to fig. 1, the image analysis apparatus of the present embodiment includes:
(1) the image preprocessing module comprises an image conversion submodule for converting the color image into a gray image and an image filtering submodule for filtering the gray image, wherein the image gray conversion formula of the image conversion submodule is as follows:
I(x, y) = [max(R(x, y), G(x, y), B(x, y)) + min(R(x, y), G(x, y), B(x, y))] / 2 + 2[max(R(x, y), G(x, y), B(x, y)) - min(R(x, y), G(x, y), B(x, y))]
wherein, R (x, y), G (x, y) and B (x, y) respectively represent the red, green and blue intensity values at the pixel point (x, y), and I (x, y) represents the gray value at the pixel point (x, y);
(2) the image extreme point detection module detects the position of each extreme point in a Gaussian difference scale space built by convolving a Gaussian difference operator with the image: a sampling point is a maximum point when its value is larger than those of its 8 neighbors at the same scale and the 9 + 9 = 18 corresponding points at the adjacent scales above and below (26 neighbors in total), and a minimum point when its value is smaller than all 26 of them; the simplified calculation formula of the Gaussian difference scale space is as follows:
D(x,y,σ)=(G(x,kσ)-G(x,σ))*I'(x,y)+(G(y,kσ)-G(y,σ))*I'(x,y)
where
G(x, σ) = (1/(√(2π) σ)) e^(-x^2/(2σ^2)),   G(y, σ) = (1/(√(2π) σ)) e^(-y^2/(2σ^2))
where D(x, y, σ) denotes the Gaussian difference scale space function, I'(x, y) is the image function output by the image conversion submodule, * denotes convolution, σ is the scale space factor, G(x, σ) and G(y, σ) are the scale-variable Gaussian functions defined above, and k is a constant multiplicative factor;
(3) the image feature point positioning module determines the extreme points that serve as feature points by eliminating the noise-sensitive low-contrast points and the unstable edge points among the extreme points; it comprises a first positioning submodule, a second positioning submodule and a third positioning submodule connected in sequence, wherein the first positioning submodule accurately positions the extreme points, the second positioning submodule removes the low-contrast points, and the third positioning submodule removes the unstable edge points:
a. the first positioning submodule applies a second-order Taylor expansion to the Gaussian difference scale space function and obtains the accurate position of an extreme point by differentiation; the scale space function at the extreme point is:
D(X̂) = D(x, y, σ) + (∂D(x, y, σ)^T/∂x) X̂
where D(X̂) denotes the scale space function value at the extreme point, ∂D(x, y, σ)^T/∂x is the first derivative of D at the sample point, and X̂ is the offset giving the exact position of the extreme point;
b. the second positioning submodule sequentially applies gray enhancement and normalization to the image output by the image conversion submodule and then eliminates the low-contrast points; in the enhancement, I''(x, y) denotes the gray-enhanced image function, the correction coefficient carries local information, M is the maximum pixel gray value, namely 255, mH is the mean of all pixels in the image with gray value above 128, mL is the mean of all pixels with gray value below 128, and the input is the image processed by the image filtering submodule; the determination formula of a low-contrast point is:
D(X̂) < T1, T1 ∈ [0.01, 0.06]
where T1 is the set threshold;
c. the third positioning submodule computes the 2 × 2 Hessian matrix H at the position and scale of the extreme point to obtain the principal curvatures, and eliminates extreme points whose principal curvature ratio exceeds a set threshold T2, thereby removing said unstable edge points, wherein the threshold T2 takes values in [10, 15] and the principal curvature ratio is determined from the ratio between the eigenvalues of H;
preferably, the image analysis apparatus further includes:
(1) the main direction determining module comprises a connecting sub-module, a merging sub-module and a processing sub-module which are sequentially connected, wherein the connecting sub-module is used for connecting any two adjacent peak values in a gradient direction histogram of the feature points to form a plurality of sub-line segments, the merging sub-module is used for merging the sub-line segments which have similar slopes and are adjacent in the length direction to form a line segment, the processing sub-module is used for taking the direction of the optimal line segment in the line segments as the main direction of the feature points, and the judging formula of the optimal line segment is as follows:
L_Y = L_{ḡmax},  ḡmax = max_n(ḡ_{Ln}),  ḡ_{Ln} = (1/K) Σ_{k=1}^{K} g_k,  Ln ∈ Lυ
where L_Y denotes the optimal line segment, L_{ḡmax} is the line segment whose average gradient value is ḡmax, ḡ_{Ln} is the average gradient value of the n-th line segment, g_k is the gradient value of the k-th of the K sub-line segments making up the n-th line segment, and Lυ is the set of line segments whose length exceeds the average line segment length;
(2) the feature extraction module rotates the neighborhood of the feature point according to the main direction and describes the feature point from the rotated neighborhood, thereby generating the descriptor of the feature point.
Further, the sub-line segments with similar slopes are those whose slope difference is smaller than a preset threshold T3, the threshold T3 taking values in (0, 0.1].
The image preprocessing module of this embodiment takes into account visual habit and the nonlinear relation between the human eye's sensitivity to different colors and the color intensity, so the image is described more accurately; the simplified calculation formula for the Gaussian difference scale space reduces the amount of computation and increases computing speed, which in turn speeds up image analysis; the image feature point positioning module removes low-contrast points and unstable edge points from the extreme points, guaranteeing the validity of the feature points, and the gray enhancement of the image greatly improves stability and makes the removal of low-contrast points more accurate, further improving the accuracy of image analysis; the main direction determining module, with its decision formula for the optimal line segment, takes the direction of the optimal line segment among the segments formed by connecting adjacent peaks in the gradient direction histogram of the feature point as the feature point's main direction; because a line segment is more stable than a single point, the descriptors of corresponding feature points are repeatable, the accuracy of the feature descriptors is improved, and images can be recognized and detected faster and more accurately with strong robustness. This embodiment takes the thresholds T1 = 0.05, T2 = 14 and T3 = 0.02; the accuracy of image analysis increased by 1.8% and the speed increased by 1.5%.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the protection scope of the present invention, although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (3)

1. An image analysis device for recognizing and detecting an image, comprising:
(1) the image preprocessing module comprises an image conversion submodule for converting the color image into a gray image and an image filtering submodule for filtering the gray image, wherein the image gray conversion formula of the image conversion submodule is as follows:
I(x, y) = [max(R(x, y), G(x, y), B(x, y)) + min(R(x, y), G(x, y), B(x, y))] / 2 + 2[max(R(x, y), G(x, y), B(x, y)) - min(R(x, y), G(x, y), B(x, y))]
wherein, R (x, y), G (x, y) and B (x, y) respectively represent the red, green and blue intensity values at the pixel point (x, y), and I (x, y) represents the gray value at the pixel point (x, y);
(2) the image extreme point detection module detects the position of each extreme point in a Gaussian difference scale space built by convolving a Gaussian difference operator with the image: a sampling point is a maximum point when its value is larger than those of its 8 neighbors at the same scale and the 9 + 9 = 18 corresponding points at the adjacent scales above and below (26 neighbors in total), and a minimum point when its value is smaller than all 26 of them; the simplified calculation formula of the Gaussian difference scale space is as follows:
D(x,y,σ)=(G(x,kσ)-G(x,σ))*I'(x,y)+(G(y,kσ)-G(y,σ))*I'(x,y)
where
G(x, σ) = (1/(√(2π) σ)) e^(-x^2/(2σ^2)),   G(y, σ) = (1/(√(2π) σ)) e^(-y^2/(2σ^2))
where D(x, y, σ) denotes the Gaussian difference scale space function, I'(x, y) is the image function output by the image conversion submodule, * denotes convolution, σ is the scale space factor, G(x, σ) and G(y, σ) are the scale-variable Gaussian functions defined above, and k is a constant multiplicative factor;
(3) the image feature point positioning module determines the extreme points that serve as feature points by eliminating the noise-sensitive low-contrast points and the unstable edge points among the extreme points; it comprises a first positioning submodule, a second positioning submodule and a third positioning submodule connected in sequence, wherein the first positioning submodule accurately positions the extreme points, the second positioning submodule removes the low-contrast points, and the third positioning submodule removes the unstable edge points:
a. the first positioning submodule applies a second-order Taylor expansion to the Gaussian difference scale space function and obtains the accurate position of an extreme point by differentiation; the scale space function at the extreme point is:
D(X̂) = D(x, y, σ) + (∂D(x, y, σ)^T/∂x) X̂
where D(X̂) denotes the scale space function value at the extreme point, ∂D(x, y, σ)^T/∂x is the first derivative of D at the sample point, and X̂ is the offset giving the exact position of the extreme point;
b. the second positioning submodule sequentially applies gray enhancement and normalization to the image output by the image conversion submodule and then eliminates the low-contrast points; in the enhancement, I''(x, y) denotes the gray-enhanced image function, the correction coefficient carries local information, M is the maximum pixel gray value, namely 255, mH is the mean of all pixels in the image with gray value above 128, mL is the mean of all pixels with gray value below 128, and the input is the image processed by the image filtering submodule; the determination formula of a low-contrast point is:
D(X̂) < T1, T1 ∈ [0.01, 0.06]
where T1 is the set threshold;
c. the third positioning submodule computes the 2 × 2 Hessian matrix H at the position and scale of the extreme point to obtain the principal curvatures, and eliminates extreme points whose principal curvature ratio exceeds a set threshold T2, thereby removing said unstable edge points, wherein the threshold T2 takes values in [10, 15] and the principal curvature ratio is determined from the ratio between the eigenvalues of H.
2. An image analysis apparatus according to claim 1, further comprising:
(1) the main direction determining module comprises a connecting sub-module, a merging sub-module and a processing sub-module which are sequentially connected, wherein the connecting sub-module is used for connecting any two adjacent peak values in a gradient direction histogram of the feature points to form a plurality of sub-line segments, the merging sub-module is used for merging the sub-line segments which have similar slopes and are adjacent in the length direction to form a line segment, the processing sub-module is used for taking the direction of the optimal line segment in the line segments as the main direction of the feature points, and the judging formula of the optimal line segment is as follows:
L_Y = L_{ḡmax},  ḡmax = max_n(ḡ_{Ln}),  ḡ_{Ln} = (1/K) Σ_{k=1}^{K} g_k,  Ln ∈ Lυ
where L_Y denotes the optimal line segment, L_{ḡmax} is the line segment whose average gradient value is ḡmax, ḡ_{Ln} is the average gradient value of the n-th line segment, g_k is the gradient value of the k-th of the K sub-line segments making up the n-th line segment, and Lυ is the set of line segments whose length exceeds the average line segment length;
(2) the feature extraction module rotates the neighborhood of the feature point according to the main direction and describes the feature point from the rotated neighborhood, thereby generating the descriptor of the feature point.
3. The image analysis apparatus according to claim 1, wherein the sub-line segments having similar slopes are those whose slope difference is smaller than a preset threshold T3, the threshold T3 taking values in (0, 0.1].
CN201610230473.XA 2016-04-14 2016-04-14 Image analyzing apparatus Pending CN105844651A (en)


Publications (1)

Publication Number Publication Date
CN105844651A true CN105844651A (en) 2016-08-10


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106737870A (en) * 2017-03-02 2017-05-31 深圳万智联合科技有限公司 A kind of good arenas robot cooling platform of heat dispersion
CN106723241A (en) * 2017-01-09 2017-05-31 浙江大学 A kind of 3D portraits food Method of printing
CN110225335A (en) * 2019-06-20 2019-09-10 中国石油大学(北京) Camera stability assessment method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101470896A (en) * 2007-12-24 2009-07-01 南京理工大学 Automotive target flight mode prediction technique based on video analysis
CN103020945A (en) * 2011-09-21 2013-04-03 中国科学院电子学研究所 Remote sensing image registration method of multi-source sensor
CN104978709A (en) * 2015-06-24 2015-10-14 北京邮电大学 Descriptor generation method and apparatus


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吴京辉: "Research on Tracking and Recognition of Targets in Video Surveillance", China Doctoral Dissertations Full-text Database, Information Science and Technology *
张建兴: "Attention-based Object Recognition Algorithms and Their Application to Mobile Robots", China Master's Theses Full-text Database, Information Science and Technology *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160810