CN108960012A - Feature point detection method and device and electronic equipment - Google Patents

Feature point detection method and device and electronic equipment

Info

Publication number
CN108960012A
CN108960012A (application CN201710366545.8A)
Authority
CN
China
Prior art keywords
point
target image
characteristic
depth
foreground features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710366545.8A
Other languages
Chinese (zh)
Other versions
CN108960012B (en)
Inventor
田光亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ThunderSoft Co Ltd
Original Assignee
ThunderSoft Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ThunderSoft Co Ltd filed Critical ThunderSoft Co Ltd
Priority to CN201710366545.8A priority Critical patent/CN108960012B/en
Publication of CN108960012A publication Critical patent/CN108960012A/en
Application granted granted Critical
Publication of CN108960012B publication Critical patent/CN108960012B/en
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention disclose a feature point detection method and device, and an electronic device, which relate to the technical field of image processing and address the real-time performance and accuracy problems of feature point recognition in the prior art. The feature point detection method of the embodiments of the present invention includes: obtaining depth feature information of a target image; based on the depth feature information, dividing fast feature points extracted from the target image into foreground feature points and background feature points; performing strong feature detection on the foreground feature points to obtain strong feature points of the target image; and taking the strong feature points as target feature points of the target image. Embodiments of the invention also disclose a feature point detection device and an electronic device. With the above scheme, both the real-time performance and the accuracy of feature point recognition can be taken into account, improving the efficiency of feature point recognition.

Description

Feature point detection method and device and electronic equipment
Technical field
The present invention relates to the technical field of image processing, and in particular to feature point detection techniques in images.
Background art
When detecting a moving target in a video, the moving target may appear at any position in the frame, and the pixels that represent it may likewise appear anywhere. Since the motion vectors of a large number of pixels are consistent, a region with identical motion vectors can be represented by several geometric shapes in order to reduce the amount of computation on moving-target pixels, and the feature points of such a shape can be represented by image corners; target image feature point detection based on corner techniques is therefore a key technology of moving-target recognition. In scenes with a high refresh rate (especially game applications), the processing time available for each frame is very short in order to keep the motion of the video continuous, which places higher requirements on the real-time performance of corner detection.
In the course of implementing the present invention, the inventor found that the fast corner detection of the prior art does not need to compute the horizontal and vertical gradients of the whole image, so its computational efficiency is high and a large number of corners can be obtained in a relatively short time. However, the corners detected by this algorithm are of low quality and poor accuracy. In addition, the prior art also contains schemes that determine target image feature points using Harris corners or Shi-Tomasi corners. Harris and Shi-Tomasi corners use gradient information to search for corners by computing a Hessian matrix and a corner response function: the higher the corner response, the higher the confidence. Strong corners are then found by non-maximum suppression or similar methods. These methods detect corners with high accuracy, and the intermediate Hessian matrix and gradient information can be reused by downstream algorithms in moving-target detection, but the convolution operations involved in computing the gradients are very time-consuming and do not meet the real-time requirements of mobile devices. The prior art also contains methods that combine FAST corners with gradient-based features, in which candidate corners are first pre-selected by the FAST feature and corners are then selected from the FAST corners based on gradient features; however, such algorithms still suffer from a large time overhead.
For image processing scenes such as real-time image rendering, which require high speed and accuracy, a fast and accurate feature point detection technique is needed.
Summary of the invention
In view of this, embodiments of the present invention provide a feature point detection method, a device, an electronic device, a non-transitory computer-readable storage medium and a computer program, which solve at least some of the problems existing in the prior art.
In a first aspect, embodiments of the present invention provide a feature point detection method, comprising:
obtaining depth feature information of a target image;
based on the depth feature information, dividing fast feature points extracted from the target image into foreground feature points and background feature points;
performing strong feature detection on the foreground feature points to obtain strong feature points of the target image;
taking the strong feature points as target feature points of the target image.
According to a specific implementation of the embodiments of the present invention, before dividing the fast feature points extracted from the target image into foreground feature points and background feature points based on the depth feature information, the method further comprises:
obtaining the target image on which feature point detection is to be performed;
performing a fast feature point extraction operation on the target image to determine the fast feature points of the target image.
According to a specific implementation of the embodiments of the present invention, performing the fast feature point extraction operation on the target image to determine the fast feature points of the target image comprises:
selecting any pixel in the target image as a central pixel;
obtaining a pixel ring centered at the central pixel, with a radius of r pixels and a width of 1 pixel;
judging whether there are n pixels on the pixel ring whose gray values are all greater than or all less than that of the central pixel;
if so, determining the central pixel to be a fast feature point.
According to a specific implementation of the embodiments of the present invention, dividing the fast feature points extracted from the target image into foreground feature points and background feature points based on the depth feature information comprises:
obtaining depth values of the fast feature points;
judging whether each depth value is greater than a preset threshold;
determining the fast feature points whose depth values are greater than the preset threshold to be foreground feature points;
determining the fast feature points whose depth values are less than or equal to the preset threshold to be background feature points.
According to a specific implementation of the embodiments of the present invention, performing strong feature detection on the foreground feature points to obtain the strong feature points of the target image comprises:
obtaining gradient information of the foreground feature points;
calculating corner responses of the foreground feature points based on the gradient information;
determining the foreground feature points whose corner responses are greater than a preset threshold to be strong feature points.
According to a specific implementation of the embodiments of the present invention, obtaining the gradient information of the foreground feature points comprises:
traversing the foreground feature points;
generating a neighborhood mask for each foreground feature point;
calculating the horizontal gradient and vertical gradient of the foreground feature point within the neighborhood mask, thereby obtaining the gradient information of the foreground feature point.
According to a specific implementation of the embodiments of the present invention, before performing strong feature detection on the foreground feature points to obtain the strong feature points of the target image, the method further comprises:
performing region division on the target image based on the depth feature information of the target image to obtain depth intervals of the target image;
extracting edge data of the depth image within the depth intervals;
performing edge enhancement on the target image based on the edge data.
According to a specific implementation of the embodiments of the present invention, performing region division on the target image based on the depth feature information of the target image comprises:
generating random seed points in the depth data corresponding to the depth feature information;
obtaining peak and valley values of a histogram of the random seed points;
determining cluster centers of the depth data using the peak and valley values, and clustering the random seed points according to the cluster centers;
determining the divided regions of the target image based on the clustering result.
According to a specific implementation of the embodiments of the present invention, before obtaining the depth feature information of the target image, the method further comprises:
pre-processing the target image.
In a second aspect, embodiments of the present invention further provide a feature point detection device, comprising:
a first obtaining module, configured to obtain depth feature information of a target image;
a distinguishing module, configured to divide, based on the depth feature information, fast feature points extracted from the target image into foreground feature points and background feature points;
a detection module, configured to perform strong feature detection on the foreground feature points to obtain strong feature points of the target image;
an execution module, configured to take the strong feature points as target feature points of the target image.
According to a specific implementation of the embodiments of the present invention, the device further comprises:
a second obtaining module, configured to obtain the target image on which feature point detection is to be performed;
a determining module, configured to perform a fast feature point extraction operation on the target image to determine the fast feature points of the target image.
According to a specific implementation of the embodiments of the present invention, the determining module is further configured to:
select any pixel in the target image as a central pixel;
obtain a pixel ring centered at the central pixel, with a radius of r pixels and a width of 1 pixel;
judge whether there are n pixels on the pixel ring whose gray values are all greater than or all less than that of the central pixel;
if so, determine the central pixel to be a fast feature point.
According to a specific implementation of the embodiments of the present invention, the distinguishing module is further configured to:
obtain depth values of the fast feature points;
judge whether each depth value is greater than a preset threshold;
determine the fast feature points whose depth values are greater than the preset threshold to be foreground feature points;
determine the fast feature points whose depth values are less than or equal to the preset threshold to be background feature points.
According to a specific implementation of the embodiments of the present invention, the detection module is further configured to:
obtain gradient information of the foreground feature points;
calculate corner responses of the foreground feature points based on the gradient information;
determine the foreground feature points whose corner responses are greater than a preset threshold to be strong feature points.
According to a specific implementation of the embodiments of the present invention, the detection module is further configured to:
traverse the foreground feature points;
generate a neighborhood mask for each foreground feature point;
calculate the horizontal gradient and vertical gradient of the foreground feature point within the neighborhood mask, thereby obtaining the gradient information of the foreground feature point.
According to a specific implementation of the embodiments of the present invention, the device further comprises:
a division module, configured to perform region division on the target image based on the depth feature information of the target image to obtain depth intervals of the target image;
an extraction module, configured to extract edge data of the depth image within the depth intervals;
an edge enhancement module, configured to perform edge enhancement on the target image based on the edge data.
According to a specific implementation of the embodiments of the present invention, the division module is further configured to:
generate random seed points in the depth data corresponding to the depth feature information;
obtain peak and valley values of a histogram of the random seed points;
determine cluster centers of the depth data using the peak and valley values, and cluster the random seed points according to the cluster centers;
determine the divided regions of the target image based on the clustering result.
According to a specific implementation of the embodiments of the present invention, the device further comprises:
a pre-processing module, configured to pre-process the target image.
In a third aspect, embodiments of the present invention further provide an electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the feature point detection method described in the first aspect or any implementation of the first aspect.
In a fourth aspect, embodiments of the present invention further provide a non-transitory computer-readable storage medium storing computer instructions, the computer instructions being used to cause a computer to perform the feature point detection method described in the first aspect or any implementation of the first aspect.
In a fifth aspect, embodiments of the present invention further provide a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the feature point detection method described in the first aspect or any implementation of the first aspect.
The feature point detection method and device, electronic device, non-transitory computer-readable storage medium and computer program provided by the embodiments of the present invention use depth feature information to divide the fast feature points extracted from a target image into foreground feature points and background feature points, and only the foreground feature points are examined during strong feature point recognition, thereby improving both the real-time performance and the accuracy of recognition. The predefined fast feature point recognition method and the foreground/background feature point distinction based on depth feature information reduce the computation time, and performing edge enhancement on the target image with the depth feature information before strong corner recognition reduces the complexity of strong corner recognition.
Brief description of the drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a feature point detection method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another feature point detection method according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of determining fast feature points according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of distinguishing foreground feature points from background feature points according to an embodiment of the present invention;
Fig. 5 is a schematic flowchart of determining strong feature points according to an embodiment of the present invention;
Fig. 6 is a schematic flowchart of determining gradient information of foreground feature points according to an embodiment of the present invention;
Fig. 7 is a schematic flowchart of performing edge enhancement on a target image according to an embodiment of the present invention;
Fig. 8 is a schematic flowchart of performing region division on a target image according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a feature point detection device according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of another feature point detection device according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of another feature point detection device according to an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
It should be understood that the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of feature point detection according to an embodiment of the present invention. As shown in Fig. 1, the method of this embodiment may comprise the following steps:
S101: obtain depth feature information of the target image.
Image depth refers to the number of bits used to store each pixel and is also used to measure the color resolution of an image. The image depth determines the number of colors each pixel of a color image may have, or the number of gray levels each pixel of a grayscale image may have; it determines the maximum number of colors that may appear in a color image or the maximum gray level in a grayscale image.
The depth feature data of the target image can usually be obtained with a camera device that has a depth image acquisition function, or it can be computed and generated according to a fixed model algorithm when the image is generated.
The depth feature data of the image is stored with the target image; specifically, the depth feature information of the target image can be obtained from the depth buffer corresponding to the target image.
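By way of illustration only, the sketch below assumes the renderer exposes the per-pixel depth buffer as a two-dimensional array (the helper read_depth_buffer mentioned in the comment is hypothetical, not an API defined by this disclosure). It rescales raw depth values to an 8-bit range so that larger values mean nearer surfaces, which matches the later use of a 0-255 foreground threshold; the inversion assumes a conventional z-buffer where smaller raw values are nearer, which is an assumption rather than something prescribed here.

```python
import numpy as np

def normalize_depth(depth_raw):
    """Normalize a raw depth buffer to 8-bit values (larger = nearer surface).

    depth_raw: 2-D float array read from the depth buffer, assumed to hold
    values in [0, 1] with smaller values meaning closer to the camera.
    """
    depth = np.asarray(depth_raw, dtype=np.float32)
    # Invert so nearer surfaces get larger values, then scale to 0..255.
    depth_8u = ((1.0 - depth) * 255.0).clip(0, 255).astype(np.uint8)
    return depth_8u

# depth_map = normalize_depth(read_depth_buffer())  # read_depth_buffer is assumed
```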
S102: based on the depth feature information, divide the fast feature points extracted from the target image into foreground feature points and background feature points.
Specifically, the depth values of the fast feature points extracted from the target image can be obtained and compared with a preset threshold. A fast feature point whose depth value is greater than the preset threshold can be marked as a foreground feature point, and a fast feature point whose depth value is less than or equal to the preset threshold can be marked as a background feature point.
S103: perform strong feature detection on the foreground feature points to obtain strong feature points of the target image.
Strong feature points differ from fast feature points in that they have higher accuracy. Common fast feature points include FAST corners; common strong feature points include Harris corners, Shi-Tomasi corners, and the like.
Taking FAST corners and Harris corners as an example, the calculation process of strong feature detection on the foreground feature points is as follows. To improve calculation speed, a neighborhood mask is first generated for the foreground FAST corners, and the horizontal and vertical gradients are calculated within the block corresponding to the mask. The Hessian matrix is calculated from the gradient data, and its corner response is computed. If the response is a local extremum of the neighborhood and is greater than a threshold t, the point is marked as a strong corner. In particular, the threshold t and the neighborhood size can be preset values or can be generated dynamically from the depth data.
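As an illustrative sketch of one possible reading of this step (not a definitive implementation of the disclosure), the code below assumes the image is a grayscale NumPy array and the foreground FAST corners are given as (x, y) coordinates. It computes Sobel gradients only inside a small block around each corner, accumulates the 2x2 second-moment matrix (referred to above as the Hessian matrix), and keeps points whose Harris-style response exceeds the threshold t. The block size, k and t values are illustrative assumptions, and the local-extremum check mentioned above is omitted for brevity.

```python
import cv2
import numpy as np

def strong_corners(gray, fg_points, block=7, k=0.04, t=1e6):
    """Return the foreground FAST corners whose corner response exceeds t."""
    half = block // 2
    strong = []
    for x, y in fg_points:
        # Neighborhood mask: a small block centered on the candidate corner.
        y0, y1 = max(y - half, 0), min(y + half + 1, gray.shape[0])
        x0, x1 = max(x - half, 0), min(x + half + 1, gray.shape[1])
        patch = gray[y0:y1, x0:x1].astype(np.float32)
        # Horizontal and vertical gradients inside the mask only.
        gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1, ksize=3)
        # Entries of the 2x2 matrix accumulated over the block.
        a = float((gx * gx).sum())
        b = float((gx * gy).sum())
        c = float((gy * gy).sum())
        # Harris-style corner response R = det(M) - k * trace(M)^2.
        response = (a * c - b * b) - k * (a + c) ** 2
        if response > t:
            strong.append((x, y))
    return strong
```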
S104: take the strong feature points as the target feature points of the target image.
By taking the strong feature points as the target feature points of the target image, the feature contours of objects in the target image can be constructed from the strong feature points, and the motion trend of objects in the target image can be analyzed by analyzing the change trend of the strong feature points.
With the method of this embodiment, fast feature points are distinguished based on depth information, and only the foreground feature points among the fast feature points are examined during strong feature point recognition, which improves the accuracy of feature point recognition while guaranteeing its real-time performance.
According to another embodiment of the present invention, referring to Fig. 2, in addition to the steps of the embodiment corresponding to Fig. 1, the feature point detection method may optionally further comprise:
S201: obtain the target image on which feature point detection is to be performed.
Specifically, the data in the color buffer of the target image is obtained. The color buffer stores the image information of the target image in the plane; based on this image information, the shapes of objects can be conveniently obtained.
S202: perform a fast feature point extraction operation on the target image to determine the fast feature points of the target image.
In practical applications, both the real-time performance and the accuracy of target image processing need to be taken into account, so fast feature point extraction is performed first. Referring to Fig. 3, performing the fast feature point extraction operation on the target image to determine the fast feature points of the target image may comprise the following steps:
S301: select any pixel in the target image as a central pixel;
S302: obtain a pixel ring centered at the central pixel, with a radius of r pixels and a width of 1 pixel;
S303: judge whether there are n pixels on the pixel ring whose gray values are all greater than or all less than that of the central pixel;
S304: if so, determine the central pixel to be a fast feature point.
In an actual implementation, the fast feature points can be FAST corners, for example detected as sketched below.
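As a hedged example, OpenCV's built-in FAST detector implements exactly this ring test (with a radius of 3 pixels and typically n = 9 contiguous ring pixels); the snippet below shows how the fast feature points of a grayscale target image could be extracted, with the intensity threshold being an illustrative choice rather than a value fixed by this disclosure.

```python
import cv2

def detect_fast_points(gray, threshold=20):
    """Detect FAST corners in a grayscale image and return their (x, y) positions."""
    detector = cv2.FastFeatureDetector_create(threshold=threshold,
                                              nonmaxSuppression=True)
    keypoints = detector.detect(gray, None)
    return [(int(kp.pt[0]), int(kp.pt[1])) for kp in keypoints]
```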
By performing fast feature point recognition on the target image, the real-time performance of target image feature point recognition is improved.
Optionally, referring to Fig. 4, an embodiment of the present invention further provides a method for distinguishing foreground feature points from background feature points, comprising:
S401: obtain the depth values of the fast feature points.
Specifically, the depth values of the fast feature points are obtained from the depth buffer. Depth buffering is the process of handling image depth coordinates in 3D graphics; it is usually done in hardware, but can also be done in software. When a 3D graphics card renders an object, the depth (i.e. the z coordinate) of each generated pixel is stored in a buffer. This buffer, called the depth buffer, is typically organized as an x-y two-dimensional array that stores the depth of each screen pixel.
The resolution of the depth buffer has a great influence on scene quality: when two objects are very close, a 16-bit depth buffer may cause "z-fighting" artifacts in the buffer, so a 24-bit or 32-bit depth buffer is used herein.
S402: judge whether the depth value is greater than a preset threshold.
Specifically, once a FAST corner has been determined, it can be judged whether its corresponding depth value is greater than a threshold t, whose value can be set according to actual needs. As an example, t can be chosen in the range of 0 to 255.
S403: determine the fast feature points whose depth values are greater than the preset threshold to be foreground feature points.
S404: determine the fast feature points whose depth values are less than or equal to the preset threshold to be background feature points.
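A minimal sketch of steps S401 to S404 is given below, assuming the fast feature points are (x, y) pixel positions and depth_map is an 8-bit depth buffer indexed as [row, column]; the preset threshold value is an illustrative assumption.

```python
def split_by_depth(points, depth_map, threshold=128):
    """Split fast feature points into foreground and background by depth value."""
    foreground, background = [], []
    for x, y in points:
        depth_value = int(depth_map[y, x])   # S401: depth value of the point
        if depth_value > threshold:          # S402/S403: greater than threshold
            foreground.append((x, y))
        else:                                # S404: less than or equal to threshold
            background.append((x, y))
    return foreground, background
```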
By dividing the feature points into foreground feature points and background feature points, the workload of subsequent strong feature point recognition is reduced, and the time taken by strong feature point recognition is shortened.
Optionally, referring to Fig. 5, an embodiment of the present invention further provides a method for determining strong feature points, comprising the following steps:
S501: obtain the gradient information of the foreground feature points.
In the process of obtaining the gradient information of the foreground feature points, referring to Fig. 6, the following steps may specifically be included:
S601: traverse the foreground feature points.
All the foreground feature points obtained in step S403 are traversed, and the gradient information of each foreground feature point is calculated in turn in a certain order.
S602: generate a neighborhood mask for the foreground feature point.
In a concrete implementation, the mask can be designed as a binary image composed of 0s and 1s. When the mask is applied, regions with value 1 are processed, while masked regions with value 0 are excluded from the calculation. After a foreground feature point has been selected, the image mask can be defined by specifying data values, data ranges, finite or infinite values, regions of interest and annotation files, or the mask can be built from any combination of the above options.
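By way of illustration only, such a neighborhood mask can be realized as a binary array that is 1 inside a small window around the foreground feature point and 0 elsewhere; the window size used here is an assumption, not a value prescribed by this disclosure.

```python
import numpy as np

def neighborhood_mask(shape, point, size=7):
    """Build a binary mask (1 = processed, 0 = excluded) around one feature point."""
    mask = np.zeros(shape, dtype=np.uint8)
    x, y = point
    half = size // 2
    y0, y1 = max(y - half, 0), min(y + half + 1, shape[0])
    x0, x1 = max(x - half, 0), min(x + half + 1, shape[1])
    mask[y0:y1, x0:x1] = 1
    return mask
```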
S603: calculate the horizontal gradient and vertical gradient of the foreground feature point within the neighborhood mask, thereby obtaining the gradient information of the foreground feature point.
The gradient of the image function f(x, y) at a pixel (x, y) is a vector with both magnitude and direction. Let Gx and Gy denote the gradients in the x direction and the y direction respectively; the gradient vector can then be written as grad f(x, y) = (Gx, Gy).
The magnitude of this vector is |grad f(x, y)| = sqrt(Gx^2 + Gy^2),
and its direction angle is theta = arctan(Gy / Gx).
In digital images, derivatives are approximated by differences. The simplest gradient approximation is:
Gx = f(x, y) - f(x-1, y)
Gy = f(x, y) - f(x, y-1)
The direction of the gradient is the direction in which the function f(x, y) changes fastest. When an edge is present in the image, the gradient value is relatively large; conversely, in smoother parts of the image the gray value changes less and the corresponding gradient is smaller. In image processing, the modulus of the gradient is simply referred to as the gradient, and an image composed of image gradients is called a gradient image. The gradient can be computed by convolving with a small-region template; common operators include the Sobel operator, the Robinson operator, the Laplace operator and so on.
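The simple difference approximation above can be written directly in a few lines. The sketch below applies it only where the neighborhood mask is 1 and also computes the gradient magnitude and direction according to the formulas given here; it is an illustrative sketch only, and a Sobel or other operator could equally be used.

```python
import numpy as np

def masked_gradient(gray, mask):
    """Gx, Gy, magnitude and direction of f(x, y) by forward differences, inside the mask."""
    f = gray.astype(np.float32)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, 1:] = f[:, 1:] - f[:, :-1]   # Gx = f(x, y) - f(x-1, y)
    gy[1:, :] = f[1:, :] - f[:-1, :]   # Gy = f(x, y) - f(x, y-1)
    gx *= mask
    gy *= mask
    magnitude = np.sqrt(gx ** 2 + gy ** 2)   # |grad f| = sqrt(Gx^2 + Gy^2)
    direction = np.arctan2(gy, gx)           # theta = arctan(Gy / Gx)
    return gx, gy, magnitude, direction
```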
S502: based on the gradient information, calculate the corner responses of the foreground feature points.
Specifically, a corner response function can be defined, and the corner response of each foreground feature point is calculated by this corner response function.
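One common choice of corner response function, given here only as an example consistent with the Harris and Shi-Tomasi corners mentioned in this disclosure, is built from the entries of the 2x2 gradient (second-moment) matrix accumulated over the feature point's neighborhood mask; the value of k is an illustrative assumption.

```python
def corner_response(a, b, c, k=0.04, shi_tomasi=False):
    """Corner response from the matrix M = [[a, b], [b, c]].

    a = sum(Gx*Gx), b = sum(Gx*Gy), c = sum(Gy*Gy) over the neighborhood mask.
    """
    if shi_tomasi:
        # Shi-Tomasi response: the smaller eigenvalue of M.
        return 0.5 * ((a + c) - ((a - c) ** 2 + 4 * b * b) ** 0.5)
    # Harris response: det(M) - k * trace(M)^2.
    return (a * c - b * b) - k * (a + c) ** 2
```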
S503: determine the foreground feature points whose corner responses are greater than a preset threshold to be strong feature points.
By performing strong feature point detection on the foreground feature points, the accuracy of feature point detection is guaranteed.
Optionally, referring to Fig. 7, an embodiment of the present invention further provides a method for performing edge enhancement on the target image, comprising the following steps:
S701: perform region division on the target image based on the depth feature information of the target image to obtain the depth intervals of the target image.
In the process of implementing S701, the following steps may be included:
S801: generate random seed points in the depth data corresponding to the depth feature information.
To improve calculation speed, several seed points are generated in the depth buffer. For example, the random seed points can be generated with a random number generator, which may use a seed of fixed size or of variable size. Data entropy can also be used directly, generating the result directly from the entropy source of the depth buffer, i.e. an m-bit seed taken directly from the buffer.
S802: obtain the peak and valley values of the histogram of the random seed points.
An image histogram is a statistical form reflecting the distribution of image pixels: the abscissa represents the type of image pixel, which may be a gray level or a color, and the ordinate represents the total number of pixels of each value in the image or its percentage of all pixels. The histogram of the random seed points can be calculated in various ways and with various functions; for example, the function that computes image histograms in OpenCV is calcHist.
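For example, assuming the seed depth values have been collected into an 8-bit array, the histogram and a lightly smoothed version of it (for locating peaks and valleys) could be obtained roughly as follows; the smoothing kernel size is an assumption.

```python
import cv2
import numpy as np

def seed_depth_histogram(seed_depths, smooth_ksize=5):
    """256-bin histogram of the seed depth values, smoothed before peak/valley search."""
    seeds = np.asarray(seed_depths, dtype=np.uint8).reshape(-1, 1)
    hist = cv2.calcHist([seeds], [0], None, [256], [0, 256]).flatten()
    # Simple moving-average smoothing before locating peaks and valleys.
    kernel = np.ones(smooth_ksize, dtype=np.float32) / smooth_ksize
    smoothed = np.convolve(hist, kernel, mode='same')
    return hist, smoothed
```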
S803: determine the cluster centers of the depth data using the peak and valley values, and cluster the random seed points according to the cluster centers.
A depth histogram is computed over the above seed points; after smoothing the histogram, its peak and valley values are calculated, and several cluster centers are then initialized from them.
S804: determine the divided regions of the target image based on the clustering result.
Specifically, a k-means algorithm can be used to cluster the seed points, and the complete depth data is finally divided into several depth layers, i.e. depth intervals, which appear as several blocks on the image.
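A rough sketch of this clustering step follows, assuming the number of layers has already been chosen (for example from the number of histogram peaks) and that OpenCV's k-means is acceptable; the termination criteria and number of attempts are illustrative values.

```python
import cv2
import numpy as np

def divide_depth_regions(depth_map, seed_depths, num_layers):
    """Cluster the seed depths with k-means, then assign every pixel to the nearest center."""
    data = np.asarray(seed_depths, dtype=np.float32).reshape(-1, 1)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(data, num_layers, None, criteria, 3,
                               cv2.KMEANS_PP_CENTERS)
    centers = centers.flatten()
    # Label every pixel of the depth buffer with the index of its nearest cluster
    # center; each label corresponds to one depth layer (depth interval).
    dist = np.abs(depth_map.astype(np.float32)[..., None] - centers[None, None, :])
    return dist.argmin(axis=-1), centers
```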
S702: extract the edge data of the depth image within the depth intervals.
Edge detection can be done in a variety of ways. For example, it can be realized by calculating the correlation between a pixel at a certain position and its four neighboring pixels: the value of the central pixel is subtracted from each of its neighboring pixels, and the absolute values of the differences are taken; two pixels are considered correlated when the absolute value of their difference is less than a set correlation threshold. A pixel correlated with all of its surrounding neighbors lies inside the target, a pixel correlated with three of the surrounding pixels lies on the target edge, and a pixel correlated with two of the surrounding pixels lies at an intersection of target boundaries.
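A literal, illustrative implementation of this four-neighbor correlation test could look as follows (the correlation threshold is an assumption); pixels correlated with all four neighbors are treated as interior, those correlated with exactly three as edge pixels, and those correlated with exactly two as boundary intersections. Border pixels, which have fewer than four neighbors, are ignored in this sketch.

```python
import numpy as np

def classify_edges(region, corr_threshold=8):
    """Count, for each pixel, how many of its 4 neighbors it is correlated with."""
    f = region.astype(np.int32)
    corr = np.zeros_like(f)
    # A neighbor is "correlated" when the absolute difference is below the threshold.
    corr[1:, :]  += (np.abs(f[1:, :]  - f[:-1, :]) < corr_threshold)  # upper neighbor
    corr[:-1, :] += (np.abs(f[:-1, :] - f[1:, :])  < corr_threshold)  # lower neighbor
    corr[:, 1:]  += (np.abs(f[:, 1:]  - f[:, :-1]) < corr_threshold)  # left neighbor
    corr[:, :-1] += (np.abs(f[:, :-1] - f[:, 1:])  < corr_threshold)  # right neighbor
    interior = corr == 4
    edge = corr == 3
    intersection = corr == 2
    return interior, edge, intersection
```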
S703: perform edge enhancement on the target image based on the edge data.
Edge enhancement can be carried out with high-pass filtering. The edges and details (lines) of an image correspond to the high-frequency components of the image spectrum; high-pass filtering lets the high-frequency components pass while appropriately suppressing the low-frequency components, which makes the details of the image clearer and realizes edge enhancement of the image.
Alternatively, the edge data of the depth image can be extracted with an edge detection operator based on the depth intervals, and edge enhancement of the color buffer can finally be realized with an edge enhancement operator.
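As one hedged example of the high-pass approach, a Laplacian-based sharpening of the color buffer restricted to the extracted edge regions could be written as follows; the kernel size and weighting factor are illustrative choices, not values prescribed by this disclosure.

```python
import cv2
import numpy as np

def enhance_edges(color_image, edge_mask, weight=0.6):
    """Sharpen the color buffer along detected edges using a Laplacian high-pass term."""
    gray = cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY)
    highpass = cv2.Laplacian(gray, cv2.CV_32F, ksize=3)
    enhanced = color_image.astype(np.float32)
    # Add the high-frequency component back only where edges were detected.
    boost = weight * highpass * (edge_mask > 0)
    enhanced += boost[..., None]
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```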
With the scheme of this embodiment, edge enhancement can be performed on the target image based on the depth data, and the enhanced target image can improve the efficiency of subsequent strong feature point detection.
The fast feature points in the above embodiments can be FAST corners, and the strong feature points can be Harris corners or Shi-Tomasi corners.
Corresponding to the foregoing feature point detection method embodiments, an embodiment of the present invention further provides a feature point detection device. As shown in Fig. 9, the feature point detection device 10 comprises:
a first obtaining module 101, configured to obtain depth feature information of a target image.
Image depth refers to the number of bits used to store each pixel and is also used to measure the color resolution of an image. The image depth determines the number of colors each pixel of a color image may have, or the number of gray levels each pixel of a grayscale image may have; it determines the maximum number of colors that may appear in a color image or the maximum gray level in a grayscale image.
The depth feature data of the target image can usually be obtained with a camera device that has a depth image acquisition function, or it can be computed and generated according to a fixed model algorithm when the image is generated.
The depth feature data of the image is stored with the target image; specifically, the depth feature information of the target image can be obtained from the depth buffer corresponding to the target image.
a distinguishing module 102, configured to divide, based on the depth feature information, fast feature points extracted from the target image into foreground feature points and background feature points.
Specifically, the depth values of the fast feature points extracted from the target image can be obtained and compared with a preset threshold. A fast feature point whose depth value is greater than the preset threshold can be marked as a foreground feature point, and a fast feature point whose depth value is less than or equal to the preset threshold can be marked as a background feature point.
a detection module 103, configured to perform strong feature detection on the foreground feature points to obtain strong feature points of the target image.
Strong feature points differ from fast feature points in that they have higher accuracy. Common fast feature points include FAST corners; common strong feature points include Harris corners, Shi-Tomasi corners, and the like.
Taking FAST corners and Harris corners as an example, the calculation process of strong feature detection on the foreground feature points is as follows. To improve calculation speed, a neighborhood mask is first generated for the foreground FAST corners, and the horizontal and vertical gradients are calculated within the block corresponding to the mask. The Hessian matrix is calculated from the gradient data, and its corner response is computed. If the response is a local extremum of the neighborhood and is greater than a threshold t, the point is marked as a strong corner. In particular, the threshold t and the neighborhood size can be preset values or can be generated dynamically from the depth data.
an execution module 104, configured to take the strong feature points as the target feature points of the target image.
By taking the strong feature points as the target feature points of the target image, the feature contours of objects in the target image can be constructed from the strong feature points, and the motion trend of objects in the target image can be analyzed by analyzing the change trend of the strong feature points.
With the device of this embodiment, fast feature points are distinguished based on depth information, and only the foreground feature points among the fast feature points are examined during strong feature point recognition, which improves the accuracy of feature point recognition while guaranteeing its real-time performance.
Referring to Fig. 10, in addition to the structure shown in Fig. 9, the feature point detection device may further comprise a second obtaining module 201 and a determining module 202.
Referring to Fig. 11, in addition to the structure shown in Fig. 9, the feature point detection device may further comprise a division module 701, an extraction module 702 and an edge enhancement module 703.
The functions performed by the functional modules correspond to the content of the corresponding method embodiments described above and are not repeated here.
Figure 12 shows a schematic structural diagram of an electronic device 120 provided by an embodiment of the present invention. The electronic device 120 comprises at least one processor 1201 (for example, a CPU), at least one input/output interface 1204, a memory 1202 and at least one communication bus 1203 used to realize the connection and communication between these components. The at least one processor 1201 is configured to execute executable modules, such as computer programs, stored in the memory 1202. The memory 1202 is a non-transitory memory and may include a volatile memory, such as a high-speed random access memory (RAM), and may also include a non-volatile memory, for example at least one disk memory. A communication connection with at least one other network element is realized through the at least one input/output interface 1204 (which may be a wired or wireless communication interface).
In some embodiments, the memory 1202 stores a program 12021, and the processor 1201 executes the program 12021 to perform any of the foregoing embodiments of the feature point detection method executed on an electronic device.
The electronic device may exist in various forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication functions and mainly aim at providing voice and data communication. Such terminals include smart phones (e.g. iPhone), multimedia phones, feature phones and low-end phones.
(2) Ultra-mobile personal computer devices: such devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access characteristics. Such terminals include PDA, MID and UMPC devices, e.g. iPad.
(3) Portable entertainment devices: such devices can display and play multimedia content. Such devices include audio and video players (e.g. iPod), handheld game consoles, e-books, intelligent toys and portable in-vehicle navigation devices.
(4) Servers: devices that provide computing services. A server consists of a processor, hard disk, memory, system bus and so on; its architecture is similar to that of a general-purpose computer, but because highly reliable services are required, the requirements for processing capability, stability, reliability, security, scalability and manageability are higher.
(5) Other electronic devices with data interaction functions.
It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another entity or operation, without necessarily requiring or implying any actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that comprises a series of elements not only includes those elements but also includes other elements not explicitly listed, or further includes elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or device that comprises the element.
Each embodiment in this specification is described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and the relevant parts may refer to the description of the method embodiments.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, device or apparatus (such as a computer-based system, a system including a processor, or any other system that can fetch and execute instructions from an instruction execution system, device or apparatus). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transmit a program for use by, or in connection with, an instruction execution system, device or apparatus. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection with one or more wirings (electronic device), a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that parts of the present invention can be realized by hardware, software, firmware or a combination thereof.
In the above embodiments, multiple steps or methods may be implemented by software or firmware that is stored in a memory and executed by a suitable instruction execution system. For example, if they are implemented by hardware, as in another embodiment, they may be implemented by any one or a combination of the following technologies known in the art: a discrete logic circuit with logic gates for realizing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that can be easily conceived by those familiar with the technical field within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A feature point detection method, characterized by comprising:
obtaining depth feature information of a target image;
based on the depth feature information, dividing fast feature points extracted from the target image into foreground feature points and background feature points;
performing strong feature detection on the foreground feature points to obtain strong feature points of the target image;
taking the strong feature points as target feature points of the target image.
2. The feature point detection method according to claim 1, characterized in that, before dividing the fast feature points extracted from the target image into foreground feature points and background feature points based on the depth feature information, the method further comprises:
obtaining the target image on which feature point detection is to be performed;
performing a fast feature point extraction operation on the target image to determine the fast feature points of the target image.
3. The feature point detection method according to claim 1, characterized in that performing the fast feature point extraction operation on the target image to determine the fast feature points of the target image comprises:
selecting any pixel in the target image as a central pixel;
obtaining a pixel ring centered at the central pixel, with a radius of r pixels and a width of 1 pixel;
judging whether there are n pixels on the pixel ring whose gray values are all greater than or all less than that of the central pixel;
if so, determining the central pixel to be a fast feature point.
4. The feature point detection method according to claim 1, characterized in that dividing the fast feature points extracted from the target image into foreground feature points and background feature points based on the depth feature information comprises:
obtaining depth values of the fast feature points;
judging whether each depth value is greater than a preset threshold;
determining the fast feature points whose depth values are greater than the preset threshold to be foreground feature points;
determining the fast feature points whose depth values are less than or equal to the preset threshold to be background feature points.
5. The feature point detection method according to claim 1, characterized in that performing strong feature detection on the foreground feature points to obtain the strong feature points of the target image comprises:
obtaining gradient information of the foreground feature points;
calculating corner responses of the foreground feature points based on the gradient information;
determining the foreground feature points whose corner responses are greater than a preset threshold to be strong feature points.
6. The feature point detection method according to claim 5, characterized in that obtaining the gradient information of the foreground feature points comprises:
traversing the foreground feature points;
generating a neighborhood mask for each foreground feature point;
calculating the horizontal gradient and vertical gradient of the foreground feature point within the neighborhood mask, thereby obtaining the gradient information of the foreground feature point.
7. The feature point detection method according to claim 1, characterized in that, before performing strong feature detection on the foreground feature points to obtain the strong feature points of the target image, the method further comprises:
performing region division on the target image based on the depth feature information of the target image to obtain depth intervals of the target image;
extracting edge data of the depth image within the depth intervals;
performing edge enhancement on the target image based on the edge data.
8. The feature point detection method according to claim 7, characterized in that performing region division on the target image based on the depth feature information of the target image comprises:
generating random seed points in depth data corresponding to the depth feature information;
obtaining peak and valley values of a histogram of the random seed points;
determining cluster centers of the depth data using the peak and valley values, and clustering the random seed points according to the cluster centers;
determining divided regions of the target image based on the clustering result.
9. The feature point detection method according to claim 1, characterized in that, before obtaining the depth feature information of the target image, the method further comprises:
pre-processing the target image.
10. A feature point detection device, characterized by comprising:
a first obtaining module, configured to obtain depth feature information of a target image;
a distinguishing module, configured to divide, based on the depth feature information, fast feature points extracted from the target image into foreground feature points and background feature points;
a detection module, configured to perform strong feature detection on the foreground feature points to obtain strong feature points of the target image;
an execution module, configured to take the strong feature points as target feature points of the target image.
CN201710366545.8A 2017-05-22 2017-05-22 Feature point detection method and device and electronic equipment Active CN108960012B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710366545.8A CN108960012B (en) 2017-05-22 2017-05-22 Feature point detection method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710366545.8A CN108960012B (en) 2017-05-22 2017-05-22 Feature point detection method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN108960012A true CN108960012A (en) 2018-12-07
CN108960012B CN108960012B (en) 2022-04-15

Family

ID=64461605

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710366545.8A Active CN108960012B (en) 2017-05-22 2017-05-22 Feature point detection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN108960012B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816645A (en) * 2019-01-18 2019-05-28 创新奇智(广州)科技有限公司 A kind of automatic testing method of coil of strip loose winding
CN110097576A (en) * 2019-04-29 2019-08-06 腾讯科技(深圳)有限公司 The motion information of image characteristic point determines method, task executing method and equipment
CN110189242A (en) * 2019-05-06 2019-08-30 百度在线网络技术(北京)有限公司 Image processing method and device
CN111899149A (en) * 2020-07-09 2020-11-06 浙江大华技术股份有限公司 Image processing method and device based on operator fusion and storage medium
CN112652004A (en) * 2020-12-31 2021-04-13 珠海格力电器股份有限公司 Image processing method, device, equipment and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060171594A1 (en) * 2005-02-01 2006-08-03 Shmuel Avidan Detecting moving objects in videos with corner-based background model
CN102799883A (en) * 2012-06-29 2012-11-28 广州中国科学院先进技术研究所 Method and device for extracting movement target from video image
CN103020632A (en) * 2012-11-20 2013-04-03 北京航空航天大学 Fast recognition method for positioning mark point of mobile robot in indoor environment
CN103198319A (en) * 2013-04-11 2013-07-10 武汉大学 Method of extraction of corner of blurred image in mine shaft environment
CN103810718A (en) * 2012-11-15 2014-05-21 浙江大华技术股份有限公司 Method and device for detection of violently moving target
CN105809619A (en) * 2015-01-19 2016-07-27 株式会社理光 Image acquisition user interface for linear panoramic image stitching
WO2017049994A1 (en) * 2015-09-25 2017-03-30 深圳大学 Hyperspectral image corner detection method and system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060171594A1 (en) * 2005-02-01 2006-08-03 Shmuel Avidan Detecting moving objects in videos with corner-based background model
CN102799883A (en) * 2012-06-29 2012-11-28 广州中国科学院先进技术研究所 Method and device for extracting movement target from video image
CN103810718A (en) * 2012-11-15 2014-05-21 浙江大华技术股份有限公司 Method and device for detection of violently moving target
CN103020632A (en) * 2012-11-20 2013-04-03 北京航空航天大学 Fast recognition method for positioning mark point of mobile robot in indoor environment
CN103198319A (en) * 2013-04-11 2013-07-10 武汉大学 Method of extraction of corner of blurred image in mine shaft environment
CN105809619A (en) * 2015-01-19 2016-07-27 株式会社理光 Image acquisition user interface for linear panoramic image stitching
WO2017049994A1 (en) * 2015-09-25 2017-03-30 深圳大学 Hyperspectral image corner detection method and system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ETHAN RUBLEE ET AL: "ORB: An efficient alternative to SIFT or SURF", 2011 International Conference on Computer Vision *
Zhang Yongqiang: "Research on a maneuvering target tracking system for an airborne electro-optical platform", China Master's Theses Full-text Database, Information Science and Technology Series *
Wang Lifang: "Fast corner detection in video images of complex scenes based on ORB and GroupSAC", Science Technology and Engineering *
Bai Xuebing et al.: "Improved ORB feature point matching algorithm combining speeded-up robust features", Journal of Computer Applications *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816645A (en) * 2019-01-18 2019-05-28 创新奇智(广州)科技有限公司 A kind of automatic testing method of coil of strip loose winding
CN110097576A (en) * 2019-04-29 2019-08-06 腾讯科技(深圳)有限公司 The motion information of image characteristic point determines method, task executing method and equipment
CN110097576B (en) * 2019-04-29 2022-11-18 腾讯科技(深圳)有限公司 Motion information determination method of image feature point, task execution method and equipment
CN110189242A (en) * 2019-05-06 2019-08-30 百度在线网络技术(北京)有限公司 Image processing method and device
CN111899149A (en) * 2020-07-09 2020-11-06 浙江大华技术股份有限公司 Image processing method and device based on operator fusion and storage medium
CN112652004A (en) * 2020-12-31 2021-04-13 珠海格力电器股份有限公司 Image processing method, device, equipment and medium
CN112652004B (en) * 2020-12-31 2024-04-05 珠海格力电器股份有限公司 Image processing method, device, equipment and medium

Also Published As

Publication number Publication date
CN108960012B (en) 2022-04-15

Similar Documents

Publication Publication Date Title
US10943145B2 (en) Image processing methods and apparatus, and electronic devices
US10038892B2 (en) Device and method for augmented reality applications
Shigematsu et al. Learning RGB-D salient object detection using background enclosure, depth contrast, and top-down features
US11443437B2 (en) Vibe-based three-dimensional sonar point cloud image segmentation method
US9741170B2 (en) Method for displaying augmented reality content based on 3D point cloud recognition, and apparatus and system for executing the method
CN108960012A (en) Feature point detecting method, device and electronic equipment
CN109690620A (en) Threedimensional model generating means and threedimensional model generation method
US10229340B2 (en) System and method for coarse-to-fine video object segmentation and re-composition
CN103871039B (en) Generation method for difference chart in SAR (Synthetic Aperture Radar) image change detection
JP7063837B2 (en) Area extraction device and program
CN111160291B (en) Human eye detection method based on depth information and CNN
CN109712131A (en) Quantization method, device, electronic equipment and the storage medium of Lung neoplasm feature
CN106062824A (en) Edge detection device, edge detection method, and program
CN111985427A (en) Living body detection method, living body detection apparatus, and readable storage medium
CN110298281A (en) Video structural method, apparatus, electronic equipment and storage medium
WO2014133584A1 (en) Image processor with multi-channel interface between preprocessing layer and one or more higher layers
CN108268138A (en) Processing method, device and the electronic equipment of augmented reality
CN110555863A (en) moving object detection method and device and computer readable storage medium
CN110298809A (en) A kind of image defogging method and device
CN108986145A (en) Method of video image processing and device
CN105138979A (en) Method for detecting the head of moving human body based on stereo visual sense
CN109271974A (en) A kind of lightweight face joint-detection and recognition methods and its system
KR101715266B1 (en) Line drawing method for 3d model using graphic accelerator and computer-readable recording medium storing for processing program using the same
KR20100009451A (en) Method for determining ground line
Zhang et al. A generative adversarial network approach for removing motion blur in the automatic detection of pavement cracks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant