CN112926593A - Image feature processing method and device for dynamic image enhancement presentation


Info

Publication number
CN112926593A
CN112926593A
Authority
CN
China
Prior art keywords
image
feature
points
pixel
current target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110193780.6A
Other languages
Chinese (zh)
Inventor
李毅
吴益剑
范林龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou University
Original Assignee
Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou University
Priority to CN202110193780.6A
Publication of CN112926593A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an image feature processing method for use in dynamic image enhanced presentation, comprising: acquiring a template image and a recognition image; detecting the feature points of the template image and the recognition image respectively according to a preset FAST algorithm; smoothing and blurring both the template image and the recognition image containing the feature key points according to a preset BRIEF method, so that the feature points of each image can be represented by binary codes; and, based on the binary-coded feature points of the two images, using Hamming distance calculation to obtain and output the feature points whose matching degree between the template image and the recognition image meets a predetermined condition. Implementing the invention reduces the algorithmic difficulty of augmented reality template image feature extraction, fully meets the real-time requirements of feature extraction and matching, and is, to a certain extent, unaffected by image environment noise and transformation.

Description

Image feature processing method and device for dynamic image enhancement presentation
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image feature processing method and an image feature processing device for use in enhanced presentation of a dynamic image.
Background
Augmented Reality (AR) technology is a rendering technology that combines computer graphics and computer vision: it detects and matches image features, and then maps a three-dimensional model into the virtual-real space above the image.
By extending the dimensions in which human beings observe things in the real world, augmented reality technology can enhance the acquisition of digital information about things (including text, images, three-dimensional models, voice, and the like) and enable dynamic interaction with virtual objects through simulation. It is therefore widely applied in media entertainment, education and medical training, virtual surgery simulation, industrial manufacturing assistance, intelligent driving guidance, and many other fields, bringing intelligent digital interaction convenience to human production and life.
It is well known that augmented reality technology requires acquiring and extracting valid features of a template image, and the limited acquisition efficiency and resolution of hardware equipment directly affect image recognition and the quality of the enhanced model presentation. In addition, when augmented reality technology is applied in practice, the influence of various factors in a complex environment on the recognition accuracy of the template image must be overcome. Therefore, how to quickly and accurately detect the features of the template image, perform spatial positioning and virtual-real occlusion interaction, extract the natural features of key regions, and overcome the influence of the environment on the augmented presentation has become a hotspot for researchers in augmented reality technology.
At present, scholars at home and abroad have studied these problems in depth. Traditional augmented reality template image feature extraction mainly comprises simple marker feature extraction and natural feature extraction. Simple marker feature extraction, being crude in form, struggles to give users a good experience in practical application scenarios and is gradually being replaced by natural feature extraction. However, although natural feature extraction has gradually taken the leading role in augmented reality applications, its algorithmic complexity is high, it cannot meet the real-time feature matching requirements of augmented reality applications, and it is also affected to a certain extent by image environment noise and transformation.
Therefore, it is necessary to provide an image feature processing method that can reduce the algorithmic difficulty of extracting features from augmented reality template images, fully meet the real-time requirements of feature extraction and matching, and remain, to a certain extent, unaffected by image environment noise and transformation.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide an image feature processing method and apparatus for use in dynamic image enhancement presentation, which can reduce the difficulty of an algorithm for extracting features of an augmented reality template image, fully meet the real-time requirement for feature extraction and matching, and are not affected by image environment noise and transformation to a certain extent.
In order to solve the above technical problem, an embodiment of the present invention provides an image feature processing method for use in dynamic image enhanced presentation, where the method includes:
acquiring a template image and an identification image;
respectively detecting the characteristic points of the template image and the identification image according to a preset FAST algorithm;
according to a preset BRIEF method, smoothing and blurring both the template image and the identification image containing the feature key points, so that the feature points of both images can be represented by binary codes;
and calculating and acquiring a plurality of characteristic points with matching degree meeting a preset condition between the template image and the identification image by utilizing a Hamming distance based on the characteristic points of the template image and the identification image which are respectively represented by binary codes, and outputting the characteristic points.
The specific steps of respectively detecting the feature points of the template image and the feature points of the identification image according to a preset FAST algorithm comprise:
reading a current target image; wherein the current target image is the template image or the identification image;
determining, in the current target image, a circle centered on an arbitrary pixel point P within a range of 4 pixels, selecting the gray values of the 16 pixel points on the determined circle, and then comparing the gray values of the selected 16 pixel points against a preset gray threshold range;
and if the gray values of more than 8 connected pixel points are judged to be all larger or all smaller than the gray value of pixel point P, selecting pixel point P as a key point of the current target image.
The specific steps of performing smoothing and blurring on both the template image and the identification image containing feature points according to a preset BRIEF method, so that the feature points of both images can be represented by binary codes, include:
reading a current target image; the current target image is a template image containing characteristic points or an identification image containing the characteristic points;
taking each feature point extracted from the current target image as the center, selecting a window of a certain size, randomly selecting N pairs of pixel points within the selected window, and then comparing the pixel values of each selected pair according to formula (1), so that the feature points in the current target image can be represented by binary codes:

τ(x1, x2) = { 1, if P(x1) < P(x2); 0, otherwise }    (1)

where P(x1) is the pixel value of the random point x1 = (u1, v1), P(x2) is the pixel value of the random point x2 = (u2, v2), ui is the horizontal coordinate of pixel point i in the current target image, and vi is the vertical coordinate of pixel point i in the current target image.
Wherein the binary code is a 256-bit binary code.
The specific steps of calculating and acquiring a plurality of feature points with matching degree meeting a preset condition between the template image and the identification image by utilizing a Hamming distance based on the feature points characterized by binary codes of the template image and the identification image respectively and outputting the feature points comprise:
calculating the Hamming distance between the feature point of the template image and the distance center point of the identification image according to formula (2):
D(Pn, Pc,i) = Σ_{j=1}^{256} (pj ⊕ qj)    (2)

where Pn is a feature point of the template image, Pc,i is the distance center point of the identification image, and pj and qj are the j-th bits of the 256-bit binary codes of Pn and Pc,i, with ⊕ denoting exclusive OR;
if the calculated Hamming distance is less than or equal to a preset threshold, taking the distance center point of the identification image as a clustering center point, and further calculating the Hamming distances to the other feature points under that clustering center point for matching, so as to obtain the optimal matching result;
and if the calculated Hamming distance is larger than the set threshold, ending the matching, and recording and outputting the obtained matching result.
The embodiment of the present invention further provides an image feature processing apparatus for enhanced presentation of dynamic images, including:
an image acquisition unit for acquiring a template image and an identification image;
the image feature extraction unit is used for respectively detecting the feature points of the template image and the identification image according to a preset FAST algorithm;
the image feature processing unit is used for smoothing and blurring both the template image and the identification image containing the feature key points according to a preset BRIEF method, so that the feature points of both images can be represented by binary codes;
and the image feature matching unit is used for calculating and acquiring a plurality of feature points of which the matching degree between the template image and the identification image meets a preset condition by utilizing the Hamming distance and outputting the feature points.
Wherein the image feature extraction unit includes:
the first image reading module is used for reading a current target image; wherein the current target image is the template image or the identification image;
the pixel gray comparison module is used for determining, in the current target image, a circle centered on an arbitrary pixel point P within a range of 4 pixels, selecting the gray values of the 16 pixel points on the determined circle, and then comparing the gray values of the selected 16 pixel points against a preset gray threshold range;
and the feature key point extraction module is used for selecting pixel point P as a key point of the current target image if the gray values of more than 8 connected pixel points are judged to be all larger or all smaller than the gray value of pixel point P.
Wherein the image feature processing unit includes:
the second image reading module is used for reading the current target image; the current target image is a template image containing characteristic points or an identification image containing the characteristic points;
the feature point processing module is used for taking each feature point extracted from the current target image as the center, selecting a window of a certain size, randomly selecting N pairs of pixel points within the selected window, and then comparing the pixel values of each selected pair according to formula (1), so that the feature points in the current target image can be represented by binary codes:

τ(x1, x2) = { 1, if P(x1) < P(x2); 0, otherwise }    (1)

where P(x1) is the pixel value of the random point x1 = (u1, v1), P(x2) is the pixel value of the random point x2 = (u2, v2), ui is the horizontal coordinate of pixel point i in the current target image, and vi is the vertical coordinate of pixel point i in the current target image.
Wherein the image feature matching unit includes:
the Hamming distance calculating module, configured to calculate the Hamming distance between the feature points of the template image and the distance center point of the recognition image according to formula (2):

D(Pn, Pc,i) = Σ_{j=1}^{256} (pj ⊕ qj)    (2)

where Pn is a feature point of the template image, Pc,i is the distance center point of the identification image, and pj and qj are the j-th bits of the 256-bit binary codes of Pn and Pc,i, with ⊕ denoting exclusive OR;
the feature matching module is used for taking the distance center point of the identification image as a clustering center point if the calculated Hamming distance is less than or equal to a preset threshold, and further calculating the Hamming distances to the other feature points under that clustering center point for matching, so as to obtain the optimal matching result;
and the matching result output module is used for ending the matching if the calculated Hamming distance is larger than the set threshold, and recording and outputting the obtained matching result.
The embodiment of the invention has the following beneficial effects:
the invention is based on FAST (Features from Accelerated Segments Test) feature point detection and BRIEF (Binary Robust Independent basic Features) feature description vector creation algorithm, ensures the rotation invariance of feature points, obviously optimizes and improves the efficiency and accuracy of feature detection, greatly accelerates the speed of feature descriptor creation, and avoids the influence of high-frequency noise points of acquisition equipment environment factors and images on Binary descriptor over sensitivity, thereby reducing the algorithm difficulty of augmented reality template image feature extraction, fully meeting the real-time requirement of feature extraction and matching, and not being influenced by image environment noise points and transformation to a certain extent.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is within the scope of the present invention for those skilled in the art to obtain other drawings based on the drawings without inventive exercise.
FIG. 1 is a flowchart of an image feature processing method for use in a dynamic image enhanced presentation according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a detection result of a FAST feature point in an application scene of an image feature processing method for use in enhanced presentation of a dynamic image according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a feature matching result at different positions of an image in an application scene of an image feature processing method for use in enhanced presentation of a dynamic image according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a feature matching result in an actual augmented reality environment in an application scene of an image feature processing method for use in dynamic image augmented presentation according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image feature processing apparatus for use in enhanced presentation of dynamic images according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, in an embodiment of the present invention, there is provided an image feature processing method for use in a dynamic image enhanced presentation, where the method includes the following steps:
step S1, acquiring a template image and an identification image;
step S2, respectively detecting the characteristic points of the template image and the identification image according to a preset FAST algorithm;
step S3, according to a preset BRIEF method, smoothing and blurring both the template image and the identification image containing the feature key points, so that the feature points of both images can be represented by binary codes;
and step S4, based on the feature points represented by the template image and the identification image respectively in binary codes, calculating by using a Hamming distance to obtain a plurality of feature points with matching degree between the template image and the identification image meeting a preset condition, and outputting the feature points.
Specifically, in step S1, a template image and a recognition image are acquired.
In step S2, the current target image is first read, where the current target image is the template image or the identification image. Next, a circle centered on an arbitrary pixel point P within a range of 4 pixels is determined in the current target image, the gray values of the 16 pixel points on the determined circle are selected, and the gray values of the selected 16 pixel points are compared against a preset gray threshold range. Finally, if the gray values of more than 8 connected pixel points are all larger or all smaller than the gray value of pixel point P, pixel point P is selected as a key point of the current target image.
Experiments have shown that comparing only four equally spaced pixels on the circle has the same effect as traversing all 16 pixels, while shortening the search time by a factor of 4; the detection result is shown in fig. 2.
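By way of illustration only (this sketch is not part of the original disclosure), the segment test described above may be written in a few lines of Python. The radius-3 Bresenham circle offsets, the threshold value t, and the function name are assumptions made here; the description itself fixes only the 16 circle pixels, the four-pixel shortcut, and the more-than-8-connected-pixels criterion.

```python
import numpy as np

# Offsets (dx, dy) of the 16 pixels on a Bresenham circle around P.
# The description states a circle "within a range of 4 pixels" carrying
# 16 points; the standard radius-3 FAST circle is assumed here.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_keypoint(img, x, y, t=20):
    """Segment test: P = (x, y) is a key point if more than 8 connected
    circle pixels are all brighter than P + t or all darker than P - t."""
    p = int(img[y, x])
    # Quick pre-test on the four equally spaced pixels (indices 0, 4, 8, 12):
    # a run of more than 8 connected circle pixels must contain at least two
    # of these samples, so fewer than two consistent hits rules P out early.
    quads = [int(img[y + dy, x + dx]) for dx, dy in
             (CIRCLE[0], CIRCLE[4], CIRCLE[8], CIRCLE[12])]
    if sum(q > p + t for q in quads) < 2 and sum(q < p - t for q in quads) < 2:
        return False
    # Full test over all 16 circle pixels; the list is doubled so that runs
    # wrapping around the start of the circle are still counted.
    vals = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    for sign in (1, -1):
        run = 0
        for v in vals * 2:
            run = run + 1 if sign * (v - p) > t else 0
            if run > 8:
                return True
    return False
```

Scanning every pixel at least 3 pixels away from the image border with is_fast_keypoint then yields a key-point set of the kind shown in fig. 2.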
In step S3, because the target template image may change in real time during the augmented reality presentation, especially in augmented reality applications based on mobile camera capture devices, image feature extraction must be invariant to changes in scale factors such as direction and size, thereby strengthening the robustness of image feature identification. To provide rotation and scale invariance, an image pyramid is first constructed for the given template image to ensure a multi-scale resolution representation of the single image. Key points are extracted from the images at the different pyramid levels, and the intensity centroid of the image in a box centered on each key point is calculated; the direction of the key point is the vector from the key point to the intensity centroid.
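A minimal sketch of the intensity-centroid direction just described, assuming a grayscale NumPy image and an illustrative patch radius (the pyramid construction itself is omitted here):

```python
import numpy as np

def keypoint_orientation(img, x, y, r=15):
    """Direction of the key point (x, y): the angle of the vector from the
    key point to the intensity centroid of the box centered on it, computed
    from the first-order image moments of the patch."""
    patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]   # coordinates relative to the key point
    m10 = float((xs * patch).sum())         # first-order moment in x
    m01 = float((ys * patch).sum())         # first-order moment in y
    return np.arctan2(m01, m10)             # angle of the key point -> centroid vector
```

Applying this at every pyramid level gives each key point an orientation, which is what makes the subsequent binary description rotation-aware.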
Therefore, by adopting the BRIEF method, a Gaussian kernel is used to smooth and blur the input template image, preventing environmental factors of the acquisition equipment and high-frequency image noise from making the binary descriptor over-sensitive. Compared with the traditional method of describing feature points with a regional gray-level histogram, this greatly accelerates the creation of feature descriptors, so that augmented reality feature matching can be computed on mobile terminal devices with very limited computing resources.
At this time, the current target image is first read; the current target image is the template image containing feature points or the identification image containing feature points. Next, taking each extracted feature point as the center, a window of a certain size (such as S × S) is selected, N pairs of pixel points are randomly selected within the window, and the pixel values of each selected pair are compared according to formula (1), so that the feature points in the current target image can be characterized by binary coding:

τ(x1, x2) = { 1, if P(x1) < P(x2); 0, otherwise }    (1)

where P(x1) is the pixel value of the random point x1 = (u1, v1), P(x2) is the pixel value of the random point x2 = (u2, v2), ui is the horizontal coordinate of pixel point i in the current target image, and vi is the vertical coordinate of pixel point i in the current target image.
Through the above feature extraction algorithm, a 256-bit binary feature description code is obtained for each feature point of the template image and the identification image, and the two images, which have similar or overlapping parts, can then be registered.
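The following sketch puts formula (1) and the smoothing step into code. The window size S, the Gaussian sigma, the uniform sampling of the point pairs, and the fixed seed are all assumptions made for illustration; the description fixes only the N random pairs per window and the 256-bit code length.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

S, N = 31, 256                  # assumed window size; 256-bit descriptor
rng = np.random.default_rng(0)
# One fixed set of N random point pairs (u1, v1, u2, v2) inside the S x S
# window, shared by all key points so that descriptors stay comparable.
PAIRS = rng.integers(-(S // 2), S // 2 + 1, size=(N, 4))

def brief_descriptors(img, keypoints):
    """256-bit BRIEF codes: bit j is 1 when P(x1) < P(x2) for the j-th random
    pair, as in formula (1). The image is Gaussian-smoothed once up front,
    which is the blur step that keeps the binary tests from reacting to
    high-frequency noise. Key points must lie at least S // 2 pixels from
    the image border."""
    smooth = gaussian_filter(img.astype(np.float64), sigma=2.0)
    descs = np.empty((len(keypoints), N), dtype=np.uint8)
    for k, (x, y) in enumerate(keypoints):
        for j, (u1, v1, u2, v2) in enumerate(PAIRS):
            descs[k, j] = 1 if smooth[y + v1, x + u1] < smooth[y + v2, x + u2] else 0
    return descs
```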
In step S4, feature registration is generally decided by the Hamming distance, namely: 1. if the number of identical elements at corresponding positions of two feature codes is less than 128, the two codes are not matched feature points; 2. a template image feature point is matched as a pair with the feature point on the identification image whose feature code has the maximum number of identical elements at corresponding positions. The smaller the Hamming distance, the higher the matching accuracy.
At this time, the Hamming distance between the feature points of the template image and the distance center point of the recognition image is first calculated according to formula (2):

D(Pn, Pc,i) = Σ_{j=1}^{256} (pj ⊕ qj)    (2)

where Pn is a feature point of the template image, Pc,i is the distance center point of the identification image, and pj and qj are the j-th bits of the 256-bit binary codes of Pn and Pc,i, with ⊕ denoting exclusive OR.
Then, if the calculated Hamming distance is less than or equal to a preset threshold, the distance center point of the recognition image is taken as a clustering center point, and the Hamming distances to the other feature points under that clustering center point are further calculated for matching, so as to obtain the optimal matching result.
Finally, if the calculated Hamming distance is larger than the set threshold, the matching is ended, and the obtained matching result is recorded and output.
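As an illustration of formula (2) and the threshold test above (again not part of the original disclosure), the sketch below simplifies the clustering refinement to an exhaustive nearest-neighbour search; the threshold value of 64 bits is an assumption, not a value fixed by the description.

```python
import numpy as np

def hamming(a, b):
    """Formula (2): the number of differing bits between two 256-bit codes."""
    return int(np.count_nonzero(a != b))

def match_features(template_descs, recog_descs, threshold=64):
    """For each template feature code, find the recognition-image code at the
    smallest Hamming distance and keep the pair only if that distance is
    within the preset threshold. By the registration criterion above, codes
    agreeing in fewer than 128 of 256 positions can never count as a match."""
    matches = []
    for i, d in enumerate(template_descs):
        dists = np.array([hamming(d, r) for r in recog_descs])
        j = int(dists.argmin())
        if dists[j] <= threshold:
            matches.append((i, j, int(dists[j])))
    return matches
```

Feeding the two descriptor sets from the previous step into match_features produces point pairs of the kind visualized in fig. 3 and fig. 4.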
As shown in fig. 3 and fig. 4, the feature matching results are obtained in different scenarios. In fig. 3, feature matching results in different position scenes of the image are presented; in fig. 4, the feature matching results in the actual augmented reality application scenario are presented.
It can be understood that, to achieve the enhanced presentation effect of the dynamic image at the terminal, the spatial model to be presented may first be modeled, for example by generating a simple model with the Unity3D embedded geometric model tool, or by creating a model with a third-party modeling tool such as 3ds Max, Maya or Blender. Then, key frames of the dynamic image to be displayed are extracted and mapped onto the surface of the spatial model. The dynamic image can be created through Photoshop to generate a GIF-format image; using a key-frame animation interpolation technique, key frame images with maximized features are extracted evenly from dynamic images such as videos and animations, and the dynamic image between key frames is supplemented and generated through a linear interpolation algorithm. Finally, by configuring the running environment of the Vuforia SDK in Unity3D, the package is built into an augmented reality application program for the corresponding platform, for use on Android mobile terminals.
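The key-frame supplement step mentioned above can be pictured as per-pixel linear interpolation between adjacent key frames; a small sketch under that assumption (the function name and the number of in-between frames are illustrative):

```python
import numpy as np

def interpolate_frames(key_a, key_b, steps):
    """Generate `steps` in-between frames by per-pixel linear interpolation
    between two key frames, supplementing the dynamic image between them."""
    a = key_a.astype(np.float64)
    b = key_b.astype(np.float64)
    ts = np.linspace(0.0, 1.0, steps + 2)[1:-1]   # exclude the key frames themselves
    return [((1 - t) * a + t * b).astype(key_a.dtype) for t in ts]
```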
As shown in fig. 5, in an embodiment of the present invention, an image feature processing apparatus for use in a dynamic image enhanced presentation is provided, including:
an image acquisition unit 110 for acquiring a template image and an identification image;
an image feature extraction unit 120, configured to detect feature points of the template image and the identification image according to a preset FAST algorithm;
the image feature processing unit 130 is configured to smooth and blur both the template image and the identification image containing the feature key points according to a preset BRIEF method, so that the feature points of both images can be represented by binary codes;
and the image feature matching unit 140 is configured to calculate, based on feature points represented by binary codes of the template image and the identification image, a plurality of feature points, of which matching degrees between the template image and the identification image meet a predetermined condition, by using a hamming distance, and output the feature points.
Wherein the image feature extraction unit includes:
the first image reading module is used for reading a current target image; wherein the current target image is the template image or the identification image;
the pixel gray comparison module is used for determining, in the current target image, a circle centered on an arbitrary pixel point P within a range of 4 pixels, selecting the gray values of the 16 pixel points on the determined circle, and then comparing the gray values of the selected 16 pixel points against a preset gray threshold range;
and the feature key point extraction module is used for selecting pixel point P as a key point of the current target image if the gray values of more than 8 connected pixel points are judged to be all larger or all smaller than the gray value of pixel point P.
Wherein the image feature processing unit includes:
the second image reading module is used for reading the current target image; the current target image is a template image containing characteristic points or an identification image containing the characteristic points;
the feature point processing module is used for taking each feature point extracted from the current target image as the center, selecting a window of a certain size, randomly selecting N pairs of pixel points within the selected window, and then comparing the pixel values of each selected pair according to formula (1), so that the feature points in the current target image can be represented by binary codes:

τ(x1, x2) = { 1, if P(x1) < P(x2); 0, otherwise }    (1)

where P(x1) is the pixel value of the random point x1 = (u1, v1), P(x2) is the pixel value of the random point x2 = (u2, v2), ui is the horizontal coordinate of pixel point i in the current target image, and vi is the vertical coordinate of pixel point i in the current target image.
Wherein the image feature matching unit includes:
the Hamming distance calculating module, configured to calculate the Hamming distance between the feature points of the template image and the distance center point of the recognition image according to formula (2):

D(Pn, Pc,i) = Σ_{j=1}^{256} (pj ⊕ qj)    (2)

where Pn is a feature point of the template image, Pc,i is the distance center point of the identification image, and pj and qj are the j-th bits of the 256-bit binary codes of Pn and Pc,i, with ⊕ denoting exclusive OR;
the feature matching module is used for taking the distance center point of the identification image as a clustering center point if the calculated Hamming distance is less than or equal to a preset threshold, and further calculating the Hamming distances to the other feature points under that clustering center point for matching, so as to obtain the optimal matching result;
and the matching result output module is used for ending the matching if the calculated Hamming distance is larger than the set threshold, and recording and outputting the obtained matching result.
The embodiment of the invention has the following beneficial effects:
the invention is based on FAST (Features from Accelerated Segments Test) feature point detection and BRIEF (Binary Robust Independent basic Features) feature description vector creation algorithm, ensures the rotation invariance of feature points, obviously optimizes and improves the efficiency and accuracy of feature detection, greatly accelerates the speed of feature descriptor creation, and avoids the influence of high-frequency noise points of acquisition equipment environment factors and images on Binary descriptor over sensitivity, thereby reducing the algorithm difficulty of augmented reality template image feature extraction, fully meeting the real-time requirement of feature extraction and matching, and not being influenced by image environment noise points and transformation to a certain extent.
It should be noted that, in the above device embodiment, each included unit is only divided according to functional logic, but is not limited to the above division as long as the corresponding function can be achieved; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by relevant hardware instructed by a program, and the program may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc.
The above disclosure is only a preferred embodiment of the present invention and certainly cannot be taken to limit the scope of rights of the invention; equivalent changes made according to the appended claims still fall within the scope covered by the invention.

Claims (9)

1. A method of image feature processing for use in dynamic image enhanced rendering, the method comprising the steps of:
acquiring a template image and an identification image;
respectively detecting the characteristic points of the template image and the identification image according to a preset FAST algorithm;
according to a preset BRIEF method, smoothing and blurring both the template image and the identification image containing the feature key points, so that the feature points of both images can be represented by binary codes;
and calculating and acquiring a plurality of characteristic points with matching degree meeting a preset condition between the template image and the identification image by utilizing a Hamming distance based on the characteristic points of the template image and the identification image which are respectively represented by binary codes, and outputting the characteristic points.
2. The method as claimed in claim 1, wherein the specific steps of respectively detecting the feature points of the template image and the identification image according to a preset FAST algorithm comprise:
reading a current target image; wherein the current target image is the template image or the identification image;
determining, in the current target image, a circle centered on an arbitrary pixel point P within a range of 4 pixels, selecting the gray values of the 16 pixel points on the determined circle, and then comparing the gray values of the selected 16 pixel points against a preset gray threshold range;
and if the gray values of more than 8 connected pixel points are judged to be all larger or all smaller than the gray value of pixel point P, selecting pixel point P as a key point of the current target image.
3. The method as claimed in claim 1, wherein the specific steps of performing smoothing and blurring on both the template image and the identification image containing the feature points according to a preset BRIEF method, so that the feature points of both images can be represented by binary codes, comprise:
reading a current target image; the current target image is a template image containing characteristic points or an identification image containing the characteristic points;
taking each feature point extracted from the current target image as the center, selecting a window of a certain size, randomly selecting N pairs of pixel points within the selected window, and then comparing the pixel values of each selected pair according to formula (1), so that the feature points in the current target image can be represented by binary codes:

τ(x1, x2) = { 1, if P(x1) < P(x2); 0, otherwise }    (1)

where P(x1) is the pixel value of the random point x1 = (u1, v1), P(x2) is the pixel value of the random point x2 = (u2, v2), ui is the horizontal coordinate of pixel point i in the current target image, and vi is the vertical coordinate of pixel point i in the current target image.
4. The method of claim 3, wherein the binary code is a 256-bit binary code.
5. The method as claimed in claim 1, wherein the step of obtaining and outputting a plurality of feature points whose matching degree between the template image and the recognition image meets a predetermined condition, by using Hamming distance calculation based on the feature points of the template image and the recognition image respectively characterized by binary codes, comprises:
calculating the Hamming distance between the feature point of the template image and the distance center point of the identification image according to formula (2):
D(Pn, Pc,i) = Σ_{j=1}^{256} (pj ⊕ qj)    (2)

where Pn is a feature point of the template image, Pc,i is the distance center point of the identification image, and pj and qj are the j-th bits of the 256-bit binary codes of Pn and Pc,i, with ⊕ denoting exclusive OR;
if the calculated Hamming distance is less than or equal to a preset threshold, taking the distance center point of the identification image as a clustering center point, and further calculating the Hamming distances to the other feature points under that clustering center point for matching, so as to obtain the optimal matching result;
and if the calculated Hamming distance is larger than the set threshold, ending the matching, and recording and outputting the obtained matching result.
6. An image feature processing apparatus for use in a dynamic image enhanced presentation, comprising:
an image acquisition unit for acquiring a template image and an identification image;
the image feature extraction unit is used for respectively detecting the feature points of the template image and the identification image according to a preset FAST algorithm;
the image feature processing unit is used for smoothing and blurring both the template image and the identification image containing the feature key points according to a preset BRIEF method, so that the feature points of both images can be represented by binary codes;
and the image feature matching unit is used for calculating and acquiring a plurality of feature points of which the matching degree between the template image and the identification image meets a preset condition by utilizing a Hamming distance based on the feature points of the template image and the identification image which are respectively represented by binary codes, and outputting the feature points.
7. The image feature processing apparatus for use in dynamic image enhanced rendering according to claim 6, wherein the image feature extraction unit includes:
the first image reading module is used for reading a current target image; wherein the current target image is the template image or the identification image;
the pixel gray comparison module is used for determining, in the current target image, a circle centered on an arbitrary pixel point P within a range of 4 pixels, selecting the gray values of the 16 pixel points on the determined circle, and then comparing the gray values of the selected 16 pixel points against a preset gray threshold range;
and the feature key point extraction module is used for selecting pixel point P as a key point of the current target image if the gray values of more than 8 connected pixel points are judged to be all larger or all smaller than the gray value of pixel point P.
8. The image feature processing apparatus for use in dynamic image enhanced rendering as recited in claim 6, wherein said image feature processing unit comprises:
the second image reading module is used for reading the current target image; the current target image is a template image containing characteristic points or an identification image containing the characteristic points;
the feature point processing module is used for taking each feature point extracted from the current target image as the center, selecting a window of a certain size, randomly selecting N pairs of pixel points within the selected window, and then comparing the pixel values of each selected pair according to formula (1), so that the feature points in the current target image can be represented by binary codes:

τ(x1, x2) = { 1, if P(x1) < P(x2); 0, otherwise }    (1)

where P(x1) is the pixel value of the random point x1 = (u1, v1), P(x2) is the pixel value of the random point x2 = (u2, v2), ui is the horizontal coordinate of pixel point i in the current target image, and vi is the vertical coordinate of pixel point i in the current target image.
9. The image feature processing apparatus for use in dynamic image enhanced rendering according to claim 6, wherein the image feature matching unit includes:
the Hamming distance calculating module, configured to calculate the Hamming distance between the feature points of the template image and the distance center point of the recognition image according to formula (2):

D(Pn, Pc,i) = Σ_{j=1}^{256} (pj ⊕ qj)    (2)

where Pn is a feature point of the template image, Pc,i is the distance center point of the identification image, and pj and qj are the j-th bits of the 256-bit binary codes of Pn and Pc,i, with ⊕ denoting exclusive OR;
the feature matching module is used for taking the distance center point of the identification image as a clustering center point if the calculated Hamming distance is less than or equal to a preset threshold, and further calculating the Hamming distances to the other feature points under that clustering center point for matching, so as to obtain the optimal matching result;
and the matching result output module is used for ending the matching if the calculated Hamming distance is larger than the set threshold, and recording and outputting the obtained matching result.
CN202110193780.6A 2021-02-20 2021-02-20 Image feature processing method and device for dynamic image enhancement presentation Pending CN112926593A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110193780.6A CN112926593A (en) 2021-02-20 2021-02-20 Image feature processing method and device for dynamic image enhancement presentation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110193780.6A CN112926593A (en) 2021-02-20 2021-02-20 Image feature processing method and device for dynamic image enhancement presentation

Publications (1)

Publication Number Publication Date
CN112926593A true CN112926593A (en) 2021-06-08

Family

ID=76170026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110193780.6A Pending CN112926593A (en) 2021-02-20 2021-02-20 Image feature processing method and device for dynamic image enhancement presentation

Country Status (1)

Country Link
CN (1) CN112926593A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844663A (en) * 2016-03-21 2016-08-10 中国地质大学(武汉) Adaptive ORB object tracking method
CN106204660A (en) * 2016-07-26 2016-12-07 华中科技大学 A kind of Ground Target Tracking device of feature based coupling
US20180053293A1 (en) * 2016-08-19 2018-02-22 Mitsubishi Electric Research Laboratories, Inc. Method and System for Image Registrations
US20190281224A1 (en) * 2017-06-23 2019-09-12 Goertek Inc. Method for tracking and shooting moving target and tracking device
CN107369183A (en) * 2017-07-17 2017-11-21 广东工业大学 Towards the MAR Tracing Registration method and system based on figure optimization SLAM
CN110414533A (en) * 2019-06-24 2019-11-05 东南大学 An Improved ORB Feature Extraction and Matching Method

Similar Documents

Publication Publication Date Title
CN109359538B (en) Training method of convolutional neural network, gesture recognition method, device and equipment
CN110610453B (en) Image processing method and device and computer readable storage medium
Waheed et al. Exploiting human pose and scene information for interaction detection
CN106960202B (en) Smiling face identification method based on visible light and infrared image fusion
Vazquez et al. Virtual and real world adaptation for pedestrian detection
CN108122256B (en) A method of it approaches under state and rotates object pose measurement
Holte et al. View-invariant gesture recognition using 3D optical flow and harmonic motion context
Qiang et al. SqueezeNet and fusion network-based accurate fast fully convolutional network for hand detection and gesture recognition
Bayraktar et al. Analysis of feature detector and descriptor combinations with a localization experiment for various performance metrics
Feng et al. Depth-projection-map-based bag of contour fragments for robust hand gesture recognition
CN108573231B (en) Human body behavior identification method of depth motion map generated based on motion history point cloud
CN107886558A (en) A kind of human face expression cartoon driving method based on RealSense
Cai et al. A novel saliency detection algorithm based on adversarial learning model
Alfarano et al. Estimating optical flow: A comprehensive review of the state of the art
Dai et al. An improved orb feature extraction algorithm based on enhanced image and truncated adaptive threshold
Zhang et al. Activity object detection based on improved faster R-CNN
Cho et al. Modified perceptual cycle generative adversarial network-based image enhancement for improving accuracy of low light image segmentation
CN113313725B (en) Bung hole identification method and system for energetic material medicine barrel
CN108564043B (en) A Human Behavior Recognition Method Based on Spatio-temporal Distribution Map
Scheck et al. Unsupervised domain adaptation from synthetic to real images for anchorless object detection
WO2021056531A1 (en) Face gender recognition method, face gender classifier training method and device
CN112926593A (en) Image feature processing method and device for dynamic image enhancement presentation
JP6393495B2 (en) Image processing apparatus and object recognition method
Kaiser et al. Towards using covariance matrix pyramids as salient point descriptors in 3D point clouds
Goudelis et al. 3D Cylindrical Trace Transform based feature extraction for effective human action classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210608)