WO2019129293A1 - Feature data generation method and apparatus and feature matching method and apparatus - Google Patents


Info

Publication number
WO2019129293A1
Authority: WO (WIPO (PCT))
Prior art keywords: feature, features, types, image, type
Application number: PCT/CN2018/125732
Other languages: French (fr), Chinese (zh)
Inventors: 李小利, 白博, 陈茂林
Original assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2019129293A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F 18/2193 Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Definitions

  • the embodiments of the present invention relate to the field of multimedia technologies, and in particular, to a feature data generation and feature matching method and apparatus.
  • Feature extraction mainly implements detection and tracking of targets, and extracts features from regions of interest to characterize source data.
  • the extracted features must have a certain commonality for data of the same type and a high degree of discrimination for data of different types, that is, strong discriminating power.
  • the extracted features include color, texture, edge, depth feature, etc.
  • Embodiments of the present invention provide a method and apparatus for feature matching and feature information generation, which can improve the accuracy of image feature matching.
  • an embodiment of the present invention provides a method for feature matching.
  • the method includes: performing feature extraction on an image to obtain at least two types of features.
  • the features are weighted according to how strongly each type of feature describes the image, wherein a feature with a stronger ability to describe the image is given a larger weight.
  • Feature information including multiple types of features is obtained according to the weighted features, and feature matching is performed according to the feature information.
  • the ability of a feature to describe an image refers to the degree of discrimination when the image is described according to the feature.
  • feature information of an image is acquired, the feature information includes at least two types of features, and the at least two types of features include a first type of feature. A weight corresponding to each of the at least two types of features is determined, wherein the weight of the first type of feature is greater than the weights corresponding to the other features of the at least two types of features, and the first type of feature is the feature among the at least two types whose ability to describe the image is greater than that of the other features. The at least two types of features are weighted according to the weights to obtain weighted feature information, and feature matching is performed on the image according to the weighted feature information. By giving a larger weight to the first type of feature, whose ability to describe the image is greater than that of the other features, the discrimination of the weighted feature information in image matching is improved and the accuracy of feature matching is enhanced.
  • a feature's ability to describe the image is determined according to an indicator value associated with the feature.
  • the weight corresponding to the first type of feature is determined according to the first indicator value of the first type of feature.
  • the ability of a texture feature to describe the image may be determined according to the average amplitude of the image or a Laplacian; the ability of a depth feature may be determined based on the confidence of the feature or by a quality evaluation value.
  • the confidence level is used to describe the possibility that the depth feature is mapped to the corresponding preset interval, and the quality evaluation value is obtained according to the quality evaluation matrix.
  • determining the weight corresponding to the feature according to the indicator value may be performed according to a formula in which T1 and T2 are preset threshold values; the formula can be used to determine ω1. When the indicator value is greater than T1, the first type of feature is used as the unique feature for feature matching of the image, that is, ω1 takes 1; when the indicator value is less than T2, the first type of feature is not used as a feature for feature matching of the image, that is, ω1 takes 0.
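As a non-authoritative illustration, the double-threshold rule just described can be sketched in Python. The hard 0/1 behaviour at the thresholds follows the text; the linear interpolation between T2 and T1 is an assumption, since the intermediate form of the formula is not reproduced here.

```python
def feature_weight(indicator, t1, t2):
    """Double-threshold weight for one feature type.

    Returns 1.0 when the indicator exceeds the upper threshold t1
    (the feature becomes the sole matching feature), 0.0 when it is
    below the lower threshold t2 (the feature is dropped), and a
    linear interpolation in between (the interpolation is an
    assumption about the unpublished intermediate form).
    """
    if indicator >= t1:
        return 1.0
    if indicator <= t2:
        return 0.0
    return (indicator - t2) / (t1 - t2)
```

For example, with thresholds T1 = 8 and T2 = 2, an indicator value of 5 sits halfway between them and yields a weight of 0.5.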
  • the weight corresponding to the first type of feature is determined according to a feature type of the first type of feature.
  • an embodiment of the present invention provides a method for generating feature information, where the method includes: performing feature extraction on an image to obtain at least two types of features.
  • the features are weighted according to how strongly each type of feature describes the image, wherein a feature with a stronger ability to describe the image is given a larger weight.
  • Feature information including multiple types of features is obtained according to the weighted features, and the weighted feature information is stored in a database.
  • the ability of a feature to describe an image refers to the degree of discrimination when the image is described according to the feature.
  • feature information of an image is acquired, the feature information includes at least two types of features, and the at least two types of features include a first type of feature. A weight corresponding to each of the at least two types of features is determined, wherein the weight of the first type of feature is greater than the weights corresponding to the other features of the at least two types of features, and the first type of feature is the feature among the at least two types whose ability to describe the image is greater than that of the other features. The at least two types of features are weighted according to the weights to obtain weighted feature information, and feature matching is performed on the image according to the weighted feature information. By giving a larger weight to the first type of feature, whose ability to describe the image is greater than that of the other features, the discrimination of the weighted feature information in image matching is improved and the accuracy of feature matching is enhanced.
  • a feature's ability to describe the image is determined according to an indicator value associated with the feature.
  • the weight corresponding to the first type of feature is determined according to the first indicator value of the first type of feature.
  • the ability of a texture feature to describe the image may be determined according to the average amplitude of the image or a Laplacian; the ability of a depth feature may be determined based on the confidence of the feature or by a quality evaluation value.
  • the confidence level is used to describe the possibility that the depth feature is mapped to the corresponding preset interval, and the quality evaluation value is obtained according to the quality evaluation matrix.
  • determining the weight corresponding to the feature according to the indicator value may be performed according to a formula in which T1 and T2 are preset threshold values; the formula can be used to determine ω1. When the indicator value is greater than T1, the first type of feature is used as the unique feature for feature matching of the image, that is, ω1 takes 1; when the indicator value is less than T2, the first type of feature is not used as a feature for feature matching of the image, that is, ω1 takes 0.
  • the weight corresponding to the first type of feature is determined according to a feature type of the first type of feature.
  • an embodiment of the present invention provides an image processing apparatus configured to implement the method and functions performed in the above first aspect, implemented by hardware/software, wherein the hardware/software includes modules corresponding to the foregoing functions.
  • an embodiment of the present invention provides an image processing apparatus configured to implement the method and functions performed in the foregoing second aspect, implemented by hardware/software, wherein the hardware/software includes modules corresponding to the foregoing functions.
  • an embodiment of the present application provides an image processing apparatus, including a processor, a memory, and a communication bus, where the communication bus is used to implement connection and communication between the processor and the memory, and the processor executes a program stored in the memory to implement the steps of the method in the first aspect above.
  • an embodiment of the present application provides an image processing apparatus, including a processor, a memory, and a communication bus, where the communication bus is used to implement connection and communication between the processor and the memory, and the processor executes a program stored in the memory to implement the steps of the method in the second aspect above.
  • an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium stores instructions that, when run on a computer, cause the computer to perform the method of the first aspect or the second aspect.
  • the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect or the second aspect described above.
  • the features of the different types are weighted according to their description capability, and feature matching is performed according to the weighted feature information.
  • the weight of the features with strong description capability is increased in the feature information, thereby enhancing the distinctiveness of the feature information in feature matching and improving the accuracy of feature matching.
  • FIG. 1 is a schematic structural diagram of an image search system according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a feature matching method according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of still another feature matching method according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of still another feature matching method according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of a method for generating feature data according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of still another image processing apparatus according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of hardware of an image processing apparatus according to an embodiment of the present invention.
  • a feature in the embodiment of the present invention is a property or characteristic by which a certain type of object is distinguished from other types of objects, or a collection of such properties and characteristics.
  • a feature is data that can be extracted by measurement or processing.
  • each image has its own features that distinguish it from other types of images: some are natural features that can be intuitively perceived, such as brightness, edges, textures, and colors; others can only be obtained through transformation or processing, such as moments, histograms, and principal components; and some are depth features obtained by deep learning model extraction.
  • the ability of the same type of feature to describe different images may differ. For example, for a predominantly solid-color image, since there is relatively little texture in the image, texture features cannot distinguish between different solid-color images; color features, by contrast, distinguish different images of this kind better.
  • Feature matching refers to an algorithm that extracts features from two or more images separately and then performs matching using the parameters that describe those features.
  • Features processed by feature-based matching typically include features such as color features, texture features, shape features, spatial location features, and the like.
  • In feature matching, the image is first preprocessed to extract its high-level features, and then the matching correspondence between the two images is established.
  • the commonly used feature primitives are point features, edge features, and region features.
  • Feature matching requires many mathematical operations such as matrix operations, gradient solutions, and Fourier transforms and Taylor expansion.
  • Commonly used feature extraction and matching methods are: statistical methods, geometric methods, model methods, signal processing methods, boundary feature methods, Fourier shape description methods, geometric parameter methods, shape invariant moment methods, and so on.
  • the image in the embodiment of the present invention includes a still image and a moving image.
  • an application scenario of an embodiment of the present invention may be an image search system 1000 based on image feature matching, which may perform real-time analysis and querying of videos or images.
  • the system is composed of four core modules: a feature extraction module 1001, a feature storage module 1002, a feature weighting module 1003, and a feature matching module 1004.
  • the feature extraction module 1001 mainly implements detection and tracking of an image or a video file, and extracts features from the region of interest to obtain feature information data corresponding to the image or video.
  • the feature extraction module can also perform feature extraction on the target image or video to be retrieved, and the extracted features are then processed by the feature weighting module.
  • the feature storage module 1002 is configured to construct a database and its index based on the result of feature extraction performed by the feature extraction module 1001 on the video or the image.
  • the feature weighting module 1003 weights the features extracted in the target image for the target image to be retrieved, and obtains the weighted image feature information.
  • the feature storage module 1002 may establish a database according to the weighted feature obtained by weighting the features extracted by the feature extraction module by the feature weighting module 1003, and obtain the weighted feature database and its index.
  • the feature weighting module 1003 may weight the features of the image or video used to establish the database extracted in the feature extraction module 1001, thereby establishing the weighted result through the feature storage module to establish a weighted database.
  • the feature matching module 1004 matches the weighted feature information of the target image against the images in the database according to a feature matching algorithm to obtain the query result.
  • in image matching, the extracted features must have a certain commonality for data of the same type and high discrimination for data of different types, that is, strong discriminating power.
  • the extracted features are weighted by the feature weighting method so that the finally obtained features have stronger description capability, helping the image search system achieve better results.
  • the image search system 1000 may be a stand-alone computer system, such as a server, to implement the corresponding functions of the feature extraction module 1001, the feature storage module 1002, the feature weighting module 1003, and the feature matching module 1004.
  • image search system 1000 can be a distributed system that includes database nodes and compute nodes for images.
  • the database node stores the database processed by the feature storage module 1002 and its index, and the computing node can implement the corresponding functions of the feature extraction module 1001, the feature weighting module 1003, and the feature matching module 1004.
  • the modules may also be deployed on different nodes.
  • the feature extraction module 1001 and the feature weighting module 1003 may be respectively deployed on the database node and the computing node.
  • when generating the feature data, the database node needs to call the feature weighting module 1003 to weight the features extracted by the feature extraction module 1001 to generate the weighted feature data; the computing node, when performing feature matching, also needs to call the feature weighting module 1003 to weight the features of the target image extracted by the feature extraction module 1001 before performing feature matching.
  • FIG. 2 is a flowchart of an image feature matching method according to an embodiment of the present invention.
  • the method includes:
  • the target image is the image to be matched in the embodiment of the present invention.
  • the feature information of the target image is extracted by the feature extraction algorithm.
  • the feature may be a traditional feature, such as a texture feature, a color feature, an edge feature, etc., or may be a depth feature, such as an image feature extracted by a deep learning network.
  • color features can be extracted by a color histogram, a color moment, a color set, a color aggregation vector, etc.
  • the extracted texture features can be obtained by a gray level co-occurrence matrix, a Tamura texture feature, an autoregressive texture model, a wavelet transform, and the like.
  • shape features can be extracted by a boundary feature method, a Fourier shape descriptor method, a geometric parameter method, a shape invariant moment method, and the like.
  • the primary feature can be determined by a predetermined category.
  • for example, for a strongly textured image, the texture feature has a higher ability to describe the image, and the texture feature may be preset as the main feature.
  • for example, if the head feature among the depth features of the image is preset as the main feature, then when an acquired depth feature lies in the head region of the person in the image, that feature is considered the main feature.
  • whether a class of features is a primary feature can be determined by an indicator value that describes the feature.
  • whether the texture feature in the image is the main feature can be determined by the average amplitude of the image and the Laplacian.
  • the average amplitude of the image is obtained by averaging the amplitude of each point in the image. The larger the average amplitude of an image, the stronger the ability of the texture feature of the image to describe the image.
  • when the average amplitude exceeds a predetermined threshold, the texture feature is considered to be the primary feature of the image.
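To make the indicator concrete, here is a minimal sketch (not from the patent) of computing the average amplitude as the mean gradient magnitude of a grayscale image; the choice of `np.gradient` as the gradient operator is an assumption:

```python
import numpy as np

def average_amplitude(gray):
    """Mean gradient magnitude of a grayscale image (H x W array).

    The amplitude of a pixel is the magnitude of its intensity
    gradient; averaging over all pixels gives the indicator used to
    judge whether texture is the primary feature.
    """
    gray = np.asarray(gray, dtype=np.float64)
    gx = np.gradient(gray, axis=1)  # gradient along x (columns)
    gy = np.gradient(gray, axis=0)  # gradient along y (rows)
    return float(np.sqrt(gx ** 2 + gy ** 2).mean())
```

A flat image has zero average amplitude, while a uniform horizontal ramp with unit step has an average amplitude of 1, matching the intuition that more texture means a larger indicator.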
  • for a depth feature, a corresponding confidence level can be obtained; the higher the confidence of the feature, the stronger the ability of the feature to describe the image. When the confidence is higher than a preset threshold, the depth feature corresponding to the confidence is considered to be the main feature.
  • whether other features are primary features can likewise be determined from the index values describing the acquired features. For example, when the index values of all collected features other than a given feature are lower than a preset threshold, that feature may be considered the main feature.
  • the weight of the main feature is set higher than the weights of the other types of features by setting weights on the extracted features.
  • the weights can be set by preset rules. For example, when a feature of a certain category is set as the main feature, a weight corresponding to that category may be preset such that the weight of features of that category is higher than the weights of the other types of features.
  • the weight of the feature may be adaptively determined based on the index value corresponding to the feature of the category.
  • an empirical threshold may be set for the indicator value at which a feature qualifies as the main feature, and the difference between the feature's indicator value and the corresponding empirical threshold is positively correlated with the feature's weight; that is, the larger the amount by which the feature's indicator value exceeds the corresponding empirical threshold, the larger the weight corresponding to the feature.
  • the acquired features are weighted to obtain the final weighted feature information.
  • the average or maximum value may be taken to obtain the final feature information. For example, a 50-dimensional texture feature and a 50-dimensional color feature are extracted; after the texture feature and the color feature are weighted separately, the final weighted 100-dimensional feature of the image is obtained.
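The weighting-and-concatenation step described above (a 50-dimensional texture feature plus a 50-dimensional color feature yielding a 100-dimensional weighted feature) can be sketched as follows; the function name is illustrative, not from the patent:

```python
import numpy as np

def weighted_feature_info(features, weights):
    """Concatenate per-type features after scaling each by its weight.

    `features` is a list of 1-D arrays (e.g. a 50-dim texture feature
    and a 50-dim color feature); the result is a single vector whose
    length is the sum of the parts (100-dim in the example above).
    """
    return np.concatenate([w * np.asarray(f, dtype=np.float64)
                           for f, w in zip(features, weights)])
```

For instance, weighting a 50-dim texture feature by 0.8 and a 50-dim color feature by 0.2 produces a 100-dim vector whose first half is scaled by 0.8 and second half by 0.2.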
  • the image can be feature-matched according to the feature information.
  • similarity calculations such as Euclidean distance, Mahalanobis distance, and Hamming distance can be used to characterize the similarity between different features, so as to obtain the final image matching result.
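Of the similarity measures listed, Euclidean distance is the simplest to sketch; a hypothetical nearest-neighbour match over weighted feature vectors might look like this:

```python
import numpy as np

def match(query, database):
    """Return the index of the database feature closest to `query`
    by Euclidean distance (smaller distance = more similar)."""
    query = np.asarray(query, dtype=np.float64)
    dists = [np.linalg.norm(query - np.asarray(d, dtype=np.float64))
             for d in database]
    return int(np.argmin(dists))
```

The same structure works for Mahalanobis or Hamming distance by swapping the distance function.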
  • FIG. 3 is an image feature matching method according to an embodiment of the present invention.
  • the texture feature and color feature are taken as an example, and the average amplitude is used as an index value for measuring a texture feature or a color feature.
  • the texture features and color features are extracted from the image.
  • the threshold method is used to obtain the weight corresponding to each type of feature.
  • the final feature information of the image is weighted by corresponding weights, thus enhancing the discriminative power of the final feature. It can be understood that for other conventional features, such as edge features, grayscale features, etc., a similar method can be used for matching with reference to this embodiment.
  • the target image is a 3-channel color image with a size of 100×100, and texture features and color features are extracted from the image.
  • the texture features are extracted from the image as follows:
  • b. compute the amplitude of each point: d_i = √(g_xi² + g_yi²), where d_i is the amplitude of the point with coordinates (x_i, y_i), and g_xi and g_yi are the gradients of that point in the x and y directions;
  • c. delimit the amplitude distribution intervals. If the distribution is divided into 50 intervals, obtain the distribution histogram of the image amplitudes, giving the 50-dimensional texture feature corresponding to the image, whose i-th component is the statistical count of the amplitudes falling into the i-th interval.
  • the average amplitude of the image is then computed, that is, the amplitudes of all points in the image are averaged.
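The amplitude-histogram steps above can be sketched in Python as follows; normalizing the histogram to sum to 1 is an assumption, since the text only specifies the per-interval counts:

```python
import numpy as np

def texture_histogram_feature(gray, bins=50):
    """50-dim texture feature from the amplitude histogram.

    Per-pixel gradient amplitude d = sqrt(gx^2 + gy^2), followed by a
    histogram over `bins` intervals, normalized to sum to 1 (the
    normalization is an assumption, not stated in the text).
    """
    gray = np.asarray(gray, dtype=np.float64)
    gx = np.gradient(gray, axis=1)
    gy = np.gradient(gray, axis=0)
    amp = np.sqrt(gx ** 2 + gy ** 2)
    hist, _ = np.histogram(amp, bins=bins)
    return hist / hist.sum()
```

Applied to a 100×100 grayscale image this yields a 50-dimensional vector summing to 1, matching the 50-interval example in the text.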
  • T1 and T2 are empirically set threshold values at which the texture feature or the color feature, respectively, can serve as the unique feature when performing feature matching.
  • when the average amplitude is less than T2, the image is a strong-color image and the description ability of the texture feature in the image is weak, so the texture feature may no longer be used as a matching feature; correspondingly, when the average amplitude is greater than T1, the image is a strong-texture image and the description ability of the color feature in the image is very weak, so the color feature may no longer be used as a matching feature.
  • the weights ω1 and ω2 corresponding to the texture feature and the color feature can be determined according to a formula using the double thresholds T1 and T2.
  • for the color features, when the average amplitude is below the threshold, the color feature is the main feature in the image, at which point ω1 < ω2, that is, the weight of the color feature is greater than that of the texture feature.
  • Feature matching can be performed using various feature matching algorithms. For example, the similarity between different features can be characterized by the Euclidean distance to obtain the final image recognition result.
  • the similarity of the two images can be judged according to the Euclidean distance, wherein the smaller the Euclidean distance, the higher the similarity between the two images.
  • the weighting of the color features and texture features of the image is adjusted based on the average amplitude: when the color feature is the main feature of the image, the color feature is given the higher weight; when the texture feature is the main feature of the image, the texture feature is given the higher weight.
  • FIG. 4 is still another image feature matching method according to an embodiment of the present invention.
  • this embodiment takes the deep learning features of a person image as an example.
  • different depth features are extracted from the image based on the depth model.
  • a traditional classifier such as an SVM, or a fully connected layer (fc layer), may be added on top of the depth features to obtain the confidence of the different features; the different depth features are then weighted according to confidence, thus enhancing the discriminative power of the final depth feature.
  • the depth feature for describing the image can be extracted by the depth model.
  • images can be described from different dimensions.
  • the depth feature can describe the possibility that the portrait is male and female in the image from the gender dimension; it can also describe the possibility of the portrait in the image at different ages from the age dimension.
  • the gender feature f2 is extracted, and the feature dimension is n2.
  • a fully connected layer (fc layer) is added to obtain the probability that each feature maps to the different classes, for example, the probability that the gender feature of an image maps to "male"; that is, the confidence p_i of each feature is obtained.
  • the final feature F = concat(ω1·f1, ω2·f2, ω3·f3, ω4·f4) is obtained.
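The confidence-weighted concatenation F = concat(ω1·f1, …, ω4·f4) can be sketched as follows; taking the maximum softmax probability of the fc-layer outputs as the confidence weight is an assumption about the exact rule:

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=np.float64)
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def confidence_weighted_concat(depth_features, fc_logits):
    """Weight each depth feature by the confidence of its class
    mapping and concatenate, as in F = concat(w1*f1, ..., w4*f4).

    `fc_logits[i]` are the fully-connected-layer outputs for the i-th
    attribute (e.g. gender); the confidence is the maximum softmax
    probability (an assumption, not the patent's exact definition).
    """
    out = []
    for f, logits in zip(depth_features, fc_logits):
        conf = float(softmax(logits).max())
        out.append(conf * np.asarray(f, dtype=np.float64))
    return np.concatenate(out)
```

With maximally uncertain logits (all equal over two classes), the confidence is 0.5 and the corresponding feature is scaled by half.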
  • the final feature can also be determined based on a method of supervised learning, such as an SVM.
  • a method for generating image feature data according to the present invention is provided.
  • an image feature database for feature matching can be generated.
  • feature matching can be performed on the image feature database generated by the method in the foregoing method embodiment, thereby realizing the image retrieval function.
  • the method includes:
  • the image to be processed is the image whose feature information is to be generated and stored in the database in the embodiment of the present invention.
  • the feature information of the image to be generated is extracted by the feature extraction algorithm.
  • the feature may be a traditional feature, such as a texture feature, a color feature, an edge feature, etc., or may be a depth feature, such as an image feature extracted by a deep learning network.
  • color features can be extracted by a color histogram, a color moment, a color set, a color aggregation vector, etc.
  • the extracted texture features can be obtained by a gray level co-occurrence matrix, a Tamura texture feature, an autoregressive texture model, a wavelet transform, and the like.
  • shape features can be extracted by a boundary feature method, a Fourier shape descriptor method, a geometric parameter method, a shape invariant moment method, and the like.
  • the primary feature can be determined by a predetermined category.
  • for example, for a strongly textured image, the texture feature has a higher ability to describe the image, and the texture feature may be preset as the main feature.
  • for example, if the head feature among the depth features of the image is preset as the main feature, then when an acquired depth feature lies in the head region of the person in the image, that feature is considered the main feature.
  • whether a class of features is a primary feature can be determined by an indicator value that describes the feature.
  • whether the texture feature in the image is the main feature can be determined by the average amplitude of the image and the Laplacian.
  • the average amplitude of the image is obtained by averaging the amplitude of each point in the image. The larger the average amplitude of an image, the stronger the ability of the texture feature of the image to describe the image.
  • when the average amplitude exceeds a predetermined threshold, the texture feature is considered to be the primary feature of the image.
  • for a depth feature, a corresponding confidence level can be obtained; the higher the confidence of the feature, the stronger the ability of the feature to describe the image. When the confidence is higher than a preset threshold, the depth feature corresponding to the confidence is considered to be the main feature.
  • Whether the other features are primary features can likewise be determined from the indicator values describing the acquired features. For example, when the indicator values of all the collected features except one are lower than a preset threshold, the remaining feature may be considered the main feature.
  • By setting weights on the extracted features, the weight of the primary feature is set higher than the weights of the other types of features.
  • The weights can be set by preset rules. For example, when features of a certain type or category are set as the main feature, a weight corresponding to that category may be preset such that the weight of features of that category is higher than the weights of the other types of features.
  • Alternatively, the weight of a feature may be determined adaptively according to the indicator value corresponding to that category of feature.
  • For example, an empirical threshold may be set for the indicator value corresponding to the main feature, and the difference between a feature's indicator value and the corresponding empirical threshold is positively correlated with the weight of that feature; that is, the more the feature's indicator value exceeds the corresponding empirical threshold, the larger the weight assigned to the feature.
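A sketch of such an adaptive rule. The linear mapping, the base weight, and all constants are illustrative assumptions; the text only requires that the weight be positively correlated with the margin above the empirical threshold:

```python
def adaptive_weight(index_value, empirical_threshold,
                    scale=0.1, base=0.5, cap=1.0):
    """Weight grows with the margin by which a feature's indicator
    value exceeds its empirical threshold. The linear mapping and
    constants are illustrative, not from the original filing."""
    margin = index_value - empirical_threshold
    if margin <= 0:
        return base  # at or below threshold: keep the base weight
    return min(base + scale * margin, cap)  # positively correlated, capped

print(adaptive_weight(12.0, 10.0))
```

Any monotonically increasing mapping of the margin would satisfy the positive-correlation requirement equally well.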
  • the acquired features are weighted to obtain the final weighted feature information.
  • Alternatively, the average or maximum value may be taken to obtain the final feature information. For example, if a 50-dimensional texture feature and a 50-dimensional color feature are extracted, weighting the texture feature and the color feature separately yields the final 100-dimensional weighted feature of the image.
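The 50-plus-50 example above can be sketched as follows; the weight values are illustrative and would in practice come from the preset or adaptive rules described earlier:

```python
import numpy as np

rng = np.random.default_rng(0)
texture = rng.random(50)  # stand-in for a 50-dimensional texture feature
color = rng.random(50)    # stand-in for a 50-dimensional color feature

# Texture preset as the main feature, so it receives the larger weight.
w_texture, w_color = 0.7, 0.3
weighted = np.concatenate([w_texture * texture, w_color * color])
print(weighted.shape)  # (100,)
```

The concatenated vector is what would then be stored in the feature database or compared during matching.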
  • the weighted feature information is stored in the feature database.
  • the feature information in the feature database can be used for feature matching of the image to be retrieved in the foregoing embodiment.
  • the feature weighting method in the aforementioned feature matching corresponds to the feature weighting method in the present embodiment.
  • The method in the foregoing feature matching embodiment is performed by the feature extraction module 1001, the feature weighting module 1003, and the feature matching module 1004.
  • The method in the feature information generation embodiment is performed by the feature extraction module 1001, the feature storage module 1002, and the feature weighting module 1003.
  • the feature weighting methods corresponding to feature matching and feature information generation should be consistent.
  • FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • the apparatus in this embodiment may be a device independent of the feature storage module 1002 and the database in FIG. 1, or may be integrated in the same device as the feature storage module 1002 and the database.
  • the image processing apparatus in this embodiment is configured to perform feature matching in a database after feature extraction and processing of the target image.
  • the image processing apparatus includes a feature extraction module 601, a feature weighting module 602, and a feature matching module 603. Among them, the description of each module is as follows:
  • the feature extraction module 601 is configured to acquire at least two types of features of the target image;
  • the feature weighting module 602 is configured to determine the weight corresponding to each of the at least two types of features, and to weight the at least two types of features according to the set weights to obtain weighted feature information;
  • the feature matching module 603 is configured to perform feature matching on the image according to the weighted feature information.
  • In addition, each module may perform the methods and functions of the foregoing embodiments in accordance with the corresponding descriptions of the method embodiments shown in FIG. 2, FIG. 3, or FIG. 4.
  • for the function of the feature extraction module 601, refer to the method in S201;
  • for the function of the feature weighting module 602, refer to the method of determining weights in S202 and the method of weighting the features in S203 to obtain the weighted feature information;
  • for the feature matching module 603, refer to the method of performing image feature matching in S203.
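The matching step itself is not pinned down by these modules; a minimal sketch, assuming cosine similarity over the weighted feature vectors and a small in-memory feature database (the function and key names are illustrative):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(query, database):
    """Return the key of the database entry most similar to the query."""
    return max(database, key=lambda k: cosine_similarity(query, database[k]))

db = {
    "img_a": np.array([1.0, 0.0, 0.0]),
    "img_b": np.array([0.0, 1.0, 0.0]),
}
print(match(np.array([0.9, 0.1, 0.0]), db))  # img_a
```

Other distance measures (Euclidean, Hamming on binarized features) could be substituted without changing the module structure.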
  • FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • The apparatus in this embodiment may be independent of the apparatus shown in FIG. 6, or may be integrated into the apparatus shown in FIG. 6 to implement the functions of this embodiment.
  • the image processing apparatus in the present embodiment is configured to perform feature extraction on the image data, weight the features, and generate the weighted feature information and store the information in the database.
  • the image processing apparatus includes a feature extraction module 701, a feature weighting module 702, a feature storage module 703, and a feature information database 704. Among them, the description of each module is as follows:
  • the feature extraction module 701 is configured to acquire at least two types of features of the image;
  • the feature weighting module 702 is configured to determine the weight corresponding to each of the at least two types of features, and to weight the at least two types of features according to the set weights to obtain weighted feature information;
  • the feature storage module 703 is configured to store the weighted feature information in the feature database;
  • the feature information database 704 is configured to store the weighted feature information of the image data.
  • each module may also perform the methods and functions performed in the foregoing embodiments in accordance with the corresponding description of the method embodiments shown in FIG. 5.
  • the feature extraction module 701 and the feature weighting module 702 can also refer to the corresponding feature extraction and feature processing methods of the image to be retrieved in FIG. 2, FIG. 3 and FIG.
  • FIG. 8 shows an apparatus embodiment corresponding to the foregoing method embodiments of the present invention.
  • The apparatus may perform the methods corresponding to the foregoing FIG. 2, FIG. 3, FIG. 4, or FIG. 5, or may be a hardware implementation of the apparatus of the foregoing FIG. 6 or FIG. 7.
  • the embodiments of the present invention are described with reference to a general computer system environment as an example.
  • The device can be adapted to other similar computing hardware architectures to achieve similar functionality, including, without limitation, personal computers, server computers, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and so on.
  • Embodiments of the present invention can also be implemented by other terminal devices capable of similar computing functions, such as a smartphone, a tablet (PAD), a smart wearable device, and the like.
  • Elements of device 800 may include, but are not limited to, processing unit 820, system memory 830, and system bus 810.
  • the system bus couples various system components including system memory to processing unit 820.
  • System bus 810 can be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • the bus structure may include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Extended ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
  • Processing unit 820 may include heterogeneous processors such as a central processing unit (CPU), a graphics processing unit (GPU), or a digital signal processor (DSP), as mentioned in the foregoing embodiments.
  • The central processing unit can be used to perform the method steps in the foregoing embodiments, such as the method steps corresponding to the foregoing FIG. 2, FIG. 3, FIG. 4, or FIG. 5.
  • Device 800 generally includes a variety of device readable media.
  • The device readable medium can be any medium that can be effectively accessed by device 800 and includes volatile and nonvolatile media, as well as removable and non-removable media.
  • the device readable medium can comprise a device storage medium and a communication medium.
  • the device storage medium includes volatile and nonvolatile, removable and non-removable media, which can be implemented by any method or technique for storing information such as device readable instructions, data structures, program modules or other data.
  • Device storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, hard disk storage, solid state disk storage, optical disk storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other storage device.
  • Communication media typically embody computer readable instructions, data structures, program modules or other data in a modulated data signal (e.g., a carrier wave or other transport mechanism) and include any information delivery media. Combinations of any of the above should also be included within the scope of device readable media.
  • System memory 830 includes device storage media, which can be volatile and non-volatile memory, such as read only memory (ROM) 831 and random access memory (RAM) 832.
  • The basic input/output system 833 (BIOS) is typically stored in ROM 831 and contains the basic routines that facilitate the transfer of information between elements within device 800.
  • RAM 832 typically contains data and/or program modules that can be accessed and/or operated immediately by processing unit 820. For example, but not limited to, FIG. 8 illustrates operating system 834, application 835, other program modules 836, and program data 837.
  • FIG. 8 illustrates a hard disk storage 841, which may be a non-removable, non-volatile, readable and writable magnetic medium, and an external memory 851, which may be a removable, non-volatile external memory such as an optical disk, a magnetic disk, a flash memory, or a removable hard disk. Hard disk storage 841 is generally connected to system bus 810 through a non-removable storage interface (e.g., interface 840), while external memory 851 is typically connected to system bus 810 through a removable storage interface (e.g., interface 860).
  • Hard disk storage 841 is illustrated as storing operating system 842, application 843, other program modules 844, and program data 845. It should be noted that these elements may be the same as or different from operating system 834, application 835, other program modules 836, and program data 837.
  • The methods in the foregoing embodiments, or the functions of the logic modules in the foregoing embodiments, may be performed by the processing unit 820 reading and executing the code or readable instructions stored in the device storage medium.
  • the aforementioned storage medium such as the hard disk drive 841 or the external memory 851, may store the feature database in the foregoing embodiment.
  • The user can enter commands and information into device 800 through various types of input devices 861.
  • Various input devices are often connected to the processing unit 820 through a user input interface 860.
  • The user input interface 860 is coupled to the system bus, but input devices can also be connected through other interface and bus structures, such as a parallel port or a universal serial bus (USB).
  • Display device 890 can also be coupled to system bus 810 via an interface (e.g., video interface 890).
  • Device 800 can also include various types of peripheral output devices, which can be connected through output interface 880 or the like.
  • Device 800 can be coupled to one or more computing devices, such as remote computer 870, using logical connections.
  • A remote computing node may be a device, a computing node, a server, a router, a network PC, a peer device, or another common network node, and typically includes many or all of the elements discussed above in connection with device 800.
  • the remote computing node can be a slave node, a compute node, or other device.
  • The logical connections illustrated in FIG. 8 include a local area network (LAN) and a wide area network (WAN), and may also include other networks. Through these logical connections, the device can interact with the other nodes in the present invention.
  • Task information and data can be transmitted over the logical link with the user, so as to acquire the tasks to be assigned by the user; resource data and task allocation commands are transmitted over the logical links with the computing nodes, so as to acquire the resource information of each node and to assign tasks.
  • the functions described herein can be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium.
  • Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.

Abstract

Disclosed are a feature data generation method and a feature matching method, comprising: acquiring feature information of an image, and determining a weight corresponding to each type of feature among at least two types of features, wherein the weight corresponding to a first-type feature is greater than the weights corresponding to the other features of the at least two types of features, and the first-type feature is the primary feature among the acquired features; and, according to the weights, weighting the at least two types of features to obtain weighted feature information, then storing the weighted feature information in a database so as to generate feature data, or performing feature matching in the database according to the weighted feature information. By means of the embodiments of the present invention, the accuracy of feature matching can be improved.

Description

Feature data generation and feature matching method and apparatus
This application claims priority to Chinese Patent Application No. 201711479219.4, filed with the China National Intellectual Property Office on December 29, 2017 and entitled "Feature data generation and feature matching method and apparatus", which is incorporated herein by reference in its entirety.
Technical Field
The embodiments of the present invention relate to the field of multimedia technologies, and in particular, to a feature data generation and feature matching method and apparatus.
Background
As people pay ever closer attention to public safety, more and more government research institutions and companies are committed to the development of related data content analysis technologies. In crowded places prone to public safety incidents, such as shopping malls, schools, large squares, and subway stations, large numbers of surveillance cameras have been deployed, producing massive amounts of video data. Traditional manual analysis of such huge data can no longer meet actual needs; therefore, computer-based automatic pedestrian re-identification in massive video data has developed rapidly.
An important area of computer content analysis technology is feature extraction. Feature extraction mainly implements the detection and tracking of targets and extracts features from regions of interest to characterize the source data. The extracted features are required to have a certain commonality within the same class of data and higher distinctiveness across different classes, that is, stronger discriminating power.
In existing target recognition problems, the extracted features include color, texture, edge, and depth features, among others. However, in massive data sets, it is difficult to obtain an accurate description of the source data using any single feature. How to better enhance the descriptive ability of these features with a relatively simple method has become a problem.
Summary
Embodiments of the present invention provide a method and apparatus for feature matching and feature information generation, which can improve the accuracy of image feature matching.
In a first aspect, an embodiment of the present invention provides a feature matching method. The method includes: performing feature extraction on an image to obtain at least two types of features; weighting the features according to the descriptive ability of each type of feature for the image, where features with greater descriptive ability are given larger weights; obtaining feature information containing multiple types of features from the weighted features; and performing feature matching according to the feature information. The descriptive ability of a feature for an image refers to the degree of discrimination achieved when the image is described by that feature. By weighting the features according to their descriptive ability, the weight of highly descriptive features among the multiple types of features is increased, which improves the discrimination of the weighted feature information during image matching and thus enhances the accuracy of image matching.
In one implementation, feature information of an image is acquired, where the feature information contains at least two types of features including a first type of feature. A weight corresponding to each of the at least two types of features is determined, where the weight corresponding to the first type of feature is greater than the weights corresponding to the other features of the at least two types, and the first type of feature is the feature among the at least two types whose ability to describe the image is greater than that of the other features. The at least two types of features are weighted according to the weights to obtain weighted feature information, and feature matching is performed on the image according to the weighted feature information. Because the image-describing ability of the first type of feature is greater than that of the other features, weighting it more heavily improves the discrimination of the weighted feature information during image matching and enhances the accuracy of feature matching.
In one implementation, the descriptive ability for the image is determined according to an indicator value associated with the feature; correspondingly, the weight corresponding to the first type of feature is determined according to a first indicator value of the first type of feature. For example, when the feature is a texture feature, its descriptive ability can be determined from the average amplitude of the image or from the Laplacian operator; for a depth feature, it can be determined from the confidence of the feature or from a quality evaluation value. The confidence describes the possibility that the depth feature maps to the corresponding preset interval, and the quality evaluation value is obtained from a quality evaluation matrix.
In one implementation, the weight corresponding to the feature may be determined from the indicator value according to a formula, which survives in this filing only as an embedded image (PCTCN2018125732-appb-000001). Denoting the first indicator value of the first type of feature by I1, and letting T1 and T2 be preset thresholds: when I1 lies between T2 and T1, ω1 can be determined with reference to the formula; when I1 is greater than T1, the first type of feature is used as the sole feature for feature matching of the image, that is, ω1 is set to 1; when I1 is less than T2, the first type of feature is not used as a feature for feature matching of the image, that is, ω1 is set to 0.
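The formula itself survives only as an embedded image in this extraction. A plausible reconstruction consistent with the boundary behavior stated in the text (ω1 = 1 above T1, ω1 = 0 below T2), assuming simple linear interpolation in between, is:

```latex
\omega_1 =
\begin{cases}
1, & I_1 > T_1, \\[4pt]
\dfrac{I_1 - T_2}{T_1 - T_2}, & T_2 \le I_1 \le T_1, \\[4pt]
0, & I_1 < T_2,
\end{cases}
```

where $I_1$ is the first indicator value of the first type of feature and $T_1 > T_2$ are the preset thresholds. The middle branch is an assumption; only the two boundary cases are stated explicitly in the text.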
In one implementation, the weight corresponding to the first type of feature is determined according to the feature type of the first type of feature.
In a second aspect, an embodiment of the present invention provides a feature information generation method. The method includes: performing feature extraction on an image to obtain at least two types of features; weighting the features according to the descriptive ability of each type of feature for the image, where features with greater descriptive ability are given larger weights; obtaining feature information containing multiple types of features from the weighted features; and storing the weighted feature information in a database. The descriptive ability of a feature for an image refers to the degree of discrimination achieved when the image is described by that feature. By weighting the features according to their descriptive ability, the weight of highly descriptive features among the multiple types of features is increased, which improves the discrimination of the weighted feature information during image matching and thus enhances the accuracy of image matching.
In one implementation, feature information of an image is acquired, where the feature information contains at least two types of features including a first type of feature. A weight corresponding to each of the at least two types of features is determined, where the weight corresponding to the first type of feature is greater than the weights corresponding to the other features of the at least two types, and the first type of feature is the feature among the at least two types whose ability to describe the image is greater than that of the other features. The at least two types of features are weighted according to the weights to obtain weighted feature information, and feature matching is performed on the image according to the weighted feature information. Because the image-describing ability of the first type of feature is greater than that of the other features, weighting it more heavily improves the discrimination of the weighted feature information during image matching and enhances the accuracy of feature matching.
In one implementation, the descriptive ability for the image is determined according to an indicator value associated with the feature; correspondingly, the weight corresponding to the first type of feature is determined according to a first indicator value of the first type of feature. For example, when the feature is a texture feature, its descriptive ability can be determined from the average amplitude of the image or from the Laplacian operator; for a depth feature, it can be determined from the confidence of the feature or from a quality evaluation value. The confidence describes the possibility that the depth feature maps to the corresponding preset interval, and the quality evaluation value is obtained from a quality evaluation matrix.
In one implementation, the weight corresponding to the feature may be determined from the indicator value according to a formula, which survives in this filing only as an embedded image (PCTCN2018125732-appb-000006). Denoting the first indicator value of the first type of feature by I1, and letting T1 and T2 be preset thresholds: when I1 lies between T2 and T1, ω1 can be determined with reference to the formula; when I1 is greater than T1, the first type of feature is used as the sole feature for feature matching of the image, that is, ω1 is set to 1; when I1 is less than T2, the first type of feature is not used as a feature for feature matching of the image, that is, ω1 is set to 0.
In one implementation, the weight corresponding to the first type of feature is determined according to the feature type of the first type of feature.
In a third aspect, an embodiment of the present invention provides an image processing apparatus configured to implement the methods and functions performed in the first aspect above. The apparatus is implemented by hardware/software, and the hardware/software includes modules corresponding to the foregoing functions.
In a fourth aspect, an embodiment of the present invention provides an image processing apparatus configured to implement the methods and functions performed in the second aspect above. The apparatus is implemented by hardware/software, and the hardware/software includes modules corresponding to the foregoing functions.
In a fifth aspect, an embodiment of the present application provides an image processing apparatus, including a processor, a memory, and a communication bus, where the communication bus is configured to implement connection and communication between the processor and the memory, and the processor executes a program stored in the memory to implement the steps of the method in the first aspect above.
In a sixth aspect, an embodiment of the present application provides an image processing apparatus, including a processor, a memory, and a communication bus, where the communication bus is configured to implement connection and communication between the processor and the memory, and the processor executes a program stored in the memory to implement the steps of the method in the second aspect above.
In a seventh aspect, an embodiment of the present application provides a computer readable storage medium storing instructions that, when run on a computer, cause the computer to perform the method of the first aspect or the second aspect above.
In an eighth aspect, the present application provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the method of the first aspect or the second aspect above.
In the embodiments of the present invention, different types of features are weighted according to their descriptive ability, and feature matching is performed according to the weighted feature information. Compared with the prior art, the weight of highly descriptive features within the feature information is increased, which enhances the discrimination of the feature information during feature matching and improves the accuracy of feature matching.
附图说明DRAWINGS
为了更清楚地说明本发明实施例的技术方案,下面将对本发明实施例所需要使用的附图进行说明。In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings to be used in the embodiments of the present invention will be described below.
图1是本发明实施例提供的一种图像搜索系统的架构示意图;1 is a schematic structural diagram of an image search system according to an embodiment of the present invention;
图2是本发明实施例提供的一种特征匹配方法的流程示意图;2 is a schematic flowchart of a feature matching method according to an embodiment of the present invention;
图3是本发明实施例提供的又一种特征匹配方法的流程示意图;3 is a schematic flowchart of still another feature matching method according to an embodiment of the present invention;
图4是本发明实施例提供的又一种特征匹配方法的流程示意图;4 is a schematic flowchart of still another feature matching method according to an embodiment of the present invention;
图5是本发明实施例提供的一种特征数据生成方法的流程示意图;FIG. 5 is a schematic flowchart of a method for generating feature data according to an embodiment of the present invention;
图6是本发明实施例提供的一种图像处理装置的结构示意图;FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
图7是本发明实施例提供的又一种图像处理装置的结构示意图;FIG. 7 is a schematic structural diagram of still another image processing apparatus according to an embodiment of the present invention;
图8是本发明实施例提供的一种图像处理装置的硬件结构示意图;FIG. 8 is a schematic structural diagram of hardware of an image processing apparatus according to an embodiment of the present invention;
具体实施方式DETAILED DESCRIPTION
下面结合本发明实施例中的附图对本发明实施例进行描述。The embodiments of the present invention are described below in conjunction with the accompanying drawings in the embodiments of the present invention.
本发明实施例中的特征是某一类对象区别于其他类对象的相应特点或特性,或是这些特点和特性的集合。特征是通过测量或处理能够抽取的数据。对于图像而言,每一幅图像都具有能够区别于其他类图像的自身特征,有些是可以直观地感受到的自然特征,如亮度、边缘、纹理和色彩等;有些则是需要通过变换或处理才能得到的,如矩、直方图以及主成份等;还有一些是通过深度学习模型提取得到的深度特征。A feature in the embodiments of the present invention is a characteristic or property by which one class of objects is distinguished from other classes of objects, or a collection of such characteristics and properties. Features are data that can be extracted through measurement or processing. For images, each image has its own features that distinguish it from other classes of images: some are natural features that can be perceived intuitively, such as brightness, edges, texture and color; some can only be obtained through transformation or processing, such as moments, histograms and principal components; and others are depth features extracted by deep learning models.
对于不同的图像,同一类特征的描述能力可能不同。例如,对于以纯色为主的图像,由于图像中的纹理比较少,因此当采用纹理特征对图像进行描述时,不能够很好地对不同的纯色为主的图像进行区分;相反的,采用颜色特征对这类图像进行描述时,则能够比较好地区分该类图像中的不同图像。For different images, the descriptive power of the same type of feature may differ. For example, for images dominated by solid colors, there is little texture in the image, so describing the image with texture features cannot distinguish well between different solid-color-dominated images; conversely, describing such images with color features can distinguish the different images of this class fairly well.
特征匹配是指通过分别提取两个或多个图像的特征,对特征进行参数描述,然后运用所描述的参数来进行匹配的一种算法。基于特征的匹配所处理的图像一般包含的特征有颜色特征、纹理特征、形状特征、空间位置特征等。特征匹配首先对图像进行预处理来提取其高层次的特征,然后建立两幅图像之间特征的匹配对应关系,通常使用的特征基元有点特征、边缘特征和区域特征。特征匹配需要用到许多诸如矩阵的运算、梯度的求解、还有傅立叶变换和泰勒展开等数学运算。常用的特征提取与匹配方法有:统计方法、几何法、模型法、信号处理法、边界特征法、傅氏形状描述法、几何参数法、形状不变矩法等。Feature matching refers to an algorithm that extracts the features of two or more images separately, describes the features with parameters, and then uses the described parameters to perform matching. The images processed by feature-based matching generally contain features such as color features, texture features, shape features and spatial position features. Feature matching first preprocesses the image to extract its high-level features and then establishes the matching correspondence between the features of two images; the commonly used feature primitives are point features, edge features and region features. Feature matching requires many mathematical operations such as matrix operations, gradient computation, Fourier transforms and Taylor expansion. Commonly used feature extraction and matching methods include statistical methods, geometric methods, model methods, signal processing methods, boundary feature methods, Fourier shape description methods, geometric parameter methods and shape invariant moment methods.
本发明实施例中的图像,包含了静态图像和动态图像。The image in the embodiment of the present invention includes a still image and a moving image.
结合图1,本发明实施例的一种应用场景可以是基于图像特征匹配的图像搜索系统1000,该图像搜索系统可以实现对视频或者图像进行实时分析、查询。该系统由特征提取模块1001、特征存储模块1002、特征加权模块1003、特征匹配模块1004四个核心模块组成。特征提取模块1001主要实现了对图像或者视频文件的检测与跟踪,并对感兴趣区域提取特征以得到图像或者视频相对应的特征信息数据,此外,对于待检索的目标图像或者视频,特征提取模块也可以进行特征提取,并将提取后的特征通过特征加权模块进行处理。特征存储模块1002用于基于特征提取模块1001对视频或者图像进行特征提取后的结果,构建数据库及其索引。特征加权模块1003针对待检索的目标图像,对目标图像中提取的特征进行加权,得到加权后的图像特征信息。特征存储模块1002在建立数据库时,可以根据特征加权模块1003对特征提取模块所提取的特征进行加权后得到的加权后特征建立数据库,得到加权后的特征数据库及其索引。特征加权模块1003可以对特征提取模块1001中提取的用于建立数据库的图像或者视频的特征进行加权,从而将加权的结果通过特征存储模块建立加权后的数据库。图像检索模块1004基于目标图像加权后的特征信息,并根据特征匹配算法与在数据库中的图像进行匹配,从而获得查询结果。在图像匹配中,对所提取的特征要求对同类数据具有一定的共性,异类的数据具有更高的区分性,即,具有较强的鉴别力。然而在海量数据集中,采用单一的特征很难获得对源数据较精准的描述。因此,在本发明实施例中,通过特征加权方法,对所提取的特征赋予权重,使得最终所获得的特征具有更强的描述能力,从而促进以图搜图系统取得更好的效果。With reference to FIG. 1, an application scenario of an embodiment of the present invention may be an image search system 1000 based on image feature matching, which can perform real-time analysis and query on videos or images. The system is composed of four core modules: a feature extraction module 1001, a feature storage module 1002, a feature weighting module 1003 and a feature matching module 1004. The feature extraction module 1001 mainly implements detection and tracking of image or video files and extracts features from regions of interest to obtain the feature information data corresponding to the image or video; in addition, for a target image or video to be retrieved, the feature extraction module can also perform feature extraction, and the extracted features are processed by the feature weighting module. The feature storage module 1002 is configured to build a database and its index from the results of the feature extraction performed by the feature extraction module 1001 on videos or images. The feature weighting module 1003 weights the features extracted from the target image to be retrieved and obtains the weighted image feature information.
When building the database, the feature storage module 1002 may build it from the weighted features obtained by the feature weighting module 1003 weighting the features extracted by the feature extraction module, yielding a weighted feature database and its index. The feature weighting module 1003 may weight the features of the images or videos used to build the database that are extracted by the feature extraction module 1001, so that the weighted results are stored through the feature storage module as a weighted database. The image retrieval module 1004 matches the weighted feature information of the target image against the images in the database according to a feature matching algorithm to obtain the query result. In image matching, the extracted features are required to share a certain commonality within a class while heterogeneous data remain more distinguishable, that is, the features should have strong discriminating power. In a massive data set, however, it is difficult to obtain an accurate description of the source data with a single feature. Therefore, in the embodiments of the present invention, the extracted features are assigned weights by the feature weighting method, so that the finally obtained features have stronger descriptive power, enabling the search-by-image system to achieve better results.
图像搜索系统1000在一些实现方式中,可以是一个独立的计算机系统,例如是一台服务器,从而实现特征提取模块1001、特征存储模块1002、特征加权模块1003、特征匹配模块1004相应的功能。在另一些实现方式中,图像搜索系统1000可以是分布式系统,其中包括了图像的数据库节点和计算节点。其中,数据库节点保存有特征存储模块1002处理后的数据库及其索引,计算节点可以实现特征提取模块1001、特征加权模块1003以及特征匹配模块1004的相应功能。在分布式系统的实现方式中,某一模块可能在不同的节点上均有部署。例如,特征提取模块1001和特征加权模块1003可能分别部署在数据库节点和计算节点上,数据库节点在生成特征数据时需要调用特征加权模块1003对特征提取模块1001提取的特征进行加权从而生成加权后的特征数据,而计算节点在进行特征匹配时也需要调用特征加权模块1003对特征提取模块1001提取的目标图像的特征进行加权从而进行特征匹配。In some implementations, the image search system 1000 may be a stand-alone computer system, such as a server, implementing the corresponding functions of the feature extraction module 1001, the feature storage module 1002, the feature weighting module 1003 and the feature matching module 1004. In other implementations, the image search system 1000 may be a distributed system including database nodes and computing nodes for images. The database nodes store the database processed by the feature storage module 1002 and its index, and the computing nodes implement the corresponding functions of the feature extraction module 1001, the feature weighting module 1003 and the feature matching module 1004. In a distributed implementation, a module may be deployed on several nodes. For example, the feature extraction module 1001 and the feature weighting module 1003 may be deployed on both the database nodes and the computing nodes: when generating feature data, a database node calls the feature weighting module 1003 to weight the features extracted by the feature extraction module 1001 and generate the weighted feature data, and when performing feature matching, a computing node also calls the feature weighting module 1003 to weight the features of the target image extracted by the feature extraction module 1001.
结合图1所示的应用场景,参见图2,是本发明实施例提供的一种图像特征匹配方法的流程图,该方法包括:With reference to FIG. 2, it is a flowchart of an image feature matching method according to an embodiment of the present invention. The method includes:
S201,获取目标图像的特征信息。S201. Acquire feature information of the target image.
目标图像即在本发明实施例中待匹配的图像。通过特征提取算法,提取目标图像的特征信息。在本实施例中,需要提取至少两类不同种类的特征,从而获取每个种类特征相应的特征值。在可能的实现方式中,特征可以是传统特征,如纹理特征、颜色特征、边缘特征等,也可以是深度特征,如深度学习网络提取的图像特征。示例性的,提取颜色特征可以通过颜色直方图、颜色矩、颜色集、颜色聚合向量等方式提取;提取纹理特征可以通过灰度共生矩阵、Tamura纹理特征、自回归纹理模型、小波变换等方法获取;提取形状特征可以通过边界特征法、傅里叶形状描述符法、几何参数法、形状不变矩法等方法提取。The target image is the image to be matched in the embodiment of the present invention. The feature information of the target image is extracted by the feature extraction algorithm. In this embodiment, at least two types of different kinds of features need to be extracted, so as to obtain corresponding feature values of each category feature. In a possible implementation, the feature may be a traditional feature, such as a texture feature, a color feature, an edge feature, etc., or may be a depth feature, such as an image feature extracted by a deep learning network. Exemplarily, the extracted color features can be extracted by a color histogram, a color moment, a color set, a color aggregation vector, etc. The extracted texture features can be obtained by a gray level co-occurrence matrix, a Tamura texture feature, an autoregressive texture model, a wavelet transform, and the like. The extracted shape features can be extracted by a boundary feature method, a Fourier shape descriptor method, a geometric parameter method, a shape invariant moment method, and the like.
S202,对提取的特征设置权重,使得主要特征的权重高于提取的其他特征的权重。在本发明实施例中,通过预设条件来判断提取的特征中某一特征是否为主要特征。所谓主要特征是指该特征对图像的描述能力高于所提取的其他特征。S202. Set weights for the extracted features so that the weight of the main feature is higher than the weights of the other extracted features. In the embodiments of the present invention, whether a certain extracted feature is the main feature is determined by a preset condition. The main feature is the feature whose ability to describe the image is higher than that of the other extracted features.
在一种实现方式中,可以通过预设的类别来确定主要特征。例如,对于特定的目标图像,其纹理特征对于图像的描述能力更高,可以预先设置纹理特征为主要特征。当获取的特征为纹理特征时,则认为该特征为主要特征。又例如,将图像的深度特征中的头部特征预设为主要特征,则当获取的深度特征在图像中人物的头部位置区域时,即认为该特征为主要特征。In one implementation, the main feature can be determined by a preset category. For example, for a particular target image whose texture features describe the image better, the texture feature may be preset as the main feature; when an acquired feature is a texture feature, it is considered to be the main feature. For another example, if the head feature among the depth features of the image is preset as the main feature, then when an acquired depth feature lies in the head position region of the person in the image, the feature is considered to be the main feature.
在另一种实现方式中,可以通过可以描述该特征的一种指标数值来确定一类特征是否为主要特征。例如,对于纹理特征,可以通过图像的平均幅值(Dense)、拉普拉斯算子来描述该图像中纹理特征是否为主要特征。通过将图像中每个点的幅值求平均值,获得该图像的平均幅值。图像的平均幅值越大,则认为该图像的纹理特征对于该图像的描述能力越强。当平均幅值超过预设的阈值后,则认为纹理特征为该图像的主要特征。又例如,对于采用VGGM深度模型所提取的深度特征,可以获得该特征相应的置信度。特征相应的置信度越高,则该特征对图像的描述能力越强。当置信度高于预设的阈值时,则认为该置信度相对应的深度特征为主要特征。In another implementation, whether a class of features is a primary feature can be determined by an indicator value that can describe the feature. For example, for a texture feature, whether the texture feature in the image is the main feature can be described by the average amplitude (Dense) of the image and the Laplacian. The average amplitude of the image is obtained by averaging the amplitude of each point in the image. The larger the average amplitude of an image, the stronger the ability of the texture feature of the image to describe the image. When the average amplitude exceeds a predetermined threshold, the texture feature is considered to be the primary feature of the image. As another example, for a depth feature extracted using a VGGM depth model, a corresponding confidence level for the feature can be obtained. The higher the confidence of the feature, the stronger the ability of the feature to describe the image. When the confidence level is higher than the preset threshold, the depth feature corresponding to the confidence is considered to be the main feature.
在又一种实现方式中,可以通过描述采集的特征的指标数值来确定其他特征是否为主要特征。例如,当所采集的特征中某一特征外的其他特征的指标数值均低于预设的阈值,则可以认为该特征为主要特征。In yet another implementation, whether the other features are primary features can be determined by describing the index values of the acquired features. For example, when the index value of other features other than one of the collected features is lower than a preset threshold, the feature may be considered as the main feature.
当确定一类特征为主要特征后,通过对提取的特征设置权重,使得该类特征的权重高于所设置的其他类型特征的权重。在一种实现方式中,可以通过预设的规则设置权重。例如,当设定某一种类的特征为主要特征后,可以预设与该种类相对应的权重,使得该种类的特征的权重高于其他类型特征的权重。After a type of feature is determined to be the main feature, weights are set for the extracted features so that the weight of that type of feature is higher than the weights of the other types of features. In one implementation, the weights can be set by preset rules. For example, after a certain type of feature is set as the main feature, a weight corresponding to that type may be preset so that the weight of features of that type is higher than the weights of the other types of features.
在一种实现方式中,可以根据该种类特征相对应的指标值,自适应地确定该类特征的权重。例如,可以设定该类特征作为主要特征所对应的指标值的经验阈值,该类特征的指标值与对应的经验阈值的差值与该类特征对应的权重正相关,即该类特征的指标值大于对应的经验阈值且差值越大,则该类特征对应的权重越大。In one implementation, the weight of a type of feature may be determined adaptively according to its corresponding index value. For example, an empirical threshold may be set for the index value at which that type of feature counts as the main feature; the difference between the index value of the feature and the corresponding empirical threshold is positively correlated with the weight of the feature, that is, when the index value exceeds the corresponding empirical threshold, the larger the difference, the larger the weight of that type of feature.
S203、根据所确定的权重对提取的特征进行加权,得到加权后的特征信息,并根据加权后的特征信息对图像进行特征匹配。S203. Weight the extracted features according to the determined weights to obtain weighted feature information, and perform feature matching on the image according to the weighted feature information.
对获取的特征进行加权,从而得到最终加权后的特征信息。在一种实现方式中,将加权后的不同种类的特征进行串联拼接或特征维数对齐后,取平均或最大最小值,从而得到最终的特征信息。例如,提取了50维的纹理特征和50维的颜色特征,在分别对纹理特征和颜色特征进行加权后,可以获得加权后该图像最终的100维特征。The acquired features are weighted to obtain the final weighted feature information. In an implementation manner, after the weighted different kinds of features are serially spliced or the feature dimensions are aligned, the average or maximum value is taken to obtain the final feature information. For example, a 50-dimensional texture feature and a 50-dimensional color feature are extracted, and after weighting the texture feature and the color feature respectively, the final 100-dimensional feature of the image after weighting can be obtained.
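上述加权后串联拼接的步骤可以草拟如下(以NumPy为例,其中50维特征向量与权重取值均为示意性假设)。The weighted-concatenation step above can be sketched as follows (the 50-dimensional feature vectors and the weight values are illustrative assumptions):

```python
import numpy as np

def fuse_features(f_texture, f_color, w_texture, w_color):
    """Scale each feature vector by its weight and concatenate them."""
    return np.concatenate([w_texture * np.asarray(f_texture, dtype=float),
                           w_color * np.asarray(f_color, dtype=float)])

rng = np.random.default_rng(0)
f_texture = rng.random(50)   # placeholder 50-dim texture feature
f_color = rng.random(50)     # placeholder 50-dim color feature
fused = fuse_features(f_texture, f_color, 0.7, 0.3)  # assumed weights
```

拼接后的特征为100维,与上文示例一致。The fused feature is 100-dimensional, matching the example in the text.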
获取了加权后的特征信息后,根据该特征信息,可以对图像进行特征匹配。在不同的实现方式中,可以采用欧式距离、马氏距离、汉明距离等相似度计算方式,来表征不同特征间的相似度,从而获得最终的图像匹配结果。After the weighted feature information is obtained, the image can be feature-matched according to the feature information. In different implementations, similarity calculations such as Euclidean distance, Mahalanobis distance, and Hamming distance can be used to characterize the similarity between different features, so as to obtain the final image matching result.
通过本实施例中的方法,对不同种类的特征根据其描述能力进行加权,并根据加权后的特征信息进行特征匹配。相对于现有技术,增强了描述能力强的特征在特征信息中的权重,从而增强了特征信息在进行特征匹配时的区别度,提升了特征匹配的准确率。Through the method in this embodiment, different types of features are weighted according to their description capabilities, and feature matching is performed according to the weighted feature information. Compared with the prior art, the weight of the feature with strong description capability in the feature information is enhanced, thereby enhancing the difference of feature information in feature matching and improving the accuracy of feature matching.
结合图3,是本发明实施例中给出的一种图像特征匹配方法,该实施例以纹理特征和颜色特征为例,采用平均幅值作为衡量纹理特征或颜色特征的指标值。对图像提取纹理特征和颜色特征两类特征,采用阈值法,获得每类特征对应的权重,对该图像最终的特征信息作相应的权重的加权,因而增强最终的特征的鉴别力。可以理解的,对于其他传统特征,如边缘特征、灰度特征等,可以参考本实施例采用相似的方法进行匹配。FIG. 3 shows an image feature matching method according to an embodiment of the present invention. This embodiment takes texture features and color features as an example and uses the average amplitude as the index value for measuring the texture or color feature. Two types of features, texture and color, are extracted from the image; a threshold method is used to obtain the weight of each type of feature, and the final feature information of the image is weighted accordingly, thereby enhancing the discriminative power of the final feature. It can be understood that other conventional features, such as edge features and grayscale features, can be matched in a similar way with reference to this embodiment.
S301,获取图像的纹理特征和颜色特征。S301. Acquire texture features and color features of the image.
在本实施例中,设目标图像为3通道彩色图像,尺寸为100×100,分别对图像进行纹理特征和颜色特征的提取。In this embodiment, the target image is a 3-channel color image with a size of 100×100, and the image is extracted from the texture feature and the color feature.
其中,对图像提取纹理特征:Wherein, the texture features are extracted from the image:
a.采用3*3 Sobel算子提取每个点在x和y方向梯度dx和dy;a. Using a 3*3 Sobel operator to extract the gradient dx and dy of each point in the x and y directions;
b.确定每一点的幅值,参考公式:d_i = √(dx_i² + dy_i²),其中,d_i为坐标为(x_i,y_i)的点的幅值,dx_i和dy_i为该点在x和y方向的梯度;b. Determine the amplitude of each point by the formula d_i = √(dx_i² + dy_i²), where d_i is the amplitude of the point with coordinates (x_i, y_i), and dx_i and dy_i are the gradients of the point in the x and y directions;
c.划定幅值分布区间,如设分布区间为50个区间,获得该图像幅值的分布直方图,从而得到该图像对应的50维的纹理特征,参考公式:f_texture = (t_1, t_2, …, t_50),其中,t_i为落入第i个区间的幅值的统计值。c. Delimit the amplitude distribution intervals. If the distribution is divided into 50 intervals, the distribution histogram of the image amplitudes is obtained, giving the 50-dimensional texture feature of the image: f_texture = (t_1, t_2, …, t_50), where t_i is the count of amplitudes falling into the i-th interval.
对图像提取颜色特征:Extract color features from images:
a.将目标图像进行灰度化处理。将三通道颜色值(r,g,b)加权为单通道灰度值Grey,参考公式:Grey=0.299×r+0.587×g+0.114×b。a. Convert the target image to grayscale. The three-channel color value (r, g, b) is weighted into a single-channel gray value Grey by the formula Grey = 0.299×r + 0.587×g + 0.114×b.
b.划定灰度值分布区间,如设分布区间为50个区间,获得该图像灰度值的分布直方图,从而得到该图像对应的50维的颜色特征:f_color = (c_1, c_2, …, c_50),其中,c_i为落入第i个区间的图像灰度值的统计值。b. Delimit the gray-value distribution intervals. If the distribution is divided into 50 intervals, the distribution histogram of the gray values of the image is obtained, giving the 50-dimensional color feature of the image: f_color = (c_1, c_2, …, c_50), where c_i is the count of gray values falling into the i-th interval.
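上述灰度化与直方图统计可以草拟如下(分箱范围[0, 256)为示意性假设)。The grayscaling and histogram steps above can be sketched as follows (the [0, 256) bin range is an assumption):

```python
import numpy as np

def color_histogram(rgb, bins=50):
    """Grayscale the image with the weights from the text, then histogram it."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    grey = 0.299 * r + 0.587 * g + 0.114 * b       # single-channel gray value
    hist, _ = np.histogram(grey, bins=bins, range=(0, 256))
    return hist

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(100, 100, 3))     # stand-in 3-channel image
f_color = color_histogram(rgb)
```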
S302、根据图像的平均幅值,确定纹理特征和颜色特征的权重。S302. Determine weights of the texture feature and the color feature according to the average amplitude of the image.
图像的平均幅值,即图像中每个点的幅值求平均值,参考公式:d̄ = (1/(m×n)) × Σ_i Σ_j d_ij,其中,d_ij指坐标为(i,j)的点的幅值,m和n为图像的尺寸大小,在本实施例中为100×100。The average amplitude of the image is the mean of the amplitudes of all points in the image: d̄ = (1/(m×n)) × Σ_i Σ_j d_ij, where d_ij is the amplitude of the point with coordinates (i, j), and m and n are the dimensions of the image, 100×100 in this embodiment.
平均幅值d̄对应了阈值T_1和T_2(T_1>T_2),其中,T_1和T_2为根据经验设置的、进行特征匹配时纹理特征或颜色特征作为唯一的特征即可对图像进行描述的阈值。具体的,当平均幅值小于T_2时,图像为强颜色图像,图像中的纹理特征的描述能力很弱,在特征匹配时可以不再将纹理特征作为进行匹配的特征;相应的,当平均幅值大于T_1时,图像为强纹理图像,图像中的颜色特征描述能力很弱,在特征匹配时可以不再将颜色特征作为进行匹配的特征。The average amplitude d̄ is compared against the thresholds T_1 and T_2 (T_1 > T_2), which are empirically set thresholds at which the texture feature or the color feature alone suffices to describe the image during feature matching. Specifically, when the average amplitude is less than T_2, the image is a strong-color image: its texture features have little descriptive power, and the texture feature may be excluded from matching; correspondingly, when the average amplitude is greater than T_1, the image is a strong-texture image: its color features have little descriptive power, and the color feature may be excluded from matching.
若d̄>T_1,则该图像属于强纹理图像,ω_1=1,ω_2=0;若d̄<T_2,则该图像属于强颜色图像,ω_1=0,ω_2=1;若T_2≤d̄≤T_1,则可依据公式确定纹理特征和颜色特征所对应的权重:ω_1 = (d̄ − T_2)/(T_1 − T_2),ω_2 = (T_1 − d̄)/(T_1 − T_2)。If d̄ > T_1, the image is a strong-texture image, with ω_1 = 1 and ω_2 = 0; if d̄ < T_2, the image is a strong-color image, with ω_1 = 0 and ω_2 = 1; if T_2 ≤ d̄ ≤ T_1, the weights corresponding to the texture feature and the color feature can be determined by the formulas ω_1 = (d̄ − T_2)/(T_1 − T_2) and ω_2 = (T_1 − d̄)/(T_1 − T_2).
在本发明实施例中,采用了双阈值的方式确定特征对应的权重。对于纹理特征,当d̄的取值大于(T_1+T_2)/2时,即纹理特征为该图像中的主要特征,此时,基于公式ω_1 = (d̄ − T_2)/(T_1 − T_2)、ω_2 = (T_1 − d̄)/(T_1 − T_2),或者ω_1=1,ω_2=0,均会使得ω_1>ω_2,即纹理特征的权重大于颜色特征。相应的,对于颜色特征,当d̄的取值小于(T_1+T_2)/2时,即颜色特征为该图像中的主要特征,此时ω_1<ω_2,即颜色特征的权重大于纹理特征。In the embodiment of the present invention, a double-threshold scheme is used to determine the weights of the features. For the texture feature, when d̄ is greater than (T_1+T_2)/2, the texture feature is the main feature of the image; in this case, both the formulas ω_1 = (d̄ − T_2)/(T_1 − T_2), ω_2 = (T_1 − d̄)/(T_1 − T_2) and the assignment ω_1 = 1, ω_2 = 0 yield ω_1 > ω_2, that is, the texture feature is weighted more heavily than the color feature. Correspondingly, for the color feature, when d̄ is less than (T_1+T_2)/2, the color feature is the main feature of the image, and ω_1 < ω_2, that is, the color feature is weighted more heavily than the texture feature.
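上述双阈值加权策略可以草拟为如下函数(公开文本中区间内的公式以图片形式给出,这里的线性插值是与上下文一致的一种实现假设)。The double-threshold weighting strategy can be sketched as the following function (the in-interval formula appears only as an image in the published text; the linear interpolation here is one assumed realization consistent with the context):

```python
def texture_color_weights(avg_amplitude, t1, t2):
    """Return (w_texture, w_color) from the average amplitude.

    t1 > t2 are empirically chosen thresholds; between them the weights
    are linearly interpolated (an assumed scheme)."""
    assert t1 > t2
    if avg_amplitude > t1:        # strong-texture image
        return 1.0, 0.0
    if avg_amplitude < t2:        # strong-color image
        return 0.0, 1.0
    w_texture = (avg_amplitude - t2) / (t1 - t2)
    return w_texture, 1.0 - w_texture
```

当平均幅值位于两阈值中点之上时,该函数保证纹理权重大于颜色权重,与上文结论一致。When the average amplitude lies above the midpoint of the two thresholds, the function guarantees the texture weight exceeds the color weight, consistent with the conclusion above.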
S303、根据所确定的权重,调整提取的特征在最终特征描述中的权重,最终将加权后的纹理特征及颜色特征的串联拼接作为最终的特征信息,即获得该图像最终的100维的特征:f = (ω_1×f_texture, ω_2×f_color)。S303. Adjust the weights of the extracted features in the final feature description according to the determined weights, and take the concatenation of the weighted texture feature and the weighted color feature as the final feature information, that is, the final 100-dimensional feature of the image: f = (ω_1×f_texture, ω_2×f_color).
S304、基于所获得的最终特征f,通过特征匹配完成图像识别或检索等工作。S304. Based on the obtained final feature f, image recognition, retrieval and similar tasks are completed through feature matching.
可以采用各类特征匹配算法进行特征匹配。例如,可以采用由欧式距离表征不同特征间的相似度,从而获得最终的图像识别结果。假设数据库中有N幅图像,基于上述的特征加权形式,最终获得的特征分别为f_i,i=1,2,…,N,则第i幅与第j幅图像间的欧式距离为:dist(i, j) = ‖f_i − f_j‖₂ = √(Σ_k (f_i,k − f_j,k)²)。根据欧式距离可判断这两幅图像的相似度,其中,欧氏距离越小,表明两幅图像间的相似度越高。Feature matching can be performed with various feature matching algorithms. For example, the similarity between features can be characterized by the Euclidean distance to obtain the final image recognition result. Suppose there are N images in the database and, with the feature weighting above, the obtained features are f_i, i = 1, 2, …, N. The Euclidean distance between the i-th and j-th images is then dist(i, j) = ‖f_i − f_j‖₂ = √(Σ_k (f_i,k − f_j,k)²). The similarity of the two images can be judged from this distance: the smaller the Euclidean distance, the higher the similarity between the two images.
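基于欧式距离的检索过程可以草拟如下(数据库规模与特征维数均为示意)。The Euclidean-distance retrieval step can be sketched as follows (database size and feature dimension are illustrative):

```python
import numpy as np

def best_match(query, database):
    """Return the index of the database feature closest to the query."""
    db = np.asarray(database, dtype=float)
    dists = np.linalg.norm(db - np.asarray(query, dtype=float), axis=1)
    return int(np.argmin(dists)), dists

rng = np.random.default_rng(0)
database = rng.random((8, 100))     # N = 8 weighted 100-dim features
query = database[3] + 0.001         # near-duplicate of entry 3
idx, dists = best_match(query, database)
```

距离最小的数据库条目即为匹配结果。The database entry with the smallest distance is the match.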
在本发明实施例中,通过根据平均幅值对图像的颜色特征和纹理特征的权重进行调整,使得当颜色特征为图像的主要特征时,颜色特征对应的权重更高,而当纹理特征为图像的主要特征时,纹理特征对应的权重更高。由此,增强了图像的主要特征对于图像的描述能力,从而能够增强不同特征在不同的图像中的区分性,提高了特征匹配时的准确度。In the embodiment of the present invention, the weights of the color features and texture features of the image are adjusted according to the average amplitude, so that when the color feature is the main feature of the image, the color feature receives a higher weight, and when the texture feature is the main feature of the image, the texture feature receives a higher weight. This strengthens the ability of the main feature to describe the image, enhances the distinguishability of different features across different images, and improves the accuracy of feature matching.
结合图4,是本发明实施例中给出的又一种图像特征匹配方法,该实施例以人物图像的深度学习特征为例。在本实施例中,基于深度模型对图像提取出不同的深度特征,同时,可基于该深度特征采用传统分类器,如SVM,或者增加全连接层(fully connected layer,fc 层),获得不同特征的置信度。根据置信度对不同的深度特征进行加权,因而增强最终的深度特征的鉴别力。FIG. 4 is still another image feature matching method according to an embodiment of the present invention. The embodiment takes a deep learning feature of a character image as an example. In this embodiment, different depth features are extracted from the image based on the depth model. At the same time, a traditional classifier, such as SVM, or a fully connected layer (fc layer) may be added based on the depth feature to obtain different features. Confidence. Different depth features are weighted according to confidence, thus enhancing the discriminative power of the final depth feature.
在深度特征学习中,对于目标图像,通过深度模型,可以提取用于描述图像的深度特征。对于深度特征,可以从不同的维度来描述图像。例如,深度特征可以从性别的维度,描述图像中人像是男性以及是女性的可能性;也可以从年龄的维度,描述图像中的人像在不同年龄段的可能性。In the depth feature learning, for the target image, the depth feature for describing the image can be extracted by the depth model. For depth features, images can be described from different dimensions. For example, the depth feature can describe the possibility that the portrait is male and female in the image from the gender dimension; it can also describe the possibility of the portrait in the image at different ages from the age dimension.
S401、根据深度模型对目标图像提取深度特征。S401. Extract a depth feature from the target image according to the depth model.
采用VGGM深度模型,提取人头区域的深度特征f1,特征维数为n1,每一维特征可表征为f1 i,i=1,2,…,n1;同时,获得划分为该人的置信度p1。 Using the VGGM depth model, the depth feature f1 of the human head region is extracted, the feature dimension is n1, and each dimension feature can be characterized as f1 i , i=1, 2,..., n1; at the same time, the confidence p1 divided into the person is obtained. .
采用VGGM深度模型,提取人的性别特征f2,特征维数为n2,每一维特征可表征为f2 i,i=1,2,…,n2;同时,获得划分为该性别的置信度p2。Using the VGGM depth model, the gender feature f2 of the person is extracted; the feature dimension is n2, and each dimension can be denoted f2 i , i=1, 2,..., n2; at the same time, the confidence p2 of the classification into that gender is obtained.
采用VGGM深度模型,提取人的年龄特征f3,特征维数为n3,每一维特征可表征为f3 i,i=1,2,…,n3;同时,获得划分为该年龄段的置信度p3。 Using the VGGM depth model, the age feature f3 of the person is extracted, the feature dimension is n3, and each dimension feature can be characterized as f3 i , i=1, 2,..., n3; at the same time, the confidence p3 divided into the age segment is obtained. .
采用VGGM深度模型,提取衣着款式的深度特征f4,特征维数为n4,每一维特征可表征为f4 i,i=1,2,…,n4;同时,获得划分为该类款式的置信度p4。 Using the VGGM depth model, the depth feature f4 of the clothing style is extracted, the feature dimension is n4, and each dimension feature can be characterized as f4 i , i=1, 2,..., n4; at the same time, the confidence level classified into the style is obtained. P4.
S402、根据各个特征对应的置信度,调节该特征对行人描述的权重,即,最终可获得n_d = n1+n2+n3+n4维的特征。S402. Adjust the weight of each feature in the pedestrian description according to its confidence; finally, a feature of n_d = n1+n2+n3+n4 dimensions is obtained.
基于原深度特征提取模型,增加全连接层(fully connected layer,fc层),从而获得该特征映射到不同类上的概率,如,获得某幅图像的性别特征映射到“男”的概率,即获得不同特征的置信度p_i。基于人头、性别、年龄段、衣服款式对应的置信度p1、p2、p3、p4,可将每个特征对应的权重设置为ω_1=p1,ω_2=p2,ω_3=p3,ω_4=p4,从而获得最终的特征F=concat(ω_1×f_1,ω_2×f_2,ω_3×f_3,ω_4×f_4)。Based on the original depth feature extraction model, a fully connected layer (fc layer) is added to obtain the probability that each feature maps to a given class, e.g., the probability that the gender feature of an image maps to "male", that is, the confidence p_i of each feature. Based on the confidences p1, p2, p3 and p4 of the head, gender, age and clothing-style features, the weight of each feature can be set as ω_1 = p1, ω_2 = p2, ω_3 = p3, ω_4 = p4, giving the final feature F = concat(ω_1×f_1, ω_2×f_2, ω_3×f_3, ω_4×f_4).
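上述按置信度加权并拼接的过程可以草拟如下(各特征维数与置信度取值均为示意性假设)。The confidence-weighted concatenation above can be sketched as follows (the feature dimensions and confidence values are illustrative assumptions):

```python
import numpy as np

def weighted_concat(features, confidences):
    """Scale each attribute feature by its confidence, then concatenate."""
    assert len(features) == len(confidences)
    return np.concatenate([p * np.asarray(f, dtype=float)
                           for f, p in zip(features, confidences)])

rng = np.random.default_rng(0)
# stand-in head / gender / age / clothing-style features
f1, f2, f3, f4 = (rng.random(n) for n in (128, 32, 32, 64))
p1, p2, p3, p4 = 0.9, 0.8, 0.6, 0.7   # per-attribute confidences
F = weighted_concat([f1, f2, f3, f4], [p1, p2, p3, p4])
```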
S403、基于所获得的特征,完成特征匹配。S403. Perform feature matching based on the obtained features.
可以采用由欧式距离表征不同特征间的相似度,从而获得最终的图像识别结果。例如有N幅图像,基于上述的特征加权形式,最终获得的特征分别为F i,i=1,2,…,N,则第i幅与第j幅图像间的欧式距离为: The similarity between different features can be characterized by the Euclidean distance to obtain the final image recognition result. For example, there are N images, based on the feature weighting form described above, and the finally obtained features are F i , i=1, 2, . . . , N, then the Euclidean distance between the i-th image and the j-th image is:
dist(i, j) = ‖F_i − F_j‖₂ = √(Σ_k (F_i,k − F_j,k)²),其中F_i,k表示特征F_i的第k维分量。Here F_i,k denotes the k-th component of the feature F_i.
由该距离可判断这两幅图像的相似度。From this distance, the similarity of the two images can be judged.
在一种实现方式中,还可以基于监督学习的方法来确定最终特征。采用传统有监督学习,如SVM,获得变换矩阵W作为输入特征质量的评测矩阵,因此对每个深度特征fi,可得到对应的质量评测值pi=W*fi;若各组特征的维数不同,则可采用PCA等降维法,使得每个特征的维数对齐,因而获得最终的特征F=pooling(p1×f1,p2×f2,p3×f3,p4×f4),其中,pooling可采用取最大值、最小值或平均值。In one implementation, the final feature can also be determined by a supervised learning method. Using traditional supervised learning, such as an SVM, a transformation matrix W is obtained as an evaluation matrix of input feature quality, so for each depth feature fi the corresponding quality evaluation value pi = W*fi can be obtained. If the dimensions of the feature groups differ, a dimensionality-reduction method such as PCA can be used to align the dimensions of the features, giving the final feature F = pooling(p1×f1, p2×f2, p3×f3, p4×f4), where pooling may take the maximum, minimum or average value.
参考图5,为本发明提供的一种图像特征数据生成的方法。通过本实施例中的方法,可以生成用于特征匹配的图像特征数据库,在执行前述实施例中的图像特征匹配时,可以在本方法实施例中方法所生成的图像特征数据库中进行特征匹配,从而实现图像的检索能功能。该方法包括:Referring to FIG. 5, a method for generating image feature data according to the present invention is provided. Through the method in this embodiment, an image feature database for feature matching can be generated. When image feature matching in the foregoing embodiment is performed, feature matching can be performed in the image feature database generated by the method in the method embodiment. Thereby realizing the retrieval function of the image. The method includes:
S501,获取待生成图像的特征信息。S501. Acquire feature information of an image to be generated.
待生成图像即在本发明实施例中需要生成特征信息并存储到数据库中的图像。通过特征提取算法,提取待生成图像的特征信息。在本实施例中,需要提取至少两类不同种类的特征,从而获取每个种类特征相应的特征值。在可能的实现方式中,特征可以是传统特征,如纹理特征、颜色特征、边缘特征等,也可以是深度特征,如深度学习网络提取的图像特征。示例性的,提取颜色特征可以通过颜色直方图、颜色矩、颜色集、颜色聚合向量等方式提取;提取纹理特征可以通过灰度共生矩阵、Tamura纹理特征、自回归纹理模型、小波变换等方法获取;提取形状特征可以通过边界特征法、傅里叶形状描述符法、几何参数法、形状不变矩法等方法提取。The image to be generated is the image whose feature information needs to be generated and stored in the database in the embodiment of the present invention. The feature information of the image to be generated is extracted by a feature extraction algorithm. In this embodiment, at least two different types of features need to be extracted to obtain the feature values corresponding to each type. In possible implementations, the features may be traditional features, such as texture features, color features and edge features, or depth features, such as image features extracted by a deep learning network. Exemplarily, color features can be extracted by means of a color histogram, color moments, a color set, a color coherence vector, and the like; texture features can be obtained by a gray-level co-occurrence matrix, Tamura texture features, an autoregressive texture model, wavelet transform, and the like; shape features can be extracted by a boundary feature method, a Fourier shape descriptor method, a geometric parameter method, a shape invariant moment method, and the like.
S502，对提取的特征设置权重，使得主要特征的权重高于提取的其他特征的权重。在本发明实施例中，通过预设条件来判断提取的特征中某一特征是否为主要特征。所谓主要特征是指该特征对图像的描述能力高于所提取的其他特征。S502. Set weights on the extracted features, so that the weight of the main feature is higher than the weights of the other extracted features. In the embodiment of the present invention, whether a certain one of the extracted features is a main feature is determined by a preset condition. A main feature is a feature whose ability to describe the image is higher than that of the other extracted features.
在一种实现方式中，可以通过预设的类别来确定主要特征。例如，对于特定的目标图像，其纹理特征对于图像的描述能力更高，可以预先设置纹理特征为主要特征。当获取的特征为纹理特征时，则认为该特征为主要特征。又例如，将图像的深度特征中的头部特征预设为主要特征，则当获取的深度特征位于图像中人物的头部位置区域时，即认为该特征为主要特征。In one implementation, the main feature can be determined by a preset category. For example, for a specific target image, the texture feature has a higher ability to describe the image, and the texture feature may be preset as the main feature. When an acquired feature is a texture feature, the feature is considered to be the main feature. For another example, if the head feature among the depth features of the image is preset as the main feature, then when an acquired depth feature lies in the head position region of the person in the image, the feature is considered to be the main feature.
在另一种实现方式中,可以通过可以描述该特征的一种指标数值来确定一类特征是否为主要特征。例如,对于纹理特征,可以通过图像的平均幅值(Dense)、拉普拉斯算子来描述该图像中纹理特征是否为主要特征。通过将图像中每个点的幅值求平均值,获得该图像的平均幅值。图像的平均幅值越大,则认为该图像的纹理特征对于该图像的描述能力越强。当平均幅值超过预设的阈值后,则认为纹理特征为该图像的主要特征。又例如,对于采用VGGM深度模型所提取的深度特征,可以获得该特征相应的置信度。特征相应的置信度越高,则该特征对图像的描述能力越强。当置信度高于预设的阈值时,则认为该置信度相对应的深度特征为主要特征。In another implementation, whether a class of features is a primary feature can be determined by an indicator value that can describe the feature. For example, for a texture feature, whether the texture feature in the image is the main feature can be described by the average amplitude (Dense) of the image and the Laplacian. The average amplitude of the image is obtained by averaging the amplitude of each point in the image. The larger the average amplitude of an image, the stronger the ability of the texture feature of the image to describe the image. When the average amplitude exceeds a predetermined threshold, the texture feature is considered to be the primary feature of the image. As another example, for a depth feature extracted using a VGGM depth model, a corresponding confidence level for the feature can be obtained. The higher the confidence of the feature, the stronger the ability of the feature to describe the image. When the confidence level is higher than the preset threshold, the depth feature corresponding to the confidence is considered to be the main feature.
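The average-amplitude indicator for texture described above can be sketched as follows (an assumed implementation: the gradient operator and the threshold value of 5.0 are illustrative, since the patent does not fix them):

```python
import numpy as np

def average_magnitude(gray):
    """Average gradient magnitude over all pixels: the texture indicator value."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.hypot(gx, gy).mean())

def texture_is_main_feature(gray, threshold=5.0):
    """Treat texture as the main feature when the indicator exceeds the threshold."""
    return average_magnitude(gray) > threshold

flat = np.full((32, 32), 128.0)                            # untextured image
noisy = np.random.default_rng(1).random((32, 32)) * 255.0  # strongly textured
```

A flat image yields an indicator of exactly 0 and is not texture-dominated, while the noisy image far exceeds the threshold; the same pattern (indicator value compared to a preset threshold) applies to the confidence of a depth feature.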
在又一种实现方式中,可以通过描述采集的特征的指标数值来确定其他特征是否为主要特征。例如,当所采集的特征中某一特征外的其他特征的指标数值均低于预设的阈值,则可以认为该特征为主要特征。In yet another implementation, whether the other features are primary features can be determined by describing the index values of the acquired features. For example, when the index value of other features other than one of the collected features is lower than a preset threshold, the feature may be considered as the main feature.
当确定一类特征为主要特征后，通过对提取的特征设置权重，使得该类特征的权重高于其他类型特征的权重。在一种实现方式中，可以通过预设的规则设置权重。例如，当设定某一种类的特征为主要特征后，可以预设与该种类相对应的权重，使得该种类的特征的权重高于其他类型特征的权重。After a type of feature is determined to be the main feature, weights are set on the extracted features such that the weight of this type of feature is higher than the weights of the other types of features. In one implementation, the weights can be set by preset rules. For example, after a certain type of feature is set as the main feature, a weight corresponding to that type may be preset, such that the weight of that type of feature is higher than the weights of the other types of features.
在一种实现方式中，可以根据该种类特征相对应的指标值，自适应地确定该类特征的权重。例如，可以为该类特征作为主要特征所对应的指标值设定经验阈值，该类特征的指标值与对应经验阈值的差值与该类特征对应的权重正相关，即当该类特征的指标值大于对应的经验阈值时，差值越大，该类特征对应的权重越大。In one implementation, the weight of a type of feature may be adaptively determined according to the indicator value corresponding to that type of feature. For example, an empirical threshold may be set for the indicator value at which this type of feature counts as the main feature; the difference between the indicator value of this type of feature and the corresponding empirical threshold is positively correlated with the weight corresponding to this type of feature, that is, when the indicator value is greater than the corresponding empirical threshold, the larger the difference, the larger the weight corresponding to this type of feature.
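A minimal sketch of such adaptive weighting is a clamped ramp between two thresholds. The linear form between the thresholds is an assumption for illustration — the text above only requires the weight to grow with the margin over the empirical threshold — while the boundary behavior (0 below the lower threshold, 1 above the upper one) mirrors the claims:

```python
def adaptive_weight(indicator, t_low, t_high):
    """Map an indicator value to a weight in [0, 1].

    Below t_low the feature type gets weight 0; above t_high the weight
    saturates at 1; in between, the weight grows linearly with the margin
    over t_low (assumed interpolation -- the source only requires a
    positive correlation).
    """
    if indicator < t_low:
        return 0.0
    if indicator > t_high:
        return 1.0
    return (indicator - t_low) / (t_high - t_low)
```

Any monotone map over the margin (e.g. a sigmoid) would satisfy the same positive-correlation requirement; the ramp is simply the smallest such choice.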
S503、根据所确定的权重对提取的特征进行加权,得到加权后的特征信息,并将加权后的特征信息存储至特征数据库中。S503. Weight the extracted features according to the determined weights to obtain weighted feature information, and store the weighted feature information in the feature database.
对获取的特征进行加权，从而得到最终加权后的特征信息。在一种实现方式中，将加权后的不同种类的特征进行串联拼接，或在特征维数对齐后取平均值或最大/最小值，从而得到最终的特征信息。例如，提取了50维的纹理特征和50维的颜色特征，在分别对纹理特征和颜色特征进行加权后，可以获得加权后该图像最终的100维特征。The acquired features are weighted to obtain the final weighted feature information. In one implementation, the weighted features of different types are concatenated in series, or, after their dimensions are aligned, the average or the maximum/minimum values are taken, to obtain the final feature information. For example, a 50-dimensional texture feature and a 50-dimensional color feature are extracted; after the texture feature and the color feature are weighted respectively, the final weighted 100-dimensional feature of the image can be obtained.
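The 50-dim + 50-dim concatenation example above can be sketched as follows (illustrative code; the stand-in vectors replace real extracted features):

```python
import numpy as np

def fuse_by_concatenation(features, weights):
    """Weight each feature vector, then concatenate into one final descriptor."""
    return np.concatenate([w * np.asarray(f, dtype=float)
                           for f, w in zip(features, weights)])

texture = np.ones(50)   # stand-in for a 50-dimensional texture feature
color = np.ones(50)     # stand-in for a 50-dimensional color feature
fused = fuse_by_concatenation([texture, color], [0.7, 0.3])
# fused is the final weighted 100-dimensional feature of the image
```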
获取了加权后的特征信息后,将加权后的特征信息存储至特征数据库中。特征数据库中的特征信息可以用于前述实施例中待检索图像的特征匹配。After the weighted feature information is obtained, the weighted feature information is stored in the feature database. The feature information in the feature database can be used for feature matching of the image to be retrieved in the foregoing embodiment.
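Feature matching against the stored weighted descriptors can then be a nearest-neighbor search. The sketch below uses Euclidean distance, which is an assumption for illustration — the patent does not prescribe a particular metric:

```python
import numpy as np

def best_match(query, database):
    """Return the index of the stored descriptor closest to the query."""
    db = np.asarray(database, dtype=float)
    distances = np.linalg.norm(db - np.asarray(query, dtype=float), axis=1)
    return int(np.argmin(distances))

feature_database = [
    [1.0, 0.0],   # stored weighted descriptor of image 0
    [0.0, 1.0],   # image 1
    [0.9, 0.1],   # image 2
]
```

Because both the query and the stored entries are weighted by the same scheme, the main feature dominates the distance, which is the point of the weighting step.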
在本实施例中，在进行特征加权和得到加权后的特征信息时，可以参考前述实施例中S301、S302、S303或者S401、S402、S403中的特征加权和特征信息生成的方法。基于相同的发明构思，前述特征匹配中的特征加权方法与本实施例中的特征加权方法相对应。在前述图1所对应的系统中，前述特征匹配方法实施例中的方法由特征提取模块1001、特征加权模块1003、特征匹配模块1004完成；特征信息生成实施例中的方法由特征提取模块1001、特征存储模块1002、特征加权模块1003完成。在同一系统中，为了保证特征匹配和特征数据库中的特征信息的统一性，特征匹配和特征信息生成所对应的特征加权方法应当一致。In this embodiment, when performing feature weighting and obtaining the weighted feature information, reference may be made to the feature weighting and feature information generation methods in S301, S302, S303 or S401, S402, S403 in the foregoing embodiments. Based on the same inventive concept, the feature weighting method in the aforementioned feature matching corresponds to the feature weighting method in this embodiment. In the system corresponding to the foregoing FIG. 1, the method in the foregoing feature matching method embodiments is performed by the feature extraction module 1001, the feature weighting module 1003, and the feature matching module 1004; the method in the feature information generation embodiment is performed by the feature extraction module 1001, the feature storage module 1002, and the feature weighting module 1003. In the same system, in order to ensure consistency between feature matching and the feature information in the feature database, the feature weighting methods used for feature matching and for feature information generation should be identical.
请参照图6,为本发明实施例提供的一种图像处理装置的结构示意图。参考图1所示的架构,本实施例中的装置可以是一个独立于图1中的特征存储模块1002以及数据库的装置,也可以是与特征存储模块1002以及数据库集成在同一装置中。本实施例中的图像处理装置用于将目标图像进行特征提取和处理后,在数据库中进行特征匹配。如图所示,该图像处理装置包括特征提取模块601、特征加权模块602、特征匹配模块603。其中,各个模块的描述如下:FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. Referring to the architecture shown in FIG. 1, the apparatus in this embodiment may be a device independent of the feature storage module 1002 and the database in FIG. 1, or may be integrated in the same device as the feature storage module 1002 and the database. The image processing apparatus in this embodiment is configured to perform feature matching in a database after feature extraction and processing of the target image. As shown, the image processing apparatus includes a feature extraction module 601, a feature weighting module 602, and a feature matching module 603. Among them, the description of each module is as follows:
特征提取模块601,用于获取目标图像的至少两类特征;a feature extraction module 601, configured to acquire at least two types of features of the target image;
特征加权模块602,用于确定所述至少两类特征中每类特征所对应的权重,并根据设置的权重对所述至少两类特征进行加权,得到加权后的特征信息;The feature weighting module 602 is configured to determine weights corresponding to each of the at least two types of features, and weight the at least two types of features according to the set weights to obtain weighted feature information.
特征匹配模块603,根据加权后的特征信息,对图像进行特征匹配。The feature matching module 603 performs feature matching on the image according to the weighted feature information.
需要说明的是,各个模块的实现还可以对应参照图2、图3或者图4所示的方法实施例的相应描述,执行上述实施例中所执行的方法和功能。例如,参照图2,特征提取模块601的功能可以参考S201中的方法;特征加权模块602的功能可以参考S202中的方法,以及S203中对特征进行加权得到加权后特征信息的方法;特征匹配模块603可以参考S203中进行图像特征匹配的方法。It should be noted that the implementation of each module may also perform the methods and functions performed in the foregoing embodiments corresponding to the corresponding descriptions of the method embodiments shown in FIG. 2, FIG. 3 or FIG. For example, referring to FIG. 2, the function of the feature extraction module 601 can refer to the method in S201; the function of the feature weighting module 602 can refer to the method in S202, and the method for weighting the feature in S203 to obtain the weighted feature information; the feature matching module 603 can refer to the method of performing image feature matching in S203.
请参照图7，为本发明实施例提供的一种图像处理装置的结构示意图。参考图1所示的架构，本实施例中的装置可以是与图6所述装置相独立的装置，也可以是集成在图6所述装置中、实现本实施例功能的装置。本实施例中的图像处理装置用于将图像数据进行特征提取后，对特征进行加权，生成加权后的特征信息并存储于数据库中。如图所示，该图像处理装置包括特征提取模块701、特征加权模块702、特征存储模块703以及特征信息数据库704。其中，各个模块的描述如下：FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. Referring to the architecture shown in FIG. 1, the apparatus in this embodiment may be a device independent of the apparatus described in FIG. 6, or may be integrated into the apparatus described in FIG. 6 to implement the functions of this embodiment. The image processing apparatus in this embodiment is configured to perform feature extraction on image data, weight the features, generate the weighted feature information, and store it in a database. As shown in the figure, the image processing apparatus includes a feature extraction module 701, a feature weighting module 702, a feature storage module 703, and a feature information database 704. The modules are described as follows:
特征提取模块701,用于获取图像的至少两类特征;a feature extraction module 701, configured to acquire at least two types of features of the image;
特征加权模块702,用于确定所述至少两类特征中每类特征所对应的权重,并根据设置的权重对所述至少两类特征进行加权,得到加权后的特征信息;The feature weighting module 702 is configured to determine weights corresponding to each of the at least two types of features, and weight the at least two types of features according to the set weights to obtain weighted feature information.
特征存储模块703,将所述加权后的特征信息存储到特征数据库中;The feature storage module 703 is configured to store the weighted feature information in a feature database;
数据库704,用于存储图像数据加权后的特征信息。The database 704 is configured to store the feature information weighted by the image data.
需要说明的是,各个模块的实现还可以对应参照图5所示的方法实施例的相应描述,执行上述实施例中所执行的方法和功能。同时,特征提取模块701和特征加权模块702还可以参考图2、图3和图4中相对应的待检索图像的特征提取和特征处理的方法。It should be noted that the implementation of each module may also perform the methods and functions performed in the foregoing embodiments in accordance with the corresponding description of the method embodiments shown in FIG. 5. At the same time, the feature extraction module 701 and the feature weighting module 702 can also refer to the corresponding feature extraction and feature processing methods of the image to be retrieved in FIG. 2, FIG. 3 and FIG.
参考图8，是本发明给出的一种前述方法实施例的装置实施例，该装置可以执行前述图2、图3、图4或者图5所对应的方法，也可以是前述图6或者图7所述装置的一种硬件实现形式。Referring to FIG. 8, it is an apparatus embodiment of the foregoing method embodiments of the present invention. The apparatus may perform the methods corresponding to the foregoing FIG. 2, FIG. 3, FIG. 4 or FIG. 5, or may be a hardware implementation of the apparatus described in the foregoing FIG. 6 or FIG. 7.
本发明实施例以一种通用计算机系统环境作为示例来对装置进行说明。众所周知，该装置还可以采用其他异构计算硬件架构来实现类似的功能，包括但不限于：个人计算机、服务器计算机、多处理器系统、基于微处理器的系统、可编程消费电器、网络PC、小型计算机、大型计算机、包括任何上述系统或设备的分布式计算环境，等等。The embodiments of the present invention describe the apparatus using a general-purpose computer system environment as an example. As is well known, the apparatus can also adopt other heterogeneous computing hardware architectures to achieve similar functionality, including, without limitation, personal computers, server computers, multiprocessor systems, microprocessor-based systems, programmable consumer appliances, network PCs, minicomputers, mainframe computers, distributed computing environments including any of the above systems or devices, and so on.
可以理解的,本发明实施例也可以以其他能够实现类似计算机功能的终端装置实现,例如智能手机,PAD,智能穿戴设备,等等。It can be understood that the embodiments of the present invention can also be implemented by other terminal devices capable of implementing similar computer functions, such as a smart phone, a PAD, a smart wearable device, and the like.
装置800的元件可以包括,但并不限制于,处理单元820,系统存储器830,和系统总线810。系统总线将包括系统存储器的各种系统元件与处理单元820相耦合。系统总线810可以是几种类型总线结构中的任意一种总线,这些总线可以包括存储器总线或存储器控制器,外围总线,和使用一种总线结构的局部总线。总线结构可以包括工业标准结构(ISA)总线,微通道结构(MCA)总线,扩展ISA(EISA)总线,视频电子标准协会(VESA)局域总线,以及外围器件互联(PCI)总线。Elements of device 800 may include, but are not limited to, processing unit 820, system memory 830, and system bus 810. The system bus couples various system components including system memory to processing unit 820. System bus 810 can be any of several types of bus structures, which can include a memory bus or memory controller, a peripheral bus, and a local bus using a bus structure. The bus structure may include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Extended ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
处理单元820可以包含前述实施例中所列举的如中央处理器（Central Processing Unit，CPU）、图形处理器（Graphics Processing Unit，GPU）或数字信号处理器（Digital Signal Processing，DSP）等类型的互为异构的处理器。其中，在一种实施方式中，中央处理器可用于执行前述实施例中的方法步骤，如前述图2、图3、图4或者图5所对应的方法步骤。The processing unit 820 may include heterogeneous processors of the types listed in the foregoing embodiments, such as a central processing unit (CPU), a graphics processing unit (GPU), or a digital signal processing (DSP) processor. In one implementation, the central processing unit can be used to perform the method steps in the foregoing embodiments, such as the method steps corresponding to the foregoing FIG. 2, FIG. 3, FIG. 4 or FIG. 5.
装置800一般包括多种装置可读媒介。装置可读媒介可以是任何装置800可有效访问的媒介，并包括易失性或非易失性媒介，以及可拆卸或非拆卸的媒介。例如，但并不限制于，装置可读媒介可以包括装置存储媒介和通讯媒介。装置存储媒介包括易失性和非易失性、可拆卸和非拆卸媒介，这些媒介可以采用存储诸如装置可读指令、数据结构、程序模块或其他数据的信息的任何方法或技术来实现。装置存储媒介包括，但并不限制于，RAM、ROM、EEPROM、闪存存储器或其他存储器技术，或者硬盘存储、固态硬盘存储、光盘存储、磁盘盒、磁盘存储或其它存储设备，或任何其它可以存储所要求信息且能够被装置800访问的媒介。通讯媒介一般包括嵌入的计算机可读指令、数据结构、程序模块或在调制数据信号（例如，载波或其他传输机制）中的其他数据，并且还包括任何信息传递的媒介。上述任何组合也应该包括在装置可读媒介的范围内。Device 800 generally includes a variety of device-readable media. Device-readable media can be any media that can be effectively accessed by device 800, and include volatile and non-volatile media, as well as removable and non-removable media. For example, but not limited to, device-readable media may comprise device storage media and communication media. Device storage media include volatile and non-volatile, removable and non-removable media implemented by any method or technology for storing information such as device-readable instructions, data structures, program modules, or other data. Device storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, hard disk storage, solid-state disk storage, optical disk storage, magnetic cassettes, magnetic disk storage or other storage devices, or any other medium that can store the required information and can be accessed by device 800. Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal (for example, a carrier wave or other transport mechanism), and also include any information delivery media. Combinations of any of the above should also be included within the scope of device-readable media.
系统存储器830包括装置存储媒介，它可以是易失性和非易失性存储器，例如只读存储器（ROM）831和随机存取存储器（RAM）832。基本输入/输出系统833（BIOS）一般存储于ROM 831中，包含着基本的例行程序，它有助于装置800中各元件之间的信息传输。RAM 832一般包含着数据和/或程序模块，它可以被处理单元820即时访问和/或立即操作。例如，但并不限制于，图8说明了操作系统834、应用程序835、其他程序模块836和程序数据837。System memory 830 includes device storage media, which can be volatile and non-volatile memory, such as read-only memory (ROM) 831 and random access memory (RAM) 832. The basic input/output system 833 (BIOS) is typically stored in ROM 831 and contains basic routines that facilitate the transfer of information between elements in device 800. RAM 832 typically contains data and/or program modules that can be immediately accessed and/or operated on by processing unit 820. For example, but not limited to, FIG. 8 illustrates operating system 834, application programs 835, other program modules 836, and program data 837.
装置800也可以包括其他可拆卸/非拆卸、易失性/非易失性的装置存储媒介。仅仅是一个实例，图8说明了硬盘存储器841，它可以是非拆卸和非易失性的可读写磁媒介；外部存储器851，它可以是可拆卸和非易失性的各类外部存储器，例如光盘、磁盘、闪存或者移动硬盘等。硬盘存储器841一般是通过非拆卸存储接口（例如，接口840）与系统总线810相连接，外部存储器一般通过可拆卸存储接口（例如，接口860）与系统总线810相连接。Device 800 may also include other removable/non-removable, volatile/non-volatile device storage media. As just one example, FIG. 8 illustrates hard disk storage 841, which may be a non-removable and non-volatile readable and writable magnetic medium, and external memory 851, which may be any of various removable and non-volatile external memories, such as an optical disk, a magnetic disk, a flash memory, or a portable hard disk. Hard disk storage 841 is generally connected to system bus 810 through a non-removable storage interface (for example, interface 840), and the external memory is generally connected to system bus 810 through a removable storage interface (for example, interface 860).
上述所讨论的以及图8所示的驱动器和它相关的装置存储媒介提供了装置可读指令、数据结构、程序模块和装置800的其它数据的存储。例如，硬盘驱动器841存储有操作系统842、应用程序843、其它程序模块844以及程序数据845。值得注意的是，这些元件可以与操作系统834、应用程序835、其他程序模块836以及程序数据837相同，也可以不同。The drives discussed above and illustrated in FIG. 8, and their associated device storage media, provide storage of device-readable instructions, data structures, program modules, and other data for device 800. For example, hard disk drive 841 is illustrated as storing operating system 842, application programs 843, other program modules 844, and program data 845. It should be noted that these elements may be the same as, or different from, operating system 834, application programs 835, other program modules 836, and program data 837.
在本实施例中，前述实施例中的方法或者上一实施例中逻辑模块的功能可以通过存储在装置存储媒介中的代码或者可读指令实现，由处理单元820读取所述代码或者可读指令从而执行所述方法。In this embodiment, the methods in the foregoing embodiments, or the functions of the logic modules in the previous embodiment, may be implemented by code or readable instructions stored in the device storage media; the processing unit 820 reads the code or readable instructions to perform the methods.
当本装置执行前述图5所对应的方法,或者作为前述实施例中图7所对应的装置时,前述的存储媒介,例如硬盘驱动器841或者外部存储器851可以存储有前述实施例中的特征数据库。When the apparatus performs the method corresponding to the foregoing FIG. 5 or the apparatus corresponding to FIG. 7 in the foregoing embodiment, the aforementioned storage medium, such as the hard disk drive 841 or the external memory 851, may store the feature database in the foregoing embodiment.
用户可以通过各类输入设备861向装置800输入命令和信息。各种输入设备经常都是通过用户输入接口860与处理单元820相连接，用户输入接口860与系统总线相耦合，但也可以通过其他接口和总线结构相连接，例如并行接口或通用串行总线（USB）。显示设备890也可以通过接口（例如，视频接口890）与系统总线810相连接。此外，计算设备800也可以包括各类外围输出设备820，输出设备可以通过输出接口880等来连接。The user can input commands and information to device 800 through various types of input devices 861. The various input devices are often connected to processing unit 820 through a user input interface 860 that is coupled to the system bus, but they may also be connected through other interface and bus structures, such as a parallel interface or a universal serial bus (USB). Display device 890 can also be connected to system bus 810 via an interface (for example, video interface 890). In addition, computing device 800 can also include various types of peripheral output devices 820, which can be connected through output interface 880 or the like.
装置800可以使用逻辑连接与一个或多个计算设备相连接，例如远程计算机870。远程计算节点包括装置、计算节点、服务器、路由器、网络PC、等同的设备或其它通用的网络结点，并且一般包括许多或所有与装置800有关的上述所讨论的元件。结合前述图1所描述的架构，远程计算节点可以是从节点、计算节点或者其他装置。在图8中所说明的逻辑连接包括局域网（LAN）和广域网（WAN），也可以包括其它网络。通过逻辑连接，装置可以与其他节点实现本发明中各主体之间的交互。例如，可以通过与用户的逻辑链接进行任务信息和数据的传输，从而获取用户的待分配任务；通过和计算节点的逻辑链接进行资源数据的传输以及任务分配命令的传输，从而实现各个节点的资源信息的获取以及任务的分配。Device 800 can be connected to one or more computing devices, such as remote computer 870, using logical connections. Remote computing nodes include devices, computing nodes, servers, routers, network PCs, equivalent devices, or other common network nodes, and generally include many or all of the elements discussed above in connection with device 800. In the architecture described above with respect to FIG. 1, the remote computing node can be a slave node, a computing node, or another device. The logical connections illustrated in FIG. 8 include a local area network (LAN) and a wide area network (WAN), and may also include other networks. Through the logical connections, the device can realize the interaction between the parties of the present invention with other nodes. For example, task information and data can be transmitted through a logical link with the user, so as to obtain the user's tasks to be assigned; resource data and task assignment commands can be transmitted through logical links with the computing nodes, so as to obtain the resource information of each node and assign the tasks.
本领域技术人员应该可以意识到,在上述一个或多个示例中,本发明所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读介质中或者作为计算机可读介质上的一个或多个指令或代码进行传输。计算机可读介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是通用或专用计算机能够存取的任何可用介质。Those skilled in the art will appreciate that in one or more examples described above, the functions described herein can be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium. Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another. A storage medium may be any available media that can be accessed by a general purpose or special purpose computer.
以上所述的具体实施方式，对本发明的目的、技术方案和有益效果进行了进一步详细说明，所应理解的是，以上所述仅为本发明的具体实施方式而已，并不用于限定本发明的保护范围，凡在本发明的技术方案的基础之上，所做的任何修改、等同替换、改进等，均应包括在本发明的保护范围之内。The specific embodiments described above further describe the objectives, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement, and the like made on the basis of the technical solutions of the present invention shall be included within the protection scope of the present invention.

Claims (22)

  1. 一种特征匹配的方法,其特征在于,所述方法包括,A method of feature matching, characterized in that the method comprises
    获取图像的特征信息,所述特征信息包含至少两类特征,所述至少两类特征中包含第一类特征;Obtaining feature information of the image, where the feature information includes at least two types of features, and the at least two types of features include the first type of features;
    确定所述至少两类特征中每类特征所对应的权重，其中，所述第一类特征对应的权重大于所述至少两类特征中其他特征对应的权重，所述第一特征为所述至少两类特征中的对所述图像描述能力大于所述至少两类特征中其他特征的特征；Determining a weight corresponding to each of the at least two types of features, wherein the weight corresponding to the first type of feature is greater than the weights corresponding to the other features of the at least two types of features, and the first feature is the feature, among the at least two types of features, whose ability to describe the image is greater than that of the other features of the at least two types of features;
    根据所述权重,对所述至少两类特征进行加权,得到加权后的特征信息;And weighting the at least two types of features according to the weights to obtain weighted feature information;
    根据所述加权后的特征信息,对所述图像进行特征匹配。Feature matching is performed on the image according to the weighted feature information.
  2. 根据权利要求1所述方法，其特征在于，所述第一类特征对应的权重根据所述第一类特征的第一指标值确定，所述第一指标值用于指示所述第一特征对于所述图像的描述能力。The method according to claim 1, wherein the weight corresponding to the first type of feature is determined according to a first indicator value of the first type of feature, and the first indicator value is used to indicate the ability of the first feature to describe the image.
  3. 根据权利要求2所述方法,其特征在于,所述第一类特征为纹理特征,所述第一指标值为图像的平均幅值。The method of claim 2 wherein said first type of feature is a texture feature and said first indicator value is an average magnitude of the image.
  4. 根据权利要求2或3所述方法,其特征在于,确定所述至少两类特征中每类特征所对应的权重包括:The method according to claim 2 or 3, wherein determining the weight corresponding to each of the at least two types of features comprises:
    设（公式图式PCTCN2018125732-appb-100001）为所述第一类特征的第一指标值，T1、T2为预设阈值；当该指标值大于或等于T2且小于或等于T1时，按（公式图式PCTCN2018125732-appb-100003）所示公式确定所述第一类特征的权重；当该指标值大于T1时，ω1=1；当该指标值小于T2时，ω1=0。Let the first indicator value of the first type of feature be denoted as in formula image PCTCN2018125732-appb-100001, with T1 and T2 being preset thresholds; when the indicator value is greater than or equal to T2 and less than or equal to T1, the weight of the first type of feature is determined by the formula shown in formula image PCTCN2018125732-appb-100003; when the indicator value is greater than T1, ω1 = 1; when the indicator value is less than T2, ω1 = 0.
  5. 根据权利要求1或2所述方法,其特征在于,所述特征为深度特征,所述确定所述至少两类特征中每类特征所对应的权重具体包括:The method according to claim 1 or 2, wherein the feature is a depth feature, and the determining the weight corresponding to each of the at least two types of features comprises:
    根据每类特征的置信度确定对应的权重,其中,所述置信度用于描述所述深度特征映射到对应的预设区间的可能性。A corresponding weight is determined according to a confidence level of each type of feature, wherein the confidence is used to describe a possibility that the depth feature is mapped to a corresponding preset interval.
  6. 根据权利要求1所述方法,其特征在于,所述第一类特征对应的权重根据所述第一类特征的特征类型确定。The method according to claim 1, wherein the weight corresponding to the first type of feature is determined according to a feature type of the first type of feature.
  7. 一种特征数据的生成方法,其特征在于,所述方法包括:A method for generating feature data, the method comprising:
    获取图像的特征信息,所述特征信息包含至少两类特征,所述至少两类特征中包含第一类特征;Obtaining feature information of the image, where the feature information includes at least two types of features, and the at least two types of features include the first type of features;
    确定所述至少两类特征中每类特征所对应的权重，其中，所述第一类特征对应的权重大于所述至少两类特征中其他特征对应的权重，所述第一特征为所述至少两类特征中的对所述图像描述能力大于所述至少两类特征中其他特征的特征；Determining a weight corresponding to each of the at least two types of features, wherein the weight corresponding to the first type of feature is greater than the weights corresponding to the other features of the at least two types of features, and the first feature is the feature, among the at least two types of features, whose ability to describe the image is greater than that of the other features of the at least two types of features;
    根据所述权重,对所述至少两类特征进行加权,得到加权后的特征信息;And weighting the at least two types of features according to the weights to obtain weighted feature information;
    将所述加权后的特征信息存储到特征数据库。The weighted feature information is stored in a feature database.
  8. 根据权利要求7所述方法，其特征在于，所述第一类特征对应的权重根据所述第一类特征的第一指标值确定，所述第一指标值用于指示所述第一特征对于所述图像的描述能力。The method according to claim 7, wherein the weight corresponding to the first type of feature is determined according to a first indicator value of the first type of feature, and the first indicator value is used to indicate the ability of the first feature to describe the image.
  9. 根据权利要求8所述方法，其特征在于，所述第一类特征为纹理特征，所述第一指标值为图像的平均幅值。The method according to claim 8, wherein the first type of feature is a texture feature, and the first indicator value is an average amplitude of the image.
  10. 根据权利要求8或9所述方法,其特征在于,确定所述至少两类特征中每类特征所对应的权重包括:The method according to claim 8 or 9, wherein determining the weight corresponding to each of the at least two types of features comprises:
    设（公式图式PCTCN2018125732-appb-100006）为所述第一类特征的第一指标值，T1、T2为预设阈值；当该指标值大于或等于T2且小于或等于T1时，按（公式图式PCTCN2018125732-appb-100008）所示公式确定所述第一类特征的权重；当该指标值大于T1时，ω1=1；当该指标值小于T2时，ω1=0。Let the first indicator value of the first type of feature be denoted as in formula image PCTCN2018125732-appb-100006, with T1 and T2 being preset thresholds; when the indicator value is greater than or equal to T2 and less than or equal to T1, the weight of the first type of feature is determined by the formula shown in formula image PCTCN2018125732-appb-100008; when the indicator value is greater than T1, ω1 = 1; when the indicator value is less than T2, ω1 = 0.
  11. 根据权利要求7或8所述方法,其特征在于,所述特征为深度特征,所述确定所述至少两类特征中每类特征所对应的权重具体包括:The method according to claim 7 or 8, wherein the feature is a depth feature, and the determining the weight corresponding to each of the at least two types of features comprises:
    根据每类特征的置信度确定对应的权重,其中,所述置信度用于描述所述深度特征映射到对应的预设区间的可能性。A corresponding weight is determined according to a confidence level of each type of feature, wherein the confidence is used to describe a possibility that the depth feature is mapped to a corresponding preset interval.
  12. 根据权利要求7所述方法,其特征在于,所述第一类特征对应的权重根据所述第一类特征的特征类型确定。The method according to claim 7, wherein the weight corresponding to the first type of feature is determined according to a feature type of the first type of feature.
  13. 一种特征匹配装置,其特征在于,所述装置包括:A feature matching device, characterized in that the device comprises:
    特征提取模块,用于获取图像的特征信息,所述特征信息包含至少两类特征,所述至少两类特征中包含第一类特征;a feature extraction module, configured to acquire feature information of an image, where the feature information includes at least two types of features, and the at least two types of features include a first type of feature;
    特征加权模块，用于确定所述至少两类特征中每类特征所对应的权重，其中，所述第一类特征对应的权重大于所述至少两类特征中其他特征对应的权重，所述第一特征为所述至少两类特征中的对所述图像描述能力大于所述至少两类特征中其他特征的特征；以及根据所述权重，对所述至少两类特征进行加权，得到加权后的特征信息；a feature weighting module, configured to determine a weight corresponding to each of the at least two types of features, wherein the weight corresponding to the first type of feature is greater than the weights corresponding to the other features of the at least two types of features, and the first feature is the feature, among the at least two types of features, whose ability to describe the image is greater than that of the other features of the at least two types of features; and configured to weight the at least two types of features according to the weights to obtain weighted feature information;
    特征匹配模块,用于根据所述加权后的特征信息,对所述图像进行特征匹配。And a feature matching module, configured to perform feature matching on the image according to the weighted feature information.
  14. 根据权利要求13所述装置，其特征在于，所述第一类特征对应的权重根据所述第一类特征的第一指标值确定，所述第一指标值用于指示所述第一特征对于所述图像的描述能力。The apparatus according to claim 13, wherein the weight corresponding to the first type of feature is determined according to a first indicator value of the first type of feature, and the first indicator value is used to indicate the ability of the first feature to describe the image.
  15. 根据权利要求14所述装置,其特征在于,所述特征加权模块确定所述至少两类特征中每类特征所对应的权重包括:The device according to claim 14, wherein the feature weighting module determines that the weight corresponding to each of the at least two types of features comprises:
    [Figure PCTCN2018125732-appb-100011]为所述第一类特征的第一指标值，T1、T2为预设阈值；当[Figure PCTCN2018125732-appb-100012]大于或等于T2且小于或等于T1时，确定所述第一类特征的权重为[Figure PCTCN2018125732-appb-100013]；当[Figure PCTCN2018125732-appb-100014]大于T1时，ω1=1；当[Figure PCTCN2018125732-appb-100015]小于T2时，ω1=0。[Figure PCTCN2018125732-appb-100011] is the first indicator value of the first type of feature, and T1 and T2 are preset thresholds; when [Figure PCTCN2018125732-appb-100012] is greater than or equal to T2 and less than or equal to T1, the weight of the first type of feature is determined as [Figure PCTCN2018125732-appb-100013]; when [Figure PCTCN2018125732-appb-100014] is greater than T1, ω1=1; and when [Figure PCTCN2018125732-appb-100015] is less than T2, ω1=0.
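The thresholding in claim 15 fixes the weight at the endpoints (ω1=1 above T1, ω1=0 below T2), but the formula between the thresholds is an image placeholder in this text. A minimal sketch, assuming linear interpolation between T2 and T1 (that interpolation is an assumption, not taken from the source):

```python
def feature_weight(s, t1, t2):
    # Endpoints fixed by the claim: weight 1 above T1, weight 0 below T2.
    # The in-between formula is an unreproduced image in the source, so
    # linear interpolation between the thresholds is assumed here.
    if s > t1:
        return 1.0
    if s < t2:
        return 0.0
    return (s - t2) / (t1 - t2)
```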
  16. 一种特征数据生成装置,其特征在于,所述装置包括:A feature data generating device, characterized in that the device comprises:
    特征提取模块,用于获取图像的特征信息,所述特征信息包含至少两类特征,所述至少两类特征中包含第一类特征;a feature extraction module, configured to acquire feature information of an image, where the feature information includes at least two types of features, and the at least two types of features include a first type of feature;
    特征加权模块，用于确定所述至少两类特征中每类特征所对应的权重，其中，所述第一类特征对应的权重大于所述至少两类特征中其他特征对应的权重，所述第一特征为所述至少两类特征中的对所述图像描述能力大于所述至少两类特征中其他特征的特征；以及根据所述权重，对所述至少两类特征进行加权，得到加权后的特征信息；a feature weighting module, configured to determine a weight corresponding to each of the at least two types of features, wherein the weight corresponding to the first type of feature is greater than the weights corresponding to the other features of the at least two types of features, and the first type of feature is the feature, among the at least two types of features, whose ability to describe the image is greater than that of the other features; and to weight the at least two types of features according to the weights, to obtain weighted feature information;
    特征存储模块，用于将所述加权后的特征信息存储到特征数据库。a feature storage module, configured to store the weighted feature information into a feature database.
  17. 根据权利要求16所述装置，其特征在于，所述第一类特征对应的权重根据所述第一类特征的第一指标值确定，所述第一指标值用于指示所述第一特征对于所述图像的描述能力。The device according to claim 16, wherein the weight corresponding to the first type of feature is determined according to a first indicator value of the first type of feature, and the first indicator value is used to indicate the ability of the first type of feature to describe the image.
  18. 根据权利要求17所述装置,其特征在于,The device according to claim 17, wherein
    所述特征加权模块确定所述至少两类特征中每类特征所对应的权重包括：the determining, by the feature weighting module, of the weight corresponding to each of the at least two types of features comprises:
    [Figure PCTCN2018125732-appb-100016]为所述第一类特征的第一指标值，T1、T2为预设阈值；当[Figure PCTCN2018125732-appb-100017]大于或等于T2且小于或等于T1时，确定所述第一类特征的权重为[Figure PCTCN2018125732-appb-100018]；当[Figure PCTCN2018125732-appb-100019]大于T1时，ω1=1；当[Figure PCTCN2018125732-appb-100020]小于T2时，ω1=0。[Figure PCTCN2018125732-appb-100016] is the first indicator value of the first type of feature, and T1 and T2 are preset thresholds; when [Figure PCTCN2018125732-appb-100017] is greater than or equal to T2 and less than or equal to T1, the weight of the first type of feature is determined as [Figure PCTCN2018125732-appb-100018]; when [Figure PCTCN2018125732-appb-100019] is greater than T1, ω1=1; and when [Figure PCTCN2018125732-appb-100020] is less than T2, ω1=0.
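Claims 16 to 18 describe the same weighting applied before storage rather than matching. A minimal sketch, assuming weighting means scaling each feature vector by its type's weight and that the feature database is a plain in-memory mapping (both assumptions; the claims specify neither):

```python
def weighted_info(features, weights):
    # features: one vector per feature type; weights: one weight per type.
    # Scale each type's vector and concatenate into the stored representation.
    out = []
    for feat, w in zip(features, weights):
        out.extend(w * x for x in feat)
    return out

feature_db = {}  # image identifier -> weighted feature information

def store(image_id, features, weights):
    # The feature storage module of claim 16: weight, then persist.
    feature_db[image_id] = weighted_info(features, weights)
```

Because the weighting is baked in at storage time, later matching (claims 13 to 15) can compare query and database vectors directly.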
  19. 一种计算机系统,其特征在于,所述计算机系统包括至少一个处理器和至少一个存储器,其中,A computer system, comprising: at least one processor and at least one memory, wherein
    所述存储器存储有计算机程序指令，所述处理器读取所述计算机程序指令，以执行权利要求1-6中任意一项权利要求所述的方法。the memory stores computer program instructions, and the processor reads the computer program instructions to perform the method according to any one of claims 1 to 6.
  20. 一种计算机系统,其特征在于,所述计算机系统包括至少一个处理器和至少一个存储器,其中,A computer system, comprising: at least one processor and at least one memory, wherein
    所述存储器存储有计算机程序指令，所述处理器读取所述计算机程序指令，以执行权利要求7-12中任意一项权利要求所述的方法。the memory stores computer program instructions, and the processor reads the computer program instructions to perform the method according to any one of claims 7 to 12.
  21. 一种计算机可读存储介质，其特征在于，所述计算机可读存储介质包含指令，当所述指令在计算机上运行时，所述计算机执行权利要求1-6中任意一项权利要求所述的方法。A computer readable storage medium, wherein the computer readable storage medium comprises instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 6.
  22. 一种计算机可读存储介质，其特征在于，所述计算机可读存储介质包含指令，当所述指令在计算机上运行时，所述计算机执行权利要求7-12中任意一项权利要求所述的方法。A computer readable storage medium, wherein the computer readable storage medium comprises instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 7 to 12.
PCT/CN2018/125732 2017-12-29 2018-12-29 Feature data generation method and apparatus and feature matching method and apparatus WO2019129293A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711479219.4 2017-12-29
CN201711479219.4A CN109993178B (en) 2017-12-29 2017-12-29 Feature data generation and feature matching method and device

Publications (1)

Publication Number Publication Date
WO2019129293A1

Family

ID=67066670

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/125732 WO2019129293A1 (en) 2017-12-29 2018-12-29 Feature data generation method and apparatus and feature matching method and apparatus

Country Status (2)

Country Link
CN (1) CN109993178B (en)
WO (1) WO2019129293A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140120716A (en) * 2013-04-04 2014-10-14 한국기술교육대학교 산학협력단 Defect detection method in heterogeneously textured surface
CN106776710A (en) * 2016-11-18 2017-05-31 广东技术师范学院 A kind of picture and text construction of knowledge base method based on vertical search engine
CN107480711A (en) * 2017-08-04 2017-12-15 合肥美的智能科技有限公司 Image-recognizing method, device, computer equipment and readable storage medium storing program for executing

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100788643B1 (en) * 2001-01-09 2007-12-26 삼성전자주식회사 Searching method of image based on combination of color and texture
JP3948249B2 (en) * 2001-10-30 2007-07-25 日本電気株式会社 Similarity determination apparatus, similarity determination method, and program
US8774498B2 (en) * 2009-01-28 2014-07-08 Xerox Corporation Modeling images as sets of weighted features
CN102096797A (en) * 2011-01-18 2011-06-15 深圳市民德电子科技有限公司 Position prompting device and method for read bar code and bar code reading equipment
CN105718932A (en) * 2016-01-20 2016-06-29 中国矿业大学 Colorful image classification method based on fruit fly optimization algorithm and smooth twinborn support vector machine and system thereof

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116184969A (en) * 2023-04-24 2023-05-30 山东省滨州公路工程有限公司 Production quality monitoring method and system for asphalt mixing station
CN116184969B (en) * 2023-04-24 2023-07-14 山东省滨州公路工程有限公司 Production quality monitoring method and system for asphalt mixing station

Also Published As

Publication number Publication date
CN109993178A (en) 2019-07-09
CN109993178B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
US8457406B2 (en) Identifying descriptor for person and object in an image
US8792722B2 (en) Hand gesture detection
US8750573B2 (en) Hand gesture detection
US9158995B2 (en) Data driven localization using task-dependent representations
WO2015149534A1 (en) Gabor binary pattern-based face recognition method and device
Ali et al. A real-time deformable detector
JP2014232533A (en) System and method for ocr output verification
US11023713B2 (en) Suspiciousness degree estimation model generation device
WO2020155790A1 (en) Method and apparatus for extracting claim settlement information, and electronic device
US20140247993A1 (en) Landmark localization via visual search
Demirkus et al. Hierarchical temporal graphical model for head pose estimation and subsequent attribute classification in real-world videos
CN111373393B (en) Image retrieval method and device and image library generation method and device
Wang et al. Accurate playground localisation based on multi-feature extraction and cascade classifier in optical remote sensing images
Zhao et al. Hybrid generative/discriminative scene classification strategy based on latent Dirichlet allocation for high spatial resolution remote sensing imagery
CN112613471B (en) Face living body detection method, device and computer readable storage medium
US9002116B2 (en) Attribute recognition via visual search
WO2019129293A1 (en) Feature data generation method and apparatus and feature matching method and apparatus
Mu et al. Finding autofocus region in low contrast surveillance images using CNN-based saliency algorithm
CN113869253A (en) Living body detection method, living body training device, electronic apparatus, and medium
US20140254864A1 (en) System and method for gesture detection through local product map
CN111079704A (en) Face recognition method and device based on quantum computation
Gunasekar et al. Face detection on distorted images using perceptual quality-aware features
Hoshino et al. Inferencing the best AI service using Deep Neural Networks
Jamsandekar et al. A Framework for Identifying Image Dissimilarity
Zhao et al. Combinatorial and statistical methods for part selection for object recognition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18895456

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18895456

Country of ref document: EP

Kind code of ref document: A1