WO2019129293A1 - Method and apparatus for generating feature data, and feature matching method and apparatus - Google Patents

Method and apparatus for generating feature data, and feature matching method and apparatus Download PDF

Info

Publication number
WO2019129293A1
WO2019129293A1 (PCT/CN2018/125732)
Authority
WO
WIPO (PCT)
Prior art keywords
feature
features
types
image
type
Prior art date
Application number
PCT/CN2018/125732
Other languages
English (en)
Chinese (zh)
Inventor
李小利
白博
陈茂林
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2019129293A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 - Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2193 - Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Definitions

  • the embodiments of the present invention relate to the field of multimedia technologies, and in particular, to a feature data generation and feature matching method and apparatus.
  • Feature extraction mainly implements detection and tracking of targets, and extracts features from regions of interest to characterize source data.
  • the extracted features should have a certain commonality across data of the same type and a higher degree of discrimination across data of different types, that is, a stronger discriminating power.
  • the extracted features include color, texture, edge, and depth features, etc.
  • Embodiments of the present invention provide a method and apparatus for feature matching and feature information generation, which can improve the accuracy of image feature matching.
  • an embodiment of the present invention provides a method for feature matching.
  • the method includes: performing feature extraction on an image to obtain at least two types of features.
  • the features are weighted according to the ability of each type of feature to describe the image, where a feature with a stronger ability to describe the image receives a larger weight.
  • Feature information including multiple types of features is obtained according to the weighted features, and feature matching is performed according to the feature information.
  • the ability of a feature to describe an image refers to the degree of discrimination when the image is described according to the feature.
  • feature information of an image is acquired, where the feature information includes at least two types of features, and the at least two types of features include a first type of feature. A weight corresponding to each of the at least two types of features is determined, where the weight of the first type of feature is greater than the weights corresponding to the other features of the at least two types of features, and the first type of feature is the feature, among the at least two types of features, whose ability to describe the image is greater than that of the other features. The at least two types of features are weighted according to the weights to obtain weighted feature information, and feature matching is performed on the image according to the weighted feature information. Because the first type of feature, whose ability to describe the image is greater than that of the other features, is given a larger weight, the discrimination of the weighted feature information in image matching is improved and the accuracy of feature matching is enhanced.
  • the ability of a feature to describe the image is determined according to an indicator value associated with the feature.
  • the weight corresponding to the first type of feature is determined according to the first indicator value of the first type of feature.
  • the ability of the texture feature to describe the image may be determined according to the average amplitude of the image or a Laplacian; the ability of the depth feature to describe the image may be determined based on the confidence of the feature or by a quality evaluation value.
  • the confidence level is used to describe the possibility that the depth feature is mapped to the corresponding preset interval, and the quality evaluation value is obtained according to the quality evaluation matrix.
  • determining the weight corresponding to the feature according to the indicator value may be done according to a double-threshold formula, for example of the following form, where I1 is the indicator value of the first type of feature: ω1 = 1 if I1 > T1; ω1 = 0 if I1 < T2; and 0 < ω1 < 1 otherwise.
  • T1 and T2 are preset threshold values.
  • the above formula can be used to determine ω1.
  • when the indicator value is greater than T1, the first type of feature is used as the unique feature for feature matching of the image, that is, ω1 takes 1.
  • when the indicator value is less than T2, the first type of feature is not used as a feature for feature matching of the image, that is, ω1 is taken as 0.
  • the weight corresponding to the first type of feature is determined according to a feature type of the first type of feature.
  • an embodiment of the present invention provides a method for generating feature information, where the method includes: performing feature extraction on an image to obtain at least two types of features.
  • the features are weighted according to the ability of each type of feature to describe the image, where a feature with a stronger ability to describe the image receives a larger weight.
  • Feature information including multiple types of features is obtained according to the weighted features, and the weighted feature information is stored in a database.
  • the ability of a feature to describe an image refers to the degree of discrimination when the image is described according to the feature.
  • feature information of an image is acquired, where the feature information includes at least two types of features, and the at least two types of features include a first type of feature. A weight corresponding to each of the at least two types of features is determined, where the weight of the first type of feature is greater than the weights corresponding to the other features of the at least two types of features, and the first type of feature is the feature, among the at least two types of features, whose ability to describe the image is greater than that of the other features. The at least two types of features are weighted according to the weights to obtain weighted feature information, and feature matching is performed on the image according to the weighted feature information. Because the first type of feature, whose ability to describe the image is greater than that of the other features, is given a larger weight, the discrimination of the weighted feature information in image matching is improved and the accuracy of feature matching is enhanced.
  • the ability of a feature to describe the image is determined according to an indicator value associated with the feature.
  • the weight corresponding to the first type of feature is determined according to the first indicator value of the first type of feature.
  • the ability of the texture feature to describe the image may be determined according to the average amplitude of the image or a Laplacian; the ability of the depth feature to describe the image may be determined based on the confidence of the feature or by a quality evaluation value.
  • the confidence level is used to describe the possibility that the depth feature is mapped to the corresponding preset interval, and the quality evaluation value is obtained according to the quality evaluation matrix.
  • determining the weight corresponding to the feature according to the indicator value may be done according to a double-threshold formula, for example of the following form, where I1 is the indicator value of the first type of feature: ω1 = 1 if I1 > T1; ω1 = 0 if I1 < T2; and 0 < ω1 < 1 otherwise.
  • T1 and T2 are preset threshold values.
  • the above formula can be used to determine ω1.
  • when the indicator value is greater than T1, the first type of feature is used as the unique feature for feature matching of the image, that is, ω1 takes 1.
  • when the indicator value is less than T2, the first type of feature is not used as a feature for feature matching of the image, that is, ω1 is taken as 0.
  • the weight corresponding to the first type of feature is determined according to a feature type of the first type of feature.
  • an embodiment of the present invention provides an image processing apparatus configured to implement the method and functions performed in the above first aspect, implemented by hardware/software, where the hardware/software includes modules corresponding to the foregoing functions.
  • an embodiment of the present invention provides an image processing apparatus configured to implement the method and functions performed in the foregoing second aspect, implemented by hardware/software, where the hardware/software includes modules corresponding to the foregoing functions.
  • an embodiment of the present application provides an image processing apparatus, including a processor, a memory, and a communication bus, where the communication bus is used to implement connection and communication between the processor and the memory, and the processor executes a program stored in the memory to implement the steps of the method in the first aspect above.
  • an embodiment of the present application provides an image processing apparatus, including a processor, a memory, and a communication bus, where the communication bus is used to implement connection and communication between the processor and the memory, and the processor executes a program stored in the memory to implement the steps of the method in the second aspect above.
  • an embodiment of the present application provides a computer readable storage medium, where the computer readable storage medium stores instructions that, when run on a computer, cause the computer to perform the method of the first aspect or the second aspect.
  • the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect or the second aspect described above.
  • features of different types are weighted according to their description capability, and feature matching is performed according to the weighted feature information.
  • the weight of features with strong description capability is increased in the feature information, thereby enhancing the discrimination of the feature information in feature matching and improving the accuracy of feature matching.
  • FIG. 1 is a schematic structural diagram of an image search system according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a feature matching method according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of still another feature matching method according to an embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of still another feature matching method according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of a method for generating feature data according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of still another image processing apparatus according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of hardware of an image processing apparatus according to an embodiment of the present invention.
  • a feature in the embodiments of the present invention is a property or characteristic by which a certain type of object is distinguished from other types of objects, or a collection of such properties and characteristics.
  • a feature is extracted from data by measurement or processing.
  • each image has its own features that distinguish it from other types of images; some are natural features that can be intuitively perceived, such as brightness, edges, textures, and colors; others can be obtained only through transformation or processing, such as moments, histograms, and principal components; and some are depth features obtained through deep learning models.
  • the ability of the same type of feature to describe an image may differ across images. For example, for an image dominated by a solid color, since the image contains relatively little texture, the texture feature cannot distinguish between different images with different solid colors; the color feature, when used to describe such images, is better at distinguishing between different images of this kind.
  • Feature matching refers to an algorithm that extracts features from two or more images separately, describes the features with parameters, and then performs matching using the described parameters.
  • Features processed by feature-based matching typically include features such as color features, texture features, shape features, spatial location features, and the like.
  • In feature matching, the image is first preprocessed to extract its high-level features, and then the matching correspondence between the two images is established.
  • the commonly used feature primitives include point features, edge features, and region features.
  • Feature matching involves many mathematical operations, such as matrix operations, gradient solutions, Fourier transforms, and Taylor expansions.
  • Commonly used feature extraction and matching methods are: statistical methods, geometric methods, model methods, signal processing methods, boundary feature methods, Fourier shape description methods, geometric parameter methods, shape invariant moment methods, and so on.
  • the image in the embodiment of the present invention includes a still image and a moving image.
  • an application scenario of an embodiment of the present invention may be an image search system 1000 based on image feature matching, which can perform real-time analysis and query on a video or an image.
  • the system is composed of four core modules: a feature extraction module 1001, a feature storage module 1002, a feature weighting module 1003, and a feature matching module 1004.
  • the feature extraction module 1001 mainly implements detection and tracking of targets in an image or a video file, and extracts features from the region of interest to obtain feature information data corresponding to the image or video.
  • the feature extraction module can also perform feature extraction on the target image or video to be retrieved, and the extracted features are then processed by the feature weighting module.
  • the feature storage module 1002 is configured to construct a database and its index based on the result of feature extraction performed on the video or image by the feature extraction module 1001.
  • the feature weighting module 1003 weights the features extracted from the target image to be retrieved, obtaining the weighted image feature information.
  • the feature storage module 1002 may establish a database from the weighted features obtained after the feature weighting module 1003 weights the features extracted by the feature extraction module, yielding a weighted feature database and its index.
  • the feature weighting module 1003 may weight the features, extracted by the feature extraction module 1001, of the images or videos used to establish the database, and the weighted results are then stored through the feature storage module to establish a weighted database.
  • the feature matching module 1004 matches the weighted feature information of the target image against the images in the database according to a feature matching algorithm and obtains the query result.
  • in image matching, the extracted features should have a certain commonality across data of the same type and a higher degree of discrimination across data of different types, that is, a stronger discriminating power.
  • the extracted features are weighted by the feature weighting method so that the finally obtained features have a stronger description capability, thereby helping the image search system achieve a better result.
  • the image search system 1000 may be a stand-alone computer system, such as a server, to implement the corresponding functions of the feature extraction module 1001, the feature storage module 1002, the feature weighting module 1003, and the feature matching module 1004.
  • image search system 1000 can be a distributed system that includes database nodes and compute nodes for images.
  • the database node stores the database processed by the feature storage module 1002 and its index, and the computing node can implement the corresponding functions of the feature extraction module 1001, the feature weighting module 1003, and the feature matching module 1004.
  • a module may be deployed on different nodes.
  • the feature extraction module 1001 and the feature weighting module 1003 may be respectively deployed on the database node and the computing node.
  • when generating the feature data, the database node needs to call the feature weighting module 1003 to weight the features extracted by the feature extraction module 1001 in order to generate weighted feature data; when performing feature matching, the computing node also needs to call the feature weighting module 1003 to weight the features of the target image extracted by the feature extraction module 1001 before performing feature matching.
  • FIG. 2 is a flowchart of an image feature matching method according to an embodiment of the present invention.
  • the method includes:
  • the target image is the image to be matched in the embodiment of the present invention.
  • the feature information of the target image is extracted by the feature extraction algorithm.
  • the feature may be a traditional feature, such as a texture feature, a color feature, an edge feature, etc., or may be a depth feature, such as an image feature extracted by a deep learning network.
  • color features can be extracted by means of a color histogram, color moments, a color set, a color aggregation vector, etc.
  • the extracted texture features can be obtained by a gray level co-occurrence matrix, a Tamura texture feature, an autoregressive texture model, a wavelet transform, and the like.
  • the extracted shape features can be extracted by a boundary feature method, a Fourier shape descriptor method, a geometric parameter method, a shape invariant moment method, and the like.
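  • As a concrete illustration, a color-histogram feature of the kind mentioned above might be computed as follows (a minimal NumPy sketch; the bin count and quantization scheme are illustrative assumptions, not the patent's specification):

```python
import numpy as np

def color_histogram(img, bins_per_channel=8):
    """Normalized joint color histogram of a 3-channel uint8 image.

    img: HxWx3 array with values in 0..255.
    Returns a vector of length bins_per_channel**3 (512 here).
    """
    # Quantize each channel into bins_per_channel levels.
    q = (img.astype(int) * bins_per_channel) // 256
    # Fold the three per-channel bin indices into a single bin index.
    idx = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()
```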
  • the primary feature can be determined by a predetermined category.
  • for example, if the texture feature is considered to have a higher ability to describe the image, the texture feature may be preset as the main feature; when an extracted feature belongs to the preset category, that feature is considered to be the main feature.
  • for another example, the head feature among the depth features of the image may be preset as the main feature; when the acquired depth feature lies in the head region of the person in the image, the feature is considered the main feature.
  • whether a class of features is the primary feature can also be determined by an indicator value that describes the feature.
  • whether the texture feature in the image is the main feature can be determined by the average amplitude (Dense) of the image or the Laplacian.
  • the average amplitude of the image is obtained by averaging the amplitude of each point in the image. The larger the average amplitude of an image, the stronger the ability of the texture feature of the image to describe the image.
  • when the average amplitude exceeds a predetermined threshold, the texture feature is considered to be the primary feature of the image.
  • for a depth feature, a corresponding confidence level can be obtained. The higher the confidence of the feature, the stronger the ability of the feature to describe the image. When the confidence is higher than a preset threshold, the depth feature corresponding to that confidence is considered to be the main feature.
  • whether other features are the primary feature can likewise be determined from the indicator values of the acquired features. For example, when the indicator values of all acquired features except one are lower than their preset thresholds, the remaining feature may be considered the main feature.
  • by setting weights on the extracted features, the weight of the main feature is set higher than the weights of the other types of features.
  • the weights can be set by preset rules. For example, when a feature of a certain category is set as the main feature, a weight corresponding to that category may be preset such that the weight of features of that category is higher than the weights of the other types of features.
  • the weight of the feature may also be adaptively determined based on the indicator value corresponding to the feature of that category.
  • an empirical threshold may be set for the indicator value at which a feature qualifies as the main feature, and the difference between the indicator value of the feature and the corresponding empirical threshold is positively correlated with the weight of the feature; that is, the further the indicator value of the feature exceeds the corresponding empirical threshold, the larger the weight corresponding to the feature.
  • the acquired features are weighted to obtain the final weighted feature information.
  • the weighted features are then combined to obtain the final feature information (for some feature types an average or maximum value may be taken). For example, if a 50-dimensional texture feature and a 50-dimensional color feature are extracted, then after weighting the texture feature and the color feature respectively and concatenating them, the final 100-dimensional weighted feature of the image can be obtained.
  • the image can be feature-matched according to the feature information.
  • similarity calculations such as Euclidean distance, Mahalanobis distance, and Hamming distance can be used to characterize the similarity between different features, so as to obtain the final image matching result.
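  • The following minimal sketch (NumPy; all function and variable names are illustrative, not taken from the patent) shows the weighting, concatenation, and Euclidean-distance matching described above for a 50-dimensional texture feature and a 50-dimensional color feature:

```python
import numpy as np

def weighted_descriptor(texture_feat, color_feat, w_texture, w_color):
    """Scale each feature type by its weight and concatenate.

    texture_feat, color_feat: 1-D feature vectors (50-dim each here).
    w_texture, w_color: scalar weights reflecting description ability.
    """
    return np.concatenate([w_texture * texture_feat, w_color * color_feat])

def match(query_desc, database_descs):
    """Index and distance of the closest database descriptor (Euclidean)."""
    dists = np.linalg.norm(database_descs - query_desc, axis=1)
    i = int(np.argmin(dists))
    return i, float(dists[i])

# Example usage with random stand-in features.
rng = np.random.default_rng(0)
query = weighted_descriptor(rng.random(50), rng.random(50), 0.8, 0.2)
db = rng.random((1000, 100))   # 1000 stored 100-dim weighted descriptors
idx, dist = match(query, db)
```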
  • FIG. 3 illustrates another image feature matching method according to an embodiment of the present invention.
  • in this embodiment, the texture feature and the color feature are taken as an example, and the average amplitude is used as the indicator value for measuring the texture feature and the color feature.
  • the texture features and color features are extracted from the image.
  • the threshold method is used to obtain the weight corresponding to each type of feature.
  • each type of feature is weighted by its corresponding weight to form the final feature information of the image, thus enhancing the discriminative power of the final feature. It can be understood that for other conventional features, such as edge features and grayscale features, a similar matching method can be used with reference to this embodiment.
  • suppose the target image is a 3-channel color image with a size of 100×100, and the texture feature and the color feature are extracted from the image.
  • the texture features are extracted from the image:
  • d_i = √(g_x² + g_y²) is the amplitude of the point with coordinates (x_i, y_i), where g_x and g_y are the gradients in the x and y directions at this point;
  • c. Delimit the amplitude distribution intervals. If the distribution is divided into 50 intervals, obtain the distribution histogram of the image amplitudes, yielding the 50-dimensional texture feature corresponding to the image. Reference formula, for example: t_i = h_i / Σ_j h_j, where h_i is the statistical count of the amplitudes falling into the i-th interval.
  • the average amplitude of the image is then computed; that is, the amplitudes of all points in the image are averaged.
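  • A minimal NumPy sketch of these steps, assuming the histogram is normalized by the total count (function and variable names are illustrative):

```python
import numpy as np

def texture_feature(gray, bins=50):
    """Gradient-amplitude histogram texture feature plus average amplitude.

    gray: 2-D array, e.g., a grayscale view of the 100x100 image.
    Returns (50-dim texture feature, average amplitude indicator).
    """
    gy, gx = np.gradient(gray.astype(float))  # gradients in y and x directions
    amp = np.sqrt(gx ** 2 + gy ** 2)          # d_i = sqrt(g_x^2 + g_y^2)
    hist, _ = np.histogram(amp, bins=bins)    # 50-interval distribution histogram
    feat = hist / hist.sum()                  # assumed normalization
    return feat, float(amp.mean())            # feature and average amplitude
```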
  • T1 and T2 are empirically set threshold values beyond which the texture feature or the color feature, respectively, can serve as the unique descriptive feature when performing feature matching.
  • when the average amplitude is less than T2, the image is a strong color image and the description ability of the texture feature in the image is weak, so the texture feature may no longer be used as a matching feature; correspondingly, when the average amplitude is greater than T1, the image is a strong texture image and the description ability of the color feature in the image is very weak, so the color feature may no longer be used as a matching feature.
  • the weights corresponding to the color feature (ω1) and the texture feature (ω2) can be determined according to a double-threshold formula, for example of the following form, where D is the average amplitude: ω1 = 1, ω2 = 0 if D < T2; ω1 = 0, ω2 = 1 if D > T1; and ω2 = (D - T2) / (T1 - T2), ω1 = 1 - ω2 otherwise.
  • that is, the weight corresponding to each feature is determined by using a double threshold.
  • when the average amplitude is small, the color feature is the main feature in the image, at which point ω1 > ω2, i.e., the weight of the color feature is greater than that of the texture feature.
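  • A sketch of this double-threshold rule in code (the linear interpolation between T2 and T1 is an assumption; the patent's exact middle-band formula is not reproduced):

```python
def double_threshold_weights(avg_amp, t1, t2):
    """Weights for the color feature (w1) and texture feature (w2).

    avg_amp: average gradient amplitude of the image (indicator value).
    t1 > t2 are the empirical thresholds: above t1 the texture feature is
    the unique matching feature; below t2 the color feature is.
    """
    if avg_amp > t1:          # strong texture image: texture feature only
        return 0.0, 1.0
    if avg_amp < t2:          # strong color image: color feature only
        return 1.0, 0.0
    w2 = (avg_amp - t2) / (t1 - t2)   # assumed linear interpolation
    return 1.0 - w2, w2
```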
  • Feature matching can be performed using various feature matching algorithms. For example, the similarity between different features can be characterized by the Euclidean distance to obtain the final image recognition result.
  • the similarity of the two images can be judged according to the Euclidean distance, wherein the smaller the Euclidean distance, the higher the similarity between the two images.
  • the weighting of the color features and texture features of the image is adjusted based on the average magnitude.
  • when the color feature is the main feature of the image, the color feature corresponds to a higher weight; when the texture feature is the main feature of the image, the texture feature corresponds to a higher weight.
  • FIG. 4 is still another image feature matching method according to an embodiment of the present invention.
  • this embodiment takes the deep learning features of a person image as an example.
  • different depth features are extracted from the image based on the depth model.
  • a traditional classifier such as an SVM, or a fully connected layer (fc layer), may be added on top of the depth features to obtain the confidence of each feature. The different depth features are then weighted according to their confidences, thus enhancing the discriminative power of the final depth feature.
  • the depth feature for describing the image can be extracted by the depth model.
  • images can be described from different dimensions.
  • for example, a depth feature can describe, from the gender dimension, the possibility that the person in the image is male or female; it can also describe, from the age dimension, the possibility that the person in the image falls into different age groups.
  • the gender feature f2 is extracted, and the feature dimension is n2.
  • a fully connected layer (fc layer) is added to obtain the probability that each feature maps to its different classes, for example, the probability that the gender feature of an image is mapped to "male"; that is, the confidence p_i of each feature is obtained.
  • the final feature F = concat(ω1·f1, ω2·f2, ω3·f3, ω4·f4) is obtained.
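  • A minimal sketch of this confidence-weighted fusion, assuming the weights ωi are taken directly from the confidences p_i (names and dimensions are illustrative):

```python
import numpy as np

def fuse_depth_features(features, confidences):
    """Concatenate depth features f1..f4, each scaled by its confidence.

    features:    list of 1-D arrays, e.g., gender/age/... attribute features.
    confidences: list of scalars p_i from the fc layer or SVM, used as w_i.
    """
    return np.concatenate([p * f for p, f in zip(confidences, features)])

# Example usage with stand-in features of dimensions n1..n4.
rng = np.random.default_rng(0)
feats = [rng.random(n) for n in (128, 64, 64, 32)]
confs = [0.95, 0.70, 0.40, 0.85]
F = fuse_depth_features(feats, confs)   # final fused feature F
```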
  • the final feature can also be determined based on a supervised learning method such as an SVM.
  • a method for generating image feature data according to the present invention is provided.
  • an image feature database for feature matching can be generated.
  • feature matching as described in the foregoing method embodiments can be performed in the image feature database generated by this method, thereby realizing the image retrieval function.
  • the method includes:
  • the image to be processed is acquired, that is, the image whose feature information is to be generated and stored in the database in the embodiment of the present invention.
  • the feature information of the image to be generated is extracted by the feature extraction algorithm.
  • the feature may be a traditional feature, such as a texture feature, a color feature, an edge feature, etc., or may be a depth feature, such as an image feature extracted by a deep learning network.
  • color features can be extracted by means of a color histogram, color moments, a color set, a color aggregation vector, etc.
  • the extracted texture features can be obtained by a gray level co-occurrence matrix, a Tamura texture feature, an autoregressive texture model, a wavelet transform, and the like.
  • the extracted shape features can be extracted by a boundary feature method, a Fourier shape descriptor method, a geometric parameter method, a shape invariant moment method, and the like.
  • the primary feature can be determined by a predetermined category.
  • for example, if the texture feature is considered to have a higher ability to describe the image, the texture feature may be preset as the main feature; when an extracted feature belongs to the preset category, that feature is considered to be the main feature.
  • for another example, the head feature among the depth features of the image may be preset as the main feature; when the acquired depth feature lies in the head region of the person in the image, the feature is considered the main feature.
  • whether a class of features is the primary feature can also be determined by an indicator value that describes the feature.
  • whether the texture feature in the image is the main feature can be determined by the average amplitude (Dense) of the image or the Laplacian.
  • the average amplitude of the image is obtained by averaging the amplitude of each point in the image. The larger the average amplitude of an image, the stronger the ability of the texture feature of the image to describe the image.
  • when the average amplitude exceeds a predetermined threshold, the texture feature is considered to be the primary feature of the image.
  • for a depth feature, a corresponding confidence level can be obtained. The higher the confidence of the feature, the stronger the ability of the feature to describe the image. When the confidence is higher than a preset threshold, the depth feature corresponding to that confidence is considered to be the main feature.
  • whether other features are the primary feature can likewise be determined from the indicator values of the acquired features. For example, when the indicator values of all acquired features except one are lower than their preset thresholds, the remaining feature may be considered the main feature.
  • by setting weights on the extracted features, the weight of the main feature is set higher than the weights of the other types of features.
  • the weights can be set by preset rules. For example, when a feature of a certain category is set as the main feature, a weight corresponding to that category may be preset such that the weight of features of that category is higher than the weights of the other types of features.
  • the weight of the feature may also be adaptively determined based on the indicator value corresponding to the feature of that category.
  • an empirical threshold may be set for the indicator value at which a feature qualifies as the main feature, and the difference between the indicator value of the feature and the corresponding empirical threshold is positively correlated with the weight of the feature; that is, the further the indicator value of the feature exceeds the corresponding empirical threshold, the larger the weight corresponding to the feature.
  • the acquired features are weighted to obtain the final weighted feature information.
  • the weighted features are then combined to obtain the final feature information (for some feature types an average or maximum value may be taken). For example, if a 50-dimensional texture feature and a 50-dimensional color feature are extracted, then after weighting the texture feature and the color feature respectively and concatenating them, the final 100-dimensional weighted feature of the image can be obtained.
  • the weighted feature information is stored in the feature database.
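  • A toy sketch of this storage step, assuming a simple in-memory store with an id list (a real system would use a persistent database and a vector index; all names are illustrative):

```python
import numpy as np

class FeatureDB:
    """Toy feature database storing weighted descriptors keyed by image id."""

    def __init__(self, dim):
        self.dim = dim
        self.ids = []
        self.vecs = []

    def add(self, image_id, weighted_desc):
        """Store one weighted feature descriptor for an image."""
        vec = np.asarray(weighted_desc, dtype=float)
        assert vec.shape == (self.dim,)
        self.ids.append(image_id)
        self.vecs.append(vec)

    def search(self, query_desc, k=5):
        """Return the k nearest stored images by Euclidean distance."""
        mat = np.stack(self.vecs)
        d = np.linalg.norm(mat - np.asarray(query_desc, dtype=float), axis=1)
        order = np.argsort(d)[:k]
        return [(self.ids[i], float(d[i])) for i in order]
```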
  • the feature information in the feature database can be used for feature matching of the image to be retrieved in the foregoing embodiment.
  • the feature weighting method in the aforementioned feature matching corresponds to the feature weighting method in the present embodiment.
  • the method in the foregoing feature matching embodiment is completed by the feature extraction module 1001, the feature weighting module 1003, and the feature matching module 1004, while the method in the feature information generation embodiment is completed by the feature extraction module 1001, the feature storage module 1002, and the feature weighting module 1003.
  • the feature weighting methods corresponding to feature matching and feature information generation should be consistent.
  • FIG. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • the apparatus in this embodiment may be a device independent of the feature storage module 1002 and the database in FIG. 1, or may be integrated in the same device as the feature storage module 1002 and the database.
  • the image processing apparatus in this embodiment is configured to perform feature matching in a database after feature extraction and processing of the target image.
  • the image processing apparatus includes a feature extraction module 601, a feature weighting module 602, and a feature matching module 603. Among them, the description of each module is as follows:
  • a feature extraction module 601 configured to acquire at least two types of features of the target image
  • the feature weighting module 602 is configured to determine weights corresponding to each of the at least two types of features, and weight the at least two types of features according to the set weights to obtain weighted feature information.
  • the feature matching module 603 is configured to perform feature matching on the image according to the weighted feature information.
  • each module may also perform the methods and functions performed in the foregoing embodiments according to the corresponding descriptions of the method embodiments shown in FIG. 2, FIG. 3, or FIG. 4.
  • the function of the feature extraction module 601 can refer to the method in S201
  • the function of the feature weighting module 602 can refer to the method in S202, and the method for weighting the feature in S203 to obtain the weighted feature information
  • the feature matching module 603 can refer to the method of performing image feature matching in S203.
  • FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • the apparatus in this embodiment may be a device independent of the apparatus shown in FIG. 6, or may be integrated in the apparatus shown in FIG. 6, to implement the functions of this embodiment.
  • the image processing apparatus in the present embodiment is configured to perform feature extraction on the image data, weight the features, and generate the weighted feature information and store the information in the database.
  • the image processing apparatus includes a feature extraction module 701, a feature weighting module 702, a feature storage module 703, and a feature information database 704. Among them, the description of each module is as follows:
  • a feature extraction module 701 configured to acquire at least two types of features of the image
  • the feature weighting module 702 is configured to determine weights corresponding to each of the at least two types of features, and weight the at least two types of features according to the set weights to obtain weighted feature information.
  • the feature storage module 703 is configured to store the weighted feature information in a feature database
  • the database 704 is configured to store the feature information weighted by the image data.
  • each module may also perform the methods and functions performed in the foregoing embodiments in accordance with the corresponding description of the method embodiments shown in FIG. 5.
  • the feature extraction module 701 and the feature weighting module 702 can also refer to the corresponding feature extraction and feature processing methods of the image to be retrieved in FIG. 2, FIG. 3 and FIG.
  • FIG. 8 shows an apparatus embodiment corresponding to the foregoing method embodiments of the present invention.
  • the apparatus may perform the methods corresponding to the foregoing FIG. 2, FIG. 3, FIG. 4, or FIG. 5, or may be a hardware implementation of the apparatus in the foregoing FIG. 6 or FIG. 7.
  • the embodiments of the present invention are described with reference to a general computer system environment as an example.
  • the device can be adapted to other similar computing hardware architectures to achieve similar functionality, including, without limitation, personal computers, server computers, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments including any of the above systems or devices, and so on.
  • embodiments of the present invention can also be implemented by other terminal devices capable of implementing similar computer functions, such as a smart phone, a PAD, a smart wearable device, and the like.
  • Elements of device 800 may include, but are not limited to, processing unit 820, system memory 830, and system bus 810.
  • the system bus couples various system components including system memory to processing unit 820.
  • System bus 810 can be any of several types of bus structures, which can include a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • the bus structure may include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Extended ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
  • ISA Industry Standard Architecture
  • MCA Micro Channel Architecture
  • EISA Extended ISA
  • VESA Video Electronics Standards Association
  • PCI Peripheral Component Interconnect
  • processing unit 820 may include heterogeneous processors such as a central processing unit (CPU), a graphics processing unit (GPU), or a digital signal processing (DSP) processor, as listed in the foregoing embodiments.
  • the central processing unit can be used to perform the method steps in the foregoing embodiments, such as the method steps corresponding to the foregoing FIG. 2, FIG. 3, FIG. 4 or FIG.
  • Device 800 generally includes a variety of device readable media.
  • the device readable medium can be any medium that can be effectively accessed by device 800 and includes volatile or nonvolatile media, as well as removable or non-removable media.
  • the device readable medium can comprise a device storage medium and a communication medium.
  • the device storage medium includes volatile and nonvolatile, removable and non-removable media, which can be implemented by any method or technique for storing information such as device readable instructions, data structures, program modules or other data.
  • Device storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, hard disk storage, solid state disk storage, optical disk storage, disk cartridges, magnetic disk storage, or other storage devices.
  • Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal (e.g., a carrier wave or other transport mechanism) and also includes any medium for information delivery. Combinations of any of the above should also be included within the scope of the device readable media.
  • System memory 830 includes device storage media, which can be volatile and non-volatile memory, such as read only memory (ROM) 831 and random access memory (RAM) 832.
  • ROM read only memory
  • RAM random access memory
  • the basic input/output system 833 (BIOS) is typically stored in ROM 831 and contains basic routines that facilitate the transfer of information between elements in device 800.
  • RAM 832 typically contains data and/or program modules that can be immediately accessed and/or operated on by processing unit 820. By way of example and not limitation, FIG. 8 illustrates operating system 834, application 835, other program modules 836, and program data 837.
  • FIG. 8 illustrates a hard disk storage 841, which may be a non-removable, non-volatile, readable and writable magnetic medium, and an external memory 851, which may be a removable, non-volatile external memory such as an optical disk, a magnetic disk, a flash memory, or a mobile hard disk; the hard disk storage 841 is generally connected to system bus 810 through a non-removable storage interface (e.g., interface 840), while the external memory is typically connected to system bus 810 through a removable storage interface (e.g., interface 860).
  • a non-removable storage interface eg, interface 840
  • a removable storage interface eg, interface 860
  • the hard disk storage 841 is illustrated as storing operating system 842, application 843, other program modules 844, and program data 845. It should be noted that these elements may be the same as or different from operating system 834, application 835, other program modules 836, and program data 837.
  • the processing unit 820 may read the code or readable instructions stored in the device storage medium to perform the methods in the foregoing embodiments or the functions of the logic modules in the foregoing embodiments.
  • the aforementioned storage media, such as the hard disk storage 841 or the external memory 851, may store the feature database in the foregoing embodiments.
  • the user can enter commands and information into device 800 through various types of input devices 861.
  • Various input devices are often connected to the processing unit 820 through a user input interface 860.
  • the user input interface 860 is coupled to the system bus, but the input devices can also be connected to the bus structure through other interfaces, such as a parallel interface or a universal serial bus (USB) interface.
  • Display device 890 can also be coupled to system bus 810 via an interface (e.g., video interface 890).
  • computing device 800 can also include various types of peripheral output devices 820 that can be connected through output interface 880 or the like.
  • Device 800 can be coupled to one or more computing devices, such as remote computer 870, using logical connections.
  • the remote computing node may be a device, a computing node, a server, a router, a network PC, a peer device, or another common network node, and typically includes many or all of the elements discussed above in connection with device 800.
  • the remote computing node can be a slave node, a compute node, or other device.
  • the logical connections illustrated in FIG. 8 include a local area network (LAN) and a wide area network (WAN), and may also include other networks. Through the logical connections, the device can interact with other nodes in the present invention.
  • LAN local area network
  • WAN wide area network
  • task information and data can be transmitted through the logical link with the user, thereby acquiring the tasks to be assigned by the user; resource data and task allocation commands are transmitted through the logical link with the computing nodes, thereby realizing the acquisition of resource information of each node and the assignment of tasks.
  • the functions described herein can be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium.
  • Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a feature data generation method and a feature matching method, comprising the steps of: acquiring feature information of an image, and determining a weight corresponding to each type of feature among at least two types of features, where the weight corresponding to a first type of feature is greater than the weights corresponding to the other features of the at least two types of features, and the first type of feature is a primary feature among the acquired features; and weighting the at least two types of features according to the weights to obtain weighted feature information, and storing the weighted feature information in a database so as to generate feature data, or performing feature matching in the database according to the weighted feature information. By means of the embodiments of the present invention, the accuracy of feature matching can be improved.
PCT/CN2018/125732 2017-12-29 2018-12-29 Method and apparatus for generating feature data, and feature matching method and apparatus WO2019129293A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711479219.4A CN109993178B (zh) 2017-12-29 2017-12-29 Feature data generation and feature matching method and apparatus
CN201711479219.4 2017-12-29

Publications (1)

Publication Number Publication Date
WO2019129293A1 true WO2019129293A1 (fr) 2019-07-04

Family

ID=67066670

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/125732 WO2019129293A1 (fr) 2017-12-29 2018-12-29 Method and apparatus for generating feature data, and feature matching method and apparatus

Country Status (2)

Country Link
CN (1) CN109993178B (fr)
WO (1) WO2019129293A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116184969A (zh) * 2023-04-24 2023-05-30 山东省滨州公路工程有限公司 Asphalt mixing station production quality monitoring method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140120716A (ko) * 2013-04-04 2014-10-14 한국기술교육대학교 산학협력단 Method for detecting defects on non-uniform textured surfaces
CN106776710A (zh) * 2016-11-18 2017-05-31 广东技术师范学院 Image-text knowledge base construction method based on a vertical search engine
CN107480711A (zh) * 2017-08-04 2017-12-15 合肥美的智能科技有限公司 Image recognition method and apparatus, computer device, and readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100788643B1 (ko) * 2001-01-09 2007-12-26 삼성전자주식회사 Image retrieval method based on a combination of color and texture
JP3948249B2 (ja) * 2001-10-30 2007-07-25 日本電気株式会社 Similarity determination device, similarity determination method, and program
US8774498B2 (en) * 2009-01-28 2014-07-08 Xerox Corporation Modeling images as sets of weighted features
CN102096797A (zh) * 2011-01-18 2011-06-15 深圳市民德电子科技有限公司 Position prompting device and method for a barcode being read, and barcode reading device
CN105718932A (zh) * 2016-01-20 2016-06-29 中国矿业大学 Color image classification method and system based on the fruit fly optimization algorithm and smooth twin support vector machine

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140120716A (ko) * 2013-04-04 2014-10-14 한국기술교육대학교 산학협력단 Method for detecting defects on non-uniform textured surfaces
CN106776710A (zh) * 2016-11-18 2017-05-31 广东技术师范学院 Image-text knowledge base construction method based on a vertical search engine
CN107480711A (zh) * 2017-08-04 2017-12-15 合肥美的智能科技有限公司 Image recognition method and apparatus, computer device, and readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116184969A (zh) * 2023-04-24 2023-05-30 山东省滨州公路工程有限公司 Asphalt mixing station production quality monitoring method and system
CN116184969B (zh) * 2023-04-24 2023-07-14 山东省滨州公路工程有限公司 Asphalt mixing station production quality monitoring method and system

Also Published As

Publication number Publication date
CN109993178B (zh) 2024-02-02
CN109993178A (zh) 2019-07-09

Similar Documents

Publication Publication Date Title
US8452096B2 (en) Identifying descriptor for person or object in an image
US8792722B2 (en) Hand gesture detection
US8750573B2 (en) Hand gesture detection
US9158995B2 (en) Data driven localization using task-dependent representations
  • WO2015149534A1 (fr) Face recognition method and device based on Gabor binary patterns
  • TW201926140A (zh) Image annotation method, electronic device, and non-transitory computer-readable storage medium
  • JP2014232533A (ja) OCR output verification system and method
Lee et al. Tag refinement in an image folksonomy using visual similarity and tag co-occurrence statistics
US11023714B2 (en) Suspiciousness degree estimation model generation device
  • WO2020155790A1 (fr) Claim settlement information extraction method and apparatus, and electronic device
Demirkus et al. Hierarchical temporal graphical model for head pose estimation and subsequent attribute classification in real-world videos
Wang et al. Accurate playground localisation based on multi-feature extraction and cascade classifier in optical remote sensing images
  • WO2019100348A1 (fr) Image retrieval method and device, and image library generation method and device
Zhao et al. Hybrid generative/discriminative scene classification strategy based on latent Dirichlet allocation for high spatial resolution remote sensing imagery
CN112613471B (zh) 人脸活体检测方法、装置及计算机可读存储介质
CN113869253A (zh) 活体检测方法、训练方法、装置、电子设备及介质
US9002116B2 (en) Attribute recognition via visual search
  • WO2019129293A1 (fr) Method and apparatus for generating feature data, and feature matching method and apparatus
Mu et al. Finding autofocus region in low contrast surveillance images using CNN-based saliency algorithm
US20140254864A1 (en) System and method for gesture detection through local product map
CN111079704A (zh) 一种基于量子计算的人脸识别方法及装置
Gunasekar et al. Face detection on distorted images using perceptual quality-aware features
Hoshino et al. Inferencing the best AI service using Deep Neural Networks
Zhao et al. Combinatorial and statistical methods for part selection for object recognition
MIRONICĂ et al. A Fisher Kernel Approach for Multiple Instance Based Object Retrieval in Video Surveillance

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18895456

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18895456

Country of ref document: EP

Kind code of ref document: A1