WO2017041552A1 - Texture feature extraction method and apparatus - Google Patents

Texture feature extraction method and apparatus

Info

Publication number
WO2017041552A1
WO2017041552A1 (PCT/CN2016/084837)
Authority
WO
WIPO (PCT)
Prior art keywords
image
texture feature
sub
texture
feature extraction
Prior art date
Application number
PCT/CN2016/084837
Other languages
English (en)
French (fr)
Inventor
王甜甜
Original Assignee
深圳Tcl新技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳Tcl新技术有限公司
Publication of WO2017041552A1 publication Critical patent/WO2017041552A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467: Encoded features or binary features, e.g. local binary patterns [LBP]

Definitions

  • the present invention relates to the field of image processing, and in particular, to a texture feature extraction method and apparatus.
  • Face recognition is a biometric technology that identifies a person based on the texture features of the face; it is also commonly called portrait recognition or facial recognition.
  • the LBP (Local Binary Patterns) feature of the image is usually extracted as the texture feature of the image.
  • in extraction, the image is first divided evenly along its horizontal and vertical coordinates, and the LBP operation is performed on each resulting sub-image.
  • the LBP features of the sub-images are then concatenated as the LBP feature of the original image.
  • however, when the image is segmented in this way, the texture features along the segmentation lines are filtered out, resulting in inaccurate extraction of texture features.
  • the main object of the present invention is to provide a texture feature extraction method and apparatus, which aim to accurately extract texture features of an image.
  • the present invention provides a texture feature extraction method, the texture feature extraction method comprising the following steps:
  • the image to be processed is segmented separately by a plurality of preset segmentation methods, and a compression operation is performed on each of the obtained sub-images, wherein any sub-image obtained by one segmentation method is different from every sub-image obtained by the other segmentation methods;
  • the present invention further provides a texture feature extraction method, the texture feature extraction method comprising the following steps:
  • the image to be processed is segmented separately by a plurality of preset segmentation methods, wherein any sub-image obtained by one segmentation method is different from every sub-image obtained by the other segmentation methods;
  • preferably, before the step of combining the sub-images obtained by the same segmentation method into a first combined image and combining the first combined images into a second combined image, the method further includes:
  • a compression operation is performed on each of the obtained sub-images.
  • the performing the compressing operation on each of the obtained sub-images comprises:
  • each of the sub-images is transformed into the frequency domain by discrete cosine transform, and the low-frequency region of each sub-image in its frequency domain is used as the corresponding compressed sub-image.
  • the method further includes:
  • the image to be processed is preprocessed by gradation transformation, noise reduction, and Gaussian smoothing.
  • the method further includes:
  • the texture features are compared with each of the texture features in the feature database, and an identification report is generated based on the results of the comparison.
  • the texture feature comprises a texture feature of a face image.
  • the present invention further provides a texture feature extraction apparatus, where the texture feature extraction apparatus includes:
  • the segmentation module is configured to segment the image to be processed separately by a plurality of preset segmentation methods, wherein any sub-image obtained by one segmentation method is different from every sub-image obtained by the other segmentation methods;
  • a combination module configured to combine the sub-images obtained by the same segmentation method into a first combined image, and to combine each of the first combined images into a second combined image;
  • an extracting module configured to perform a local binary mode LBP operation on the second combined image, and use the obtained LBP feature as a texture feature of the image to be processed.
  • the texture feature extraction device further includes a compression module configured to perform a compression operation on each of the obtained sub-images.
  • the compression module is further configured to transform each of the sub-images into a frequency domain by discrete cosine transform, and use the low-frequency regions of each of the sub-images in respective frequency domains as respective compressed sub-images.
  • the texture feature extraction device further includes a pre-processing module for pre-processing the image to be processed by gradation transformation, noise reduction, and Gaussian smoothing.
  • the texture feature extraction device further comprises a comparison module, configured to compare the texture feature with each texture feature in the feature database, and generate an identification report according to the result of the comparison.
  • the invention segments the image to be processed into a plurality of sub-images by a plurality of different preset segmentation methods, so that any sub-image obtained by one segmentation method is different from the sub-images obtained by the other segmentation methods; the sub-images obtained by each segmentation method are then combined twice, so that the final combined image contains the texture features at the sub-image junctions; finally, the texture features of the combined image are extracted as the texture features of the image to be processed.
  • compared with the prior art, which directly segments the image to be processed and extracts texture features from each part separately, the present invention can accurately extract the texture features of the image.
  • FIG. 1 is a schematic flow chart of a first embodiment of a texture feature extraction method according to the present invention
  • FIG. 2 is a schematic diagram of a segmentation manner in a first embodiment of a texture feature extraction method according to the present invention
  • FIG. 3 is a schematic diagram of another segmentation manner in the first embodiment of the texture feature extraction method of the present invention.
  • FIG. 4 is a schematic diagram of still another division manner in the first embodiment of the texture feature extraction method of the present invention.
  • FIG. 5 is a diagram showing an example of a second combined image in the first embodiment of the texture feature extraction method of the present invention.
  • FIG. 6 is a diagram showing an example of an LBP feature of a second combined image in the first embodiment of the texture feature extraction method of the present invention
  • FIG. 7 is a diagram showing an example of discrete cosine transform results in a second embodiment of the texture feature extraction method of the present invention.
  • FIG. 8 is a schematic diagram of functional modules of a first embodiment of a texture feature extraction apparatus according to the present invention.
  • the texture feature extraction method includes:
  • step S10 the image to be processed is separately segmented by a plurality of preset segmentation methods, wherein any sub-image obtained by each segmentation method is different from the sub-image obtained by other segmentation methods;
  • the texture feature extraction method provided in this embodiment can be applied to the field of face recognition.
  • for example, when creating a face feature database, the texture feature extraction method provided by the present invention can accurately extract the texture features of face images.
  • the image to be processed is separately segmented by a plurality of preset segmentation methods, and a plurality of sub-images are obtained correspondingly, wherein any sub-image obtained by one segmentation method is different from the sub-image obtained by other segmentation methods.
  • the purpose is to ensure that the texture features of the image along the segmentation lines of any one segmentation method can be extracted from the sub-images obtained by the other segmentation methods, thereby avoiding the loss of texture features caused by image segmentation.
  • referring to FIG. 2 to FIG. 4, this embodiment preferably segments the image to be processed in three different ways, taking the lower-left corner of the image as the coordinate origin and denoting the image width by W and the height by H:
  • first, as shown in FIG. 2, two segmentation lines parallel to the Y axis are set at 1/3W and 2/3W of the abscissa, and two segmentation lines parallel to the X axis are set at 1/3H and 2/3H of the ordinate;
  • second, as shown in FIG. 3, three segmentation lines parallel to the Y axis are set at 1/9W, 4/9W, and 7/9W of the abscissa, and two segmentation lines parallel to the X axis are set at 1/3H and 2/3H of the ordinate;
  • third, as shown in FIG. 4, two segmentation lines parallel to the Y axis are set at 1/3W and 2/3W of the abscissa, and three segmentation lines parallel to the X axis are set at 2/9H, 5/9H, and 8/9H of the ordinate.
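  • the three grid segmentations can be sketched in code. This is a minimal illustration, not part of the patent itself: the function `split_image` and its fraction lists are hypothetical names, and NumPy row-major indexing is used in place of the patent's bottom-left coordinate origin.

```python
import numpy as np

def split_image(img, x_fracs, y_fracs):
    """Split img at the given width/height fractions; returns a list of sub-images."""
    h, w = img.shape[:2]
    xs = [0] + [int(w * f) for f in x_fracs] + [w]
    ys = [0] + [int(h * f) for f in y_fracs] + [h]
    return [img[y0:y1, x0:x1]
            for y0, y1 in zip(ys, ys[1:])
            for x0, x1 in zip(xs, xs[1:])]

img = np.arange(18 * 18).reshape(18, 18)

# The three preset modes: each produces a different set of sub-images,
# so every cut line of one mode falls inside some sub-image of another.
mode1 = split_image(img, (1/3, 2/3), (1/3, 2/3))       # 3 x 3 grid
mode2 = split_image(img, (1/9, 4/9, 7/9), (1/3, 2/3))  # 4 x 3 grid
mode3 = split_image(img, (1/3, 2/3), (2/9, 5/9, 8/9))  # 3 x 4 grid
print(len(mode1), len(mode2), len(mode3))  # 9 12 12
```

  • because the cut positions differ across the three modes, no segmentation line is shared by all of them, which is what preserves the texture along any one mode's cut lines.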
  • Step S20 combining the sub-images divided by the same segmentation method into a first combined image, and combining each of the first combined images into a second combined image;
  • when the image to be processed is segmented in this embodiment, the connection information between the obtained sub-images is recorded separately for each segmentation method.
  • preferably, based on the recorded connection information, the sub-images obtained by the same segmentation method are combined into a first combined image, and each of the first combined images is combined into a second combined image.
  • those skilled in the art will understand that the sub-images need not be combined according to their connection relationships in the original image; for example, when the image to be processed is segmented into sub-images according to the three different segmentation methods shown in FIG. 2 to FIG. 4, the sub-images may be combined to obtain a second combined image as shown in FIG. 5.
  • Step S30 performing a local binary mode LBP operation on the second combined image, and using the obtained LBP feature as a texture feature of the image to be processed.
  • in this embodiment, after the second combined image is obtained, an LBP (Local Binary Patterns) operation is performed on the second combined image, and the obtained LBP feature is used as the texture feature of the image to be processed.
  • for example, the LBP feature extracted from the second combined image is as shown in FIG. 6.
  • specifically, each pixel of the second combined image is taken as a center pixel, the gray value i_c of the center pixel is used as a threshold, and the gray values i_p of its neighborhood pixels are binarized; the binarization is given by formula (1), and the result of the LBP operation is then obtained according to formula (2).
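  • formulas (1) and (2) appear as figures in the original document and are not reproduced in this text; the sketch below assumes the standard LBP definition, in which a neighbour bit is 1 when i_p >= i_c and the eight bits are weighted by powers of two. The function name and the clockwise neighbour ordering are illustrative choices, not taken from the patent.

```python
import numpy as np

def lbp_8neighbour(img):
    """Standard 8-neighbour LBP: threshold each neighbour against the
    center pixel (the binarization of formula (1)) and weight the
    resulting bits by powers of two (the sum of formula (2)).
    Border pixels are skipped for simplicity."""
    img = img.astype(np.int32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbour offsets, clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    for p, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= center).astype(np.uint8) << p
    return out

demo = np.array([[10, 20, 10],
                 [20, 15, 20],
                 [10, 20, 10]])
print(lbp_8neighbour(demo))  # one LBP code for the single interior pixel
```

  • applied to the second combined image, the map of such codes (or a histogram of them) serves as the extracted texture feature.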
  • alternatively, the LBP operation may be performed separately on each of the first combined images obtained by combining the sub-images of the same segmentation method, and the resulting LBP features may then be combined and used as the texture features extracted from the second combined image.
  • the texture feature extraction method of this embodiment segments the image to be processed into multiple sub-images by a plurality of different preset segmentation methods, so that any sub-image obtained by one segmentation method is different from the sub-images obtained by the others; the sub-images obtained by each segmentation method are then combined twice, so that the final combined image contains the texture features at the sub-image junctions; finally, the texture features of the combined image are extracted as the texture features of the image to be processed.
  • compared with the prior art, which directly segments the image to be processed and extracts texture features from each part separately, this method can accurately extract the texture features of the image.
  • the method further includes:
  • a compression operation is performed on each of the obtained sub-images.
  • in the field of face recognition, all face images need to be stored first, and texture features are then extracted from all the face images to establish a feature database.
  • different face images occupy different amounts of storage space; if an image is too large, it occupies a large amount of storage space, and correspondingly, the extracted texture features also occupy a large amount of storage space.
  • therefore, in this embodiment, a compression operation is performed on each of the obtained sub-images, which can effectively improve storage efficiency.
  • compression of the sub-image may be achieved by at least one of DCT (Discrete Cosine Transform), FFT (Fast Fourier Transform), and Gabor transform.
  • step S20 includes:
  • the compressed sub-images are respectively combined into a first combined image and the first combined images are combined into a second combined image.
  • the performing the compression operation on each of the obtained sub-images includes:
  • each of the sub-images is transformed into the frequency domain by discrete cosine transform, and the low-frequency region of each sub-image in its frequency domain is used as the corresponding compressed sub-image.
  • in this embodiment, the discrete cosine transform is preferably used as the compression mode for the sub-images; the discrete cosine transform is a transform that concentrates the main information of an image into a few DCT coefficients in the low-frequency region of the frequency domain. For example, as shown in FIG. 7, after the original image is DCT-transformed, the important information of the image is clearly concentrated in the low-frequency region in its upper-left corner; taking the upper-left vertex of the image as the origin and intercepting 1/4 of the image width and 1/4 of the image height effectively retains the important information of the image.
  • the compression mode provided in this embodiment can compress the original image to 1/16 of its size.
  • the size of the second combined image obtained by the three different segmentation methods of the first embodiment is accordingly 3/16 of that of the image to be processed.
  • the size of the low-frequency region actually intercepted may be determined according to the actual situation; for example, the upper-left vertex of the image may be taken as the origin, and 1/3 of the image width and 1/3 of the image height may be intercepted.
  • in this way, the storage space occupied by the sub-images can be greatly reduced, and the storage space occupied by the extracted texture features is reduced accordingly.
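  • the DCT compression step can be sketched as follows, assuming SciPy's `dctn` for the 2-D transform; keeping the top-left quarter in each dimension retains 1/16 of the coefficients, matching the 1/16 size stated above. The function name and the `keep` factor are illustrative choices.

```python
import numpy as np
from scipy.fft import dctn

def dct_low_freq(sub_img, keep=4):
    """2-D DCT of a sub-image; keep only the low-frequency block in the
    upper-left corner (1/keep of the width and 1/keep of the height)."""
    coeffs = dctn(sub_img.astype(float), norm='ortho')
    h, w = coeffs.shape
    return coeffs[:h // keep, :w // keep]

sub = np.random.default_rng(0).random((16, 16))
compressed = dct_low_freq(sub)
print(compressed.shape)  # (4, 4): 1/16 of the original 16x16 area
```

  • for the 1/3-width, 1/3-height variant mentioned above, `keep=3` would be used instead.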
  • the method further includes:
  • the image to be processed is preprocessed by gradation transformation, noise reduction, and Gaussian smoothing.
  • this embodiment first preprocesses the image to be processed by grayscale transformation, noise reduction, and Gaussian smoothing to improve the image quality of the image to be processed.
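  • the preprocessing chain can be sketched as below. The specific operators are assumptions: the patent names noise reduction and Gaussian smoothing without fixing the filters, so a median filter and `scipy.ndimage.gaussian_filter` stand in here, with BT.601 luma weights for the grayscale transformation.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def preprocess(rgb):
    """Preprocess an RGB image: grayscale transformation, noise
    reduction (median filter), then Gaussian smoothing."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luma weights
    denoised = median_filter(gray, size=3)
    return gaussian_filter(denoised, sigma=1.0)

img = np.random.default_rng(1).random((32, 32, 3))
out = preprocess(img)
print(out.shape)  # (32, 32): single-channel result
```
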
  • the method further includes:
  • the texture features are compared with each of the texture features in the feature database, and an identification report is generated based on the results of the comparison.
  • in the field of face recognition, facial texture features need to be extracted in two processing stages: the feature database establishment stage and the face recognition stage. The feature database establishment stage builds a sample library for face recognition, storing face images of known identity in the feature database so that face images to be recognized can subsequently be matched against it; the face recognition stage matches the face image to be recognized against the feature database.
  • therefore, texture features need to be extracted both from face images of known identity and from face images to be recognized, so that the subsequent feature matching can be completed successfully.
  • This embodiment describes the feature database establishment phase and the face recognition phase, respectively.
  • in the feature database establishment stage, a face image of known identity is first acquired and used as the image to be processed; the face image is preprocessed to improve its quality, and its texture features are then extracted as described in the foregoing embodiments (not repeated here); the extracted texture features are associated with the identity information corresponding to the face image and stored in the feature database.
  • in the face recognition stage, the face image to be recognized is first obtained, for example collected by face detection, and the collected face image is used as the image to be processed; its texture features are then extracted as described in the foregoing embodiments (not repeated here);
  • finally, the texture features of the face image are compared with each texture feature in the feature database, and a face recognition report is generated according to the comparison result.
  • if no texture feature in the feature database matches the texture feature of the face image, a face recognition report containing the matching-failure information is generated.
  • further, prompt information may be output for the user to input the identity information corresponding to the face image;
  • upon receiving the identity information input by the user, the identity information is associated with the texture feature of the face image and stored in the feature database.
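  • the comparison against the feature database can be sketched with histogram matching; the chi-square distance and the 0.5 threshold are illustrative assumptions, since the patent specifies only that features are compared and a report is generated.

```python
import numpy as np

def lbp_histogram(lbp_codes, bins=256):
    """Normalized histogram of LBP codes, usable as a compact texture feature."""
    hist, _ = np.histogram(lbp_codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms (0 = identical)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def identify(query, database, threshold=0.5):
    """Return the best-matching identity, or None if nothing is close enough."""
    best, best_d = None, np.inf
    for name, hist in database.items():
        d = chi_square(query, hist)
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= threshold else None

rng = np.random.default_rng(2)
codes_a = rng.integers(0, 256, 1000)
codes_b = rng.integers(0, 256, 1000)
db = {"alice": lbp_histogram(codes_a), "bob": lbp_histogram(codes_b)}
print(identify(lbp_histogram(codes_a), db))  # the query is alice's own histogram
```

  • in practice the threshold would be tuned on validation data, and the matching-failure branch above corresponds to `identify` returning `None`.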
  • the present invention also provides a texture feature extraction device.
  • the texture feature extraction device includes:
  • the segmentation module 10 is configured to segment the image to be processed separately by a plurality of preset segmentation methods, wherein any sub-image obtained by one segmentation method is different from every sub-image obtained by the other segmentation methods;
  • the texture feature extraction device provided in this embodiment can be applied to the field of face recognition.
  • for example, when creating a face feature database, the device can accurately extract the texture features of face images.
  • in this embodiment, the segmentation module 10 first segments the image to be processed by a plurality of preset segmentation methods to obtain a plurality of sub-images, wherein any sub-image obtained by one segmentation method is different from the sub-images obtained by the other segmentation methods.
  • the purpose is to ensure that the texture features of the image along the segmentation lines of any one segmentation method can be extracted from the sub-images obtained by the other segmentation methods, thereby avoiding the loss of texture features caused by image segmentation.
  • referring to FIG. 2 to FIG. 4, the segmentation module 10 preferably segments the image to be processed in three different ways, taking the lower-left corner of the image as the coordinate origin and denoting the image width by W and the height by H:
  • first, as shown in FIG. 2, two segmentation lines parallel to the Y axis are set at 1/3W and 2/3W of the abscissa, and two segmentation lines parallel to the X axis are set at 1/3H and 2/3H of the ordinate;
  • second, as shown in FIG. 3, three segmentation lines parallel to the Y axis are set at 1/9W, 4/9W, and 7/9W of the abscissa, and two segmentation lines parallel to the X axis are set at 1/3H and 2/3H of the ordinate;
  • third, as shown in FIG. 4, two segmentation lines parallel to the Y axis are set at 1/3W and 2/3W of the abscissa, and three segmentation lines parallel to the X axis are set at 2/9H, 5/9H, and 8/9H of the ordinate.
  • the combining module 20 is configured to combine the sub-images obtained by the same segmentation method into a first combined image, and to combine the first combined images into a second combined image;
  • when the image to be processed is segmented, the connection information between the obtained sub-images is recorded separately for each segmentation method.
  • preferably, according to the connection information recorded by the segmentation module 10, the combining module 20 combines the sub-images obtained by the same segmentation method into a first combined image and combines each of the first combined images into a second combined image.
  • those skilled in the art will understand that the sub-images need not be combined according to their connection relationships in the original image; for example, when the segmentation module 10 segments the image to be processed into sub-images according to the three different segmentation methods shown in FIG. 2 to FIG. 4, the combining module 20 may combine them to obtain a second combined image as shown in FIG. 5.
  • the extracting module 30 is configured to perform a local binary mode LBP operation on the second combined image, and use the obtained LBP feature as a texture feature of the image to be processed.
  • in this embodiment, the extraction module 30 performs an LBP (Local Binary Patterns) operation on the second combined image and uses the obtained LBP feature as the texture feature of the image to be processed.
  • for example, the LBP feature extracted by the extraction module 30 from the second combined image is as shown in FIG. 6.
  • specifically, each pixel of the second combined image is taken as a center pixel, the gray value i_c of the center pixel is used as a threshold, and the gray values i_p of its neighborhood pixels are binarized; the binarization is given by formula (1), and the result of the LBP operation is then obtained according to formula (2).
  • alternatively, the LBP operation may be performed separately on each of the first combined images obtained by combining the sub-images of the same segmentation method, and the resulting LBP features may then be combined and used as the texture features extracted from the second combined image.
  • the texture feature extraction device segments the image to be processed into multiple sub-images by a plurality of different preset segmentation methods, so that any sub-image obtained by one segmentation method is different from the sub-images obtained by the others; the sub-images obtained by each segmentation method are then combined twice, so that the final combined image contains the texture features at the sub-image junctions; finally, the texture features of the combined image are extracted as the texture features of the image to be processed.
  • compared with the prior art, which directly segments the image to be processed and extracts texture features from each part separately, the device can accurately extract the texture features of the image.
  • further, the texture feature extraction device includes a compression module configured to perform a compression operation on each of the obtained sub-images.
  • the compression module performs a compression operation on each of the obtained sub-images, which can effectively improve storage efficiency.
  • the compression module may implement compression of the sub-image by at least one of DCT (Discrete Cosine Transform), FFT (Fast Fourier Transform), and Gabor transform.
  • accordingly, the combining module is further configured to combine the compressed sub-images obtained by the same segmentation method into a first combined image, and to combine the first combined images into a second combined image.
  • the compression module is further configured to transform each of the sub-images into the frequency domain by discrete cosine transform, and to use the low-frequency region of each sub-image in its frequency domain as the corresponding compressed sub-image.
  • in this embodiment, the compression module preferably uses the discrete cosine transform as the compression mode for the sub-images; the discrete cosine transform concentrates the main information of an image into a few DCT coefficients in the low-frequency region of the frequency domain.
  • as shown in FIG. 7, the important information of the image is concentrated in the low-frequency region in its upper-left corner; taking the upper-left vertex of the image as the origin and intercepting 1/4 of the image width and 1/4 of the image height effectively retains the important information of the image.
  • the compression mode provided in this embodiment can compress the original image to 1/16 of its size.
  • the size of the second combined image obtained by the three different segmentation methods of the first embodiment is accordingly 3/16 of that of the image to be processed.
  • the size of the low-frequency region actually intercepted may be determined according to the actual situation; for example, the upper-left vertex of the image may be taken as the origin, and 1/3 of the image width and 1/3 of the image height may be intercepted.
  • in this way, the storage space occupied by the sub-images can be greatly reduced, and the storage space occupied by the extracted texture features is reduced accordingly.
  • further, the texture feature extraction device includes a preprocessing module configured to preprocess the image to be processed by grayscale transformation, noise reduction, and Gaussian smoothing.
  • in this embodiment, the preprocessing module first preprocesses the image to be processed by grayscale transformation, noise reduction, and Gaussian smoothing to improve its image quality.
  • further, the texture feature extraction device includes a comparison module configured to compare the texture feature with each texture feature in the feature database and to generate an identification report according to the comparison result.
  • in the field of face recognition, facial texture features need to be extracted in two processing stages: the feature database establishment stage and the face recognition stage. The feature database establishment stage builds a sample library for face recognition, storing face images of known identity in the feature database so that face images to be recognized can subsequently be matched against it; the face recognition stage matches the face image to be recognized against the feature database.
  • therefore, texture features need to be extracted both from face images of known identity and from face images to be recognized, so that the subsequent feature matching can be completed successfully.
  • This embodiment describes the feature database establishment phase and the face recognition phase, respectively.
  • the texture feature extraction device further includes an acquisition module and a storage module.
  • in the feature database establishment stage, the acquisition module acquires a face image of known identity and uses the acquired face image as the image to be processed;
  • the preprocessing module preprocesses the face image to improve its quality; after the segmentation module 10 and the combining module 20 process the face image, the extraction module 30 extracts the texture features of the face image.
  • the storage module then associates the extracted texture features with the identity information corresponding to the face image and stores them in the feature database.
  • in the face recognition stage, the acquisition module obtains the face image to be recognized.
  • for example, the face image may be collected by face detection, and the collected face image is used as the image to be processed;
  • the face image is then preprocessed to improve its quality.
  • the extraction module 30 extracts the texture features of the face image as described in the foregoing embodiments (not repeated here); after the texture features are extracted, the comparison module compares them with each texture feature in the feature database and generates a face recognition report according to the comparison result.
  • if no texture feature in the feature database matches the texture feature of the face image, a face recognition report containing the matching-failure information is generated.
  • further, the storage module is configured to output prompt information for the user to input the identity information corresponding to the face image when no texture feature matching that of the face image exists in the feature database, and, upon receiving the identity information input by the user, to associate the identity information with the texture feature of the face image and store them in the feature database.


Abstract

A texture feature extraction apparatus and method. The texture feature extraction method comprises the following steps: segmenting an image to be processed separately by a plurality of preset segmentation methods (S10), wherein any sub-image obtained by one segmentation method is different from every sub-image obtained by the other segmentation methods; combining the sub-images obtained by the same segmentation method into a first combined image, and combining the first combined images into a second combined image (S20); performing a local binary pattern (LBP) operation on the second combined image, and using the obtained LBP feature as the texture feature of the image to be processed (S30). The texture features of the image can thereby be extracted accurately.

Description

Texture feature extraction method and apparatus
Technical Field
The present invention relates to the field of image processing, and in particular to a texture feature extraction method and apparatus.
Background
Face recognition is a biometric technology that identifies a person based on the texture features of the face; it is also commonly called portrait recognition or facial recognition. At present, in the field of face recognition, the LBP (Local Binary Patterns) feature of an image is usually extracted as the texture feature of the image: the image is first divided evenly along its horizontal and vertical coordinates and the LBP operation is performed, and the LBP features of the sub-images are then concatenated as the LBP feature of the original image. However, when the image is segmented, the texture features along the segmentation lines are filtered out, so the extraction of texture features is not accurate enough.
Summary of the Invention
The main object of the present invention is to provide a texture feature extraction method and apparatus that aim to accurately extract the texture features of an image.
To achieve the above object, the present invention provides a texture feature extraction method comprising the following steps:
preprocessing the image to be processed by grayscale transformation, noise reduction, and Gaussian smoothing;
segmenting the image to be processed separately by a plurality of preset segmentation methods, and performing a compression operation on each of the obtained sub-images, wherein any sub-image obtained by one segmentation method is different from every sub-image obtained by the other segmentation methods;
combining the sub-images obtained by the same segmentation method into a first combined image, and combining the first combined images into a second combined image;
performing a local binary pattern (LBP) operation on the second combined image, and using the obtained LBP feature as the texture feature of the image to be processed.
Further, the present invention provides a texture feature extraction method comprising the following steps:
segmenting the image to be processed separately by a plurality of preset segmentation methods, wherein any sub-image obtained by one segmentation method is different from every sub-image obtained by the other segmentation methods;
combining the sub-images obtained by the same segmentation method into a first combined image, and combining the first combined images into a second combined image;
performing a local binary pattern (LBP) operation on the second combined image, and using the obtained LBP feature as the texture feature of the image to be processed.
Preferably, before the step of combining the sub-images obtained by the same segmentation method into a first combined image and combining the first combined images into a second combined image, the method further comprises:
performing a compression operation on each of the obtained sub-images.
Preferably, performing the compression operation on each of the obtained sub-images comprises:
transforming each of the sub-images into the frequency domain by discrete cosine transform, and using the low-frequency region of each sub-image in its frequency domain as the corresponding compressed sub-image.
Preferably, before the step of segmenting the image to be processed separately by a plurality of preset segmentation methods, the method further comprises:
preprocessing the image to be processed by grayscale transformation, noise reduction, and Gaussian smoothing.
Preferably, after the step of performing the LBP operation on the second combined image and using the obtained LBP feature as the texture feature of the image to be processed, the method further comprises:
comparing the texture feature with each texture feature in a feature database, and generating an identification report according to the comparison result.
Preferably, the texture feature comprises a texture feature of a face image.
In addition, to achieve the above object, the present invention also provides a texture feature extraction apparatus, comprising:
a segmentation module, configured to segment an image to be processed using a plurality of preset segmentation schemes, wherein any sub-image obtained by one segmentation scheme differs from every sub-image obtained by the other segmentation schemes;
a combination module, configured to combine the sub-images obtained by each segmentation scheme into a respective first combined image, and combine the first combined images into a second combined image;
an extraction module, configured to perform a local binary pattern (LBP) operation on the second combined image and take the resulting LBP feature as the texture feature of the image to be processed.
Preferably, the texture feature extraction apparatus further comprises a compression module, configured to compress each of the obtained sub-images.
Preferably, the compression module is further configured to transform each sub-image into the frequency domain by discrete cosine transform, and take the low-frequency region of each sub-image in the frequency domain as the corresponding compressed sub-image.
Preferably, the texture feature extraction apparatus further comprises a preprocessing module, configured to preprocess the image to be processed by grayscale transformation, noise reduction and Gaussian smoothing.
Preferably, the texture feature extraction apparatus further comprises a comparison module, configured to compare the texture feature with each texture feature in a feature database and generate a recognition report according to the comparison result.
In the present invention, the image to be processed is segmented into a plurality of sub-images by several different preset segmentation schemes, such that any sub-image obtained by one scheme differs from every sub-image obtained by the other schemes; the sub-images obtained by the various schemes are then combined twice, so that the final combined image contains the texture features at the junctions of the sub-images; finally, the texture feature of the combined image is extracted as the texture feature of the image to be processed. Compared with the prior art, which segments the image to be processed and extracts texture features from each part directly, the present invention can extract the texture features of an image accurately.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a first embodiment of the texture feature extraction method of the present invention;
FIG. 2 is a schematic diagram of one segmentation scheme in the first embodiment of the texture feature extraction method of the present invention;
FIG. 3 is a schematic diagram of another segmentation scheme in the first embodiment of the texture feature extraction method of the present invention;
FIG. 4 is a schematic diagram of yet another segmentation scheme in the first embodiment of the texture feature extraction method of the present invention;
FIG. 5 is an example of the second combined image in the first embodiment of the texture feature extraction method of the present invention;
FIG. 6 is an example of the LBP feature of the second combined image in the first embodiment of the texture feature extraction method of the present invention;
FIG. 7 is an example of a discrete cosine transform result in the second embodiment of the texture feature extraction method of the present invention;
FIG. 8 is a schematic diagram of the functional modules of a first embodiment of the texture feature extraction apparatus of the present invention.
The realization of the object, the functional features and the advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are intended only to explain the present invention, not to limit it.
The present invention provides a texture feature extraction method. Referring to FIG. 1, in a first embodiment of the texture feature extraction method of the present invention, the method comprises:
Step S10: segmenting an image to be processed using a plurality of preset segmentation schemes, wherein any sub-image obtained by one segmentation scheme differs from every sub-image obtained by the other segmentation schemes.
The texture feature extraction method provided by this embodiment can be applied to the field of face recognition. For example, when creating a face feature database, the texture feature extraction method provided by the present invention can accurately extract the texture features of face images with which to build the database.
In this embodiment, the image to be processed is first segmented by several preset segmentation schemes, yielding a corresponding plurality of sub-images, wherein any sub-image obtained by one scheme differs from every sub-image obtained by the other schemes. The purpose is to ensure that the texture features along the segmentation lines of any one scheme can still be extracted from the sub-images produced by the other schemes, avoiding the loss of texture features caused by segmentation.
Referring to FIGS. 2 to 4, this embodiment preferably segments the image to be processed with three different schemes.
Take the lower-left corner of the image to be processed as the coordinate origin, and let the image width be W and its height be H.
First, as shown in FIG. 2, two segmentation lines parallel to the Y axis are placed at abscissas W/3 and 2W/3, and two segmentation lines parallel to the X axis are placed at ordinates H/3 and 2H/3.
Second, as shown in FIG. 3, three segmentation lines parallel to the Y axis are placed at abscissas W/9, 4W/9 and 7W/9, and two segmentation lines parallel to the X axis are placed at ordinates H/3 and 2H/3.
Third, as shown in FIG. 4, two segmentation lines parallel to the Y axis are placed at abscissas W/3 and 2W/3, and three segmentation lines parallel to the X axis are placed at ordinates 2H/9, 5H/9 and 8H/9.
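As an informal sketch (not part of the patent text), the three preset schemes can be written as fractional cut positions and applied with NumPy slicing; the helper name `split_image` and the array-based top-to-bottom row convention are illustrative assumptions:

```python
import numpy as np

def split_image(img, x_fracs, y_fracs):
    """Split a 2-D image at the given horizontal (x) and vertical (y)
    fractional cut positions, returning the sub-images row by row."""
    h, w = img.shape
    xs = [0] + [int(round(f * w)) for f in x_fracs] + [w]
    ys = [0] + [int(round(f * h)) for f in y_fracs] + [h]
    return [img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            for i in range(len(ys) - 1)
            for j in range(len(xs) - 1)]

# The three preset schemes of the embodiment (FIGS. 2-4):
SCHEMES = [
    ((1/3, 2/3), (1/3, 2/3)),        # scheme 1: 3 columns x 3 rows
    ((1/9, 4/9, 7/9), (1/3, 2/3)),   # scheme 2: 4 columns x 3 rows
    ((1/3, 2/3), (2/9, 5/9, 8/9)),   # scheme 3: 3 columns x 4 rows
]

img = np.arange(36 * 36, dtype=np.uint8).reshape(36, 36)
sub_sets = [split_image(img, xf, yf) for xf, yf in SCHEMES]
# scheme 1 yields 9 sub-images; schemes 2 and 3 yield 12 each
```

Because the cut positions of the three schemes never coincide, no sub-image of one scheme equals a sub-image of another, which is exactly the condition the embodiment imposes.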
Step S20: combining the sub-images obtained by each segmentation scheme into a respective first combined image, and combining the first combined images into a second combined image.
It should be noted that, when segmenting the image to be processed, this embodiment records, for each segmentation scheme, the adjacency information between the resulting sub-images. Preferably, when combining images, the sub-images obtained by each segmentation scheme are combined into a respective first combined image according to the recorded adjacency information, and the first combined images are then combined into a second combined image. Those skilled in the art will appreciate that the sub-images may also be combined without following their adjacency in the original image; for example, the image to be processed may be segmented into sub-images according to the three different schemes shown in FIGS. 2 to 4, which are then combined into the second combined image shown in FIG. 5.
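A minimal sketch of the two combination steps, assuming each first combined image re-tiles its sub-images in their recorded adjacency order and that the second combined image simply stacks the first combined images vertically (the actual layout of FIG. 5 is not reproduced here, so this stacking order is an assumption):

```python
import numpy as np

def first_combined(sub_images, n_rows, n_cols):
    """Re-tile the sub-images of one segmentation scheme, row by row,
    into a single first combined image."""
    rows = [np.hstack(sub_images[r * n_cols:(r + 1) * n_cols])
            for r in range(n_rows)]
    return np.vstack(rows)

def second_combined(first_images):
    """Assumed layout: the first combined images stacked vertically."""
    return np.vstack(first_images)

# When adjacency is preserved, each first combined image reproduces
# the original image, so every segmentation-line region of one scheme
# lies in the interior of some tile of another scheme.
img = np.arange(6 * 6).reshape(6, 6)
subs = [img[r * 2:(r + 1) * 2, c * 2:(c + 1) * 2]
        for r in range(3) for c in range(3)]
fc = first_combined(subs, 3, 3)
sc = second_combined([fc, fc, fc])
```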
Step S30: performing a local binary pattern (LBP) operation on the second combined image, and taking the resulting LBP feature as the texture feature of the image to be processed.
In this embodiment, after the second combined image has been assembled, an LBP (Local Binary Patterns) operation is performed on it, and the resulting LBP feature is taken as the texture feature of the image to be processed. For example, the LBP feature extracted from the second combined image is shown in FIG. 6.
Specifically, each pixel of the second combined image is taken in turn as the center pixel, its gray value i_c is used as a threshold, and the gray values i_p of the neighboring pixels are binarized; the binary value is determined as in formula (1), and the result of the LBP operation is then obtained according to formula (2).
$$s(x)=\begin{cases}1, & x\geq 0\\ 0, & x<0\end{cases}\qquad(1)$$
$$LBP_{p,r}=\sum_{n=0}^{p-1}s(i_n-i_c)\,2^{n}\qquad(2)$$
where $i_n$ denotes the gray value of the $n$-th neighboring pixel.
where p is the number of pixels in the neighborhood of the center pixel and r is the radius of the neighborhood. LBP_{p,r} can produce 2^p different patterns, with p = 8r and r an integer from 1 to 3; for example, when r = 1, p = 8.
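For r = 1 and p = 8, formulas (1) and (2) can be sketched as follows; skipping the one-pixel border and the particular neighbor winding order are implementation choices not fixed by the text:

```python
import numpy as np

def lbp_8_1(img):
    """Basic LBP with r=1, p=8: compare each pixel's 8 neighbours
    against the centre pixel (s(x)=1 when neighbour >= centre) and
    pack the 8 bits into one code per pixel. Border pixels are
    skipped for simplicity."""
    img = img.astype(np.int32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.int32)
    # one fixed winding order for the 8 neighbours (assumed)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    for n, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out += (neigh >= center).astype(np.int32) * (1 << n)
    return out.astype(np.uint8)
```

On a uniform region every comparison yields 1, so every pixel maps to the code 255; textured regions produce the varied codes that form the LBP feature image of FIG. 6.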
In other embodiments, the LBP (Local Binary Patterns) operation may also be performed on each first combined image obtained by combining the sub-images of one segmentation scheme, and the resulting LBP features, after being combined, may serve together with the LBP feature of the second combined image as verification of and a supplement to the texture feature extracted from the second combined image.
The texture feature extraction method proposed in this embodiment segments the image to be processed into a plurality of sub-images by several different preset segmentation schemes, such that any sub-image obtained by one scheme differs from every sub-image obtained by the other schemes; the sub-images obtained by the various schemes are then combined twice, so that the final combined image contains the texture features at the junctions of the sub-images; finally, the texture feature of the combined image is extracted as the texture feature of the image to be processed. Compared with the prior art, which segments the image to be processed and extracts texture features from each part directly, the present invention can extract the texture features of an image accurately.
Further, based on the first embodiment, a second embodiment of the texture feature extraction method of the present invention is proposed. In this embodiment, before the above step S20, the method further comprises:
compressing each of the obtained sub-images.
At present, in the field of face recognition, all face images must first be stored, and texture features are then extracted from all of them to build the feature database. However, during storage, different face images occupy different amounts of space; an overly large image requires a large amount of storage, and correspondingly its extracted texture features also occupy considerable space. In view of this, in this embodiment, after the segmentation of the image to be processed is completed, each resulting sub-image is compressed, which can effectively improve compression efficiency. For example, the sub-images can be compressed by at least one of the DCT (Discrete Cosine Transform), the FFT (Fast Fourier Transform) and the Gabor transform.
Further, in this embodiment, the above step S20 comprises:
combining the compressed sub-images obtained by each segmentation scheme into a respective first combined image, and combining the first combined images into a second combined image.
Further, based on the second embodiment, a third embodiment of the texture feature extraction method of the present invention is proposed. In this embodiment, compressing each of the obtained sub-images comprises:
transforming each sub-image into the frequency domain by discrete cosine transform, and taking the low-frequency region of each sub-image in the frequency domain as the corresponding compressed sub-image.
It should be noted that this embodiment preferably uses the discrete cosine transform to compress the sub-images. The discrete cosine transform converts an image into the frequency domain, concentrating the image's main information in a small number of DCT coefficients in the low-frequency region. For example, FIG. 7 shows an original image after DCT; evidently the important information of the image is concentrated in the low-frequency region at the upper-left corner, so cropping a region of 1/4 of the image width by 1/4 of the image height, with the upper-left vertex as the origin, effectively preserves the important information. It is easy to see that the compression method of this embodiment can compress the original image to 1/16 of its size; for example, the second combined image obtained with the three segmentation schemes of the first embodiment is 3/16 the size of the image to be processed. It will be understood that in other embodiments the size of the cropped low-frequency region can be chosen according to the actual situation; for example, a region of 1/3 of the image width by 1/3 of the image height may be cropped from the upper-left vertex instead.
By using the discrete cosine transform to convert the sub-images into the frequency domain and taking the low-frequency region of each sub-image as the compressed sub-image, this embodiment greatly reduces the storage space occupied by the sub-images, and hence the storage space occupied by the extracted texture features.
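A sketch of the DCT-based compression under the stated 1/4-width by 1/4-height crop; the orthonormal DCT-II matrix construction below is a standard formulation, not taken from the patent itself:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def compress_dct(sub, keep=4):
    """2-D DCT of a square sub-image, keeping only the keep x keep
    low-frequency block at the upper-left corner (1/4 of width and
    height for a 16 x 16 input, as in the embodiment's example)."""
    n = sub.shape[0]
    d = dct_matrix(n)
    coeffs = d @ sub @ d.T          # 2-D DCT-II
    return coeffs[:keep, :keep]     # low-frequency region only
```

Keeping a quarter of the coefficients in each dimension retains 1/16 of the values, matching the 1/16 compression ratio stated above.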
Further, based on the first embodiment, a fourth embodiment of the texture feature extraction method of the present invention is proposed. In this embodiment, before the above step S10, the method further comprises:
preprocessing the image to be processed by grayscale transformation, noise reduction and Gaussian smoothing.
It should be noted that, because image acquisition devices differ, the acquired images often suffer from defects such as noise and varying contrast. To improve image quality and facilitate the subsequent texture feature extraction, this embodiment first preprocesses the image to be processed by grayscale transformation, noise reduction and Gaussian smoothing, improving the quality of the image to be processed.
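The preprocessing chain can be sketched as below; the luminance weights and the kernel size and sigma are illustrative assumptions, since the text does not fix them:

```python
import numpy as np

def to_gray(rgb):
    """Luminance-weighted grayscale conversion (weights assumed;
    the patent does not specify a particular transform)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def gaussian_kernel(size=5, sigma=1.0):
    """Normalised 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def smooth(img, size=5, sigma=1.0):
    """Gaussian smoothing by direct convolution ('same' output size
    via edge padding); doubles as simple noise reduction."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out
```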
Further, based on the first embodiment, a fifth embodiment of the texture feature extraction method of the present invention is proposed. In this embodiment, after the above step S30, the method further comprises:
comparing the texture feature with each texture feature in a feature database, and generating a recognition report according to the comparison result.
For face recognition technology, face texture features must be extracted in both processing stages of face recognition: the feature database building stage and the face recognition stage. The feature database building stage builds the sample library for face recognition: face images with confirmed identity information are stored in the feature database so that face images to be recognized can later be matched against it. The face recognition stage is the process of matching a face image to be recognized against the feature database. Therefore, texture features must be extracted both from face images with confirmed identity information and from face images to be recognized before the subsequent feature matching can be completed. This embodiment is described in terms of the feature database building stage and the face recognition stage respectively.
Specifically, in the feature database building stage, a face image with confirmed identity is first acquired and taken as the image to be processed, and the face image is preprocessed to improve image quality; the texture feature of the face image is then extracted, which can be carried out as in the foregoing embodiments and is not repeated here; the extracted texture feature is associated with the identity information corresponding to the face image and stored in the feature database.
In the face recognition stage, a face image to be recognized is first acquired, for example by face detection, and the acquired face image is taken as the image to be processed; the texture feature of the face image is then extracted, which can be carried out as in the foregoing embodiments and is not repeated here. After the texture feature of the face image has been extracted, it is compared with each texture feature in the feature database, and a face recognition report is generated according to the comparison result. When a texture feature matching that of the face image exists in the feature database, a face recognition report containing the identity information corresponding to the matching texture feature is generated; when no matching texture feature exists, a face recognition report containing match-failure information is generated.
In addition, when no texture feature matching that of the face image exists in the feature database, prompt information is output for the user to input the identity information corresponding to the face image;
upon receiving the identity information input by the user, the identity information is associated with the texture feature of the face image and stored in the feature database.
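The comparison step can be sketched with LBP histograms and a chi-square distance; both the metric and the threshold value are assumptions, as the text does not specify how features are compared:

```python
import numpy as np

def lbp_histogram(lbp_img, bins=256):
    """Normalised histogram of LBP codes, a common texture descriptor."""
    h, _ = np.histogram(lbp_img, bins=bins, range=(0, bins))
    return h / h.sum()

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two normalised histograms, a
    common choice for comparing LBP features (assumed here)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def match(query_hist, database, threshold=0.2):
    """database: dict identity -> histogram. Returns the best-matching
    identity, or None when no entry is close enough (the threshold
    value is illustrative)."""
    best, best_d = None, float('inf')
    for identity, hist in database.items():
        d = chi2_distance(query_hist, hist)
        if d < best_d:
            best, best_d = identity, d
    return best if best_d <= threshold else None
```

A `None` result corresponds to the match-failure branch above, after which the user would be prompted for the identity information to store alongside the new feature.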
The present invention also provides a texture feature extraction apparatus. Referring to FIG. 8, in a first embodiment of the texture feature extraction apparatus of the present invention, the apparatus comprises:
a segmentation module 10, configured to segment an image to be processed using a plurality of preset segmentation schemes, wherein any sub-image obtained by one segmentation scheme differs from every sub-image obtained by the other segmentation schemes.
The texture feature extraction apparatus provided by this embodiment can be applied to the field of face recognition. For example, when creating a face feature database, the texture feature extraction method provided by the present invention can accurately extract the texture features of face images with which to build the database.
In this embodiment, the segmentation module 10 first segments the image to be processed by several preset segmentation schemes, yielding a corresponding plurality of sub-images, wherein any sub-image obtained by one scheme differs from every sub-image obtained by the other schemes. The purpose is to ensure that the texture features along the segmentation lines of any one scheme can still be extracted from the sub-images produced by the other schemes, avoiding the loss of texture features caused by segmentation.
Referring to FIGS. 2 to 4, the segmentation module 10 preferably segments the image to be processed with three different schemes.
Take the lower-left corner of the image to be processed as the coordinate origin, and let the image width be W and its height be H.
First, as shown in FIG. 2, two segmentation lines parallel to the Y axis are placed at abscissas W/3 and 2W/3, and two segmentation lines parallel to the X axis are placed at ordinates H/3 and 2H/3.
Second, as shown in FIG. 3, three segmentation lines parallel to the Y axis are placed at abscissas W/9, 4W/9 and 7W/9, and two segmentation lines parallel to the X axis are placed at ordinates H/3 and 2H/3.
Third, as shown in FIG. 4, two segmentation lines parallel to the Y axis are placed at abscissas W/3 and 2W/3, and three segmentation lines parallel to the X axis are placed at ordinates 2H/9, 5H/9 and 8H/9.
a combination module 20, configured to combine the sub-images obtained by each segmentation scheme into a respective first combined image, and combine the first combined images into a second combined image.
It should be noted that, when segmenting the image to be processed, the segmentation module 10 records, for each segmentation scheme, the adjacency information between the resulting sub-images. Preferably, when combining images, the combination module 20 combines the sub-images obtained by each segmentation scheme into a respective first combined image according to the adjacency information recorded by the segmentation module 10, and combines the first combined images into a second combined image. Those skilled in the art will appreciate that the sub-images may also be combined without following their adjacency in the original image; for example, the segmentation module 10 may segment the image to be processed into sub-images according to the three different schemes shown in FIGS. 2 to 4, and the combination module 20 may then combine them into the second combined image shown in FIG. 5.
an extraction module 30, configured to perform a local binary pattern (LBP) operation on the second combined image and take the resulting LBP feature as the texture feature of the image to be processed.
In this embodiment, after the combination module 20 has assembled the second combined image, the extraction module 30 performs an LBP (Local Binary Patterns) operation on it and takes the resulting LBP feature as the texture feature of the image to be processed. For example, the LBP feature extracted by the extraction module 30 from the second combined image is shown in FIG. 6.
Specifically, each pixel of the second combined image is taken in turn as the center pixel, its gray value i_c is used as a threshold, and the gray values i_p of the neighboring pixels are binarized; the binary value is determined as in formula (1), and the result of the LBP operation is then obtained according to formula (2).
$$s(x)=\begin{cases}1, & x\geq 0\\ 0, & x<0\end{cases}\qquad(1)$$
$$LBP_{p,r}=\sum_{n=0}^{p-1}s(i_n-i_c)\,2^{n}\qquad(2)$$
where $i_n$ denotes the gray value of the $n$-th neighboring pixel.
where p is the number of pixels in the neighborhood of the center pixel and r is the radius of the neighborhood. LBP_{p,r} can produce 2^p different patterns, with p = 8r and r an integer from 1 to 3; for example, when r = 1, p = 8.
In other embodiments, the LBP (Local Binary Patterns) operation may also be performed on each first combined image obtained by combining the sub-images of one segmentation scheme, and the resulting LBP features, after being combined, may serve together with the LBP feature of the second combined image as verification of and a supplement to the texture feature extracted from the second combined image.
The texture feature extraction apparatus proposed in this embodiment segments the image to be processed into a plurality of sub-images by several different preset segmentation schemes, such that any sub-image obtained by one scheme differs from every sub-image obtained by the other schemes; the sub-images obtained by the various schemes are then combined twice, so that the final combined image contains the texture features at the junctions of the sub-images; finally, the texture feature of the combined image is extracted as the texture feature of the image to be processed. Compared with the prior art, which segments the image to be processed and extracts texture features from each part directly, the present invention can extract the texture features of an image accurately.
Further, based on the first embodiment, a second embodiment of the texture feature extraction apparatus of the present invention is proposed. In this embodiment, the texture feature extraction apparatus further comprises a compression module, configured to compress each of the obtained sub-images.
At present, in the field of face recognition, all face images must first be stored, and texture features are then extracted from all of them to build the feature database. However, during storage, different face images occupy different amounts of space; an overly large image requires a large amount of storage, and correspondingly its extracted texture features also occupy considerable space. In view of this, in this embodiment, after the segmentation module 10 has completed the segmentation of the image to be processed, the compression module compresses each resulting sub-image, which can effectively improve compression efficiency. For example, the compression module can compress the sub-images by at least one of the DCT (Discrete Cosine Transform), the FFT (Fast Fourier Transform) and the Gabor transform.
Further, in this embodiment, the combination module is further configured to combine the compressed sub-images obtained by each segmentation scheme into a respective first combined image, and combine the first combined images into a second combined image.
Further, based on the second embodiment, a third embodiment of the texture feature extraction apparatus of the present invention is proposed. In this embodiment, the compression module is further configured to transform each sub-image into the frequency domain by discrete cosine transform, and take the low-frequency region of each sub-image in the frequency domain as the corresponding compressed sub-image.
It should be noted that the compression module preferably uses the discrete cosine transform to compress the sub-images. The discrete cosine transform converts an image into the frequency domain, concentrating the image's main information in a small number of DCT coefficients in the low-frequency region. For example, FIG. 7 shows an original image after DCT; evidently the important information of the image is concentrated in the low-frequency region at the upper-left corner, so cropping a region of 1/4 of the image width by 1/4 of the image height, with the upper-left vertex as the origin, effectively preserves the important information. It is easy to see that the compression method of this embodiment can compress the original image to 1/16 of its size; for example, the second combined image obtained with the three segmentation schemes of the first embodiment is 3/16 the size of the image to be processed. It will be understood that in other embodiments the size of the cropped low-frequency region can be chosen according to the actual situation; for example, a region of 1/3 of the image width by 1/3 of the image height may be cropped from the upper-left vertex instead.
By using the discrete cosine transform to convert the sub-images into the frequency domain and taking the low-frequency region of each sub-image as the compressed sub-image, this embodiment greatly reduces the storage space occupied by the sub-images, and hence the storage space occupied by the extracted texture features.
Further, based on the first embodiment, a fourth embodiment of the texture feature extraction apparatus of the present invention is proposed. In this embodiment, the texture feature extraction apparatus further comprises a preprocessing module, configured to preprocess the image to be processed by grayscale transformation, noise reduction and Gaussian smoothing.
It should be noted that, because image acquisition devices differ, the acquired images often suffer from defects such as noise and varying contrast. To improve image quality and facilitate the subsequent texture feature extraction, the preprocessing module first preprocesses the image to be processed by grayscale transformation, noise reduction and Gaussian smoothing, improving the quality of the image to be processed.
Further, based on the first embodiment, a fifth embodiment of the texture feature extraction apparatus of the present invention is proposed. In this embodiment, the texture feature extraction apparatus further comprises a comparison module, configured to compare the texture feature with each texture feature in a feature database and generate a recognition report according to the comparison result.
For face recognition technology, face texture features must be extracted in both processing stages of face recognition: the feature database building stage and the face recognition stage. The feature database building stage builds the sample library for face recognition: face images with confirmed identity information are stored in the feature database so that face images to be recognized can later be matched against it. The face recognition stage is the process of matching a face image to be recognized against the feature database. Therefore, texture features must be extracted both from face images with confirmed identity information and from face images to be recognized before the subsequent feature matching can be completed. This embodiment is described in terms of the feature database building stage and the face recognition stage respectively.
Specifically, the texture feature extraction apparatus further comprises an acquisition module and a storage module. In the feature database building stage, the acquisition module acquires a face image with confirmed identity and takes the acquired face image as the image to be processed; the preprocessing module preprocesses the face image to improve image quality; after the segmentation module 10 and the combination module 20 have processed the face image, the extraction module 30 extracts its texture feature, which can be carried out as in the foregoing embodiments and is not repeated here; the storage module associates the extracted texture feature with the identity information corresponding to the face image and stores them in the feature database.
In the face recognition stage, the acquisition module acquires a face image to be recognized, for example by face detection, and takes the acquired face image as the image to be processed; the preprocessing module preprocesses the face image to improve image quality; after the segmentation module 10 and the combination module 20 have processed the face image, the extraction module 30 extracts its texture feature, which can be carried out as in the foregoing embodiments and is not repeated here. After the texture feature of the face image has been extracted, the comparison module compares it with each texture feature in the feature database and generates a face recognition report according to the comparison result. When a texture feature matching that of the face image exists in the feature database, a face recognition report containing the identity information corresponding to the matching texture feature is generated; when no matching texture feature exists, a face recognition report containing match-failure information is generated.
In addition, the storage module is further configured to: when no texture feature matching that of the face image exists in the feature database, output prompt information for the user to input the identity information corresponding to the face image; and, upon receiving the identity information input by the user, associate the identity information with the texture feature of the face image and store them in the feature database.
The above are only preferred embodiments of the present invention and do not thereby limit its patent scope. Any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (20)

  1. A texture feature extraction method, characterized in that the texture feature extraction method comprises the following steps:
    preprocessing an image to be processed by grayscale transformation, noise reduction and Gaussian smoothing;
    segmenting the image to be processed using a plurality of preset segmentation schemes, and compressing each of the obtained sub-images, wherein any sub-image obtained by one segmentation scheme differs from every sub-image obtained by the other segmentation schemes;
    combining the sub-images obtained by each segmentation scheme into a respective first combined image, and combining the first combined images into a second combined image;
    performing a local binary pattern (LBP) operation on the second combined image, and taking the resulting LBP feature as the texture feature of the image to be processed.
  2. A texture feature extraction method, characterized in that the texture feature extraction method comprises the following steps:
    segmenting an image to be processed using a plurality of preset segmentation schemes, wherein any sub-image obtained by one segmentation scheme differs from every sub-image obtained by the other segmentation schemes;
    combining the sub-images obtained by each segmentation scheme into a respective first combined image, and combining the first combined images into a second combined image;
    performing a local binary pattern (LBP) operation on the second combined image, and taking the resulting LBP feature as the texture feature of the image to be processed.
  3. The texture feature extraction method according to claim 2, characterized in that, before the step of combining the sub-images obtained by each segmentation scheme into a respective first combined image and combining the first combined images into a second combined image, the method further comprises:
    compressing each of the obtained sub-images.
  4. The texture feature extraction method according to claim 3, characterized in that compressing each of the obtained sub-images comprises:
    transforming each sub-image into the frequency domain by discrete cosine transform, and taking the low-frequency region of each sub-image in the frequency domain as the corresponding compressed sub-image.
  5. The texture feature extraction method according to claim 2, characterized in that, before the step of segmenting the image to be processed using a plurality of preset segmentation schemes, the method further comprises:
    preprocessing the image to be processed by grayscale transformation, noise reduction and Gaussian smoothing.
  6. The texture feature extraction method according to claim 2, characterized in that, after the step of performing the local binary pattern (LBP) operation on the second combined image and taking the resulting LBP feature as the texture feature of the image to be processed, the method further comprises:
    comparing the texture feature with each texture feature in a feature database, and generating a recognition report according to the comparison result.
  7. The texture feature extraction method according to claim 3, characterized in that, after the step of performing the local binary pattern (LBP) operation on the second combined image and taking the resulting LBP feature as the texture feature of the image to be processed, the method further comprises:
    comparing the texture feature with each texture feature in a feature database, and generating a recognition report according to the comparison result.
  8. The texture feature extraction method according to claim 4, characterized in that, after the step of performing the local binary pattern (LBP) operation on the second combined image and taking the resulting LBP feature as the texture feature of the image to be processed, the method further comprises:
    comparing the texture feature with each texture feature in a feature database, and generating a recognition report according to the comparison result.
  9. The texture feature extraction method according to claim 5, characterized in that, after the step of performing the local binary pattern (LBP) operation on the second combined image and taking the resulting LBP feature as the texture feature of the image to be processed, the method further comprises:
    comparing the texture feature with each texture feature in a feature database, and generating a recognition report according to the comparison result.
  10. The texture feature extraction method according to claim 4, characterized in that, before the step of segmenting the image to be processed using a plurality of preset segmentation schemes, the method further comprises:
    preprocessing the image to be processed by grayscale transformation, noise reduction and Gaussian smoothing.
  11. The texture feature extraction method according to claim 6, characterized in that the texture feature comprises a texture feature of a face image.
  12. A texture feature extraction apparatus, characterized in that the texture feature extraction apparatus comprises:
    a segmentation module, configured to segment an image to be processed using a plurality of preset segmentation schemes, wherein any sub-image obtained by one segmentation scheme differs from every sub-image obtained by the other segmentation schemes;
    a combination module, configured to combine the sub-images obtained by each segmentation scheme into a respective first combined image, and combine the first combined images into a second combined image;
    an extraction module, configured to perform a local binary pattern (LBP) operation on the second combined image and take the resulting LBP feature as the texture feature of the image to be processed.
  13. The texture feature extraction apparatus according to claim 12, characterized in that the texture feature extraction apparatus further comprises a compression module, configured to compress each of the obtained sub-images.
  14. The texture feature extraction apparatus according to claim 13, characterized in that the compression module is further configured to transform each sub-image into the frequency domain by discrete cosine transform, and take the low-frequency region of each sub-image in the frequency domain as the corresponding compressed sub-image.
  15. The texture feature extraction apparatus according to claim 12, characterized in that the texture feature extraction apparatus further comprises a preprocessing module, configured to preprocess the image to be processed by grayscale transformation, noise reduction and Gaussian smoothing.
  16. The texture feature extraction apparatus according to claim 12, characterized in that the texture feature extraction apparatus further comprises a comparison module, configured to compare the texture feature with each texture feature in a feature database and generate a recognition report according to the comparison result.
  17. The texture feature extraction apparatus according to claim 13, characterized in that the texture feature extraction apparatus further comprises a comparison module, configured to compare the texture feature with each texture feature in a feature database and generate a recognition report according to the comparison result.
  18. The texture feature extraction apparatus according to claim 14, characterized in that the texture feature extraction apparatus further comprises a comparison module, configured to compare the texture feature with each texture feature in a feature database and generate a recognition report according to the comparison result.
  19. The texture feature extraction apparatus according to claim 15, characterized in that the texture feature extraction apparatus further comprises a comparison module, configured to compare the texture feature with each texture feature in a feature database and generate a recognition report according to the comparison result.
  20. The texture feature extraction apparatus according to claim 14, characterized in that the texture feature extraction apparatus further comprises a preprocessing module, configured to preprocess the image to be processed by grayscale transformation, noise reduction and Gaussian smoothing.
PCT/CN2016/084837 2015-09-11 2016-06-03 Texture feature extraction method and apparatus WO2017041552A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510581486.7 2015-09-11
CN201510581486.7A CN105224919B (zh) 2015-09-11 2015-09-11 Texture feature extraction method and apparatus

Publications (1)

Publication Number Publication Date
WO2017041552A1 true WO2017041552A1 (zh) 2017-03-16

Family

ID=54993879

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/084837 WO2017041552A1 (zh) 2015-09-11 2016-06-03 Texture feature extraction method and apparatus

Country Status (2)

Country Link
CN (1) CN105224919B (zh)
WO (1) WO2017041552A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109840914A (zh) * 2019-02-28 2019-06-04 South China University of Technology A user-interaction-based texture segmentation method

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105224919B (zh) 2015-09-11 2019-02-26 Shenzhen TCL New Technology Co., Ltd. Texture feature extraction method and apparatus
CN108036746B (zh) * 2017-12-26 2019-08-06 Taiyuan University of Technology Spectral-method Gabor transform approach for surface texture analysis of carbon fiber composites
CN108897512B (zh) * 2018-07-06 2024-07-05 北京鲸世科技有限公司 Image sending method and apparatus, and tiled display system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731420A (zh) * 2005-08-19 2006-02-08 Harbin Institute of Technology Extraction and verification method for slant-wavelet image fingerprints
CN101206715A (zh) * 2006-12-18 2008-06-25 Sony Corporation Face recognition device and method, Gabor filter application device, and computer program
US20100310153A1 (en) * 2007-10-10 2010-12-09 Mitsubishi Electric Corporation Enhanced image identification
CN103761515A (zh) * 2014-01-27 2014-04-30 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences LBP-based face feature extraction method and apparatus
CN104680127A (zh) * 2014-12-18 2015-06-03 Wingtech Communications Co., Ltd. Gesture recognition method and system
CN105224919A (zh) 2015-09-11 2016-01-06 Shenzhen TCL New Technology Co., Ltd. Texture feature extraction method and apparatus


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109840914A (zh) * 2019-02-28 2019-06-04 South China University of Technology A user-interaction-based texture segmentation method
CN109840914B (zh) * 2019-02-28 2022-12-16 South China University of Technology A user-interaction-based texture segmentation method

Also Published As

Publication number Publication date
CN105224919B (zh) 2019-02-26 Texture feature extraction method and apparatus
CN105224919A (zh) 2016-01-06 Texture feature extraction method and apparatus

Similar Documents

Publication Publication Date Title
WO2017080196A1 (zh) Video classification method and apparatus based on face images
WO2017041552A1 (zh) Texture feature extraction method and apparatus
Bourlai et al. Restoring degraded face images: A case study in matching faxed, printed, and scanned photos
WO2017092272A1 (zh) Face recognition method and apparatus
Muduli et al. A novel technique for wall crack detection using image fusion
US20100079453A1 3D Depth Generation by Vanishing Line Detection
KR20090109012A (ko) Image processing method
CN115100077B (zh) Image enhancement method and apparatus
Alabbasi et al. Human face detection from images, based on skin color
Lu et al. A shadow removal method for tesseract text recognition
CN111079689B (zh) Fingerprint image enhancement method
JP2000348173A (ja) Lip extraction method
CN108647680B (zh) Image positioning-frame detection method and apparatus
CN108550119B (zh) Image denoising method incorporating edge information
CN110599511B (zh) Edge-preserving image filtering method, apparatus and storage medium
Kalfon et al. A new approach to texture recognition using decorrelation stretching
CN109934190B (zh) Adaptive texture restoration method for strongly lit face images based on a deformed Gaussian kernel function
CN111369452A (zh) Optimized extraction method for locally damaged points in large-area images
Ndjiki-Nya et al. Automatic structure-aware inpainting for complex image content
TWI613903B (zh) Apparatus and method for building a single-image depth map by combining wavelet transform and edge detection
CN110942440A (zh) Image sharpening method and apparatus
Borawski et al. An algorithm for the automatic estimation of image orientation
CN104867149B (zh) Recaptured-image identification method based on local planar linear points
Vora et al. Enhanced face recognition using 8-Connectivity-of-Skin-Region and Standard-Deviation-based-Pose-Detection as preprocessing techniques
TWI847016B (zh) Computer-implemented processing method for improving detection of contact-lens edge defects and other defects

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16843490

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16843490

Country of ref document: EP

Kind code of ref document: A1