CN110348457B - Image feature extraction method, image feature extraction device, electronic equipment and storage medium


Info

Publication number
CN110348457B
Authority
CN
China
Prior art keywords
human body
body part
perspective image
image
feature
Prior art date
Legal status
Active
Application number
CN201910553892.0A
Other languages
Chinese (zh)
Other versions
CN110348457A (en)
Inventor
He Zhiqiang (贺志强)
Niu Kai (牛凯)
Xia Chuli (夏楚藜)
Zhang Yijie (张一杰)
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN201910553892.0A
Publication of CN110348457A
Application granted
Publication of CN110348457B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features


Abstract

Embodiments of the invention provide an image feature extraction method, an image feature extraction device, electronic equipment and a storage medium. The method comprises the following steps: acquiring a perspective image of a human body part to be processed; preprocessing the perspective image of the human body part to be processed to obtain a preprocessed human body part perspective image; generating gray-level co-occurrence matrices of the preprocessed human body part perspective image in different preset directions; extracting a plurality of pieces of first feature information of the preprocessed human body part perspective image by using the generated gray-level co-occurrence matrices; extracting second feature information of the preprocessed human body part perspective image; and fusing the plurality of pieces of first feature information with the second feature information to obtain fused feature information. Embodiments of the invention can extract the feature information in a human body part perspective image more comprehensively.

Description

Image feature extraction method, image feature extraction device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image feature extraction method, an image feature extraction device, an electronic device, and a storage medium.
Background
A human body part perspective image is generally obtained by scanning with a CT (Computed Tomography) scanner, a magnetic resonance imaging device, or similar equipment, and typically carries perspective information of a human body part, for example, perspective information of the human skeleton.
A human body part perspective image can be analyzed by extracting the feature information it contains. In existing image feature extraction methods, features in the human body part perspective image are usually marked manually to distinguish different regions of the image.
However, the inventor finds that the prior art has at least the following problems in the process of implementing the invention:
existing image feature extraction methods can only identify a single type of feature in a human body perspective image, such as the texture features of human tissue. Consequently, when the perspective image of a human body part is analyzed through the marked features, only that single marked feature is available for analysis, which limits the analysis.
Disclosure of Invention
An object of the embodiments of the present invention is to provide an image feature extraction method, an image feature extraction device, an electronic device and a storage medium, so as to extract the feature information in a human body part perspective image more comprehensively. The specific technical solutions are as follows:
in a first aspect, an embodiment of the present invention provides an image feature extraction method, where the method includes:
acquiring a perspective image of a human body part to be processed;
preprocessing the perspective image of the human body part to be processed to obtain a preprocessed human body part perspective image, wherein the preprocessed human body part perspective image is a gray image;
generating gray-level co-occurrence matrices of the preprocessed human body part perspective image in different preset directions;
extracting a plurality of pieces of first feature information of the preprocessed human body part perspective image by using the generated gray-level co-occurrence matrices, wherein the first feature information describes overall features of the human body part perspective image, and the plurality of pieces of first feature information include: an angular second moment feature, a contrast feature, an entropy feature, an inverse difference moment feature, an autocorrelation feature and an energy feature, wherein the angular second moment feature represents the degree of uniformity of the preprocessed human body part perspective image, the contrast feature represents the degree of gray-level contrast of the preprocessed human body part perspective image, the entropy feature represents the amount of information in the preprocessed human body part perspective image, the inverse difference moment feature represents the degree of local uniformity of the preprocessed human body part perspective image, the autocorrelation feature represents the degree of correlation among the pixels in the preprocessed human body part perspective image, and the energy feature represents the degree of overall uniformity of the preprocessed human body part perspective image;
extracting second feature information of the preprocessed human body part perspective image, wherein the second feature information describes local features of the human body part in the human body part perspective image;
and fusing the plurality of pieces of first feature information with the second feature information to obtain fused feature information.
Optionally, the step of performing a preprocessing operation on the perspective image of the human body part to be processed to obtain a preprocessed perspective image of the human body part includes:
adjusting the brightness of a human body skeleton perspective image to be processed to a preset brightness to obtain a first image;
removing noise in the first image by means of an erosion operation to obtain a second image;
and converting the second image into a gray image to obtain a preprocessed human skeleton perspective image.
Optionally, the step of generating gray-level co-occurrence matrices of the preprocessed human body part perspective image in different preset directions includes:
generating gray-level co-occurrence matrices of the preprocessed human skeleton perspective image in different preset directions, wherein the angles corresponding to the preset directions include: 0°, 30°, 45°, 90°, 135° and 150°.
Optionally, the step of extracting a plurality of pieces of first feature information of the preprocessed human body part perspective image by using the generated gray-level co-occurrence matrices includes:
extracting a plurality of pieces of first feature information of the preprocessed human skeleton perspective image by using the generated gray-level co-occurrence matrices.
Optionally, the step of extracting the second feature information of the preprocessed human body part perspective image includes:
inputting the preprocessed human skeleton perspective image into a pre-trained convolutional neural network to obtain second feature information of the preprocessed human skeleton perspective image, wherein the second feature information includes: morphological features, density features and envelope structure features of a preset region of the human body part.
Optionally, the step of fusing the plurality of pieces of first feature information with the second feature information to obtain fused feature information includes:
fusing the plurality of pieces of first feature information with the second feature information to obtain a feature vector, wherein the feature vector carries the fused feature information.
In a second aspect, an embodiment of the present invention provides an image feature extraction apparatus, including:
the acquisition module is used for acquiring a perspective image of a human body part to be processed;
the preprocessing module is used for preprocessing the perspective image of the human body part to be processed to obtain a preprocessed human body part perspective image, wherein the preprocessed human body part perspective image is a gray image;
the generation module is used for generating gray-level co-occurrence matrices of the preprocessed human body part perspective image in different preset directions;
the first extraction module is used for extracting, by using the generated gray-level co-occurrence matrices, a plurality of pieces of first feature information of the preprocessed human body part perspective image, wherein the first feature information describes overall features of the human body part perspective image, and the plurality of pieces of first feature information include: an angular second moment feature, a contrast feature, an entropy feature, an inverse difference moment feature, an autocorrelation feature and an energy feature, wherein the angular second moment feature represents the degree of uniformity of the preprocessed human body part perspective image, the contrast feature represents the degree of gray-level contrast of the preprocessed human body part perspective image, the entropy feature represents the amount of information in the preprocessed human body part perspective image, the inverse difference moment feature represents the degree of local uniformity of the preprocessed human body part perspective image, the autocorrelation feature represents the degree of correlation among the pixels in the preprocessed human body part perspective image, and the energy feature represents the degree of overall uniformity of the preprocessed human body part perspective image;
the second extraction module is used for extracting second feature information of the preprocessed human body part perspective image, wherein the second feature information describes local features of the human body part in the human body part perspective image;
and the fusion module is used for fusing the plurality of pieces of first feature information with the second feature information to obtain fused feature information.
Optionally, the preprocessing module includes:
the adjusting submodule is used for adjusting the brightness of the human skeleton perspective image to be processed to a preset brightness to obtain a first image;
the removing submodule is used for removing the noise in the first image by means of an erosion operation to obtain a second image;
and the conversion submodule is used for converting the second image into a gray image to obtain a preprocessed human skeleton perspective image.
Optionally, the generating module is specifically configured to:
generate gray-level co-occurrence matrices of the preprocessed human skeleton perspective image in different preset directions, where the angles corresponding to the preset directions include: 0°, 30°, 45°, 90°, 135° and 150°.
Optionally, the first extraction module is specifically configured to:
extract a plurality of pieces of first feature information of the preprocessed human skeleton perspective image by using the generated gray-level co-occurrence matrices.
Optionally, the second extraction module is specifically configured to:
input the preprocessed human skeleton perspective image into a pre-trained convolutional neural network to obtain second feature information of the preprocessed human skeleton perspective image, where the second feature information includes: morphological features, density features and envelope structure features of a preset region of the human body part.
Optionally, the fusion module is specifically configured to:
fuse the plurality of pieces of first feature information with the second feature information to obtain a feature vector, where the feature vector carries the fused feature information.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with one another through the communication bus; the memory is configured to store a computer program; and the processor, when executing the program stored in the memory, implements the method steps of the image feature extraction method provided in the first aspect of the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium in which a computer program is stored, where the computer program, when executed by a processor, implements the method steps of the image feature extraction method provided in the first aspect of the embodiments of the present invention.
Embodiments of the invention provide an image feature extraction method, an image feature extraction device, electronic equipment and a storage medium. After the perspective image of the human body part to be processed is obtained, preprocessing it yields a gray image that meets the requirements of feature extraction. A plurality of pieces of first feature information expressing the overall features of the perspective image are then extracted by generating gray-level co-occurrence matrices of the preprocessed image in different preset directions, second feature information of the preprocessed image is extracted, and the pieces of first feature information are fused with the second feature information to obtain fused feature information. Because the fused feature information contains richer feature information of the human body part perspective image, embodiments of the invention can extract the feature information in the image more comprehensively. Of course, it is not necessary for any product or method of practicing the invention to achieve all of the above-described advantages at the same time.
Drawings
To describe the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of an image feature extraction method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of step S102 in the image feature extraction method according to the embodiment of the present invention;
fig. 3 is a schematic structural diagram of an image feature extraction apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a preprocessing module in the image feature extraction apparatus according to the embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments derived by those skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides an image feature extraction method, where the process may include the following steps:
s101, obtaining a perspective image of a human body part to be processed.
The perspective image of the human body part to be processed in this embodiment may be an image obtained by scanning with a CT scanner, a nuclear magnetic resonance device or another instrument. Such an image usually contains perspective information of a human body part, and because it is the image to undergo feature extraction it is referred to as the perspective image of the human body part to be processed.
S102, preprocessing the perspective image of the human body part to be processed to obtain a preprocessed perspective image of the human body part.
Because the perspective image of the human body part to be processed contains considerable noise, which would interfere with feature extraction, it must first be preprocessed to obtain an image that meets the feature extraction requirements; the preprocessed human body part perspective image may be a gray image.
As an optional implementation manner of the embodiment of the present invention, as shown in fig. 2, the step S102 may specifically include the following steps:
s1021, adjusting the brightness of the human skeleton perspective image to be processed to be preset brightness, and obtaining a first image.
And S1022, removing noise in the first image by adopting corrosion operation to obtain a second image.
And S1023, converting the second image into a gray image to obtain a preprocessed human skeleton perspective image.
In this embodiment, the brightness of the human skeleton perspective image to be processed is first adjusted to a preset brightness, so that all processed images share the same brightness; this yields the first image. Noise in the first image is then removed with an erosion operation to obtain the second image, and the second image is converted into a gray image to obtain the preprocessed human skeleton perspective image. This facilitates subsequent feature extraction and improves both the operation speed and the judgment accuracy.
In the erosion operation, the erosion of structure block A by structure block B may be defined as:
$$A \ominus B = \{\, z \mid (B)_z \subseteq A \,\}$$
where $A \ominus B$ denotes structure block A eroded by structure block B, $z$ is a translation of structure block B and, assuming the coordinate of $z$ is $(x_0, y_0)$, $(B)_z$ denotes structure block B with the abscissa of every element increased by $x_0$ and the ordinate increased by $y_0$. By adjusting the size of structure block B, smaller noise points in the image can be removed, which makes the subsequent feature extraction results more accurate.
Optionally, the preprocessed human skeleton perspective image in the embodiment of the present invention may specifically be an image with 32 gray levels.
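As a purely illustrative sketch (not part of the claimed subject matter), the preprocessing of steps S1021 to S1023 could be implemented in Python roughly as follows; the OpenCV routines, the mean-scaling form of the brightness adjustment, the 3×3 structure block and all names are assumptions of the sketch rather than details fixed by this embodiment:

```python
import cv2
import numpy as np

def preprocess(image_bgr, target_mean=128.0, kernel_size=3, levels=32):
    """Sketch of S1021-S1023: brightness normalization, erosion, gray quantization."""
    # S1021: scale intensities so the mean brightness matches a preset value
    img = image_bgr.astype(np.float32)
    img *= target_mean / max(float(img.mean()), 1e-6)
    img = np.clip(img, 0, 255).astype(np.uint8)

    # S1022: erosion by structure block B removes noise specks smaller than B
    structure_b = cv2.getStructuringElement(cv2.MORPH_RECT,
                                            (kernel_size, kernel_size))
    img = cv2.erode(img, structure_b)

    # S1023: convert to gray and quantize to 32 gray levels for the GLCM step
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return (gray.astype(np.int32) * levels // 256).astype(np.uint8)
```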
Referring to fig. 1, in S103, gray-level co-occurrence matrices of the preprocessed human body part perspective image in different preset directions are generated.
A gray-level co-occurrence matrix is obtained by counting, along a preset direction, the joint gray levels of pairs of pixels separated by a preset distance in the image; by studying the spatial correlation of gray levels, the matrix can describe the texture of the image. The process of generating a gray-level co-occurrence matrix for an image is well established and is not described again here.
As an optional implementation manner of the embodiment of the present invention, gray-level co-occurrence matrices of the preprocessed human skeleton perspective image in different preset directions may be generated.
In the embodiment of the invention, for a preprocessed human skeleton perspective image of size $M \times N$ (where M and N are the numbers of rows and columns of pixels in the image), take any pixel $(x, y)$ in the image, set an offset $(x_0, y_0)$, and read the gray values of $(x, y)$ and $(x + x_0, y + y_0)$ as the pair $(g_1, g_2)$. Moving $(x, y)$ over the whole image, count the number of occurrences of each pair $(g_1, g_2)$ and normalize the counts into probabilities, which yields a gray-level co-occurrence matrix $P$ of dimension $32 \times 32$. In the embodiment of the present invention, the offset may be preset, for example as $(x_0, y_0) = (1, 0)$, $(\sqrt{3}, 1)$, $(1, 1)$, $(0, 1)$, $(-1, 1)$ or $(-\sqrt{3}, 1)$. Taking different values of $(x_0, y_0)$ yields gray-level co-occurrence matrices in different directions. In this embodiment, the angles corresponding to the preset directions are 0°, 30°, 45°, 90°, 135° and 150°, so 6 gray-level co-occurrence matrices can be generated in these six directions.
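A minimal sketch of this matrix construction, using the six offsets above with the $\sqrt{3}$ components rounded to whole pixels, might look as follows in Python; the rounding convention and all function and variable names are assumptions of the sketch:

```python
import numpy as np

SQRT3 = np.sqrt(3.0)
# (x0, y0) offsets for 0°, 30°, 45°, 90°, 135° and 150°
OFFSETS = {0: (1, 0), 30: (SQRT3, 1), 45: (1, 1),
           90: (0, 1), 135: (-1, 1), 150: (-SQRT3, 1)}

def glcm(gray_32, x0, y0, levels=32):
    """Count gray-value pairs (g1, g2) at displacement (x0, y0), normalized."""
    dx, dy = int(round(x0)), int(round(y0))  # sqrt(3) ~ 2 pixels after rounding
    h, w = gray_32.shape
    P = np.zeros((levels, levels), dtype=np.float64)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            P[gray_32[y, x], gray_32[y + dy, x + dx]] += 1.0
    return P / max(P.sum(), 1.0)

# demo on a random 32-level image; in practice gray_32 is the preprocessed image
gray_32 = np.random.randint(0, 32, size=(64, 64), dtype=np.uint8)
matrices = {angle: glcm(gray_32, *offset) for angle, offset in OFFSETS.items()}
```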
Referring to fig. 1, in S104, a plurality of pieces of first feature information of the preprocessed human body part perspective image are extracted by using the generated gray-level co-occurrence matrices.
After the gray-level co-occurrence matrices are obtained, a plurality of pieces of first feature information of the preprocessed human body part perspective image can be extracted from them. The first feature information describes overall features of the human body part perspective image and includes: an angular second moment feature, a contrast feature, an entropy feature, an inverse difference moment feature, an autocorrelation feature and an energy feature.
The angular second moment is defined as follows:
$$\mathrm{ASM} = \sum_i \sum_j P(i,j)^2$$
In the formula, ASM denotes the angular second moment, $P$ is the gray-level co-occurrence matrix, $P(i,j)$ is the element value in the matrix, and $i$ and $j$ index the rows and columns of the matrix. The angular second moment is the sum of the squares of the elements of the gray-level co-occurrence matrix; it reflects the uniformity of the image, and the more uniform the image, the larger the angular second moment.
The contrast is defined as follows:
$$\mathrm{CON} = \sum_i \sum_j (i-j)^2\, P(i,j)$$
In the formula, CON denotes the contrast, used to represent the degree of gray-level contrast of the preprocessed human body part perspective image; $P$ is the gray-level co-occurrence matrix, $P(i,j)$ is the element value in the matrix, and $i$ and $j$ index the rows and columns of the matrix.
The entropy is defined as follows:
$$\mathrm{ENT} = -\sum_i \sum_j P(i,j)\, \log P(i,j)$$
In the formula, ENT denotes the entropy, which reflects the amount of information in the image: the more information the image contains, the larger its entropy, while a smaller entropy indicates a smoother texture. $P$ is the gray-level co-occurrence matrix, $P(i,j)$ is the element value in the matrix, and $i$ and $j$ index the rows and columns of the matrix.
The inverse difference moment is defined as follows:
$$\mathrm{IDM} = \sum_i \sum_j \frac{P(i,j)}{1 + (i-j)^2}$$
In the formula, IDM denotes the inverse difference moment, which reflects the local uniformity of the image: the higher the local uniformity, the larger the IDM. $P$ is the gray-level co-occurrence matrix, $P(i,j)$ is the element value in the matrix, and $i$ and $j$ index the rows and columns of the matrix.
The autocorrelation is defined as follows:
$$\mathrm{COR} = \frac{\sum_i \sum_j (i \cdot j)\, P(i,j) - u_i u_j}{s_i s_j}$$
where
$$u_i = \sum_i i \sum_j P(i,j), \qquad u_j = \sum_j j \sum_i P(i,j),$$
$$s_i = \Big(\sum_i (i - u_i)^2 \sum_j P(i,j)\Big)^{1/2}, \qquad s_j = \Big(\sum_j (j - u_j)^2 \sum_i P(i,j)\Big)^{1/2}.$$
In the formula, COR denotes the autocorrelation, used to represent the degree of correlation among the pixels in the preprocessed human body part perspective image; $P$ is the gray-level co-occurrence matrix, $P(i,j)$ is the element value in the matrix, $i$ and $j$ index the rows and columns of the matrix, $u_i$ and $u_j$ denote the means of the matrix along the $i$ and $j$ directions, and $s_i$ and $s_j$ denote the corresponding standard deviations.
The energy is defined as follows:
$$\mathrm{Energy} = \sqrt{\mathrm{ASM}}$$
In the formula, Energy denotes the energy and ASM denotes the angular second moment; the energy is used to represent the degree of overall uniformity of the preprocessed human body part perspective image.
In the embodiment of the invention, the 6 first features are extracted from each of the gray-level co-occurrence matrices in the 6 directions, so 36 features are extracted per image. These features comprehensively reflect the feature information of different preset regions in the perspective image of the human body part to be processed, such as the bone density information of a bone contour region and the envelope structure of a tumor tissue region.
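Under the definitions above, the six first features can be computed from a normalized co-occurrence matrix with a few lines of NumPy. The following sketch (function and variable names are assumptions) illustrates the computation; concatenating the outputs for the six directional matrices yields the 36-dimensional first-feature vector:

```python
import numpy as np

def first_features(P):
    """Six first features of one normalized gray-level co-occurrence matrix P."""
    i, j = np.indices(P.shape, dtype=np.float64)
    asm = np.sum(P ** 2)                              # angular second moment
    con = np.sum((i - j) ** 2 * P)                    # contrast
    log_p = np.log(P, where=P > 0, out=np.zeros_like(P))
    ent = -np.sum(P * log_p)                          # entropy
    idm = np.sum(P / (1.0 + (i - j) ** 2))            # inverse difference moment
    u_i, u_j = np.sum(i * P), np.sum(j * P)           # means along i and j
    s_i = np.sqrt(np.sum((i - u_i) ** 2 * P))
    s_j = np.sqrt(np.sum((j - u_j) ** 2 * P))
    cor = (np.sum(i * j * P) - u_i * u_j) / max(s_i * s_j, 1e-12)  # autocorrelation
    energy = np.sqrt(asm)                             # energy = sqrt(ASM)
    return np.array([asm, con, ent, idm, cor, energy])

# 6 features x 6 directions = 36 first-feature values per image, e.g.:
# vec36 = np.concatenate([first_features(P) for P in matrices.values()])
```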
As an optional implementation manner of the embodiment of the present invention, the human body part perspective image may be a human skeleton perspective image, and a plurality of pieces of first feature information of the preprocessed human skeleton perspective image may be extracted by using the generated gray-level co-occurrence matrices.
Referring to fig. 1, S105, second feature information of the preprocessed human body part perspective image is extracted.
For the preprocessed human body part perspective image, the embodiment of the present invention may extract second feature information of the image, where the second feature information is a local detail feature of the human body part in the human body part perspective image, such as a morphological feature, a density feature, and an envelope structure feature.
In the embodiment of the invention, the preprocessed human body part perspective image can be input into a pre-trained convolutional neural network to obtain the second feature information. In this embodiment, a 34-layer residual network may be selected as the convolutional neural network model; after training, a high-accuracy convolutional neural network model is obtained, and the output vector of its fully connected layer is extracted with this model as the second feature information. It should be noted that the convolutional neural network in the embodiment of the present invention may adopt an existing network, for example an existing ResNet network, a network pre-trained on ImageNet, or GoogLeNet, and may be trained with an existing training process, which is not described again here.
The human body part perspective image may be a human skeleton perspective image; the preprocessed human skeleton perspective image can be input into the pre-trained convolutional neural network to obtain second feature information of the preprocessed human skeleton perspective image, where the second feature information includes: morphological features, density features and envelope structure features of a preset region of the human body part. The preset region may be a predetermined region such as a bone contour region or a tumor tissue region.
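As a hedged sketch of this step, the fully connected layer's output of a torchvision ResNet-34 could be read out as below. The ImageNet weights stand in for a checkpoint that would in practice be fine-tuned on labeled human skeleton perspective images, and the preprocessing constants and names are assumptions of the sketch:

```python
import torch
from torchvision import models, transforms

# ResNet-34 backbone; generic ImageNet weights used here only as a placeholder
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.eval()

prep = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Grayscale(num_output_channels=3),  # gray image -> 3-channel input
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def second_features(gray_image_np):
    """Return the fully connected layer's output vector as second features."""
    x = prep(gray_image_np).unsqueeze(0)        # shape (1, 3, 224, 224)
    with torch.no_grad():
        return model(x).squeeze(0).numpy()      # FC-layer output vector
```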
Referring to fig. 1, S106, multiple pieces of first feature information and second feature information are fused to obtain fused feature information.
The pieces of first feature information and the second feature information may be represented in the form of feature vectors, and the feature fusion process may be a process of splicing these feature vectors into a single vector; the spliced vector carries the fused feature information.
Specifically, the 36 feature values extracted from each perspective image of the human body part to be processed and the feature vector extracted by the convolutional neural network can be spliced into one feature vector.
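A minimal sketch of this splicing follows; the per-source standardization is an added assumption of the sketch, intended to balance the very different scales of the GLCM statistics and the CNN activations, and is not required by the embodiment:

```python
import numpy as np

def fuse(first_vec, second_vec):
    """Splice the 36 GLCM features and the CNN feature vector into one vector."""
    f1 = np.asarray(first_vec, dtype=np.float64)
    f2 = np.asarray(second_vec, dtype=np.float64)
    f1 = (f1 - f1.mean()) / (f1.std() + 1e-12)   # standardize each source (assumed)
    f2 = (f2 - f2.mean()) / (f2.std() + 1e-12)
    return np.concatenate([f1, f2])              # fused feature vector
```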
In the image feature extraction method provided by the embodiment of the present invention, after the perspective image of the human body part to be processed is obtained, preprocessing it yields a gray image that meets the requirements of feature extraction. A plurality of pieces of first feature information expressing the overall features of the perspective image are then extracted by generating gray-level co-occurrence matrices of the preprocessed image in different preset directions, second feature information of the preprocessed image is extracted, and the pieces of first feature information are fused with the second feature information to obtain fused feature information. Because the fused feature information contains richer feature information of the human body part perspective image, the embodiment of the present invention can extract the feature information in the image more comprehensively.
A specific embodiment of an image feature extraction device provided in an embodiment of the present invention corresponds to the flow shown in fig. 1, and referring to fig. 3, fig. 3 is a schematic structural diagram of an image feature extraction device according to an embodiment of the present invention, including:
an obtaining module 301, configured to obtain a perspective image of a human body part to be processed.
The preprocessing module 302 is configured to preprocess the perspective image of the human body part to be processed to obtain a preprocessed human body part perspective image, where the preprocessed human body part perspective image is a grayscale image.
The generation module 303 is configured to generate gray-level co-occurrence matrices of the preprocessed human body part perspective image in different preset directions.
The first extraction module 304 is configured to extract, by using the generated gray-level co-occurrence matrices, a plurality of pieces of first feature information of the preprocessed human body part perspective image, where the first feature information describes overall features of the human body part perspective image, and the plurality of pieces of first feature information include: an angular second moment feature, a contrast feature, an entropy feature, an inverse difference moment feature, an autocorrelation feature and an energy feature, where the angular second moment feature represents the degree of uniformity of the preprocessed human body part perspective image, the contrast feature represents the degree of gray-level contrast of the preprocessed human body part perspective image, the entropy feature represents the amount of information in the preprocessed human body part perspective image, the inverse difference moment feature represents the degree of local uniformity of the preprocessed human body part perspective image, the autocorrelation feature represents the degree of correlation among the pixels in the preprocessed human body part perspective image, and the energy feature represents the degree of overall uniformity of the preprocessed human body part perspective image.
The second extraction module 305 is configured to extract second feature information of the preprocessed human body part perspective image, where the second feature information describes local features of the human body part in the human body part perspective image.
The fusion module 306 is configured to fuse the plurality of pieces of first feature information with the second feature information to obtain fused feature information.
As shown in fig. 4, the preprocessing module 302 includes:
the adjusting submodule 3021 is configured to adjust the brightness of the human skeleton perspective image to be processed to a preset brightness, so as to obtain a first image.
And the removing submodule 3022 is configured to remove noise in the first image by using an erosion operation to obtain a second image.
The conversion submodule 3023 is configured to convert the second image into a gray image to obtain a preprocessed human skeleton perspective image.
The generation module is specifically configured to:
generate gray-level co-occurrence matrices of the preprocessed human skeleton perspective image in different preset directions, where the angles corresponding to the preset directions include: 0°, 30°, 45°, 90°, 135° and 150°.
The first extraction module is specifically configured to:
extract a plurality of pieces of first feature information of the preprocessed human skeleton perspective image by using the generated gray-level co-occurrence matrices.
The second extraction module is specifically configured to:
input the preprocessed human skeleton perspective image into a pre-trained convolutional neural network to obtain second feature information of the preprocessed human skeleton perspective image, where the second feature information includes: morphological features, density features and envelope structure features of a preset region of the human body part.
The fusion module is specifically configured to:
fuse the plurality of pieces of first feature information with the second feature information to obtain a feature vector, where the feature vector carries the fused feature information.
With the image feature extraction device provided by the embodiment of the present invention, after the perspective image of the human body part to be processed is obtained, preprocessing it yields a gray image that meets the requirements of feature extraction. A plurality of pieces of first feature information expressing the overall features of the perspective image are then extracted by generating gray-level co-occurrence matrices of the preprocessed image in different preset directions, second feature information of the preprocessed image is extracted, and the pieces of first feature information are fused with the second feature information to obtain fused feature information. Because the fused feature information contains richer feature information of the human body part perspective image, the embodiment of the present invention can extract the feature information in the image more comprehensively.
An embodiment of the present invention further provides an electronic device, as shown in fig. 5, including a processor 501, a communication interface 502, a memory 503 and a communication bus 504, where the processor 501, the communication interface 502 and the memory 503 communicate with one another through the communication bus 504;
the memory 503 is configured to store a computer program;
the processor 501, when executing the program stored in the memory 503, implements the following steps:
acquiring a perspective image of a human body part to be processed;
preprocessing the perspective image of the human body part to be processed to obtain a preprocessed human body part perspective image, where the preprocessed human body part perspective image is a gray image;
generating gray-level co-occurrence matrices of the preprocessed human body part perspective image in different preset directions;
extracting a plurality of pieces of first feature information of the preprocessed human body part perspective image by using the generated gray-level co-occurrence matrices, where the first feature information describes overall features of the human body part perspective image;
extracting second feature information of the preprocessed human body part perspective image, where the second feature information describes local features of the human body part in the human body part perspective image;
and fusing the plurality of pieces of first feature information with the second feature information to obtain fused feature information.
With the electronic equipment provided by the embodiment of the present invention, after the perspective image of the human body part to be processed is obtained, preprocessing it yields a gray image that meets the requirements of feature extraction. A plurality of pieces of first feature information expressing the overall features of the perspective image are then extracted by generating gray-level co-occurrence matrices of the preprocessed image in different preset directions, second feature information of the preprocessed image is extracted, and the pieces of first feature information are fused with the second feature information to obtain fused feature information. Because the fused feature information contains richer feature information of the human body part perspective image, the embodiment of the present invention can extract the feature information in the image more comprehensively.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
An embodiment of the present invention further provides a computer-readable storage medium in which a computer program is stored; the computer program, when executed by a processor, implements the following steps:
acquiring a perspective image of a human body part to be processed;
preprocessing the perspective image of the human body part to be processed to obtain a preprocessed human body part perspective image, where the preprocessed human body part perspective image is a gray image;
generating gray-level co-occurrence matrices of the preprocessed human body part perspective image in different preset directions;
extracting a plurality of pieces of first feature information of the preprocessed human body part perspective image by using the generated gray-level co-occurrence matrices, where the first feature information describes overall features of the human body part perspective image;
extracting second feature information of the preprocessed human body part perspective image, where the second feature information describes local features of the human body part in the human body part perspective image;
and fusing the plurality of pieces of first feature information with the second feature information to obtain fused feature information.
With the computer-readable storage medium provided by the embodiment of the present invention, after the perspective image of the human body part to be processed is acquired, preprocessing it yields a gray image that meets the requirements of feature extraction. A plurality of pieces of first feature information expressing the overall features of the perspective image are then extracted by generating gray-level co-occurrence matrices of the preprocessed image in different preset directions, second feature information of the preprocessed image is extracted, and the pieces of first feature information are fused with the second feature information to obtain fused feature information. Because the fused feature information contains richer feature information of the human body part perspective image, the embodiment of the present invention can extract the feature information in the image more comprehensively.
For the apparatus/electronic device/storage medium embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to part of the description of the method embodiment.
It should be noted that, the apparatus, the electronic device and the storage medium according to the embodiments of the present invention are respectively an apparatus, an electronic device and a storage medium to which the above-mentioned image feature extraction method is applied, and all embodiments of the above-mentioned image feature extraction method are applicable to the apparatus, the electronic device and the storage medium, and can achieve the same or similar beneficial effects.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (7)

1. An image feature extraction method, characterized in that the method comprises:
acquiring a perspective image of a human body part to be processed;
preprocessing the perspective image of the human body part to be processed to obtain a preprocessed human body part perspective image, wherein the preprocessed human body part perspective image is a gray image;
generating gray-level co-occurrence matrices of the preprocessed human body part perspective image in different preset directions;
extracting a plurality of pieces of first feature information of the preprocessed human body part perspective image by using the generated gray-level co-occurrence matrices, wherein the first feature information describes overall features of the human body part perspective image, and the plurality of pieces of first feature information include: an angular second moment feature, a contrast feature, an entropy feature, an inverse difference moment feature, an autocorrelation feature and an energy feature, wherein the angular second moment feature represents the degree of uniformity of the preprocessed human body part perspective image, the contrast feature represents the degree of gray-level contrast of the preprocessed human body part perspective image, the entropy feature represents the amount of information in the preprocessed human body part perspective image, the inverse difference moment feature represents the degree of local uniformity of the preprocessed human body part perspective image, the autocorrelation feature represents the degree of correlation among the pixels in the preprocessed human body part perspective image, and the energy feature represents the degree of overall uniformity of the preprocessed human body part perspective image;
extracting second feature information of the preprocessed human body part perspective image, wherein the second feature information describes local features of the human body part in the human body part perspective image;
fusing the plurality of pieces of first feature information with the second feature information to obtain fused feature information;
the step of preprocessing the perspective image of the human body part to be processed to obtain a preprocessed perspective image of the human body part comprises the following steps:
adjusting the brightness of a human body skeleton perspective image to be processed to a preset brightness to obtain a first image;
removing noise in the first image by means of an erosion operation to obtain a second image;
converting the second image into a gray image to obtain a preprocessed human skeleton perspective image;
in the erosion operation, the erosion of structure block A by structure block B is defined as:
$$A \ominus B = \{\, z \mid (B)_z \subseteq A \,\}$$
wherein $A \ominus B$ indicates that structure block A is eroded by structure block B, $z$ is a translation of structure block B and, assuming the coordinate of $z$ is $(x_0, y_0)$, $(B)_z$ indicates structure block B with the abscissa of every element increased by $x_0$ and the ordinate increased by $y_0$;
the step of extracting the second feature information of the preprocessed human body part perspective image comprises the following steps:
inputting the preprocessed human skeleton perspective image into a pre-trained convolutional neural network to obtain second feature information of the preprocessed human skeleton perspective image, wherein the second feature information comprises: morphological features, density features and envelope structure features of a preset region of the human body part.
2. The method according to claim 1, wherein the step of generating gray-level co-occurrence matrices of the preprocessed human body part perspective image in different preset directions comprises:
generating gray-level co-occurrence matrices of the preprocessed human skeleton perspective image in different preset directions, wherein the angles corresponding to the preset directions comprise: 0°, 30°, 45°, 90°, 135° and 150°.
3. The method according to claim 2, wherein the step of extracting a plurality of pieces of first feature information of the preprocessed human body part perspective image by using the generated gray-level co-occurrence matrices comprises:
extracting a plurality of pieces of first feature information of the preprocessed human skeleton perspective image by using the generated gray-level co-occurrence matrices.
4. The method according to claim 1, wherein the step of fusing the plurality of pieces of first feature information with the second feature information to obtain fused feature information comprises:
fusing the plurality of pieces of first feature information with the second feature information to obtain a feature vector, wherein the feature vector carries the fused feature information.
5. An image feature extraction device characterized by comprising:
the acquisition module is used for acquiring a perspective image of a human body part to be processed;
the preprocessing module is used for preprocessing the perspective image of the human body part to be processed to obtain a preprocessed human body part perspective image, wherein the preprocessed human body part perspective image is a gray image;
the generation module is used for generating gray-level co-occurrence matrices of the preprocessed human body part perspective image in different preset directions;
the first extraction module is used for extracting, by using the generated gray-level co-occurrence matrices, a plurality of pieces of first feature information of the preprocessed human body part perspective image, wherein the first feature information describes overall features of the human body part perspective image, and the plurality of pieces of first feature information include: an angular second moment feature, a contrast feature, an entropy feature, an inverse difference moment feature, an autocorrelation feature and an energy feature, wherein the angular second moment feature represents the degree of uniformity of the preprocessed human body part perspective image, the contrast feature represents the degree of gray-level contrast of the preprocessed human body part perspective image, the entropy feature represents the amount of information in the preprocessed human body part perspective image, the inverse difference moment feature represents the degree of local uniformity of the preprocessed human body part perspective image, the autocorrelation feature represents the degree of correlation among the pixels in the preprocessed human body part perspective image, and the energy feature represents the degree of overall uniformity of the preprocessed human body part perspective image;
the second extraction module is used for extracting second feature information of the preprocessed human body part perspective image, wherein the second feature information describes local features of the human body part in the human body part perspective image;
the fusion module is used for fusing the plurality of pieces of first feature information with the second feature information to obtain fused feature information;
the preprocessing module comprises:
the adjusting submodule is used for adjusting the brightness of the human skeleton perspective image to be processed to preset brightness to obtain a first image;
the removing submodule is used for removing the noise in the first image by means of an erosion operation to obtain a second image;
the conversion submodule is used for converting the second image into a gray image to obtain a preprocessed human skeleton perspective image;
the second extraction module is specifically configured to input the preprocessed human skeleton perspective image into a pre-trained convolutional neural network, so as to obtain second feature information of the preprocessed human skeleton perspective image, where the second feature information includes: morphological characteristics, density characteristics and enveloping structure characteristics of a preset region of the human body part.
6. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete communication with each other through the communication bus;
the memory is used for storing a computer program;
the processor, when executing the program stored in the memory, implements the method steps of any one of claims 1-4.
7. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1-4.
CN201910553892.0A 2019-06-25 2019-06-25 Image feature extraction method, image feature extraction device, electronic equipment and storage medium Active CN110348457B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910553892.0A CN110348457B (en) 2019-06-25 2019-06-25 Image feature extraction method, image feature extraction device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN110348457A (en) 2019-10-18
CN110348457B (en) 2021-09-21

Family

ID=68182974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910553892.0A Active CN110348457B (en) 2019-06-25 2019-06-25 Image feature extraction method, image feature extraction device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110348457B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115159B (en) * 2023-10-23 2024-03-15 北京壹点灵动科技有限公司 Bone lesion determination device, electronic device, and storage medium


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101669824B (en) * 2009-09-22 2012-01-25 浙江工业大学 Biometrics-based device for detecting indentity of people and identification
CN102640168B (en) * 2009-12-31 2016-08-03 诺基亚技术有限公司 Method and apparatus for facial Feature Localization based on local binary pattern
CN101916443B (en) * 2010-08-19 2012-10-17 中国科学院深圳先进技术研究院 Processing method and system of CT image
EP2991029A1 (en) * 2014-08-29 2016-03-02 Thomson Licensing Method for inserting features into a three-dimensional object and method for obtaining features from a three-dimensional object
CN104732230A (en) * 2015-03-27 2015-06-24 麦克奥迪(厦门)医疗诊断系统有限公司 Pathology image local-feature extracting method based on cell nucleus statistical information

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104282025A (en) * 2014-10-17 2015-01-14 中山大学 Biomedical image feature extraction method
CN105426889A (en) * 2015-11-13 2016-03-23 浙江大学 PCA mixed feature fusion based gas-liquid two-phase flow type identification method
CN106600585A (en) * 2016-12-08 2017-04-26 北京工商大学 Skin condition quantitative evaluation method based on gray level co-occurrence matrix
CN109063208A (en) * 2018-09-19 2018-12-21 桂林电子科技大学 A kind of medical image search method merging various features information
CN109376782A (en) * 2018-10-26 2019-02-22 北京邮电大学 Support vector machines cataract stage division and device based on eye image feature
CN109598709A (en) * 2018-11-29 2019-04-09 东北大学 Mammary gland assistant diagnosis system and method based on fusion depth characteristic

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Feature Extraction Technology for Image Targets (图像目标的特征提取技术研究); Cao Jian et al.; Computer Simulation; 2013-01-31; pp. 409-414 *
Medical Image Classification Fusing Global and Local Features (融合全局和局部特征的医学图像分类); Wu Jingxiang; China Master's Theses Full-text Database, Information Science and Technology; 2011-04-15; Vol. 2011, No. 4; Section 2.4, Chapters 4-6 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant