CN114822781A - Medical image desensitization method based on examination images - Google Patents

Medical image desensitization method based on examination images

Info

Publication number
CN114822781A
Authority
CN
China
Prior art keywords: image, desensitization, scale, gaussian, points
Prior art date
Legal status: Pending (assumption, not a legal conclusion)
Application number
CN202210434694.4A
Other languages
Chinese (zh)
Inventor
王莹 (Wang Ying)
刘玉洁 (Liu Yujie)
Current Assignee
Tangshan Shinow Technology Co ltd
Original Assignee
Tangshan Shinow Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Tangshan Shinow Technology Co ltd
Priority to CN202210434694.4A
Publication of CN114822781A

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00: ICT specially adapted for the handling or processing of medical images
    • G16H30/20: ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218: Protecting access to data to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757: Matching configurations of points or features

Abstract

The present invention relates to a medical image desensitization method based on examination images, comprising the steps of: S1, judging whether the original image is readable; S2, if the original image is readable, preprocessing the provided original image to obtain its binary image; S3, processing the binary image, cropping the sensitive information, performing SIFT feature detection on the examination image to obtain keypoints and image descriptors, and building a desensitization information database; S4, matching the obtained image descriptors against the descriptors of the desensitization information database to locate the information regions requiring desensitization, with fast nearest-neighbor search adopted during feature matching; S5, desensitizing the content following each matched desensitization information region. The invention is reasonably designed, compact in structure and convenient to use.

Description

Medical image desensitization method based on examination images
Technical Field
The invention relates to a medical image desensitization method based on examination images.
Background
At present, with the rapid development of big data and the internet, the volume of network information keeps growing, and China faces great challenges in network information security. One problem of network security in the medical industry is that data encryption measures are not implemented and the necessary system protections are lacking. Medical information contains a large amount of sensitive data; if effective encryption measures are not applied during collection, storage and transmission, the information is at great risk of leakage.
The medical industry produces a large number of examination images containing private patient information such as names, case records and clinic numbers, which must be desensitized. For this private information, the prior art generally opens each image, desensitizes it manually with a tool, and saves the desensitized picture. Applied to large volumes of image data, this procedure is costly and time-consuming. In view of this, the present invention realizes automatic data desensitization of examination images based on computer vision techniques (feature extraction, image preprocessing, etc.).
The main data format of CT image files in the medical industry is DICOM, and the prior art realizes data desensitization for images in that format; however, the medical industry also holds a large amount of examination image data in formats such as png and jpg, whose desensitization the prior art does not address.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a medical image desensitization method based on examination images. Aiming at the current problems that medical desensitization systems are not yet widespread and manual desensitization is inefficient, automatic desensitization of sensitive information in document images is realized with computer vision techniques, developed on the PyCharm platform with the OpenCV library. The desensitization precision of the algorithm is 95%, meeting the requirements of the application scenario.
The method preprocesses the image (binarization, gradient calculation, filtering, etc.), crops the sensitive-information images to build a desensitization information database, extracts features from the image, locates the sensitive information with a feature-matching algorithm, and desensitizes the sensitive information using the obtained position information.
Image binarization: the gray value of a pixel point on the image is set to be 0 or 255, so that the whole image presents an obvious black-and-white effect.
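A minimal sketch of the fixed-threshold binarization described above; the patent does not specify a thresholding rule, so the cutoff of 127 and the function name `binarize` are illustrative assumptions:

```python
import numpy as np

def binarize(gray: np.ndarray, threshold: int = 127) -> np.ndarray:
    """Set each pixel to 0 or 255 by comparing against a fixed threshold,
    so the whole image shows a clear black-and-white effect."""
    return np.where(gray > threshold, 255, 0).astype(np.uint8)
```

In practice an adaptive method (e.g. Otsu) may be preferable for scanned documents with uneven lighting.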
Image feature extraction: feature extraction refers to a method and a process for extracting information which is characteristic in an image by using a computer.
And (3) feature matching: a process of extracting features of an image and then matching the same or similar features.
In order to solve the problems, the technical scheme adopted by the invention is as follows:
a method of medical image desensitization based on examination images, comprising the steps of:
S1, judging whether the original image is readable;
S2, if the original image is readable, preprocessing the provided original image to obtain its binary image;
S3, processing the binary image: the sensitive information is cropped from the preprocessed image, SIFT feature detection is performed to obtain keypoints and image descriptors, and the feature descriptors of the information requiring desensitization form the desensitization information database;
S4, matching the obtained image descriptors against the descriptors of the desensitization information database to locate the information regions requiring desensitization; fast nearest-neighbor search is adopted during feature matching;
S5, desensitizing the content following each matched desensitization information region; if desensitization succeeds, the desensitized image is obtained, and if it fails, the user is reminded that manual intervention is needed.
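Step S5 masks the content that follows a matched region (e.g. the value printed after a "Name:" label). A minimal numpy sketch, assuming the matched label is an axis-aligned rectangle (x, y, w, h) and the sensitive text sits immediately to its right; the function name and the `extent` parameter are illustrative assumptions, not the patent's code:

```python
import numpy as np

def mask_after_region(image: np.ndarray, x: int, y: int, w: int, h: int,
                      extent: int) -> np.ndarray:
    """Black out `extent` pixels of content to the right of a matched
    label region (x, y, w, h) without modifying the input image."""
    out = image.copy()
    out[y:y + h, x + w:x + w + extent] = 0  # masking operation
    return out
```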
The invention is reasonably designed, low-cost, durable, safe and reliable, simple to operate, time- and labor-saving, compact in structure and convenient to use.
Compared with the prior art, the invention has the advantages that: according to the technical scheme provided by the invention, automatic desensitization of the sensitive information of the inspection image is realized through algorithms such as feature extraction and feature matching, the data desensitization can be accurately carried out on the inspection image, the privacy of a patient is protected, and a large amount of manpower and material resources are saved.
The invention automatically desensitizes the sensitive information of examination images, saving labor compared with manual desensitization. By adopting SIFT feature extraction and the FlannBasedMatcher matching method, desensitization is realized on pictures of different scales, with good stability.
Drawings
FIG. 1 is a flow diagram of the examination-image-based medical image data desensitization technique;
FIG. 2 is a flowchart of SIFT feature extraction;
FIG. 3 shows the effect of the binarized image after the image preprocessing of step S2;
FIG. 4 is an example of raw examination image data (example 1);
FIG. 5 shows the desensitization result, using a masking approach, after the examination-image desensitization process;
FIG. 6 is an example of raw examination image data (example 2);
FIG. 7 is an example of raw examination image data (example 3).
Detailed Description
In order to solve the problems of the related art, as shown in figures 1 to 7, the present invention aims to protect the privacy of a patient's examination image through automatic desensitization. To this end, the examination-image-based medical image data desensitization technique includes the following steps:
S1, judging whether the original image is readable;
S2, if the original image is readable, preprocessing the provided original image to obtain its binary image; during preprocessing, the examination image is processed with smoothing, median filtering, edge detection, gradient calculation, equalization and similar algorithms to eliminate irrelevant information, enhance the detectability of relevant information, and simplify the data as far as possible;
S3, processing the binary image: the sensitive information is cropped from the preprocessed image, SIFT feature detection is performed to obtain keypoints and image descriptors, and the feature descriptors of the information requiring desensitization form the desensitization information database;
S4, matching the obtained image descriptors against the descriptors of the desensitization information database to locate the information regions requiring desensitization; fast nearest-neighbor search is adopted during feature matching. The features extracted from the CT image serve as conjugate entities and the extracted feature attributes or description parameters serve as matching entities; image matching by conjugate-entity registration is realized by computing a similarity measure between matching entities, obtaining the position of the sensitive information. The features may be features of the actual picture or derived features of the image;
S5, desensitizing the content following each matched desensitization information region; if desensitization succeeds, the desensitized image is obtained, and if it fails, the user is reminded that manual intervention is needed. The sensitive information in the examination image, such as the patient's name and case number, is desensitized by a masking operation.
Regarding feature extraction:
Common image features include color features, shape features, texture features and edge features.
For examination images, mainly character features are extracted; the paper texture and the embossing of the characters are not considered, and the characters are not affected by paper color or character color. Color and texture features are therefore excluded, and the edge features and shape features of the examination image are mainly extracted.
Edge features: a sobel operator detects whether the examination image contains edges with obvious change or discontinuous regions, and the edge features of the examination image are extracted.
Shape features: the expression of shape features must be based on the segmentation of objects or regions in the image. SIFT is adopted to extract local image feature points in scale space; these points remain invariant to rotation, scaling and brightness change, and retain a degree of stability under viewpoint change, affine transformation and noise.
Scale-invariant feature transform (SIFT) is a computer vision algorithm used to detect and describe local features in an image: it searches for extreme points in scale space and extracts their position, scale and rotation invariants. The algorithm comprises the following stages:
Scale-space extremum detection: image locations are searched over all scales; potential scale- and rotation-invariant interest points are identified through Gaussian difference functions.
Feature point filtering and keypoint localization: at each candidate location, a fine model is fitted to determine position and scale; keypoints are selected according to their degree of stability.
Orientation determination: one or more orientations are assigned to each keypoint location based on local image gradient directions. All subsequent operations on the image data are performed relative to the orientation, scale and location of each keypoint, providing invariance to these transformations.
Keypoint descriptor: local image gradients are measured at the selected scale in a neighborhood around each keypoint and transformed into a representation that tolerates relatively large local shape deformation and illumination change.
In S3, the SIFT feature extraction steps are as follows:
S3.1, scale-space extremum detection: the scale space is obtained and an image pyramid is constructed;
extreme points cannot be detected with the same window in different scale spaces: a small window is used for small keypoints and a large window for large keypoints;
The scale space of an image, L(x, y, σ), is defined as:
L(x, y, σ) = G(x, y, σ) * I(x, y)    formula (1);
G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))    formula (2);
where (x, y) is the position of a pixel of the image, I(x, y) is the original image, * denotes the convolution operation, G(x, y, σ) is the Gaussian function, and σ is the scale-space factor, i.e. the standard deviation of the Gaussian normal distribution; it reflects the degree of image blurring: the larger the value, the more blurred the image and the larger the corresponding scale.
The scale spaces of different images form a Gaussian image pyramid: the images are blurred and down-sampled through the functions of formulas (1) and (2) to obtain several groups of images, each group containing several layers.
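The blur-and-downsample construction described above can be sketched as follows. This is a simplified stand-in for the patent's pipeline: the separable Gaussian blur, σ₀ = 1.6 and the per-octave blur schedule are conventional SIFT choices assumed here, not values taken from the text, and each layer is blurred directly from the octave base rather than incrementally:

```python
import numpy as np

def gaussian_kernel1d(sigma: float, radius: int) -> np.ndarray:
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(img: np.ndarray, sigma: float) -> np.ndarray:
    # cap the radius so the kernel never exceeds the image size
    radius = min(int(3 * sigma) + 1, (min(img.shape) - 1) // 2)
    k = gaussian_kernel1d(sigma, radius)
    # separable convolution: rows, then columns
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def gaussian_pyramid(img: np.ndarray, octaves: int, layers: int,
                     sigma0: float = 1.6) -> list:
    """Each group (octave) holds `layers` increasingly blurred images;
    the next octave starts from a 2x-downsampled base."""
    pyramid, base = [], img.astype(float)
    k = 2 ** (1.0 / layers)
    for _ in range(octaves):
        pyramid.append([blur(base, sigma0 * k ** s) for s in range(layers)])
        base = base[::2, ::2]          # downsample for the next octave
    return pyramid
```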
The number of groups of the Gaussian pyramid is calculated as:
O = log₂ min(M, N) − t    formula (3);
where O is the number of groups of the Gaussian pyramid, M and N are the numbers of rows and columns of the original image, and the coefficient t is any value in [0, log₂ min(M, N)].
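Formula (3) can be evaluated directly; the choice t = 3 below (which keeps the top octave at least 8 pixels on a side) is an illustrative assumption:

```python
import math

def num_octaves(rows: int, cols: int, t: int = 3) -> int:
    """Number of Gaussian-pyramid groups O = log2(min(M, N)) - t,
    formula (3); t reserves a minimum top-level image size."""
    return int(math.log2(min(rows, cols))) - t
```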
gaussian filter parameters
Figure 25909DEST_PATH_IMAGE008
The relationship is given by equation (4):
Figure 928006DEST_PATH_IMAGE014
(4);
wherein the content of the first and second substances,
Figure 221715DEST_PATH_IMAGE015
is the layer in which it is located,
Figure 514156DEST_PATH_IMAGE016
is the initial scale of the measurement,
Figure 246620DEST_PATH_IMAGE017
is the number of layers in each group,
Figure 940907DEST_PATH_IMAGE010
the number of groups is the number of the groups;
relationship between image scales of adjacent layers within the same group:
Figure 654785DEST_PATH_IMAGE018
Figure 700101DEST_PATH_IMAGE019
(5);
relationship between adjacent groups:
Figure 908360DEST_PATH_IMAGE020
(6);
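Formulas (4) to (6) can be checked numerically; σ₀ = 1.6 and S = 3 are conventional SIFT defaults assumed for the sketch, not values given in the text:

```python
def sigma_at(o: int, s: int, sigma0: float = 1.6, S: int = 3) -> float:
    """Scale of layer s in group o: sigma0 * 2**(o + s/S), formula (4)."""
    return sigma0 * 2 ** (o + s / S)
```

Adjacent layers in one group differ by the factor k = 2^(1/S), and corresponding layers of adjacent groups differ by a factor of 2, exactly as formulas (5) and (6) state.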
S3.2, constructing the image Gaussian difference pyramid;
A Gaussian difference is taken along the scale axis to obtain the points that are most salient in scale space, i.e. the gradient extrema along the scale axis; these are computed with the DoG (difference-of-Gaussian) function, and the DoG of every two adjacent layers within each group of the Gaussian pyramid forms the Gaussian difference pyramid;
DoG function:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)    formula (7);
S3.3, detection of DoG space extrema;
Extreme points are searched in the DoG space: a point whose value is larger or smaller than all of its surrounding points (the 26 neighbors of its 3×3×3 neighborhood across adjacent layers) is regarded as a candidate keypoint.
S3.4, feature point filtering and keypoint localization
Because the DoG response is sensitive to noise and edges, the local extreme points detected in the Gaussian difference pyramid of S3.2 must pass further inspection before being accurately localized as feature points;
First, smaller extrema are removed. To obtain a more accurate keypoint position, the DoG function is expanded by a second-order Taylor series at each candidate keypoint:
D(X) = D + (∂D/∂X)ᵀ X + (1/2) Xᵀ (∂²D/∂X²) X    formula (8);
where X = (x, y, σ)ᵀ, σ is the Gaussian filter parameter and (x, y) is the image pixel position;
then the extremum of formula (8) is obtained by setting its derivative to zero, yielding the extreme point X̂:
X̂ = −(∂²D/∂X²)⁻¹ · (∂D/∂X)    formula (9);
next, substituting the extreme point X̂ of formula (9) gives the extreme value D(X̂):
D(X̂) = D + (1/2) (∂D/∂X)ᵀ X̂    formula (10);
and points whose |D(X̂)| falls below a set threshold (0.03 in the standard SIFT formulation) are discarded as low-contrast extrema.
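The Taylor refinement of formulas (8) and (9) amounts to solving a small linear system built from finite differences of the DoG stack. A sketch under the usual central-difference approximations (function and variable names are illustrative assumptions):

```python
import numpy as np

def refine_offset(dog: np.ndarray, l: int, i: int, j: int) -> np.ndarray:
    """Solve x_hat = -(d2D/dX2)^-1 * dD/dX (formulas (8)-(9)) with central
    finite differences around sample (l, i, j) of a DoG stack; returns the
    sub-pixel offset in (x, y, sigma) order."""
    D, c = dog, dog[l, i, j]
    g = 0.5 * np.array([D[l, i, j+1] - D[l, i, j-1],     # dD/dx
                        D[l, i+1, j] - D[l, i-1, j],     # dD/dy
                        D[l+1, i, j] - D[l-1, i, j]])    # dD/dsigma
    dxx = D[l, i, j+1] + D[l, i, j-1] - 2 * c
    dyy = D[l, i+1, j] + D[l, i-1, j] - 2 * c
    dss = D[l+1, i, j] + D[l-1, i, j] - 2 * c
    dxy = 0.25 * (D[l, i+1, j+1] - D[l, i+1, j-1] - D[l, i-1, j+1] + D[l, i-1, j-1])
    dxs = 0.25 * (D[l+1, i, j+1] - D[l+1, i, j-1] - D[l-1, i, j+1] + D[l-1, i, j-1])
    dys = 0.25 * (D[l+1, i+1, j] - D[l+1, i-1, j] - D[l-1, i+1, j] + D[l-1, i-1, j])
    H = np.array([[dxx, dxy, dxs], [dxy, dyy, dys], [dxs, dys, dss]])
    return -np.linalg.solve(H, g)
```

On an exactly quadratic stack the central differences are exact, so the recovered offset matches the true optimum.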
Along edges, the DoG response often forms ridge lines: the change is slow along the edge direction, where the curvature is small, and sharp in the perpendicular direction, where the curvature is large. Removing edge noise therefore amounts to removing ridge-line points.
S3.5, firstly, the variation trend around the extreme point of formula (9) is described by a Hessian matrix, whose eigenvalues correspond to projections onto the eigenvector directions: the larger the eigenvalue, the sharper the variation of the function in that direction, i.e. the larger the curvature; each eigenvalue of the Hessian matrix is proportional to the curvature in its eigenvector direction;
and (3) calculating a Hessian matrix through a second order difference formula (11):
Figure 263719DEST_PATH_IMAGE032
(11) ;
wherein the content of the first and second substances,
Figure 703927DEST_PATH_IMAGE033
representing DOG function with respect to pixel points
Figure 117722DEST_PATH_IMAGE034
A second partial derivative;
calculating the ratio of the characteristic values to obtain the variation trend of the characteristic values in the direction of the characteristic vector;
then, assume that the two eigenvalues are each
Figure 581065DEST_PATH_IMAGE035
And then:
Figure 987775DEST_PATH_IMAGE036
(12) ;
Figure 295872DEST_PATH_IMAGE037
(13) ;
wherein the content of the first and second substances,
Figure 801940DEST_PATH_IMAGE038
respectively, the trace of the matrix and the determinant of the matrix;
secondly, set up
Figure 80474DEST_PATH_IMAGE039
Is a large eigenvalue, and
Figure 962980DEST_PATH_IMAGE040
then, then
Figure 128513DEST_PATH_IMAGE041
(14);
Wherein the content of the first and second substances,
Figure 805482DEST_PATH_IMAGE038
respectively the traces of the matrix and the determinant of the matrix,
Figure 305734DEST_PATH_IMAGE042
is that
Figure 804979DEST_PATH_IMAGE043
The ratio of (A) to (B);
when in use
Figure 11970DEST_PATH_IMAGE044
Is that
Figure 187736DEST_PATH_IMAGE045
At a minimum, when
Figure 112967DEST_PATH_IMAGE046
The larger the size, the corresponding
Figure 150324DEST_PATH_IMAGE042
The larger. Will be provided with
Figure 539717DEST_PATH_IMAGE047
Removing points;
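The edge test of formula (14) reduces to comparing Tr(H)²/Det(H) against (r+1)²/r; a sketch with r = 10 as the default threshold (the conventional SIFT value, assumed here rather than taken from the text):

```python
def is_edge_point(dxx: float, dyy: float, dxy: float, r: float = 10.0) -> bool:
    """Reject a keypoint whose Hessian ratio Tr(H)^2/Det(H) exceeds
    (r+1)^2/r (formula (14)), or whose determinant is non-positive
    (eigenvalues of opposite sign: not an extremum)."""
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:
        return True
    return tr * tr / det > (r + 1) ** 2 / r
```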
s3.6, determination of orientation
And finding more accurate key points through the key point positioning of S3.5, wherein the key points have scale invariance. In order to realize rotation invariance, a direction angle needs to be allocated to each key point, namely, the direction of the key point is confirmed according to the domain structure of the Gaussian scale image in which the detected key point is located.
Firstly, for any keypoint, the gradient characteristics of all pixels within a region of radius r of its Gaussian pyramid image are collected, where the radius r is:
r = 3 × 1.5σ    formula (15);
the gradient magnitude m(x, y) and direction θ(x, y) are calculated as:
m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )    formula (16);
θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )    formula (17);
where (x, y) is the pixel position and L is the scale image at the corresponding scale;
Then, the gradient magnitudes and directions of all sample points in the region around the keypoint are calculated through formulas (16) and (17);
next, the 360° range of directions is divided into a number of bins, the direction histogram of the sample points is accumulated with Gaussian weighting, and the bin corresponding to the maximum peak is taken as the direction of the keypoint;
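The peak-of-histogram orientation assignment can be sketched as follows. The 36-bin division is the conventional SIFT choice (an assumption, the text says only "a number of bins"), and the Gaussian distance weighting mentioned above is folded into the `magnitudes` argument for brevity:

```python
import numpy as np

def dominant_orientation(magnitudes, orientations, bins: int = 36) -> float:
    """Histogram sample orientations (radians) weighted by gradient
    magnitude and return the centre angle of the peak bin."""
    orientations = np.mod(np.asarray(orientations, dtype=float), 2 * np.pi)
    hist, edges = np.histogram(orientations, bins=bins,
                               range=(0, 2 * np.pi), weights=magnitudes)
    peak = int(np.argmax(hist))
    return 0.5 * (edges[peak] + edges[peak + 1])
```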
s3,7, keypoint descriptors;
after finding out the key points of the image at different scales, in order to realize subsequent classification or matching, the features around the key points need to be obtained.
First, the key point attachment radius is
Figure 778619DEST_PATH_IMAGE054
Division of
Figure 404772DEST_PATH_IMAGE055
At each sub-region of statistical length
Figure 339361DEST_PATH_IMAGE056
Each histogram is used as a seed point, and a length of the seed point is obtained
Figure 771479DEST_PATH_IMAGE057
The vector of (a);
Then, to ensure rotation invariance, the direction of the keypoint is fixed as the reference direction, i.e. the image is rotated so that the keypoint direction coincides with the x coordinate axis, and the direction histograms are accumulated region by region on the rotated image;
the coordinates after rotation are:
(x', y')ᵀ = [ cos θ  −sin θ ; sin θ  cos θ ] · (x, y)ᵀ    formula (18);
where θ is the included angle between the keypoint direction and the x coordinate axis; clockwise rotation angles are negative and counterclockwise rotation angles are positive;
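Formula (18) is a plain 2-D rotation; a one-function sketch (the function name is an illustrative assumption):

```python
import math

def rotate_to_keypoint_frame(x: float, y: float, theta: float):
    """Rotate sample coordinates by angle theta (formula (18)) so the
    descriptor grid aligns with the keypoint orientation."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)
```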
Secondly, the gradients of the pixels within each subregion are calculated and Gaussian-weighted by their distance to the keypoint, and the gradient of each seed point in the eight directions is obtained by bilinear interpolation;
the increment contributed by a rotated sample point to direction k of the direction histogram is:
Δh(k) = w · d_r · d_c · d_o    formula (19);
where the rotated sample point lies within a bounded distance around the keypoint, (x', y') are its coordinates, w is its Gaussian-function weight, d_r and d_c are its influence rates on the grid point in the row and column directions, and d_o is its influence rate in the required direction;
Then the size of the region and the scale of the Gaussian weighting are selected; each subregion is chosen consistent with the region size used when computing the keypoint direction, namely a side of 3σ, where σ is the scale of the image in scale space; then, to account for rotation, the radius is enlarged so that the selected area still lies within the window after rotating; the radius covering the subregions is:
r_sub = 3σ · (d + 1) / 2    formula (20);
so the overall region radius is:
r = 3σ · √2 · (d + 1) / 2    formula (21);
Later, to remove illumination effects, the feature vector H = (h₁, h₂, …, h₁₂₈) generated from the keypoints is normalized; the calculation formula is:
lⱼ = hⱼ / sqrt( Σᵢ hᵢ² ),  j = 1, 2, …, 128    formula (22);
where the denominator is the Euclidean norm of the feature vector.
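Formula (22) is a Euclidean normalization of the 128-dimensional descriptor; a sketch (the zero-vector guard is an added safety check, not part of the formula):

```python
import numpy as np

def normalize_descriptor(h: np.ndarray) -> np.ndarray:
    """Divide the descriptor by its Euclidean norm (formula (22)) to
    suppress global illumination changes."""
    norm = np.sqrt(np.sum(h ** 2))
    return h / norm if norm > 0 else h
```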
The present invention has been described in sufficient detail for clarity of disclosure; the description is not an exhaustive account of the prior art.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described therein may still be modified, or some technical features equivalently replaced, and that combining several aspects of the invention is obvious to a person skilled in the art. Such modifications or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (6)

1. A medical image desensitization method based on examination images, characterized by comprising the following steps:
S1, judging whether the original image is readable;
S2, if the original image is readable, preprocessing the provided original image to obtain its binary image;
S3, processing the binary image: the sensitive information is cropped from the preprocessed image, SIFT feature detection is performed to obtain keypoints and image descriptors, and the feature descriptors of the information requiring desensitization form the desensitization information database;
S4, matching the obtained image descriptors against the descriptors of the desensitization information database to locate the information regions requiring desensitization; fast nearest-neighbor search is adopted during feature matching;
S5, desensitizing the content following each matched desensitization information region; if desensitization succeeds, the desensitized image is obtained, and if it fails, a reminder is given that manual intervention is needed.
2. The examination image-based medical image desensitization method according to claim 1, characterized in that: in S4, the features extracted from the CT image are taken as conjugate entities and the extracted feature attributes or description parameters as matching entities; image matching by conjugate-entity registration is realized by computing a similarity measure between the matching entities, obtaining the position of the sensitive information; the features may be features of the actual picture or derived features of the image.
3. The examination image-based medical image desensitization method according to claim 1, characterized in that: in S5, when desensitizing the sensitive information, the desensitizing process is performed on the sensitive information such as the name and the case number of the patient in the examination image by a masking operation.
4. The examination image-based medical image desensitization method according to claim 1, characterized in that: in S1, regarding feature extraction:
firstly, the examination image characteristics are considered, and the edge features and shape features of the examination image are extracted; then the edge features are processed: a sobel operator detects whether the examination image contains edges with obvious change or discontinuous regions, and the edge features of the examination image are extracted; secondly, the shape features are processed: the expression of shape features is based on the segmentation of objects or regions in the image, and SIFT is adopted to extract local image feature points in scale space, which remain invariant to rotation, scaling and brightness change and retain stability under perspective change, affine transformation and noise.
5. The examination image-based medical image desensitization method according to claim 4, characterized in that: in the scale-invariant feature transform, firstly, scale-space extremum detection is performed: image locations are searched over all scales, and potential interest points invariant to scale and rotation are identified by a difference-of-Gaussian function; then, feature points are filtered and key points are localized: at each candidate location, the position and scale are determined by fitting a model, and key points are selected when their stability meets a set threshold; next, orientations are determined: one or more orientations are assigned to each key point position based on the local gradient direction of the image; finally, the keypoint descriptors are computed: the local image gradients are measured at the selected scale in a neighbourhood around each key point.
6. The examination image-based medical image desensitization method according to claim 1, characterized in that: in S3, the SIFT feature extraction steps are as follows:
S3.1, detecting extrema in the scale space: obtain the scale space and construct the image pyramid;
firstly, based on the principle that small windows are used for small key points and large windows for large key points, a scale-space filter is applied, using the Gaussian kernel, the only kernel function capable of generating a multi-scale space;
then, the scale space L(x, y, \sigma) of an image is defined as follows:

L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)    formula (1);

G(x, y, \sigma) = \frac{1}{2\pi\sigma^{2}} e^{-(x^{2}+y^{2})/(2\sigma^{2})}    formula (2);

wherein (x, y) represents the position of a pixel of the image, I(x, y) represents the original image, * represents the convolution operation, G(x, y, \sigma) represents the Gaussian function, and \sigma, the scale space factor, is the standard deviation of the Gaussian normal distribution; it reflects the degree to which the image is blurred: the larger its value, the more blurred the image and the larger the corresponding scale;
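Formulas (1) and (2) can be made concrete with a small NumPy sketch of the sampled Gaussian kernel; the 3σ support radius and the unit-sum renormalisation are common implementation choices, not specified in the claim:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Sampled 2-D Gaussian G(x, y, sigma) of formula (2), normalised to unit sum."""
    if radius is None:
        radius = int(3 * sigma)          # 3-sigma support covers >99% of the mass
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return g / g.sum()                   # renormalise so blurring preserves brightness

k1 = gaussian_kernel(1.0)
k2 = gaussian_kernel(2.0)
```

Convolving the image with such a kernel realises formula (1); the flatter peak of `k2` shows how a larger σ blurs more, as the claim states.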
secondly, the scale spaces of different images form a Gaussian image pyramid: the image is blurred through the functions of formulas (1) and (2) and down-sampled to obtain several groups of images, each group containing several layers of images; the number of groups of the Gaussian pyramid is calculated as:

O = \log_{2} \min(M, N) - t    formula (3);

wherein O represents the number of groups of the Gaussian pyramid, M and N are respectively the numbers of rows and columns of the original image, and the coefficient t is any value in [0, \log_{2} \min(M, N)];
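A minimal sketch of formula (3), assuming integer truncation of the logarithm and a default t = 3 (both illustrative choices):

```python
import numpy as np

def num_octaves(rows, cols, t=3):
    """Formula (3): O = log2(min(M, N)) - t, with t in [0, log2(min(M, N))]."""
    return int(np.log2(min(rows, cols))) - t

octaves = num_octaves(512, 512)
```

Larger t trades pyramid depth for speed; a 512-pixel image with t = 3 yields 6 octaves.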
thirdly, the Gaussian filter parameter \sigma is obtained according to the relation of formula (4):

\sigma(o, l) = \sigma_{0} \cdot 2^{\,o + l/S}    formula (4);

wherein l is the layer within its group, \sigma_{0} is the initial scale, S is the number of layers in each group, and o = 0, 1, \dots, O - 1 is the group index, O being the number of groups; after that,
the relationship between the image scales of adjacent layers in the same group is determined:

k = 2^{1/S}, \qquad \sigma_{l+1} = k\,\sigma_{l}    formula (5);

and the relationship between neighbouring groups is determined:

\sigma_{o+1} = 2\,\sigma_{o}    formula (6);
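Formulas (4)–(6) amount to a geometric scale schedule; a small sketch, where σ₀ = 1.6 and S = 3 are common defaults assumed here, not values fixed by the claim:

```python
def sigma_of(o, l, sigma0=1.6, S=3):
    """Formula (4): sigma(o, l) = sigma0 * 2 ** (o + l / S)."""
    return sigma0 * 2 ** (o + l / S)

# adjacent layers in a group differ by k = 2**(1/S); adjacent groups by a factor of 2
k = 2 ** (1 / 3)
```

The two ratios below are exactly the relations stated in formulas (5) and (6).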
S3.2, constructing the image Gaussian difference pyramid; differencing the image along the scale axis yields the points that are more salient in the scale space, namely the gradient extrema along the scale axis; a DOG (difference of Gaussians) function is adopted to calculate these extrema: every two adjacent layers within each group of the Gaussian pyramid are subtracted to form the Gaussian difference pyramid, the DOG function being:

D(x, y, \sigma) = \big(G(x, y, k\sigma) - G(x, y, \sigma)\big) * I(x, y) = L(x, y, k\sigma) - L(x, y, \sigma)    formula (7);
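Formula (7) can be sketched by blurring the same image at two nearby scales and subtracting; the FFT-based blur below (with circular boundary handling) is an illustrative shortcut for the convolution of formula (1), not the method claimed:

```python
import numpy as np

def blur_fft(img, sigma):
    """Gaussian blur of formula (1) done in the frequency domain (circular boundaries)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # the Fourier transform of a Gaussian is again a Gaussian
    g_hat = np.exp(-2.0 * np.pi ** 2 * sigma ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * g_hat))

def dog_layer(img, sigma, k=2 ** (1 / 3)):
    """Formula (7): D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)."""
    return blur_fft(img, k * sigma) - blur_fft(img, sigma)
```

A constant image has a zero DOG response, while an isolated bright pixel produces the centre-surround (negative-centre) response characteristic of the difference of Gaussians.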
S3.3, detecting DOG spatial extrema;
extreme points are searched for in the DOG space: a point whose value is larger or smaller than all of its set of surrounding points (its neighbours at the same scale and in the two adjacent scale layers) is regarded as a key point;
S3.4, filtering feature points and positioning key points;
first, smaller extrema are removed; to obtain more accurate key point positions, a second-order Taylor expansion of the DOG function is performed at each key point:

D(X) = D + \frac{\partial D^{T}}{\partial X} X + \frac{1}{2} X^{T} \frac{\partial^{2} D}{\partial X^{2}} X    formula (8);

wherein X = (x, y, \sigma)^{T}, \sigma is the Gaussian filter parameter, and (x, y) is the image pixel position;
then, the extremum of formula (8) is obtained by setting its derivative to zero, yielding the extreme point \hat{X}:

\hat{X} = -\left(\frac{\partial^{2} D}{\partial X^{2}}\right)^{-1} \frac{\partial D}{\partial X}    formula (9);

next, the extreme value D(\hat{X}) at the extreme point \hat{X} of formula (9) is obtained:

D(\hat{X}) = D + \frac{1}{2} \frac{\partial D^{T}}{\partial X} \hat{X}    formula (10);

and points whose |D(\hat{X})| falls below a set contrast threshold are discarded; edge noise, namely ridge lines, is removed in the following step;
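Formulas (9) and (10) reduce to a small linear solve; the sketch below uses a 1-D toy quadratic whose gradient and Hessian values are fabricated purely for illustration:

```python
import numpy as np

def refine_offset(grad, hess):
    """Formula (9): x_hat = -(d2D/dX2)^{-1} (dD/dX), the sub-pixel extremum offset."""
    return -np.linalg.solve(hess, grad)

def refined_value(d0, grad, offset):
    """Formula (10): D(x_hat) = D + 0.5 * (dD/dX)^T x_hat."""
    return d0 + 0.5 * grad @ offset

# toy 1-D quadratic D(x) = 1 + 0.6 x - x^2: extremum at x = 0.3 with value 1.09
off = refine_offset(np.array([0.6]), np.array([[-2.0]]))
val = refined_value(1.0, np.array([0.6]), off)
```

In SIFT proper, `grad` and `hess` are finite differences of the DOG pyramid in (x, y, σ), and `val` is the contrast tested against the discard threshold.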
S3.5, firstly, the variation trend around the extreme point of formula (9) is described by a Hessian matrix; the eigenvalues of the Hessian matrix correspond to the projections onto its eigenvector directions and are proportional to the curvature along those directions;
the Hessian matrix is calculated by the second-order difference formula (11):

H = \begin{pmatrix} D_{xx} & D_{xy} \\ D_{xy} & D_{yy} \end{pmatrix}    formula (11);

wherein D_{xx}, D_{xy} and D_{yy} represent the second partial derivatives of the DOG function with respect to the pixel position (x, y);
the ratio of the eigenvalues gives the variation trend along the eigenvector directions; then, assuming the two eigenvalues are \alpha and \beta:

\mathrm{Tr}(H) = D_{xx} + D_{yy} = \alpha + \beta    formula (12);

\mathrm{Det}(H) = D_{xx} D_{yy} - D_{xy}^{2} = \alpha \beta    formula (13);

wherein \mathrm{Tr}(H) and \mathrm{Det}(H) are respectively the trace of the matrix and the determinant of the matrix;
secondly, let \alpha be the larger eigenvalue and set \alpha = r\beta; then:

\frac{\mathrm{Tr}(H)^{2}}{\mathrm{Det}(H)} = \frac{(\alpha + \beta)^{2}}{\alpha \beta} = \frac{(r\beta + \beta)^{2}}{r\beta^{2}} = \frac{(r + 1)^{2}}{r}    formula (14);

wherein \mathrm{Tr}(H) and \mathrm{Det}(H) are respectively the trace of the matrix and the determinant of the matrix, and r is the ratio of \alpha to \beta; when \alpha = \beta (r = 1), (r + 1)^{2}/r is minimal, and the larger r becomes, the larger the corresponding (r + 1)^{2}/r; therefore points whose \mathrm{Tr}(H)^{2}/\mathrm{Det}(H) exceeds the threshold (r + 1)^{2}/r for a set r are removed;
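The edge-elimination test of formulas (11)–(14) reduces to a trace/determinant ratio check; the sketch below uses the commonly adopted threshold r = 10, which is an assumption, as the claim leaves r unspecified:

```python
def passes_edge_test(dxx, dyy, dxy, r=10.0):
    """Keep a keypoint only if Tr(H)^2 / Det(H) < (r+1)^2 / r (formulas (12)-(14))."""
    tr = dxx + dyy                 # formula (12): trace
    det = dxx * dyy - dxy ** 2     # formula (13): determinant
    if det <= 0:                   # eigenvalues of opposite sign: not a stable extremum
        return False
    return tr ** 2 / det < (r + 1) ** 2 / r
```

A blob-like point (comparable eigenvalues) passes, while a ridge (one dominant eigenvalue) is rejected, which is exactly the "removing ridge lines" behaviour the claim describes.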
S3.6, determining the orientation;
in order to achieve rotation invariance, an orientation angle needs to be assigned to each key point, i.e. the orientation of a key point is determined within the neighbourhood structure of the Gaussian scale image in which it was detected;
firstly, for any key point, the gradient features of all pixels within the region of the Gaussian pyramid image of radius r are collected, the radius r being:

r = 3 \times 1.5\,\sigma    formula (15);

the gradient magnitude m(x, y) and direction \theta(x, y) are calculated as:

m(x, y) = \sqrt{\big(L(x+1, y) - L(x-1, y)\big)^{2} + \big(L(x, y+1) - L(x, y-1)\big)^{2}}    formula (16);

\theta(x, y) = \arctan\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}    formula (17);

wherein (x, y) represents a pixel position and L is the scale image at the corresponding scale;
then, the gradient values and directions of all sample points in the region around the key point are calculated through formulas (16) and (17);
secondly, the directions are divided into a number of bins, the direction histogram of the sample points is accumulated with Gaussian weighting, and the bin corresponding to the maximum peak is taken as the orientation of the key point;
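Formulas (16) and (17) in code; `arctan2` is used for a full-circle angle, an implementation convenience beyond the plain arctangent of formula (17):

```python
import numpy as np

def grad_mag_ori(L, x, y):
    """Finite-difference gradient magnitude and direction at pixel (x, y)."""
    dx = L[y, x + 1] - L[y, x - 1]
    dy = L[y + 1, x] - L[y - 1, x]
    m = np.hypot(dx, dy)            # formula (16)
    theta = np.arctan2(dy, dx)      # formula (17), full-circle version
    return m, theta

# horizontal ramp image: gradient points along +x with magnitude 2
ramp = np.tile(np.arange(8.0), (8, 1))
m, theta = grad_mag_ori(ramp, 3, 3)
```

Binning `theta` over all samples in the radius-r region and taking the peak bin gives the keypoint orientation described above.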
S3.7, keypoint descriptors;
after the key points of the image at different scales have been found, the features around each key point need to be obtained so that subsequent classification or matching can be carried out;
first, the neighbourhood of a key point of radius r is divided into 4 \times 4 sub-regions; in each sub-region a direction histogram of length 8 is accumulated, each histogram serving as a seed point, so that a vector of length 4 \times 4 \times 8 = 128 is obtained;
then, to ensure rotation invariance, the orientations of the key points are fixed to the same direction, i.e. the image is rotated so that the orientation of the key point coincides with the x coordinate axis, and the direction histograms of the rotated image are accumulated region by region;
the coordinates after rotation are:

\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}    formula (18);

wherein \theta is the angle between the orientation of the key point and the x coordinate axis; clockwise rotation angles are taken as negative and anticlockwise rotation angles as positive;
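Formula (18) is a rotation of the sample coordinates into the keypoint's frame; a sketch following the standard frame-rotation sign convention:

```python
import numpy as np

def rotate_to_keypoint_frame(x, y, theta):
    """Rotate sample coordinates so the keypoint orientation theta becomes the new x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return c * x + s * y, -s * x + c * y

# a point lying along the keypoint orientation maps onto the positive x'-axis
xp, yp = rotate_to_keypoint_frame(0.0, 1.0, np.pi / 2)
```

After this rotation, two views of the same structure taken at different in-plane rotations bin their gradients into the same sub-regions, which is what makes the descriptor rotation invariant.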
secondly, the gradients of the pixels within each sub-region are calculated and weighted with a Gaussian of scale \sigma_{w}, and the gradients of each seed point in the eight directions are obtained by bilinear interpolation;
again, the increment of the direction histogram h at grid point (i, j) in direction k is:

h(i, j, k) = \sum_{p} w_{p}\, d_{r}(p)\, d_{c}(p)\, d_{o}(p)    formula (19);

wherein p ranges over the rotated sample points within a limited distance of the grid point, (x_{p}, y_{p}) are the coordinates of p, w_{p} is the Gaussian weight of p, and d_{r}, d_{c} and d_{o} are respectively the influence rates of p on the grid point in the two spatial directions and the influence rate in the required direction;
then, the size of the region and the scale of the Gaussian weighting are selected; the size of each sub-region is kept consistent with the region size used when calculating the orientation of the key point, namely 3\sigma, wherein \sigma is the scale of the image in scale space;
next, considering rotation, the radius is enlarged so that the selected 3\sigma region is not partially empty after rotation, i.e. the selected area is still fully covered after rotation; the radius of each sub-region is:

r_{sub} = \frac{3\sigma\sqrt{2}}{2}    formula (20);

the overall region radius is therefore:

r = \frac{3\sigma\sqrt{2}\,(d + 1)}{2}, \qquad d = 4    formula (21);

later, to remove the effect of illumination, the feature vector H = (h_{1}, \dots, h_{128}) generated by the key points is normalized, the calculation formula being:

l_{j} = \frac{h_{j}}{\sqrt{\sum_{i=1}^{128} h_{i}^{2}}}, \quad j = 1, \dots, 128    formula (22);

wherein the denominator is the square root of the sum of squares (the L2 norm) of the feature vector.
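Formula (22) in code; the additional clipping at 0.2 follows Lowe's standard refinement and is an assumption beyond the claim text:

```python
import numpy as np

def normalize_descriptor(h, clip=0.2):
    """L2-normalise the 128-d descriptor to remove illumination effects (formula (22)).
    Clipping large components at 0.2 before renormalising is an assumed extra step."""
    h = np.asarray(h, dtype=float)
    h = h / np.linalg.norm(h)
    h = np.minimum(h, clip)
    return h / np.linalg.norm(h)

v = normalize_descriptor(np.ones(128))
```

Multiplying the image intensities by a constant scales every histogram entry equally, so the normalised descriptor is unchanged, which is exactly the illumination invariance the claim targets.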
CN202210434694.4A 2022-04-24 2022-04-24 Medical image desensitization method based on examination images Pending CN114822781A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210434694.4A CN114822781A (en) 2022-04-24 2022-04-24 Medical image desensitization method based on examination images


Publications (1)

Publication Number Publication Date
CN114822781A true CN114822781A (en) 2022-07-29

Family

ID=82506817



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306287A (en) * 2011-08-24 2012-01-04 百度在线网络技术(北京)有限公司 Method and equipment for identifying sensitive image
KR20160057024A (en) * 2014-11-12 2016-05-23 한국전기연구원 Markerless 3D Object Tracking Apparatus and Method therefor
CN108921939A (en) * 2018-07-04 2018-11-30 王斌 A kind of method for reconstructing three-dimensional scene based on picture
CN113688837A (en) * 2021-09-29 2021-11-23 平安科技(深圳)有限公司 Image desensitization method, device, electronic equipment and computer readable storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Yang, LIU Libo: "Research and Implementation of a DICOM-based CT Medical Image Desensitization System", Modern Computer (Professional Edition) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117278692A (en) * 2023-11-16 2023-12-22 邦盛医疗装备(天津)股份有限公司 Desensitization protection method for monitoring data of medical detection vehicle patients
CN117278692B (en) * 2023-11-16 2024-02-13 邦盛医疗装备(天津)股份有限公司 Desensitization protection method for monitoring data of medical detection vehicle patients


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220729