CN114822781A - Medical image desensitization method based on examination images - Google Patents
Medical image desensitization method based on examination images
- Publication number: CN114822781A
- Application number: CN202210434694.4A
- Authority
- CN
- China
- Prior art keywords
- image
- desensitization
- scale
- gaussian
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
Abstract
The present invention relates to a medical image desensitization method based on an examination image, comprising the steps of: S1, judging whether the original image is readable; S2, if the original image is readable, performing data preprocessing on the provided original image to obtain its binary image; S3, processing the binary image, cropping the sensitive information, detecting features with SIFT and extracting them from the examination image to obtain keypoints and image descriptors, and building a desensitization information database; S4, performing feature matching between the obtained image descriptors and those of the desensitization information database to locate the information areas requiring desensitization, adopting fast nearest-neighbour search during matching; S5, performing data desensitization on the content following the matched desensitization information areas. The invention is reasonably designed, compactly structured and convenient to use.
Description
Technical Field
The invention relates to a medical image desensitization method based on examination images.
Background
At present, with the rapid development of big data and the internet, the volume of network information keeps growing, and China faces great challenges in network information security. One problem of network security in the medical industry is that data encryption measures are not implemented and the necessary system protection is lacking. Medical information contains a large amount of sensitive data; if effective encryption measures are not applied during collection, storage and transmission, this information is at great risk of leakage.
The medical industry produces a large number of examination images containing private patient information such as names, case records and clinic numbers, which must be desensitized. The prior-art approach to private patient information in examination images is generally to open each image, desensitize it manually with a tool, and save the desensitized picture. Applied to large volumes of image data, this is costly and time-consuming. In view of this, the present invention implements automatic data desensitization of examination images based on computer vision processing techniques (feature extraction, image preprocessing, etc.).
The main data format of CT image files in the medical industry is DICOM, and the prior art already implements data desensitization for images in that format; however, the medical industry also holds large amounts of examination image data in formats such as png and jpg, whose desensitization the prior art still needs to address.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a medical image desensitization method based on examination images. Aiming at the current problems that medical desensitization systems are not yet widespread and manual desensitization is inefficient, automatic desensitization of sensitive document-image information is realized with computer vision processing techniques, built on the PyCharm development platform and the OpenCV library. The desensitization precision of the algorithm is 95%, which meets the requirements of the application scenario.
The method applies image preprocessing (image binarization, gradient calculation, filtering and other processing), crops the sensitive-information images to build a desensitization information database, extracts image features, locates the sensitive information with a feature-matching algorithm, and desensitizes it using the obtained position information.
Image binarization: the gray value of a pixel point on the image is set to be 0 or 255, so that the whole image presents an obvious black-and-white effect.
Image feature extraction: feature extraction refers to the method and process of using a computer to extract the characteristic information of an image.
Feature matching: the process of extracting features from images and then matching identical or similar features.
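The binarization defined above can be sketched in a few lines. This is a minimal illustration with a fixed global threshold; the value 127 is an assumption, since the patent does not specify the thresholding rule:

```python
import numpy as np

def binarize(gray, threshold=127):
    """Fixed-threshold binarization: pixels above the threshold
    become 255 (white), all others become 0 (black)."""
    return np.where(gray > threshold, 255, 0).astype(np.uint8)

# A tiny synthetic "grayscale image": dark background, bright region.
img = np.zeros((4, 4), dtype=np.uint8)
img[1:3, 1:3] = 200

binary = binarize(img)
```

In a real pipeline the threshold is usually chosen adaptively (e.g. Otsu's method), but any rule that maps the image onto {0, 255} satisfies the definition above.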
In order to solve the problems, the technical scheme adopted by the invention is as follows:
a method of medical image desensitization based on examination images, comprising the steps of:
S1, judging whether the original image is readable;
S2, if the original image is readable, performing data preprocessing on the provided original image to obtain its binary image;
S3, processing the binary image, cropping the sensitive information, detecting features with SIFT and extracting them from the examination image to obtain keypoints and image descriptors, and building the desensitization information database; that is, the sensitive information is cropped from the preprocessed image, the features of the information requiring desensitization are extracted, and the resulting feature descriptors form the desensitization information database;
S4, first, performing feature matching between the obtained image descriptors and those of the desensitization information database to obtain the information areas requiring desensitization; then, during feature matching, adopting fast nearest-neighbour search;
S5, performing data desensitization on the content following the matched desensitization information areas; if desensitization succeeds, the desensitized image is obtained, and if it fails, the user is reminded that manual participation is needed.
The invention has the advantages of reasonable design, low cost, durability, safety and reliability, and simple operation that saves time, labour and capital, with a compact structure and convenient use.
Compared with the prior art, the invention has the advantages that: according to the technical scheme provided by the invention, automatic desensitization of the sensitive information of the inspection image is realized through algorithms such as feature extraction and feature matching, the data desensitization can be accurately carried out on the inspection image, the privacy of a patient is protected, and a large amount of manpower and material resources are saved.
The invention can automatically desensitize the sensitive information of examination images, saving labour compared with manual desensitization. By adopting SIFT feature extraction and the FlannBasedMatcher matching method, desensitization is achieved on pictures of different scales, with better stability.
Drawings
FIG. 1 is a flow diagram of a medical image data desensitization technique based on an examination image;
FIG. 2 is a SIFT feature extraction flowchart;
FIG. 3 is a diagram of the effect of the binarized image after the image preprocessing in step (2);
FIG. 4 is a diagram of the raw examination-image data provided in embodiment 1;
FIG. 5 is a diagram of the desensitization result, using the masking approach, after the data-desensitization process of an examination image;
FIG. 6 is a diagram of the raw examination-image data provided in embodiment 2;
FIG. 7 is a diagram of the raw examination-image data provided in embodiment 3.
Detailed Description
In order to solve the problems of the related art, as shown in FIGS. 1 to 7, the present invention aims to achieve privacy protection of the patient's examination images through automatic desensitization of those images. To achieve this object, the medical image data desensitization technique based on examination images includes the following steps, which ensure the proper operation of the procedure:
s1, judging whether the original image is readable;
S2, if the original image is readable, performing data preprocessing on the provided original image to obtain its binary image; during preprocessing, the examination image is processed with algorithms such as smoothing, median filtering, edge detection, gradient calculation and equalization, eliminating irrelevant information while enhancing the detectability of the relevant information and simplifying the data as much as possible;
S3, processing the binary image, cropping the sensitive information, detecting features with SIFT and extracting them from the examination image to obtain keypoints and image descriptors, and building the desensitization information database; that is, the sensitive information is cropped from the preprocessed image, the features of the information requiring desensitization are extracted, and the resulting feature descriptors form the desensitization information database;
S4, first, performing feature matching between the obtained image descriptors and those of the desensitization information database to obtain the information areas requiring desensitization; then, during feature matching, adopting fast nearest-neighbour search; further, the features extracted from the CT image serve as conjugate entities and the extracted feature attributes or description parameters serve as matching entities, and image matching by conjugate-entity registration is realized by computing a similarity measure between the matching entities to obtain the position of the sensitive information; the features may be physical features of the actual picture or abstract features derived from the image;
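The fast nearest-neighbour matching of S4 (a FlannBasedMatcher in OpenCV-style pipelines) can be sketched dependency-free as brute-force 2-nearest-neighbour search with a ratio test. The ratio 0.75 and the toy 2-d descriptors below are assumptions for illustration, not values from the patent:

```python
import numpy as np

def match_descriptors(query, database, ratio=0.75):
    """2-nearest-neighbour matching with Lowe's ratio test: a query
    descriptor is matched only when its nearest database descriptor
    is clearly closer than the second nearest. FLANN computes the
    same result approximately but faster; brute force keeps the
    sketch self-contained."""
    matches = []
    for qi, q in enumerate(query):
        dists = np.linalg.norm(database - q, axis=1)
        nn = np.argsort(dists)[:2]          # two nearest neighbours
        if dists[nn[0]] < ratio * dists[nn[1]]:
            matches.append((qi, int(nn[0])))
    return matches

db = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 0.0]])
query = np.array([[0.1, 0.0]])              # clearly closest to db[0]
matches = match_descriptors(query, db)
```

Each accepted match pairs a descriptor of the image under test with a database descriptor; the keypoint coordinates behind the matched descriptors then give the sensitive-information region.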
S5, performing data desensitization on the content following the matched desensitization information area; if desensitization succeeds, the desensitized image is obtained, and if it fails, the user is reminded that manual participation is needed; when desensitizing, a mask operation is applied to sensitive information such as the patient's name and case number in the examination image.
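A minimal sketch of the mask operation in S5, assuming the matched region is delivered as a rectangle (x, y, w, h); the patent does not fix the mask colour, so overwriting with black (0) is an assumption:

```python
import numpy as np

def mask_region(image, x, y, w, h, value=0):
    """Desensitize a matched region by overwriting it with a constant
    mask; (x, y, w, h) would come from the feature-matching step."""
    out = image.copy()
    out[y:y + h, x:x + w] = value
    return out

img = np.full((6, 8), 200, dtype=np.uint8)  # stand-in examination image
masked = mask_region(img, x=2, y=1, w=4, h=3)
```

Working on a copy leaves the original image intact, so a failed desensitization can fall back to manual processing as the method requires.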
Regarding feature extraction:
common image features include color features, shape features, texture features, and edge features.
Given the characteristics of examination images, the method mainly extracts character features; the texture of the paper and the relief of the printed characters are not considered, and extraction should be unaffected by paper colour and character colour. Colour and texture features are therefore excluded, and the edge and shape features of the examination image are extracted instead.
Edge features: a sobel operator detects whether the examination image contains edges with obvious change or discontinuous regions, extracting the edge features of the examination image.
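A dependency-free sketch of the sobel edge detection described above; the 3×3 kernels are the standard Sobel operators, and the synthetic step-edge image is illustrative only:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(gray):
    """Gradient magnitude from the two 3x3 Sobel kernels
    (valid region only, no border padding)."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx = np.sum(SOBEL_X * patch)   # horizontal gradient
            gy = np.sum(SOBEL_Y * patch)   # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out

# Vertical step edge: strong response at the edge, none in flat areas.
img = np.zeros((5, 6))
img[:, 3:] = 100.0
mag = sobel_magnitude(img)
```

The magnitude image is large exactly where the intensity changes abruptly, which is the "obvious change or discontinuous region" the text refers to.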
Shape features: the expression of shape features must be based on the segmentation of objects or regions in the image. SIFT is adopted to extract local image feature points in scale space; these points are invariant to rotation, scaling and brightness change, and also retain a certain stability under viewpoint change, affine transformation and noise.
Scale-invariant feature transform (SIFT) is a computer vision algorithm used to detect and describe local features in an image: it searches for extreme points in scale space and extracts their position, scale and rotation invariants. The algorithm comprises the following steps:
Scale-space extrema detection: image locations are searched over all scales, and potential interest points invariant to scale and rotation are identified by a difference-of-Gaussian function.
Feature point filtering and key point positioning: at each candidate location, the location and scale are determined by fitting a fine model. The selection of the key points depends on their degree of stability.
Direction determination: one or more directions are assigned to each keypoint location based on the local gradient direction of the image. All subsequent operations on the image data are transformed with respect to the orientation, scale and location of the keypoints, providing invariance to these transformations.
Key point descriptor: local gradients of the image are measured at a selected scale in a neighborhood around each keypoint. These gradients are transformed into a representation that allows for relatively large local shape deformations and illumination variations.
In S3, the SIFT feature extraction steps are as follows:
S3.1, detecting extreme values of the scale space: acquiring the scale space and constructing the image pyramid;
In different scale spaces, the same window cannot be used to detect extreme points: a small window is used for small keypoints and a large window for large keypoints, so a scale-space filter is applied, adopting the Gaussian kernel, the only kernel able to generate a multi-scale space;
The scale space is defined as

L(x, y, σ) = G(x, y, σ) * I(x, y)  (1)

G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))  (2)

where (x, y) is the position of a pixel in the image, I(x, y) is the original image, * denotes the convolution operation, G(x, y, σ) is the Gaussian function, and σ, the scale-space factor, is the standard deviation of the Gaussian normal distribution; it reflects the degree of image blurring, and the larger its value, the more blurred the image and the larger the corresponding scale.
The scale spaces of different images form an image Gaussian pyramid, the images are blurred and subjected to down-sampling through functions of formulas (1) and (2) to obtain a plurality of groups of images, and different groups of images comprise a plurality of layers of images.
The number of groups of the Gaussian pyramid is calculated as

O = log₂ min(M, N) − t  (3)

where O is the number of groups of the Gaussian pyramid and M, N are respectively the rows and columns of the original image; the coefficient t is any value in [0, log₂ min(M, N)];
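The group-count formula O = log₂ min(M, N) − t can be checked numerically; the offset t = 3 below is an assumption, chosen so that the smallest pyramid level stays at least 8 pixels wide:

```python
import math

def num_octaves(rows, cols, t=3):
    """Number of Gaussian-pyramid groups (octaves):
    O = floor(log2(min(M, N))) - t.  Each octave halves the image,
    so t bounds how small the top level is allowed to get."""
    return int(math.floor(math.log2(min(rows, cols)))) - t

octaves = num_octaves(512, 512)   # a typical CT-slice size
```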
The scale of each layer is

σ(o, s) = σ₀ · 2^(o + s/S)  (4)

where s is the layer index, σ₀ is the initial scale, S is the number of layers in each group, and o is the group index;
The relationship between the image scales of adjacent layers within the same group is

σ_{s+1} = 2^{1/S} · σ_s  (5)

and the relationship between adjacent groups is

σ_{o+1} = 2 · σ_o  (6)
S3.2, constructing the image Gaussian difference pyramid;
The image is differenced along the scale axis, yielding the points that stand out in scale space, i.e. the gradient extrema along the scale axis. These are computed with the difference-of-Gaussian (DOG) function; each pair of adjacent layers within a group of the Gaussian pyramid forms one layer of the Gaussian difference pyramid:

D(x, y, σ) = L(x, y, kσ) − L(x, y, σ)  (7)

where k = 2^{1/S} is the scale ratio of adjacent layers;
S3.3, detecting DOG spatial extreme values;
and searching extreme points, and searching extreme values in the DOG space, wherein points with the extreme values larger or smaller than the set surrounding points are regarded as key points.
S3.4, feature point filtering and keypoint localization
Because the DOG response is sensitive to noise and edges, the local extreme points detected in the Gaussian difference pyramid of S3.2 must pass further inspection before they can be accurately located as feature points;
First, smaller extreme values are removed. To obtain more accurate keypoint positions, the DOG function is expanded around each candidate keypoint by a second-order Taylor series:

D(X) = D + (∂Dᵀ/∂X) X + (1/2) Xᵀ (∂²D/∂X²) X  (8)

where X = (x, y, σ)ᵀ, σ is the Gaussian filtering parameter, and (x, y) is the image pixel position.

Then the extremum is obtained from formula (8): setting its derivative to zero gives the extreme point

X̂ = −(∂²D/∂X²)⁻¹ (∂D/∂X)  (9)

and the DOG value at the extremum

D(X̂) = D + (1/2) (∂Dᵀ/∂X) X̂  (10)

Candidates whose |D(X̂)| falls below a set threshold are discarded as low-contrast points.
Along an edge, the DOG response often forms a ridge line: the change is slow (small curvature) along the edge direction and drastic (large curvature) perpendicular to it. Removing edge noise therefore amounts to removing these ridge lines.
S3.5, first, the change trend around the extreme point of formula (9) is described by a Hessian matrix, whose eigenvalues correspond to the curvatures along its eigenvector directions: the larger the eigenvalue, the more drastic the change of the function in that direction, i.e. the larger the curvature.

The Hessian matrix is computed from second-order differences:

H = | Dxx  Dxy |
    | Dxy  Dyy |  (11)

where Dxx, Dxy, Dyy are the second partial derivatives of the DOG function with respect to the pixel position.

The ratio of the eigenvalues gives the variation trend along the eigenvector directions. Let α and β be the two eigenvalues; then

Tr(H) = Dxx + Dyy = α + β  (12)

Det(H) = Dxx · Dyy − Dxy² = α · β  (13)

where Tr(H) and Det(H) are respectively the trace and the determinant of the matrix. With r = α/β the ratio of the eigenvalues,

Tr(H)² / Det(H) = (r + 1)² / r  (14)

This quantity is minimal when α = β and grows as r grows; points with Tr(H)²/Det(H) ≥ (r + 1)²/r for a set threshold r are therefore removed as edge points.
S3.6, direction determination
The keypoint localization of S3.5 yields more accurate keypoints, which are scale-invariant. To achieve rotation invariance, a direction angle must be assigned to each keypoint, i.e. the keypoint direction is confirmed from the neighbourhood structure of the Gaussian scale image in which the keypoint was detected.
First, for any keypoint, the gradient features of all pixels within a circular region of radius r on the corresponding Gaussian pyramid image are collected, with

r = 3 × 1.5σ  (15)

where σ is the scale of the corresponding scale image, so that the region covers the pixels contributing at the keypoint's scale;
Then, the gradient magnitude and direction of every sample point in the region around the keypoint are calculated as

m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]  (16)

θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]  (17)
Next, the directions are divided into several bins (36 bins of 10° in standard SIFT), the direction histogram of the sample points is accumulated with Gaussian weighting, and the bin corresponding to the maximum peak is taken as the direction of the keypoint;
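A sketch of the orientation-histogram step: the 36-bin layout is the standard SIFT choice, and the Gaussian weighting of the samples is assumed to be folded into the magnitudes passed in:

```python
import numpy as np

def dominant_orientation(magnitudes, angles_deg, bins=36):
    """Accumulate a magnitude-weighted orientation histogram over the
    keypoint neighbourhood and return the peak bin's centre angle
    (degrees) as the keypoint direction."""
    width = 360.0 / bins
    hist = np.zeros(bins)
    for m, a in zip(magnitudes, angles_deg):
        hist[int(a % 360 // width) % bins] += m
    return (np.argmax(hist) + 0.5) * width

# Samples whose gradients mostly point near 90 degrees.
mags = [1.0, 2.0, 1.5, 0.5]
angs = [92.0, 88.0, 95.0, 181.0]
direction = dominant_orientation(mags, angs)
```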
S3.7, keypoint descriptors;
After the keypoints of the image have been found at different scales, the features around each keypoint must be obtained for subsequent classification or matching.
First, the neighbourhood around the keypoint is divided into 4 × 4 sub-regions; in each sub-region a gradient histogram over 8 directions is computed, each sub-region acting as a seed point, which yields a vector of length 4 × 4 × 8 = 128  (18).

Then, to ensure rotation invariance, the coordinate axes are fixed to the keypoint direction, i.e. the image is rotated so that the keypoint direction becomes the x-axis, and the direction histograms of the rotated image are accumulated region by region.

The values after coordinate rotation are

| x′ |   | cos θ  −sin θ | | x |
| y′ | = | sin θ   cos θ | | y |  (19)

where θ is the angle between the keypoint direction and the x-axis; a clockwise rotation angle is negative and a counter-clockwise one positive;
Next, the gradient of the pixels within each sub-region is calculated and Gaussian-weighted with the sub-region scale, and the gradient of each seed point in the eight directions is obtained by bilinear interpolation: a rotated sample point (x′, y′) contributes to its neighbouring seed points with weight w = w_g · (1 − d_r) · (1 − d_c) · (1 − d_o), where w_g is the Gaussian weight of the sample and d_r, d_c, d_o are its fractional distances to the grid point along the two spatial directions and the direction axis;
Then the region size and the Gaussian weighting scale are chosen. Each sub-region is kept consistent with the region used when computing the keypoint direction, i.e. its edge length is 3σ, where σ is the scale of the image in scale space  (20). Considering rotation, the sampled region must still cover the d × d = 4 × 4 sub-regions after being rotated, so the overall region radius is

r = 3σ · √2 · (d + 1) / 2  (21)
Finally, to remove the effect of illumination, the feature vector H = (h₁, h₂, …, h₁₂₈) generated at the keypoint is normalized:

l_j = h_j / √(Σᵢ₌₁¹²⁸ hᵢ²),  j = 1, 2, …, 128  (22)
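The keypoint-vector normalization can be sketched as follows. The extra clip-at-0.2-and-renormalize step is Lowe's refinement for non-linear illumination changes and goes beyond the plain normalization the patent states:

```python
import numpy as np

def normalize_descriptor(vec, clip=0.2):
    """Normalize a 128-d descriptor to unit length (cancels affine
    illumination change), clip large components at 0.2 to damp
    non-linear illumination effects, then renormalize."""
    v = vec / np.linalg.norm(vec)
    v = np.minimum(v, clip)
    return v / np.linalg.norm(v)

desc = np.abs(np.random.default_rng(0).normal(size=128))
out = normalize_descriptor(desc)
```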
The present invention has been described in sufficient detail for clarity of disclosure; an exhaustive account of the prior art is not necessary.
Finally, it should be noted that the above examples are intended only to illustrate, not to limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, that some technical features may be equivalently replaced, and that combining several aspects of the invention is obvious to a person skilled in the art; such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (6)
1. A method of desensitizing a medical image based on an examination image, characterized by comprising the following steps:
S1, judging whether the original image is readable;
S2, if the original image is readable, performing data preprocessing on the provided original image to obtain its binary image;
S3, processing the binary image, cropping the sensitive information, detecting features with SIFT and extracting them from the examination image to obtain keypoints and image descriptors, and building the desensitization information database; that is, the sensitive information is cropped from the preprocessed image, the features of the information requiring desensitization are extracted, and the resulting feature descriptors form the desensitization information database;
S4, first, performing feature matching between the obtained image descriptors and those of the desensitization information database to obtain the information areas requiring desensitization; then, during feature matching, adopting fast nearest-neighbour search;
S5, performing data desensitization on the content following the matched desensitization information areas; if desensitization succeeds, the desensitized image is obtained, and if it fails, a reminder is given that manual participation is needed.
2. The examination image-based medical image desensitization method according to claim 1, characterized in that: in S4, the features extracted from the CT image serve as conjugate entities and the extracted feature attributes or description parameters serve as matching entities, and image matching by conjugate-entity registration is realized by computing a similarity measure between the matching entities to obtain the position of the sensitive information; the features may be physical features of the actual picture or abstract features derived from the image.
3. The examination image-based medical image desensitization method according to claim 1, characterized in that: in S5, when desensitizing the sensitive information, the desensitizing process is performed on the sensitive information such as the name and the case number of the patient in the examination image by a masking operation.
4. The examination image-based medical image desensitization method according to claim 1, characterized in that: in S1, regarding feature extraction;
First, the examination-image characteristics are analysed and the edge features and shape features of the examination image are extracted; then the edge features are processed: a sobel operator detects whether the examination image has edges with obvious change or discontinuous regions, and the edge features of the examination image are extracted; next the shape features are processed: the expression of the shape features is based on the segmentation of objects or regions in the image, and SIFT is adopted to extract local image feature points in scale space, which are invariant to rotation, scaling and brightness change and stable under viewpoint change, affine transformation and noise.
5. The examination image-based medical image desensitization method according to claim 4, characterized in that: in the scale-invariant feature transform process, first, scale-space extrema detection searches image positions over all scales and identifies potential interest points invariant to scale and rotation by a difference-of-Gaussian function; then, feature point filtering and keypoint localization determine position and scale through a fitting model at each candidate position, keypoints being selected when their degree of stability meets a set threshold; next, direction determination assigns one or more directions to each keypoint position based on the local gradient direction of the image; finally, the keypoint descriptors are processed, measuring the local gradient of the image at the selected scale in a neighbourhood around each keypoint.
6. The examination image-based medical image desensitization method according to claim 1, characterized in that: in S3, the SIFT feature extraction steps are as follows:
S3.1, detecting extreme values of the scale space: acquiring the scale space and constructing the image pyramid;
first, based on the principle that a small window is used for a small keypoint and a large window for a large keypoint, a scale-space filter is applied, adopting the Gaussian kernel, the only kernel able to generate a multi-scale space;
wherein the content of the first and second substances,the position of a pixel representing the image,which represents the original image or images of the original image,which represents a convolution operation, is a function of,the expression of the function of gaussian is given,the scale space factor is a standard deviation of Gaussian normal distribution, reflects the degree of the blurred image, and the larger the value of the scale space factor is, the more blurred the image is, the larger the corresponding scale is;
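As a hedged illustration of the Gaussian function and convolution of formulas (1) and (2), the scale-space blur can be sketched in NumPy as below. The 3σ truncation radius and edge padding are common conventions, not specified by the patent.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Sample the Gaussian function G(x, y, sigma) of formula (2) on a grid."""
    radius = int(3 * sigma + 0.5)                 # 3-sigma support covers ~99.7% of the mass
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return g / g.sum()                            # renormalise after truncation

def blur(img, sigma):
    """L(x, y, sigma) = G(x, y, sigma) * I(x, y), i.e. formula (1)."""
    k = gaussian_kernel(sigma)
    r = k.shape[0] // 2
    pad = np.pad(img.astype(float), r, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(pad[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out
```

A larger σ spreads an impulse more widely, matching the statement that a larger scale-space factor yields a blurrier image.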
secondly, the scale spaces of the different images form an image Gaussian pyramid: the images are blurred and down-sampled through the functions of formulas (1) and (2) to obtain a plurality of groups of images, each group comprising a plurality of layers; the group-number calculation formula of the Gaussian pyramid is:

O = log₂ min(M, N) − t   (3)

wherein O represents the number of groups of the Gaussian pyramid, M and N are respectively the rows and columns of the original image, and the coefficient t is any value in [0, log₂ min(M, N));
The scale of each layer is:

σ(o, s) = σ₀ · 2^(o + s/S)   (4)

wherein s is the layer in which the image is located, σ₀ is the initial scale, S is the number of layers in each group, and o is the index of the group. Thereafter,
determining the relationship between the image scales of adjacent layers in the same group: σ₍ₛ₊₁₎ = 2^(1/S) · σₛ   (5)
determining the relationship between neighboring groups: σ₍ₒ₊₁₎ = 2 · σₒ   (6)
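The octave count of formula (3) and the per-layer scale of formula (4) can be computed as follows; the default t = 3 and the symbol names are illustrative assumptions, not values fixed by the patent.

```python
import math

def num_octaves(rows, cols, t=3):
    """Formula (3): O = log2(min(M, N)) - t, with t in [0, log2(min(M, N)))."""
    return int(math.floor(math.log2(min(rows, cols)))) - t

def layer_sigma(sigma0, o, s, S):
    """Formula (4): sigma(o, s) = sigma0 * 2**(o + s / S)."""
    return sigma0 * 2.0 ** (o + s / S)
```

The two relations of formulas (5) and (6) fall out directly: consecutive layers in a group differ by a factor 2^(1/S), and the same layer in consecutive groups differs by a factor 2.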
S3.2, constructing the image Gaussian difference pyramid; the image is differenced along the scale axis to obtain the points that are more salient in the scale space, namely the gradient extreme values along the scale axis; a DoG (difference-of-Gaussian) function is adopted to calculate these gradient extreme values: every two adjacent layers within each group of the Gaussian pyramid are subtracted to form the Gaussian difference pyramid, the DoG function being:

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)   (7)

wherein k = 2^(1/S) is the scale ratio of adjacent layers;
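One octave of the DoG pyramid of formula (7) can be sketched as below. This is an illustrative NumPy-only version; the choice of S + 3 Gaussian layers per octave (yielding S + 2 DoG layers) and σ₀ = 1.6 follow the usual SIFT convention rather than the patent text.

```python
import numpy as np

def _blur(img, sigma):
    """Separable Gaussian blur with edge padding; stand-in for L(x, y, sigma)."""
    r = max(1, int(3 * sigma + 0.5))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img.astype(float), r, mode="edge")
    # convolve rows, then columns (kernel is symmetric)
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, tmp)

def dog_octave(img, sigma0=1.6, S=3):
    """Formula (7): D = L(x, y, k*sigma) - L(x, y, sigma), k = 2**(1/S).

    Builds S + 3 Gaussian layers and subtracts adjacent pairs.
    """
    k = 2.0 ** (1.0 / S)
    gauss = [_blur(img, sigma0 * k ** i) for i in range(S + 3)]
    return [b - a for a, b in zip(gauss[:-1], gauss[1:])]
```

A constant image has no structure at any scale, so every DoG layer is (numerically) zero.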
S3.3, detecting a DoG spatial extreme value;
searching for extreme points: extreme values are searched in the DoG space, and a point whose value is larger or smaller than those of all of its surrounding points (8 neighbours in its own layer and 9 in each of the two adjacent layers, 26 in total) is regarded as a key point;
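The 26-neighbour extremum test can be sketched as follows; an illustrative NumPy check, assuming the DoG octave is given as a list of 2-D arrays and (s, i, j) is an interior position.

```python
import numpy as np

def is_extremum(dog, s, i, j):
    """True if dog[s][i, j] is strictly larger or strictly smaller than all
    26 neighbours: 8 in its own layer plus 9 in each adjacent layer."""
    cube = np.stack([dog[s - 1][i - 1:i + 2, j - 1:j + 2],
                     dog[s][i - 1:i + 2, j - 1:j + 2],
                     dog[s + 1][i - 1:i + 2, j - 1:j + 2]])
    centre = dog[s][i, j]
    others = np.delete(cube.ravel(), 13)   # flat index 13 is the centre voxel
    return bool(np.all(centre > others) or np.all(centre < others))
```

A point tied with a neighbour is not a strict extremum and is rejected, matching the "larger or smaller than the surrounding points" condition.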
S3.4, filtering feature points and positioning key points;
first, smaller extreme values are removed; in order to obtain more accurate key point positions, a Taylor quadratic expansion of the DoG function is performed at each key point:

D(x) = D + (∂D/∂x)ᵀ · x + ½ · xᵀ · (∂²D/∂x²) · x   (8)

wherein x = (x, y, σ)ᵀ, σ is the parameter of the Gaussian filtering, and (x, y) is the image pixel point;
then, the extreme value is obtained by solving formula (8): setting the derivative of formula (8) to zero yields the extreme point

x̂ = −(∂²D/∂x²)⁻¹ · (∂D/∂x)   (9)

Substituting x̂ back into formula (8) gives the response at the extreme point,

D(x̂) = D + ½ · (∂D/∂x)ᵀ · x̂   (10)

and key points whose |D(x̂)| is too small are removed as low-contrast points;
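The sub-pixel refinement of formula (9) is a single linear solve. A minimal sketch, assuming the gradient and Hessian of the DoG function at the candidate point have already been estimated by finite differences:

```python
import numpy as np

def refine_offset(grad, hessian):
    """Formula (9): x_hat = -(d2D/dx2)^-1 * (dD/dx).

    grad: 3-vector (dD/dx, dD/dy, dD/dsigma).
    hessian: 3x3 matrix of second derivatives.
    If any component of the offset exceeds 0.5, the true extremum lies
    closer to a neighbouring sample and the candidate is usually re-centred.
    """
    return -np.linalg.solve(np.asarray(hessian, float), np.asarray(grad, float))
```

For the quadratic D(x) = ½·xᵀHx + gᵀx the stationary point is exactly −H⁻¹g, which is what the solve computes.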
S3.5, firstly, the variation trend around the extreme point of formula (9) is described through a Hessian matrix: the eigenvalues of a covariance matrix correspond to the projections onto its eigenvector directions, and each eigenvalue of the Hessian matrix is proportional to the curvature along the direction of its eigenvector;
the Hessian matrix is calculated by the second-order difference formula (11):

H = [ D_xx   D_xy
      D_xy   D_yy ]   (11)

wherein D_xx represents the second partial derivative of the DoG function with respect to the pixel coordinate x, and D_xy, D_yy are defined analogously;
the ratio of the eigenvalues is calculated to obtain the variation trend along the eigenvector directions; let α and β be the two eigenvalues of H, then

Tr(H) = D_xx + D_yy = α + β   (12)

Det(H) = D_xx · D_yy − D_xy² = α · β   (13)

wherein Tr(H) and Det(H) are respectively the trace of the matrix and the determinant of the matrix; letting γ = α/β be the ratio of the eigenvalues,

Tr(H)² / Det(H) = (α + β)² / (α · β) = (γ + 1)² / γ   (14)

When γ = 1, (γ + 1)²/γ reaches its minimum; the larger γ is, the larger the corresponding ratio, and points whose ratio exceeds the threshold (γ₀ + 1)²/γ₀ for a set γ₀ are removed as edge points;
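The edge-response test of formulas (12)–(14) can be sketched as below; the threshold γ₀ = 10 is Lowe's common choice, an assumption not stated in the patent.

```python
def passes_edge_test(dxx, dyy, dxy, gamma0=10.0):
    """Keep a key point only if Tr(H)^2 / Det(H) < (gamma0 + 1)^2 / gamma0,
    using the Hessian entries of formula (11)."""
    tr = dxx + dyy                  # formula (12): alpha + beta
    det = dxx * dyy - dxy * dxy     # formula (13): alpha * beta
    if det <= 0:                    # principal curvatures of opposite sign: discard
        return False
    return tr * tr / det < (gamma0 + 1.0) ** 2 / gamma0   # formula (14)
```

An isotropic blob (α ≈ β) passes, while a point sitting on an edge (α ≫ β) produces a large ratio and is rejected, exactly the behaviour the claim describes.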
S3.6, determination of orientation
In order to realize rotation invariance, a direction angle needs to be allocated to each key point, namely the direction of the key point is determined within the neighbourhood structure of the Gaussian-scale image in which the detected key point lies;
firstly, for any key point, the gradient characteristics of all pixels within the circular region of radius r around it in the Gaussian pyramid image are acquired, the radius r being:

r = 3 × 1.5σ   (15)

wherein σ is the scale of the corresponding scale image, and r determines the number of pixels entering the statistics;
then, the gradient values and directions of all sample points in the region around the key point are calculated through formula (16) and formula (17):

m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]   (16)

θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]   (17)
secondly, the direction range is divided into a plurality of bins; the direction histogram of the sample points is counted with Gaussian-function weighting, and the bin corresponding to the maximum peak is taken as the direction of the key point;
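The histogram-peak orientation step can be sketched as follows. The 36 bins of 10° each are the usual SIFT choice; the claim only says "a plurality of bins". The Gaussian weighting is assumed to have been applied to the magnitudes already.

```python
import numpy as np

def dominant_orientation(magnitudes, angles, n_bins=36):
    """Weighted orientation histogram; the peak bin gives the key point direction.

    magnitudes: gradient magnitudes (formula (16)), already Gaussian-weighted.
    angles: gradient directions in degrees, in [0, 360).
    Returns the lower edge (degrees) of the peak bin.
    """
    width = 360.0 / n_bins
    hist = np.zeros(n_bins)
    bins = (np.asarray(angles) / width).astype(int) % n_bins
    np.add.at(hist, bins, magnitudes)      # accumulate magnitude into each bin
    return int(np.argmax(hist)) * width
```

A refinement not shown here (and not claimed in the patent text) is keeping secondary peaks above 80% of the maximum as additional key point orientations.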
S3.7, key point descriptors;
after key points of the image at different scales are found, in order to realize subsequent classification or matching, the features around the key points need to be obtained;
first, the neighbourhood around the key point is divided into d × d sub-regions; within each sub-region a direction histogram of length 8 is counted, each histogram serving as a seed point, so that a vector of length d × d × 8 is obtained (with the usual d = 4 this gives a 128-dimensional descriptor);
then, to ensure rotation invariance, the direction of the key point is fixed as the common reference direction, i.e. the image is rotated so that the direction of the key point coincides with the x coordinate axis, and the direction histogram of the rotated image is counted region by region; the values after coordinate rotation are:

x′ = x · cos θ + y · sin θ
y′ = −x · sin θ + y · cos θ   (18)

wherein θ is the angle between the direction of the key point and the x coordinate axis; a clockwise rotation angle is negative, and a counter-clockwise rotation angle is positive;
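The coordinate rotation of formula (18) can be sketched as a small helper; an illustrative version with the angle given in degrees.

```python
import math

def rotate_to_keypoint_frame(x, y, theta_deg):
    """Formula (18): rotate sample coordinates so the key point direction
    becomes the x axis; counter-clockwise angles are positive."""
    t = math.radians(theta_deg)
    xr = x * math.cos(t) + y * math.sin(t)
    yr = -x * math.sin(t) + y * math.cos(t)
    return xr, yr
```

As a sanity check, the unit vector pointing along the key point direction itself must land on the positive x axis after the rotation.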
secondly, the gradients of the pixels within each sub-region are calculated and Gaussian-weighted with scale σ_w, and the gradients of each seed point in the eight directions are obtained by bilinear interpolation:

w = m(x′, y′) · e^(−(x′² + y′²)/(2σ_w²)) · (1 − d_r)(1 − d_c)(1 − d_o)   (19)

wherein (x′, y′) are the rotated sample points, limited in distance around the key point, m(x′, y′) is the gradient magnitude at the coordinates (x′, y′), e^(−(x′² + y′²)/(2σ_w²)) is the Gaussian weight, and d_r, d_c, d_o are respectively the influence rates of the grid point in the two coordinate directions and the influence rate in the required direction;
then, the size of each region and the scale of the Gaussian weight are selected; each sub-region is kept consistent with the region size used when calculating the direction of the key point, namely a side length of 3σ, wherein σ is the scale of the image in scale space;
then, considering the problem of rotation, the radius is enlarged so that the selected area is not partially empty after rotation and still covers the d × d sub-regions; the radius of each sub-region therefore carries a factor of √2, and the overall region radius is:

r = 3σ · √2 · (d + 1) / 2   (20)
later, to remove the illumination effect, the feature vector h = (h₁, h₂, …, hₙ) generated by the key point is normalized, the calculation formula being:

l_j = h_j / √(Σᵢ hᵢ²),  j = 1, 2, …, n   (21)
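The descriptor normalization of formula (21) is a plain L2 normalization; a minimal sketch:

```python
import numpy as np

def normalize_descriptor(vec):
    """Formula (21): divide each component by the L2 norm of the feature
    vector so that uniform illumination changes cancel out."""
    v = np.asarray(vec, dtype=float)
    return v / np.sqrt(np.sum(v ** 2))
```

Because a multiplicative illumination change scales every gradient magnitude by the same factor, the normalized vector is unchanged, which is the effect the step is designed to remove.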
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210434694.4A CN114822781A (en) | 2022-04-24 | 2022-04-24 | Medical image desensitization method based on examination images |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114822781A true CN114822781A (en) | 2022-07-29 |
Family
ID=82506817
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102306287A (en) * | 2011-08-24 | 2012-01-04 | 百度在线网络技术(北京)有限公司 | Method and equipment for identifying sensitive image |
KR20160057024A (en) * | 2014-11-12 | 2016-05-23 | 한국전기연구원 | Markerless 3D Object Tracking Apparatus and Method therefor |
CN108921939A (en) * | 2018-07-04 | 2018-11-30 | 王斌 | A kind of method for reconstructing three-dimensional scene based on picture |
CN113688837A (en) * | 2021-09-29 | 2021-11-23 | 平安科技(深圳)有限公司 | Image desensitization method, device, electronic equipment and computer readable storage medium |
Non-Patent Citations (1)
Title |
---|
Wang Yang, Liu Libo: "Research and Implementation of a CT Medical Image Desensitization System Based on DICOM", Modern Computer (Professional Edition) * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117278692A (en) * | 2023-11-16 | 2023-12-22 | 邦盛医疗装备(天津)股份有限公司 | Desensitization protection method for monitoring data of medical detection vehicle patients |
CN117278692B (en) * | 2023-11-16 | 2024-02-13 | 邦盛医疗装备(天津)股份有限公司 | Desensitization protection method for monitoring data of medical detection vehicle patients |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20220729 |