CN113539442A - Working method for acquiring abnormal medical image - Google Patents

Working method for acquiring abnormal medical image Download PDF

Info

Publication number
CN113539442A
CN113539442A (application CN202110882682.3A)
Authority
CN
China
Prior art keywords
gray
feature
characteristic
region
characteristic region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110882682.3A
Other languages
Chinese (zh)
Inventor
刘玉蓉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Zhenni Thinking Technology Co ltd
Original Assignee
Chongqing Zhenni Thinking Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Zhenni Thinking Technology Co ltd filed Critical Chongqing Zhenni Thinking Technology Co ltd
Priority to CN202110882682.3A
Publication of CN113539442A
Legal status: Withdrawn (current)

Links

Images

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/245 Classification techniques relating to the decision surface
    • G06F 18/2451 Classification techniques relating to the decision surface linear, e.g. hyperplane

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a working method for acquiring an abnormal medical image, which comprises the following steps: S1, acquiring an original medical image, drawing coordinates on the medical image, dividing the boundaries of feature regions, describing the feature regions along a time track, and extracting features of the feature regions over the time development track; S2, after the features are extracted, computing the extracted feature regions with a training analysis model, screening out abnormal features, and transmitting them to a remote terminal.

Description

Working method for acquiring abnormal medical image
Technical Field
The invention relates to the field of big data, in particular to a working method for acquiring abnormal medical images.
Background
During the medical examination of a patient, a large number of medical images are acquired, and extracting abnormal images with traditional detection algorithms cannot meet the required detection accuracy. The judgment area of an abnormal image changes dynamically, so abnormal feature coordinates must satisfy new detection criteria before the abnormal area can be determined. However, when a new abnormal image appears in the same area, no acceptable adjacent abnormal area exists, so the gray value or the abnormal area cannot yield the feature value of a connected target; the threshold then has to be re-tuned repeatedly, which is time-consuming, increases system overhead, and causes abnormal images to be missed. Those skilled in the art therefore urgently need to solve the corresponding technical problems.
Disclosure of Invention
The invention aims to at least solve the technical problems in the prior art, and particularly provides a working method for acquiring abnormal medical images.
In order to achieve the above object, the present invention provides a working method for acquiring an abnormal medical image, which includes:
S1, acquiring an original medical image, drawing coordinates on the medical image, dividing the boundaries of feature regions, describing the feature regions along a time track, and extracting features of the feature regions over the time development track;
S2, after the features are extracted, computing the extracted feature regions with a training analysis model, screening out abnormal features, and transmitting them to a remote terminal.
Preferably, the S1 includes:
S1-1, extracting a medical image of the patient and setting a coordinate starting point (x, y) on the medical image; traversing the pixel points P row by row from the starting point, marking the gray distribution of all medical images and the distribution of gray-mutation areas according to the gray values of the pixel points P, and delimiting the feature region of the target medical image; the boundary of the feature region is divided by adjusting a gray-level cognitive model K(x, y),
Figure BDA0003192918850000021
Because the pixel coordinates form a coordinate matrix, specific coordinates are extracted from each pixel coordinate through a gray-scale cost function w(a), where a is the frequency of gray change. During the calculation of the gray-scale cost function, the feature attributes of the pixel coordinates are converged and the extracted pixel coordinates undergo gray-deviation judgment, where p(x_n) is the probability of gray change along the x-axis for the n-th pixel point, p(y_n) is the probability of gray change along the y-axis for the n-th pixel point, u(p(x_n)) is the x-axis gray-change weight function of the n-th pixel point, and v(p(y_n)) is the y-axis gray-change weight function of the n-th pixel point.
In the judgment process of the gray-level cognitive model, threshold judgment is performed on the pixel-coordinate noise produced by the gray-change judgment result, and the specific coordinates are extracted by the threshold decision mechanism of the gray-scale value function w(a):
Figure BDA0003192918850000022
where Q(a) is the offset value of the overall gray change and δ is the gray-change degree adjustment factor.
Preferably, the S1 further includes:
S1-2, describing the divided medical images by feature region along the time development track, generating a feature-region change queue in time order, and recording the image times as t_1, t_2, t_3, …, t_m, where t_m is the end time; the divided feature regions are cut into equal blocks, each sub-block is compared for color difference against a reference feature image, and the feature-region matching function over the time development track is calculated as
M(t)=F(t)·γ·E(η(t+1))+T(t)
where F(t) is the judgment function that traverses the feature images in order of time t, γ is a fitting factor, E(η(t+1)) is the match value of the expectation function η(t+1) of the feature image at time node t+1, and T(t) is the time cost of the feature-image extraction framework; the sub-block features in the feature region are then extracted along the time development track.
Preferably, the S1 further includes:
S1-3, performing gray-threshold judgment: the feature region in the time development track is preliminarily extracted through the threshold, a gradual-change degree threshold μ for the overall gray of the feature region is generated, and the judgment formula is
Figure BDA0003192918850000031
where c is the total number of sub-blocks with gray change in the feature region, |E_t| is the related image-comparison sequence of the sub-blocks in the feature region over the time development track, ε is the gray gradient of the sub-block to be extracted, and λ is the threshold set for the gray level.
Preferably, the S2 includes:
S2-1, when the extracted feature region is computed by the training analysis model, the overall difference of the feature region is observed and anomalies are extracted from the difference;
the training analysis model is
Figure BDA0003192918850000032
where N_j is the gray-normalized feature of the feature region, D_j is the gray class of the feature region, B_j is the gray gradient of the feature region, A(C(e)) is the gray classification function of the feature region, C(e) is the gray-shift feature quantity, e is the gray level, and m is a positive integer.
Preferably, the S2 includes:
S2-2, screening the abnormal gray features through the definition of the optimal hyperplane according to the gray differences of the feature regions; the optimal hyperplane is
Figure BDA0003192918850000033
where w is the feature-region weight vector, w_0 is called the bias, x denotes the gray pixel closest to the hyperplane, and T denotes transposition; the distance from a pixel coordinate to the feature-region hyperplane (w, w_0) is:
Figure BDA0003192918850000041
where the gray difference of the feature region consists of a distance gray learning value r_j and a transformed gray training loss value s_j; through adjustment by the scalar σ, the gray training samples z of all feature regions implicit in the hyperplane are converged through a feature-region screening function R(w),
Figure BDA0003192918850000042
where H_r is the feature-region gray sample class, H_c is the feature-region gray training threshold,
Figure BDA0003192918850000043
is the adjustment parameter, and p is the number of iterations.
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
the method comprises the steps of extracting characteristic images aiming at abnormal lesions of the liver or the lung, more accurately obtaining position information of the abnormal lesions by setting a judgment threshold, screening abnormal images after comparison with a reference image, and transmitting the abnormal images to a remote server for reference of the abnormal images.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic diagram of an embodiment of the present invention;
FIG. 2 is a schematic diagram of an embodiment of the present invention;
fig. 3 is a general schematic of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As shown in fig. 3, the present invention discloses a working method for obtaining an abnormal medical image, which comprises the following steps:
S1, acquiring an original medical image, drawing coordinates on the medical image, dividing the boundaries of feature regions, describing the feature regions along a time track, and extracting features of the feature regions over the time development track;
S2, after the features are extracted, computing the extracted feature regions with a training analysis model, screening out abnormal features, and transmitting them to a remote terminal.
The S1 includes:
S1-1, extracting a medical image of the patient and setting a coordinate starting point (x, y) on the medical image; traversing the pixel points P row by row from the starting point, marking the gray distribution of all medical images and the distribution of gray-mutation areas according to the gray values of the pixel points P, and delimiting the feature region of the target medical image; the boundary of the feature region is divided by adjusting a gray-level cognitive model K(x, y),
Figure BDA0003192918850000051
Because the pixel coordinates form a coordinate matrix, specific coordinates are extracted from each pixel coordinate through a gray-scale cost function w(a), where a is the frequency of gray change. During the calculation of the gray-scale cost function, the feature attributes of the pixel coordinates are converged and the extracted pixel coordinates undergo gray-deviation judgment, where p(x_n) is the probability of gray change along the x-axis for the n-th pixel point, p(y_n) is the probability of gray change along the y-axis for the n-th pixel point, u(p(x_n)) is the x-axis gray-change weight function of the n-th pixel point, and v(p(y_n)) is the y-axis gray-change weight function of the n-th pixel point.
In the judgment process of the gray-level cognitive model, threshold judgment is performed on the pixel-coordinate noise produced by the gray-change judgment result, and the specific coordinates are extracted by the threshold decision mechanism of the gray-scale value function w(a):
Figure BDA0003192918850000052
Q (a) is an offset value of the entire gray-scale variation, delta is a variation degree adjustment factor of gray-scale,
S1-2, describing the divided medical images by feature region along the time development track, generating a feature-region change queue in time order, and recording the image times as t_1, t_2, t_3, …, t_m, where t_m is the end time; the divided feature regions are cut into equal blocks, each sub-block is compared for color difference against the reference feature image, and the feature-region matching function over the time development track is calculated (this matching function records the features of the sub-blocks and completes the boundary division of the color differences, thereby extracting how much color difference arises inside the feature regions, as shown in FIG. 1 and FIG. 2):
M(t)=F(t)·γ·E(η(t+1))+T(t)
where F(t) is the judgment function that traverses the feature images in order of time t, γ is a fitting factor, E(η(t+1)) is the match value of the expectation function η(t+1) of the feature image at time node t+1, and T(t) is the time cost of the feature-image extraction framework; the sub-block features in the feature region are then extracted along the time development track.
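The matching formula M(t) = F(t)·γ·E(η(t+1)) + T(t) can be evaluated directly once its four components are available. The sketch below is illustrative only: the helper values used for F(t), E(η(t+1)) and T(t) are hypothetical placeholders, not the patent's definitions.

```python
import numpy as np

def matching_score(F_t: float, gamma: float, E_eta_next: float, T_t: float) -> float:
    """M(t) = F(t) * gamma * E(eta(t+1)) + T(t): match score of the feature
    region at time node t (S1-2). All four inputs are assumed to be produced
    elsewhere: F_t by the traversal judgment function, E_eta_next as the
    expected match value at t+1, T_t as the extraction-framework time cost."""
    return F_t * gamma * E_eta_next + T_t

def build_change_queue(frames, reference, gamma=0.9):
    """Keep the per-time matching scores in chronological order t_1 ... t_m.
    The mean absolute color difference against the reference feature image is
    a stand-in judgment value; the expectation and time-cost terms are
    placeholders for illustration."""
    queue = []
    for t, frame in enumerate(frames):
        F_t = float(np.mean(np.abs(frame - reference)))   # stand-in judgment value F(t)
        E_eta_next = F_t                                   # placeholder expectation E(eta(t+1))
        T_t = 0.0                                          # ignore framework time cost here
        queue.append((t, matching_score(F_t, gamma, E_eta_next, T_t)))
    return queue
```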
S1-3, performing gray-threshold judgment: the feature region in the time development track is preliminarily extracted through the threshold, a gradual-change degree threshold μ for the overall gray of the feature region is generated, and the judgment formula is
Figure BDA0003192918850000061
where c is the total number of sub-blocks with gray change in the feature region, |E_t| is the related image-comparison sequence of the sub-blocks in the feature region over the time development track, ε is the gray gradient of the sub-block to be extracted, and λ is the threshold set for the gray level.
the gray judgment of the sub-blocks can be completed in the time development track process according to the gray threshold value, so that the whole characteristic region can be accurately extracted.
The S2 includes:
S2-1, when the extracted feature region is computed by the training analysis model, the overall difference of the feature region is observed and anomalies are extracted from the difference;
the training analysis model is
Figure BDA0003192918850000071
where N_j is the gray-normalized feature of the feature region, D_j is the gray class of the feature region, B_j is the gray gradient of the feature region, A(C(e)) is the gray classification function of the feature region, C(e) is the gray-shift feature quantity, e is the gray level, and m is a positive integer; by summing the gray differences of all feature regions, gray extraction is performed on the sub-blocks along the time development tracks.
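The training analysis model of S2-1 is likewise given only as a formula image. The sketch below assumes a simple additive combination of the named quantities N_j, D_j, B_j and A(C(e)), purely to show how such a score could be assembled from them; it is not the patented model.

```python
def analysis_score(N, D, B, classify, shift, gray_levels):
    """Assumed S2-1 scoring: sum, over the m feature regions, the gray-normalised
    feature N_j weighted by the gray class D_j and gray gradient B_j, then add
    the classification term A(C(e)) summed over the gray levels e."""
    m = len(N)
    region_term = sum(N[j] * D[j] * B[j] for j in range(m))
    class_term = sum(classify(shift(e)) for e in gray_levels)  # A(C(e)) accumulated over e
    return region_term + class_term
```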
S2-2, screening the abnormal gray features through the definition of the optimal hyperplane according to the gray differences of the feature regions; the optimal hyperplane is
Figure BDA0003192918850000072
where w is the feature-region weight vector, w_0 is called the bias, x denotes the gray pixel closest to the hyperplane, and T denotes transposition; the distance from the pixel coordinate (x, y) to the feature-region hyperplane (w, w_0) is:
Figure BDA0003192918850000073
The gray difference of the feature region consists of a distance gray learning value r_j and a transformed gray training loss value s_j; through adjustment by the scalar σ, the gray training samples z of all feature regions implicit in the hyperplane are converged through the feature-region screening function R(w),
Figure BDA0003192918850000074
where H_r is the feature-region gray sample class, H_c is the feature-region gray training threshold,
Figure BDA0003192918850000075
is the adjustment parameter, and p is the number of iterations.
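The hyperplane screening of S2-2 follows the standard linear decision-surface form w^T x + w_0, with the distance of a sample given by (w^T x + w_0)/||w||. The sketch below applies that distance test with NumPy; training of (w, w_0) (for example with an ordinary linear SVM) and the screening function R(w) with its σ, H_r and H_c terms are not reproduced, since their closed forms appear only as formula images, and the margin parameter is an assumption.

```python
import numpy as np

def hyperplane_distance(x: np.ndarray, w: np.ndarray, w0: float) -> float:
    """Signed distance from a gray feature vector x to the feature-region
    hyperplane w^T x + w0 = 0 (S2-2)."""
    return float(w @ x + w0) / float(np.linalg.norm(w))

def screen_abnormal(samples: np.ndarray, w: np.ndarray, w0: float, margin: float = 1.0):
    """Flag feature-region samples lying farther than `margin` from the
    hyperplane as abnormal gray features; returns their row indices."""
    distances = (samples @ w + w0) / np.linalg.norm(w)
    return np.where(np.abs(distances) > margin)[0]
```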
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (6)

1. A working method for acquiring abnormal medical images is characterized by comprising the following steps:
S1, acquiring an original medical image, drawing coordinates on the medical image, dividing the boundaries of feature regions, describing the feature regions along a time track, and extracting features of the feature regions over the time development track;
S2, after the features are extracted, computing the extracted feature regions with a training analysis model, screening out abnormal features, and transmitting them to a remote terminal.
2. The method of claim 1, wherein said S1 includes:
S1-1, extracting a medical image of the patient and setting a coordinate starting point (x, y) on the medical image; traversing the pixel points P row by row from the starting point, marking the gray distribution of all medical images and the distribution of gray-mutation areas according to the gray values of the pixel points P, and delimiting the feature region of the target medical image; the boundary of the feature region is divided by adjusting a gray-level cognitive model K(x, y),
Figure FDA0003192918840000011
Because the pixel coordinates form a coordinate matrix, specific coordinates are extracted from each pixel coordinate through a gray-scale cost function w(a), where a is the frequency of gray change. During the calculation of the gray-scale cost function, the feature attributes of the pixel coordinates are converged and the extracted pixel coordinates undergo gray-deviation judgment, where p(x_n) is the probability of gray change along the x-axis for the n-th pixel point, p(y_n) is the probability of gray change along the y-axis for the n-th pixel point, u(p(x_n)) is the x-axis gray-change weight function of the n-th pixel point, and v(p(y_n)) is the y-axis gray-change weight function of the n-th pixel point.
In the judgment process of the gray-level cognitive model, threshold judgment is performed on the pixel-coordinate noise produced by the gray-change judgment result, and the specific coordinates are extracted by the threshold decision mechanism of the gray-scale value function w(a):
Figure FDA0003192918840000021
where Q(a) is the offset value of the overall gray change and δ is the gray-change degree adjustment factor.
3. The method of claim 2, wherein said S1 further includes:
S1-2, describing the divided medical images by feature region along the time development track, generating a feature-region change queue in time order, and recording the image times as t_1, t_2, t_3, …, t_m, where t_m is the end time; the divided feature regions are cut into equal blocks, each sub-block is compared for color difference against a reference feature image, and the feature-region matching function over the time development track is calculated as
M(t)=F(t)·γ·E(η(t+1))+T(t)
where F(t) is the judgment function that traverses the feature images in order of time t, γ is a fitting factor, E(η(t+1)) is the match value of the expectation function η(t+1) of the feature image at time node t+1, and T(t) is the time cost of the feature-image extraction framework; the sub-block features in the feature region are then extracted along the time development track.
4. The method of claim 3, wherein said S1 further includes:
S1-3, performing gray-threshold judgment: the feature region in the time development track is preliminarily extracted through the threshold, a gradual-change degree threshold μ for the overall gray of the feature region is generated, and the judgment formula is
Figure FDA0003192918840000022
where c is the total number of sub-blocks with gray change in the feature region, |E_t| is the related image-comparison sequence of the sub-blocks in the feature region over the time development track, ε is the gray gradient of the sub-block to be extracted, and λ is the threshold set for the gray level.
5. The method of claim 1, wherein said S2 includes:
S2-1, when the extracted feature region is computed by the training analysis model, the overall difference of the feature region is observed and anomalies are extracted from the difference;
the training analysis model is
Figure FDA0003192918840000031
where N_j is the gray-normalized feature of the feature region, D_j is the gray class of the feature region, B_j is the gray gradient of the feature region, A(C(e)) is the gray classification function of the feature region, C(e) is the gray-shift feature quantity, e is the gray level, and m is a positive integer.
6. The method of claim 5, wherein said step S2 includes:
S2-2, screening the abnormal gray features through the definition of the optimal hyperplane according to the gray differences of the feature regions; the optimal hyperplane is
Figure FDA0003192918840000032
where w is the feature-region weight vector, w_0 is called the bias, x denotes the gray pixel closest to the hyperplane, and T denotes transposition; the distance from a pixel coordinate to the feature-region hyperplane (w, w_0) is:
Figure FDA0003192918840000033
where the gray difference of the feature region consists of a distance gray learning value r_j and a transformed gray training loss value s_j; through adjustment by the scalar σ, the gray training samples z of all feature regions implicit in the hyperplane are converged through a feature-region screening function R(w),
Figure FDA0003192918840000034
where H_r is the feature-region gray sample class, H_c is the feature-region gray training threshold,
Figure FDA0003192918840000041
is the adjustment parameter, and p is the number of iterations.
CN202110882682.3A 2021-08-02 2021-08-02 Working method for acquiring abnormal medical image Withdrawn CN113539442A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110882682.3A CN113539442A (en) 2021-08-02 2021-08-02 Working method for acquiring abnormal medical image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110882682.3A CN113539442A (en) 2021-08-02 2021-08-02 Working method for acquiring abnormal medical image

Publications (1)

Publication Number Publication Date
CN113539442A 2021-10-22

Family

ID=78090169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110882682.3A Withdrawn CN113539442A (en) 2021-08-02 2021-08-02 Working method for acquiring abnormal medical image

Country Status (1)

Country Link
CN (1) CN113539442A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315032A (en) * 2023-11-28 2023-12-29 北京智愈医疗科技有限公司 Tissue offset monitoring method
CN117315032B (en) * 2023-11-28 2024-03-08 北京智愈医疗科技有限公司 Tissue offset monitoring method

Similar Documents

Publication Publication Date Title
CN111814871B (en) Image classification method based on reliable weight optimal transmission
CN110321811A (en) Depth is against the object detection method in the unmanned plane video of intensified learning
CN111784022B (en) Short-time adjacent large fog prediction method based on combination of Wrapper method and SVM method
CN114897804A (en) Ground penetrating radar tunnel lining quality detection method based on self-supervision learning
CN112966740A (en) Small sample hyperspectral image classification method based on core sample adaptive expansion
CN113539442A (en) Working method for acquiring abnormal medical image
CN109002792B (en) SAR image change detection method based on layered multi-model metric learning
CN113569848A (en) Extraction working method for analyzing medical image through big data
CN113160222A (en) Production data identification method for industrial information image
CN109598681A (en) The reference-free quality evaluation method of image after a kind of symmetrical Tangka repairs
CN111239137B (en) Grain quality detection method based on transfer learning and adaptive deep convolution neural network
CN113807452B (en) Business process abnormality detection method based on attention mechanism
CN112949517A (en) Plant stomata density and opening degree identification method and system based on deep migration learning
CN111144462A (en) Unknown individual identification method and device for radar signals
CN114201632A (en) Label noisy data set amplification method for multi-label target detection task
CN109636194B (en) Multi-source cooperative detection method and system for major change of power transmission and transformation project
CN115376315B (en) Multi-level bayonet quality control method for road network emission accounting
CN115641335A (en) Embryo abnormity multi-cascade intelligent comprehensive analysis system based on time difference incubator
CN110349119A (en) Pavement disease detection method and device based on edge detection neural network
CN113835964B (en) Cloud data center server energy consumption prediction method based on small sample learning
CN116089944A (en) Cross-platform application program abnormality detection method and system based on transfer learning
CN111881125B (en) Real-time cleaning method and system for offshore non-combat target
CN111488907B (en) Robust image recognition method based on dense PCANet
CN110348452B (en) Image binarization processing method and system
CN113468720A (en) Service life prediction method for digital-analog linked random degradation equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20211022)