CN114093462A - Medical image processing method, computer device, and storage medium - Google Patents

Medical image processing method, computer device, and storage medium

Info

Publication number
CN114093462A
CN114093462A · Application CN202010754855.9A
Authority
CN
China
Prior art keywords
image
medical image
feature points
target
image feature
Prior art date
Legal status
Pending
Application number
CN202010754855.9A
Other languages
Chinese (zh)
Inventor
刘伟
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN202010754855.9A
Publication of CN114093462A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 - ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • Biomedical Technology (AREA)
  • Primary Health Care (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application disclose a medical image processing method, a computer device, and a storage medium. The method comprises the following steps: acquiring a target medical image to be identified; determining a feature point identification model corresponding to the target medical image, wherein the feature point identification model is used for extracting image information related to the image feature points and to the connection relationships between the image feature points and for identifying the image feature points according to the extracted information, the image feature points being related to a processing procedure corresponding to the detection object of the target medical image; acquiring the image feature points in the target medical image according to the feature point identification model; and outputting the target medical image and the image feature points in an associated manner. The image feature point identification scheme of the embodiments therefore needs no hand-designed image features and can extract effective image information directly from the medical image to predict the positions of the image feature points, so higher feature point localization accuracy can be obtained.

Description

Medical image processing method, computer device, and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a medical image processing method, a medical image recognition model processing method, an image processing method, a medical image device, a computer device, and a computer-readable storage medium.
Background
Medical images are internal tissue images obtained by a medical imaging device in a non-invasive manner from a human body or a part of the human body, and are analyzed and judged by a diagnostician according to information provided by the images.
Before a medical operation is performed, the anatomical parameters required by the operation are usually determined from medical images. For example, for a total hip replacement operation, the anatomical parameters related to the femur and the acetabulum that the operation requires are determined and used as a data reference in preoperative planning.
Currently, anatomical points in medical images are usually identified by the doctor, who marks and judges the image manually according to personal experience. This creates a large workload and may introduce errors or omissions in the identification of anatomical points through human error. A more accurate medical image identification technique is therefore desirable.
Disclosure of Invention
In view of the above, the present application provides a medical image processing method, an image processing method, a computer device, and a computer-readable storage medium that overcome, or at least partially solve, the above problems.
According to an aspect of the present application, there is provided a medical image processing method, including:
acquiring a target medical image to be identified;
determining a feature point identification model corresponding to the target medical image, wherein the feature point identification model is used for extracting image information related to image feature points and connection relations among the image feature points, and carrying out image feature point identification according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image;
acquiring image feature points in the target medical image according to a feature point identification model;
and outputting the target medical image and the image feature points in a correlated manner.
According to another aspect of the present application, there is provided a method for processing medical images, comprising:
acquiring a target medical image based on a detection object;
acquiring image feature points in the target medical image, wherein the image feature points are determined based on a feature point identification model corresponding to the target medical image, the feature point identification model is used for extracting image information related to the image feature points and to the connection relationships between the image feature points and for identifying the image feature points according to the extracted information, and the image feature points are related to a processing procedure corresponding to the detection object of the target medical image;
and displaying the target medical image and the image feature points in an associated manner on a display interface of the medical image equipment.
According to another aspect of the present application, there is provided a method for processing a medical image recognition model, including:
acquiring a first medical image sample together with the correspondingly marked image feature points and the connection relationships between the image feature points;
training a feature point recognition model according to the marked medical image sample, wherein the feature point recognition model is used for extracting image information related to image feature points and connection relations among the image feature points, and performing image feature point recognition according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image.
In accordance with another aspect of the present application, there is provided an image processing method including:
acquiring a target image to be identified;
determining that the target image conforms to an image content rule corresponding to a target detection object;
adjusting pixels of the target image to a target window width and a target window level;
determining a feature point identification model corresponding to the target image, wherein the feature point identification model is used for extracting image information related to image feature points and connection relations among the image feature points, and carrying out image feature point identification according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target image;
acquiring image feature points in the target image according to a feature point identification model;
and outputting the target image, the image characteristic points and a schematic diagram of a processing process of a detection object of the target image in a correlated manner.
According to another aspect of the present application, there is provided a medical imaging apparatus comprising:
the image acquisition module is used for acquiring a target medical image to be identified;
the model determining module is used for determining a feature point identification model corresponding to the target medical image, the feature point identification model is used for extracting image information related to image feature points and the connection relation between the image feature points, image feature point identification is carried out according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image;
the characteristic point acquisition module is used for acquiring image characteristic points in the target medical image according to the characteristic point identification model;
and the characteristic point output module is used for outputting the target medical image and the image characteristic points in a correlation manner.
According to another aspect of the present application, a medical imaging apparatus is provided, which includes an image acquisition device, an image processing device and a display interface;
the image acquisition device is used for acquiring a target medical image based on a detection object;
the image processing device is used for acquiring image feature points in the target medical image, wherein the image feature points are determined based on a feature point identification model corresponding to the target medical image, the feature point identification model is used for extracting image information related to the image feature points and to the connection relationships between the image feature points and for identifying the image feature points according to the extracted information, and the image feature points are related to a processing procedure corresponding to the detection object of the target medical image;
and the display interface is used for displaying the target medical image and the image feature points in an associated manner.
In accordance with another aspect of the present application, there is provided an electronic device including: a processor; and
a memory having executable code stored thereon, which when executed, causes the processor to perform a method as in any one of the above.
According to another aspect of the application, there is provided one or more machine-readable media having stored thereon executable code that, when executed, causes a processor to perform a method as any one of the above.
According to the embodiments of the present application, a feature point identification model is trained in advance. The model extracts image information related to the image feature points and to the connection relationships between them, and identifies the image feature points according to the extracted information; the identified feature points are related to the processing procedure corresponding to the detection object of the target medical image. When feature points are to be identified in a target medical image, the feature point identification model corresponding to the target medical image is first determined, the image feature points in the target medical image are then obtained according to that model, and the target medical image and the image feature points are output in an associated manner. The identification scheme therefore needs no hand-designed image features: effective image information can be extracted directly from the medical image to predict the positions of the image feature points, so higher feature point localization accuracy can be obtained.
Compared with schemes that predict only the image feature points, the feature point recognition model here also predicts the connection relationships between the feature points. The extracted image information is therefore related not only to the feature points but also to their connections, and the identification of the feature points refers to this connection-related information; during training, deviations in feature point prediction can be corrected according to the connection relationships, which improves the detection precision of the image feature points.
The foregoing is only an overview of the technical solutions of the present application. So that the technical means of the present application can be understood more clearly and implemented according to the content of the specification, and so that the above and other objects, features, and advantages of the present application become more apparent, the detailed description of the present application is given below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 shows a schematic diagram showing anatomical points in a hip X-ray film;
fig. 2 shows a specific example of a medical image processing method of the present application;
FIG. 3 is a flowchart illustrating an embodiment of a medical image processing method according to a first embodiment of the present application;
FIG. 4 is a flowchart illustrating an embodiment of a method for processing medical images according to a second embodiment of the present application;
FIG. 5 is a flow chart of an embodiment of an image processing method according to the third embodiment of the present application;
FIG. 6 is a flowchart illustrating an embodiment of a method for processing medical images according to a fourth embodiment of the present application;
FIG. 7 is a flow chart of an embodiment of a processing method for a medical image recognition model according to the fifth embodiment of the present application;
FIG. 8 is a flowchart of an embodiment of an image processing method according to the sixth embodiment of the present application;
fig. 9 is a block diagram illustrating an embodiment of a medical image processing apparatus according to a seventh embodiment of the present application;
fig. 10 is a block diagram illustrating an embodiment of a medical image processing apparatus according to an eighth embodiment of the present application;
FIG. 11 is a block diagram illustrating an embodiment of a processing apparatus for medical image recognition model according to the ninth embodiment of the present application;
FIG. 12 is a block diagram illustrating an embodiment of a processing apparatus of an image processing model according to a tenth embodiment of the present application;
fig. 13 is a block diagram illustrating an embodiment of a medical imaging apparatus according to an eleventh embodiment of the present application;
fig. 14 is a block diagram illustrating an embodiment of a medical imaging apparatus according to a twelfth embodiment of the present application;
fig. 15 illustrates an exemplary system that can be used to implement various embodiments described in this disclosure.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The identification of medical images is of great significance. Taking total hip replacement surgery as an example, it is an effective means of treating degenerative hip diseases, and the number of such operations is increasing as the population ages. The success of a total replacement procedure depends on careful and accurate preoperative planning, which first requires taking anteroposterior and lateral X-ray films of the hip joint and then measuring on the X-ray several anatomical parameters of the femur and acetabulum, including but not limited to the femoral neck-shaft angle, acetabular cup anteversion angle, femoral offset, and lower limb length difference; an appropriate implant prosthesis template is then selected based on these anatomical parameters. Measuring these parameters typically relies on corresponding key points on the femur and acetabulum, and the accuracy of these key points determines the success or failure of preoperative planning.
In addition, the imaging principle of medical images differs from that of ordinary photographs, and their clarity is far lower, so improving the accuracy of medical image identification is particularly important.
At present, anatomical points in medical images are usually identified by the doctor, which introduces errors or omissions caused by human error. Conventional machine learning methods use a predefined feature extraction method to obtain features of each pixel or region of an image and then classify or regress these features to locate the positions of image feature points. Because such features are designed manually, they exploit only part of the information in the image, and the identification precision is limited. A more accurate medical image identification technique is therefore desirable.
In view of the above problems, in the embodiment of the present application, a feature point identification model is trained in advance, and is configured to extract image information related to image feature points and a connection relationship between the image feature points, and perform image feature point identification according to the extracted image information, where the identified image feature points are related to a processing procedure corresponding to a detection object of a target medical image, and further when performing feature point identification on the target medical image to be identified, first determine a feature point identification model corresponding to the target medical image, further obtain image feature points in the target medical image according to the feature point identification model, and output the target medical image and the image feature points in a correlated manner. Therefore, the image feature point identification scheme of the embodiment of the application does not need to artificially design the extracted image information, and can directly extract effective image information from the medical image to predict the position of the image feature point, so that higher image feature point positioning accuracy can be obtained.
Compared with the scheme of only predicting the image feature points, the feature point recognition model is also used for predicting the connection relation between the image feature points, so that the extracted image information is not only related to the image feature points, but also related to the connection relation between the image feature points, the image information related to the connection relation between the image feature points is referred to for the recognition of the image feature points, the deviation of the image feature point prediction can be corrected according to the connection relation in the training process, and the detection precision of the image feature points is improved.
The medical image according to the embodiments of the present application may be an image obtained by using various medical imaging techniques. Medical imaging techniques include X-ray irradiation, Angiography (Angiography), cardioangiography (Cardiac tomography), Computed Tomography (CT), Positron Emission Tomography (PET), Nuclear Magnetic Resonance Imaging (NMRI), Medical ultrasonography (Medical ultrasound), and the like. Accordingly, a planar image or a stereoscopic multi-dimensional image can be obtained as a medical image.
The image feature points in the embodiments of the present application are related to the processing procedure of the detection object corresponding to the medical image; the processing procedure may be an operation or another medical treatment of the detection object, and the image feature points are the key points in the medical image that are relevant to that procedure. Taking a hip X-ray film taken before a total replacement operation (specifically, an anteroposterior hip joint X-ray film) as an example, the corresponding detection object is the human hip, the processing procedure of the detection object is the surgical procedure on the hip, and the image feature points are set as the anatomical points in the hip X-ray film that are related to total hip replacement surgery.
The medical image can be acquired from a corresponding storage space in which medical images captured by multiple medical imaging devices are stored. The medical images in the storage space carry identifiers of their detection objects, so after the detection object of interest has been determined, the medical images of that detection object can be screened out according to these identifiers; for example, the identifier corresponding to a hip X-ray image may be 'PELVIS' or 'HIP'.
In an alternative embodiment, the medical image may be acquired from a PACS (Picture Archiving and Communication System). A PACS is a system used in hospital imaging departments; its main task is to store the various medical images produced daily (including images generated by equipment such as MRI, CT, ultrasound, various X-ray machines, infrared instruments, and microscopy instruments) in digital form through various interfaces, and to allow them to be retrieved quickly under appropriate authorization when needed.
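For illustration only, screening studies retrieved from a PACS by such an identifier might look like the following minimal sketch; the use of pydicom and of the BodyPartExamined and StudyDescription tags is an assumption made for this example and is not part of the disclosure.

```python
# Hypothetical sketch: filter DICOM files whose metadata marks them as hip/pelvis studies.
# pydicom and the exact tags queried (BodyPartExamined, StudyDescription) are assumptions.
from pathlib import Path
from typing import List
import pydicom

HIP_IDENTIFIERS = {"PELVIS", "HIP"}

def screen_hip_images(dicom_dir: str) -> List[Path]:
    """Return paths of DICOM files whose identifiers match the hip detection object."""
    matches = []
    for path in Path(dicom_dir).glob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)  # read metadata only
        body_part = str(getattr(ds, "BodyPartExamined", "")).upper()
        description = str(getattr(ds, "StudyDescription", "")).upper()
        if body_part in HIP_IDENTIFIERS or any(k in description for k in HIP_IDENTIFIERS):
            matches.append(path)
    return matches
```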
After the medical images are acquired they can be screened and erroneous images deleted. An image content rule can be configured for the detection object; the rule defines the characteristics that a medical image of that detection object should have, such as the position and shape of the detection object in the image, and whether an acquired medical image is a medical image of the detection object can then be determined by checking whether it satisfies the corresponding image content rule.
Medical images differ from ordinary images: they capture human tissue, and because of the different acquisition principle their clarity is far lower than that of ordinary images captured by devices such as cameras. Within one medical image, tissues of different densities often occupy different gray-scale ranges, so a reasonable gray-value interval should be found that retains as much as possible of the content of interest, or the main content, of the whole image; the information in the image can then be used more fully.
In an optional embodiment of the present application, the target medical image may be adjusted to a first target window width and a first target window level.
In an alternative embodiment, the window width and window level may be calculated from the histogram: an image threshold determined from the histogram is extended to both sides so that the resulting interval contains the main part of the image as far as possible. Specifically, the image threshold of the target medical image may be determined from its histogram; the histogram of the image is computed and the image threshold is calculated from it, for example by a maximum between-class variance method, the image threshold comprising the maximum pixel value and the minimum pixel value of the medical image pixels.
And further determining a window width boundary value according to the maximum pixel value and the minimum pixel value to obtain a first target window width of the target medical image, and determining a first target window level of the target medical image according to the first target window width.
Specifically, the minimum pixel value minT and the maximum pixel value maxT of the image pixels may be calculated; the initial value of the left window-width boundary is minT and that of the right boundary is maxT. If a small region near the minimum pixel value can be cut off, the left boundary is moved up; if a small region near the maximum pixel value (for example 0%, 1%, or 2.5% of the pixels) can be cut off, the right boundary is moved down; alternatively, it is judged which of the histogram regions near the left and right boundaries is sparser, and a small region is cut off on the sparser side, i.e. the left boundary is increased or the right boundary is decreased. Finally, the average of the right and left boundaries is taken as the window level, and their difference is taken as the window width.
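A minimal numpy sketch of this histogram-based window estimation is given below; the percentile-based trimming and the fixed 1% trim fraction are assumptions made for illustration, not the required implementation.

```python
# Sketch, assuming a grayscale image array and a fixed trim fraction at each end.
import numpy as np

def estimate_window(image: np.ndarray, trim_fraction: float = 0.01):
    """Estimate (window_width, window_level) from the image histogram."""
    pixels = image.ravel().astype(np.float64)
    left = float(pixels.min())          # initial left boundary: minimum pixel value
    right = float(pixels.max())         # initial right boundary: maximum pixel value
    # Move the boundaries inward by cutting off a small fraction of pixels at each end.
    left = max(left, float(np.percentile(pixels, 100 * trim_fraction)))
    right = min(right, float(np.percentile(pixels, 100 * (1 - trim_fraction))))
    window_width = right - left               # difference of the boundaries
    window_level = (right + left) / 2.0       # average of the boundaries
    return window_width, window_level
```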
Medical images are acquired by medical imaging equipment, and the image feature points are identified by the feature point identification model. The feature point recognition model is obtained by training on medical image samples on which the image feature points and the connections between them have been marked. In the embodiments of the present application, the feature point identification model may be a deep neural network model, an algorithm model commonly used in application fields such as artificial intelligence and pattern recognition, which realizes mathematical computations such as data analysis and data prediction through multiple layers of interconnected computing units.
Different feature point recognition models can be configured for the medical images of different detection objects, and the feature point recognition models used correspondingly can be determined according to the detection objects corresponding to the medical images.
The image information is acquired through deep learning. Compared with manually defined image information used for feature point identification, deep learning can capture the rich internal structure of the data, so the identification of the image feature points is more accurate.
The deep neural network model may include a hidden layer, a convolutional layer, and a pooling layer. The hidden layer extracts, from the medical image, image information related to the image feature points and to the connection relationships between them; the convolutional layer identifies the image feature points and the connection relationships between them according to the extracted information; and the pooling layer generates characterization data of the identified feature points and connections. The characterization data may be distribution thermodynamic diagrams (heatmaps) characterizing the distribution of the image feature points and edge vectors characterizing the connection relationships between the image feature points.
The convolutional layer and the pooling layer may be divided into two branches for the two identification targets. That is, the convolutional layer includes a first convolutional layer for identifying the image feature points according to the extracted information and a second convolutional layer for identifying the connection relationships between the image feature points according to the extracted information. The pooling layer includes a first pooling layer, which generates the distribution heatmaps characterizing the distribution of the image feature points, and a second pooling layer, which generates the edge vectors characterizing the connection relationships between the image feature points; specifically, a convolutional layer first reduces the feature dimension, and the edge vectors are then regressed through a global average pooling layer. The two branches share the preceding deep convolutional neural network to reduce model complexity.
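A minimal PyTorch sketch of such a shared-backbone, two-branch network is shown below. The 22 keypoints and 24 edges follow the hip example in this description; the backbone layers, channel counts, and kernel sizes are illustrative assumptions rather than the architecture claimed here.

```python
# Sketch of a shared backbone with a heatmap branch and an edge-vector branch.
import torch
import torch.nn as nn

class KeypointEdgeNet(nn.Module):
    def __init__(self, num_keypoints: int = 22, num_edges: int = 24):
        super().__init__()
        # Shared feature extractor ("hidden layer"): 512x512 input -> 128x128 feature map.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=1, padding=1), nn.ReLU(inplace=True),
        )
        # Branch 1: one heatmap (distribution thermodynamic diagram) per keypoint.
        self.heatmap_head = nn.Conv2d(128, num_keypoints, kernel_size=1)
        # Branch 2: dimension-reducing convolution, global average pooling,
        # then regression of one 2-D vector per edge (flattened to 2 * num_edges values).
        self.edge_head = nn.Sequential(
            nn.Conv2d(128, 64, kernel_size=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 2 * num_edges),
        )

    def forward(self, x: torch.Tensor):
        features = self.backbone(x)              # shared image information
        heatmaps = self.heatmap_head(features)   # (N, 22, 128, 128)
        edges = self.edge_head(features)         # (N, 48)
        return heatmaps, edges
```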
In the embodiment of the present application, the medical image samples may be divided into two types, one type is a first medical image sample for training the model, and the other type is a second medical image sample for detecting the feature point recognition model.
The image feature points marked on the first medical image sample may be determined from the processing procedure corresponding to the detection object. First, the processing parameters related to that processing procedure are determined for the first medical image sample; taking a medical operation as an example, the processing parameters may be the anatomical parameters of the surgical procedure. The anatomical parameters to be measured for preoperative planning of total hip replacement surgery include, but are not limited to: the femoral neck-shaft angle, center-edge angle, acetabular cup anteversion angle, acetabular cup retroversion angle, femoral offset, lower limb length difference, femoral stem length, and the like.
Further, the image data points involved in the calculation process of the processing parameters are used as image feature points related to the processing process in the first medical image sample. The processing parameters of the detected object in the processing process are calculated according to the image data points in the medical image, so the image data points involved in the processing parameter calculation process are used as the image characteristic points related to the processing process. Taking the anatomical parameters involved in total hip replacement surgery as an example, a plurality of image data points are involved in the calculation process of the anatomical parameters, and the image data points are taken as anatomical key points.
Referring to fig. 1, a schematic diagram of anatomical points in a hip X-ray film is shown. Anatomical key points are defined on the femur and acetabulum according to the anatomical parameters; the left and right hips are symmetric, and 11 anatomical key points are defined on each side:
Right hip: right teardrop (1), right acetabular upper edge (2), right ischial tuberosity (3), right lesser trochanter (4), right greater trochanter (5), right femoral head center (6), right acetabular lower edge (7), right acetabular roof (8), right femoral head center (9), right femoral shaft axis upper end (10), right femoral shaft axis lower end (11).
Left hip: left teardrop (12), left acetabular upper edge (13), left ischial tuberosity (14), left lesser trochanter (15), left greater trochanter (16), left femoral head center (17), left acetabular lower edge (18), left acetabular roof (19), left femoral head center (20), left femoral shaft axis upper end (21), left femoral shaft axis lower end (22).
Further, from the connection relationships between the determined image feature points, those connection relationships that are related to the processing procedure corresponding to the detection object of the first medical image sample may be selected as the marked connection relationships of the first medical image sample. It is understood that a large number of possible connections exist between image feature points; some of them are related to the processing procedure while others have no actual anatomical reference value, so only the connections related to the processing procedure are used.
As shown in fig. 1, the connecting lines define the connection relationships between the key points according to the anatomical parameters. The left hip and the right hip each define 10 edges, and 4 edges are defined between the two hips; expressed as number pairs they are:
Right hip: (1, 3), (6, 1), (6, 2), (6, 7), (6, 8), (6, 9), (9, 4), (9, 10), (10, 5), (10, 11)
Left hip: (12, 14), (17, 12), (17, 13), (17, 18), (17, 19), (17, 20), (20, 15), (20, 21), (21, 16), (21, 22)
Between the left and right hip: (1, 12), (3, 14), (4, 15), (6, 17)
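For illustration only, these keypoint indices and edge pairs could be captured in a small data structure such as the following sketch; the variable names are shorthand for the definitions above and nothing here is mandated by the disclosure.

```python
# Sketch: anatomical keypoint count and edge pairs for the hip example (1-based, as in Fig. 1).
RIGHT_HIP_EDGES = [(1, 3), (6, 1), (6, 2), (6, 7), (6, 8), (6, 9),
                   (9, 4), (9, 10), (10, 5), (10, 11)]
LEFT_HIP_EDGES = [(12, 14), (17, 12), (17, 13), (17, 18), (17, 19), (17, 20),
                  (20, 15), (20, 21), (21, 16), (21, 22)]
CROSS_HIP_EDGES = [(1, 12), (3, 14), (4, 15), (6, 17)]

ALL_EDGES = RIGHT_HIP_EDGES + LEFT_HIP_EDGES + CROSS_HIP_EDGES
NUM_KEYPOINTS = 22
assert len(ALL_EDGES) == 24  # 10 + 10 + 4 edges per image
```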
In the same manner, after the first medical image sample is obtained, the first medical image sample may be adjusted to the second target window width and the second target window level. The logic for calculating the second target window width and the second target window level corresponding to the first medical image sample is the same as the logic for calculating the first target window width and the first target window level, and details thereof are omitted here.
Because the first medical image sample includes image content other than the detection object, when such other content is extensive the first medical image sample can be cropped to remove it; taking the hip X-ray film as an example, the parts other than the acetabulum and the femur can be cropped away. The cropping range of the medical image is given by the upper-left corner point o1 = (x1, y1) and the lower-right corner point o2 = (x2, y2), namely:
I2 = I1[o1 : o2] = I1[x1 : x2, y1 : y2]
where x* denotes the row direction of the image and y* the column direction.
The first medical image sample can be adjusted to a target resolution before cropping, so that images of different resolutions do not yield inconsistent cropping results; for example, a high-resolution image contains more extraneous content than a low-resolution image and therefore requires more cropping.
The resolution adjustment can be realized by bilinear interpolation, recording the scaling factor s1. Assuming the original image is I0, the interpolated image I1 is:
I1 = s1 × I0
After the first medical image sample is cropped, it can be adjusted to a target size, so that large differences in image size do not place the extracted image information on different scales and lower the recognition accuracy of the model. The cropped image I2 is scaled to obtain the image data I3 required for model training, recording the scaling factor s2:
I3 = s2 × I2
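As a minimal sketch of this preprocessing chain (resize to the target resolution, crop, resize to the target size), the following assumes OpenCV for bilinear resampling and a 512 × 512 training size; the concrete values are illustrative only.

```python
# Sketch of the preprocessing chain: I1 = s1*I0, I2 = I1[x1:x2, y1:y2], I3 = s2*I2.
import cv2
import numpy as np

def preprocess(image: np.ndarray, s1: float, crop_tl: tuple, crop_br: tuple,
               target_size: int = 512):
    """Return the model input I3 and the scale factor s2 needed to map annotations."""
    # I1 = s1 * I0 : bilinear resampling to the target resolution.
    i1 = cv2.resize(image, None, fx=s1, fy=s1, interpolation=cv2.INTER_LINEAR)
    # I2 = I1[x1:x2, y1:y2] : crop away content other than the detection object
    # (x is the row direction, y the column direction, as in the text).
    (x1, y1), (x2, y2) = crop_tl, crop_br
    i2 = i1[x1:x2, y1:y2]
    # I3 = s2 * I2 : scale the crop to the fixed training size.
    s2 = target_size / max(i2.shape[:2])
    i3 = cv2.resize(i2, None, fx=s2, fy=s2, interpolation=cv2.INTER_LINEAR)
    return i3, s2
```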
Because the connection relationships between image feature points marked for the first medical image sample are expressed through the coordinates of data points in the medical image, these annotations change when the medical image changes. The first scaling parameter used to reach the target resolution and the second scaling parameter used to reach the target size can therefore be recorded, and the marked image feature points of the first medical image sample and the connection relationships between them can be updated according to the first and second scaling parameters.
For example, the coordinates of the key points marked on the original image are transformed in the same manner to obtain the corresponding coordinates on I3:
c1 = ⌊s2 × (s1 × c0 − o1)⌋
where c0 = (px, py) are the coordinates marked on the original image, c1 = (p'x, p'y) are the converted coordinates, and ⌊·⌋ denotes rounding down, applied separately to the row and column directions:
p'x = ⌊s2 × (s1 × px − x1)⌋
p'y = ⌊s2 × (s1 × py − y1)⌋
According to the above formulas, the thermodynamic diagram (heatmap) H of an image feature point can be generated as a Gaussian distribution centered on the converted point:
H(i, j) = exp(−((i − cx)² + (j − cy)²) / (2σ²))
where exp denotes the natural exponential and σ is the variance that controls the range of the Gaussian distribution; its value in the experiments is 2. Since the size of the input cropped image is 512 × 512 and the size of the generated heatmap is 128 × 128, the center of the Gaussian distribution is taken as
(cx, cy) = ⌊c1 / 4⌋
The value outside the Gaussian distribution range on the heatmap is 0. Each image feature point generates one heatmap, so one medical image generates 22 heatmaps.
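A short numpy sketch of this heatmap generation under the assumptions above (σ = 2, 128 × 128 heatmaps, centers obtained by dividing the 512 × 512 crop coordinates by 4); the cut-off radius outside which values are zeroed is an assumption.

```python
# Sketch: one 128x128 Gaussian heatmap per keypoint (sigma and sizes as in the text).
import numpy as np

def make_heatmaps(keypoints_512: np.ndarray, heatmap_size: int = 128,
                  sigma: float = 2.0, radius: float = 6.0) -> np.ndarray:
    """keypoints_512: (K, 2) array of (row, col) coordinates on the 512x512 crop."""
    k = keypoints_512.shape[0]
    heatmaps = np.zeros((k, heatmap_size, heatmap_size), dtype=np.float32)
    grid_i, grid_j = np.meshgrid(np.arange(heatmap_size), np.arange(heatmap_size),
                                 indexing="ij")
    for idx, (px, py) in enumerate(keypoints_512):
        cx, cy = int(px // 4), int(py // 4)                 # heatmap-space center
        dist_sq = (grid_i - cx) ** 2 + (grid_j - cy) ** 2
        gauss = np.exp(-dist_sq / (2 * sigma ** 2))
        gauss[dist_sq > radius ** 2] = 0.0                  # zero outside the Gaussian range
        heatmaps[idx] = gauss
    return heatmaps
```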
The connection relationship between image feature points is characterized by an edge vector e, which can be generated as follows:
e = (ex, ey) = (c_end,x − c_start,x, c_end,y − c_start,y)
ê = e / ‖e‖
where c_end denotes the end point of the edge, c_start denotes its starting point, the indices x and y denote the row direction and the column direction respectively, and ê is the result of normalizing e. Taking the example in which a medical image contains 24 edges, the resulting edge-vector array has size 24 × 2, which is flattened to a size of 48 × 1.
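Continuing the earlier sketch, the 48-element edge-vector target could be built as follows; normalization by the Euclidean norm is an assumption consistent with the description.

```python
# Sketch: build the 48x1 edge-vector target from keypoint coordinates and edge index pairs.
import numpy as np

def make_edge_vectors(keypoints: np.ndarray, edges) -> np.ndarray:
    """keypoints: (22, 2) array; edges: list of 1-based (start, end) pairs as in Fig. 1."""
    vectors = []
    for start, end in edges:
        e = keypoints[end - 1] - keypoints[start - 1]       # end point minus start point
        norm = np.linalg.norm(e)
        vectors.append(e / norm if norm > 0 else e)         # normalized edge vector
    return np.asarray(vectors, dtype=np.float32).reshape(-1)   # 24 x 2 -> 48
```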
Before the first image sample is cut, window level adjustment and imaging type adjustment can be performed on the image.
Correspondingly, the processes of cutting, adjusting to the target resolution and adjusting to the target size can be executed when the target medical image is detected, so that the accuracy of the detection result is improved.
After the feature point identification model is obtained, a second medical image sample can be used to test it: the second medical image sample is input into the feature point identification model to obtain the corresponding output; if the output and the image feature points and connection relationships marked on the second medical image sample do not satisfy a similarity condition, training of the feature point recognition model continues, until the output of the model for the second medical image sample and the marked image feature points and connection relationships of the second medical image sample satisfy the similarity condition.
The embodiments of the present application can be implemented on medical imaging equipment or on any device with sufficient computing capacity, such as a user-side terminal, a conventional server, a cloud host, or a virtualization center. The identified image feature points can be output to medical imaging equipment or to a terminal device for display; for example, the target medical image and the image feature points can be displayed in an associated manner on the display interface of the medical imaging equipment, so that the feature points on the medical image are available directly after acquisition, or they can be output to a doctor's terminal device for the doctor to review.
When the server side executes the scheme, the target medical image can be sent to the medical image server, and the image feature points identified by the medical image server according to the feature point identification model are obtained.
After the identified image feature points are obtained, a processing schematic diagram for a processing procedure can be generated according to the image feature points and the processing procedure of the detection object of the target medical image. Taking the hip replacement operation as an example, a processing schematic diagram corresponding to the hip replacement operation may be further generated according to the image feature points related to the hip replacement operation, for example, an execution flow of the hip replacement operation is acquired, and the processing schematic diagram for performing the hip replacement operation on the target medical image is generated by combining the image feature points of the target medical image, so as to be referred to or adjusted by the doctor.
Referring to fig. 2, a specific example of the medical image processing method of the present application is shown. A hip X-ray film of a human body is acquired with medical imaging equipment, and the feature point identification model whose detection object is the hip is determined. The model comprises a hidden layer, a convolutional layer, and a pooling layer: the hidden layer extracts image information related to the image feature points and to the connection relationships between them, the convolutional layer identifies the image feature points and their connection relationships in the medical image based on that information, and the pooling layer generates the corresponding characterization data, such as distribution thermodynamic diagrams (Gaussian heatmaps) of the image feature points and edge vectors.
Referring to fig. 3, a flowchart of an embodiment of a method for processing a medical image according to an embodiment of the present application is shown, where the method specifically includes the following steps:
step 101, acquiring a target medical image to be identified.
Step 102, determining a feature point identification model corresponding to the target medical image, wherein the feature point identification model is used for extracting image information related to image feature points and connection relations among the image feature points, and performing image feature point identification according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image.
And 103, acquiring image feature points in the target medical image according to the feature point identification model.
And 104, outputting the target medical image and the image feature points in a correlated manner.
According to the embodiments of the present application, a feature point identification model is trained in advance. The model extracts image information related to the image feature points and to the connection relationships between them, and identifies the image feature points according to the extracted information; the identified feature points are related to the processing procedure corresponding to the detection object of the target medical image. When feature points are to be identified in a target medical image, the feature point identification model corresponding to the target medical image is first determined, the image feature points in the target medical image are then obtained according to that model, and the target medical image and the image feature points are output in an associated manner. The identification scheme therefore needs no hand-designed image features: effective image information can be extracted directly from the medical image to predict the positions of the image feature points, so higher feature point localization accuracy can be obtained.
Compared with schemes that predict only the image feature points, the feature point recognition model here also predicts the connection relationships between the feature points. The extracted image information is therefore related not only to the feature points but also to their connections, and the identification of the feature points refers to this connection-related information; during training, deviations in feature point prediction can be corrected according to the connection relationships, which improves the detection precision of the image feature points.
Referring to fig. 4, a flowchart of an embodiment of a method for processing a medical image according to a second embodiment of the present application is shown, where the method specifically includes the following steps:
step 201, determining a detection object of the target medical image.
Step 202, screening the medical image corresponding to the determined detection object from the medical image database.
Step 203, determining that the target medical image to be identified conforms to the image content rule corresponding to the detection object.
Step 204, determining a feature point identification model corresponding to the target medical image, where the feature point identification model is used to extract image information related to image feature points and connection relationships between the image feature points, and perform image feature point identification according to the extracted image information, where the image feature points are related to a processing procedure corresponding to a detection object of the target medical image.
Step 205, sending the target medical image to a medical image server, and obtaining image feature points identified by the medical image server according to a feature point identification model.
And step 206, displaying the target medical image and the image feature points in a related manner on a display interface of the medical image equipment.
Step 207, generating a processing schematic diagram for the processing process according to the image feature points and the processing process of the detection object of the target medical image.
According to the embodiments of the present application, a feature point identification model is trained in advance. The model extracts image information related to the image feature points and to the connection relationships between them, and identifies the image feature points according to the extracted information; the identified feature points are related to the processing procedure corresponding to the detection object of the target medical image. When feature points are to be identified in a target medical image, the feature point identification model corresponding to the target medical image is first determined, the image feature points in the target medical image are then obtained according to that model, and the target medical image and the image feature points are output in an associated manner. The identification scheme therefore needs no hand-designed image features: effective image information can be extracted directly from the medical image to predict the positions of the image feature points, so higher feature point localization accuracy can be obtained.
Compared with schemes that predict only the image feature points, the feature point recognition model here also predicts the connection relationships between the feature points. The extracted image information is therefore related not only to the feature points but also to their connections, and the identification of the feature points refers to this connection-related information; during training, deviations in feature point prediction can be corrected according to the connection relationships, which improves the detection precision of the image feature points.
Referring to fig. 5, a flowchart of an embodiment of a method for processing a medical image according to a third embodiment of the present application is shown, where the method specifically includes the following steps:
step 301, acquiring a first medical image sample and a connection relationship between image feature points and image feature points of corresponding marks.
Step 302, the first medical image sample is cut to remove the content except the detected object in the first medical image sample.
Step 303, training a feature point recognition model according to the marked medical image sample.
In the embodiment of the application, 80% of samples can be used for training the model, and 20% of samples can be used for verifying the model. The loss function used to train the feature point recognition model is:
Loss = Loss_landmarks + λ × Loss_edges
Loss_landmarks = (1/N) Σi ‖Ĥi − Hi‖²
Loss_edges = (1/N) Σi SmoothL1(êi − ei)
where λ is a balance factor whose value in the experiments is 0.001, Loss_landmarks is the error of the key-point prediction, defined using the mean square error (MSE) function, Loss_edges is the error of the edge regression, defined using the smoothed L1-norm (Smooth L1-Norm) function, N is the batch size of the network training, Ĥi and Hi are the predicted heatmap and the labeled heatmap respectively, and êi and ei are the predicted edge vector and the actual edge vector respectively.
The model is iteratively optimized with the Adam (Adaptive Moment Estimation) algorithm; the specific iteration parameters can be set according to actual requirements. For example, an initial learning rate of 0.001 may be set and 200 iterations run, with the learning rate dropped to 0.0001 at iteration 120 and to 0.00001 at iteration 170. During training, data augmentation can be used to expand the training data and avoid overfitting of the model, including left-right flipping (probability 0.5), random scaling (0.5 to 1.5 times), and random rotation (−45° to +45°). Training continues until the model converges, yielding the optimized model.
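Under the same assumptions as the network sketch above, the combined loss and the optimizer setup described here might look like the following; the particular reduction modes of the MSE and Smooth L1 losses are assumptions.

```python
# Sketch: combined heatmap + edge-vector loss with Adam, using the hyperparameters in the text.
import torch
import torch.nn as nn

mse = nn.MSELoss()                 # key-point (heatmap) prediction error
smooth_l1 = nn.SmoothL1Loss()      # edge-vector regression error
lam = 0.001                        # balance factor lambda

def total_loss(pred_heatmaps, gt_heatmaps, pred_edges, gt_edges):
    loss_landmarks = mse(pred_heatmaps, gt_heatmaps)
    loss_edges = smooth_l1(pred_edges, gt_edges)
    return loss_landmarks + lam * loss_edges

# model = KeypointEdgeNet()  # from the earlier sketch
# optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[120, 170], gamma=0.1)
```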
Accordingly, when the model is verified, the mean radial error (MRE) can be used to evaluate the detection effect of the model on the anatomical key points:
MRE = (1/M) Σi R(p̂i, pi)
where M is the number of test data, R denotes the Euclidean distance, and p̂i and pi are the predicted key-point coordinates and the annotated key-point coordinates. For example, MRE < 2 mm may be required; if the requirement is met, the optimization of the model ends, and if not, the model and the training parameters are readjusted and training restarts until the requirement is met.
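A short sketch of this evaluation metric follows; the coordinates are assumed to already be expressed in millimetres.

```python
# Sketch: mean radial error (MRE) between predicted and annotated keypoint coordinates.
import numpy as np

def mean_radial_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """pred, gt: (M, K, 2) arrays of keypoint coordinates in mm."""
    radial = np.linalg.norm(pred - gt, axis=-1)   # Euclidean distance per keypoint
    return float(radial.mean())

# passes = mean_radial_error(pred_points, gt_points) < 2.0  # e.g. require MRE < 2 mm
```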
Step 304, acquiring a target medical image to be identified.
Step 305, determining a feature point identification model corresponding to the target medical image, wherein the feature point identification model is used for extracting image information related to image feature points and connection relations between the image feature points, and performing image feature point identification according to the extracted image information, and the image feature points are related to a processing procedure corresponding to a detection object of the target medical image.
And step 306, acquiring image feature points in the target medical image according to the feature point identification model.
Step 307, the target medical image and the image feature points are output in a correlated manner.
In an optional embodiment of the present application, the method further comprises: and adjusting the target medical image to a first target window width and a first target window level.
In an optional embodiment of the present application, the first medical image sample may be further adjusted to a second target window width and a second target window level.
The target window width and the target window level can be obtained by calculation according to the target medical image, and can also be preset.
Taking the determination of the first target window width and the first target window level as an example, in an optional embodiment of the present application, the method further includes the step of determining the target window width and the target window level of the target medical image: determining an image threshold of the target medical image according to the histogram of the target medical image, wherein the image threshold comprises a maximum pixel value and a minimum pixel value of medical image pixels; and determining a window width boundary value according to the maximum pixel value and the minimum pixel value to obtain a first target window width of the target medical image, and determining a first target window level of the target medical image according to the first target window width.
The logic for calculating the second target window width and the second target window level corresponding to the first medical image sample is the same as the logic for calculating the first target window width and the first target window level, and details thereof are omitted here.
It should be noted that, in practical applications where all medical image samples correspond to the same detection object, for example hip X-ray images, the medical image samples and the target medical image may use the same window width and window level; that is, the first target window width is consistent with the second target window width, and the first target window level is consistent with the second target window level. The window width and level of one medical image sample may be applied to the other medical image samples and to the target medical image, or they may be set according to practical experience. In this way, the computational resources consumed by window width and window level calculations can be greatly reduced.
In an optional embodiment of the present application, the feature point identification model includes a hidden layer, a convolutional layer, and a pooling layer, the hidden layer is configured to extract image information related to image feature points and connection relationships between the image feature points from a medical image, the convolutional layer is configured to identify the connection relationships between the image feature points and the image feature points according to the extracted image information, and the pooling layer is configured to generate characterization data of the identified image feature points and connection relationships.
In an optional embodiment of the present application, the convolutional layers include a first convolutional layer and a second convolutional layer, the first convolutional layer is configured to identify the image feature points according to the extracted image information, and the second convolutional layer is configured to identify the connection relationship between the image feature points according to the extracted image information;
the pooling layer comprises a first pooling layer and a second pooling layer, the first pooling layer is used for generating a distribution thermodynamic diagram for representing the distribution condition of the image feature points, and the second pooling layer is used for generating edge vectors for representing the connection relation between the image feature points.
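The layout described above could be sketched roughly as follows; this is only an assumed PyTorch arrangement for illustration, in which the backbone depth, channel counts, and the way the heatmap head and the edge-vector pooling are wired together are assumptions rather than the patented architecture.

```python
import torch
import torch.nn as nn

class FeaturePointRecognitionModel(nn.Module):
    """Sketch of the two-head layout: a shared hidden (feature extraction) stage,
    a first convolutional head for image feature points, a second convolutional
    head for connection relations, and pooling that yields a distribution
    heatmap and edge vectors. All sizes here are illustrative assumptions."""

    def __init__(self, num_keypoints, num_edges):
        super().__init__()
        # Hidden layer: extracts image information related to feature points
        # and to the connections between them.
        self.hidden = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # First convolutional layer: identifies the image feature points.
        self.point_head = nn.Conv2d(64, num_keypoints, kernel_size=1)
        # Second convolutional layer: identifies the connection relations.
        self.edge_head = nn.Conv2d(64, num_edges, kernel_size=1)
        # Pooling stage for the edge branch: one edge vector entry per connection.
        self.edge_pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        feats = self.hidden(x)
        # Distribution heatmap representing where each feature point lies.
        heatmaps = torch.sigmoid(self.point_head(feats))
        # Edge vectors characterizing the connection relations between points.
        edge_vectors = self.edge_pool(self.edge_head(feats)).flatten(1)
        return heatmaps, edge_vectors
```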
In an optional embodiment of the present application, before the cropping the first medical image sample and removing the content in the first medical image sample except the detected object, the method further includes: adjusting the first medical image sample to a target resolution; after the cropping the first medical image sample and removing the content except the detected object in the first medical image sample, the method further comprises: and adjusting the cut first medical image sample to a target size.
In an optional embodiment of the present application, the method further includes: recording a first scaling parameter corresponding to the target resolution and a second scaling parameter corresponding to the target size; and updating the first medical image sample and the connection relation between the image characteristic points and the image characteristic points of the corresponding marks according to the first scaling parameter and the second scaling parameter.
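One way this resizing-and-annotation bookkeeping could look is sketched below; the nearest-neighbour resize, the two-stage sizes, and the placement of the cropping step are assumptions kept deliberately simple for illustration.

```python
import numpy as np

def resize_nn(image, new_h, new_w):
    """Nearest-neighbour resize, kept dependency-free for this sketch."""
    h, w = image.shape
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    return image[rows[:, None], cols]

def adjust_sample(image, keypoints, target_res=(512, 512), target_size=(256, 256)):
    """Adjust the first medical image sample to a target resolution, then (after
    cropping, elided here) to a target size, recording the first and second
    scaling parameters and updating the annotated key point coordinates."""
    h, w = image.shape
    scale1 = np.array([target_res[1] / w, target_res[0] / h])    # first scaling parameter
    image = resize_nn(image, *target_res)
    # ... cropping away content other than the detection object would happen here ...
    scale2 = np.array([target_size[1] / target_res[1],
                       target_size[0] / target_res[0]])           # second scaling parameter
    image = resize_nn(image, *target_size)
    keypoints = keypoints * scale1 * scale2   # keep annotations consistent with the image
    return image, keypoints, scale1, scale2
```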
In an optional embodiment of the present application, the first medical image sample and the correspondingly marked image feature points are obtained by: determining processing parameters related to a processing process corresponding to a detection object of the first medical image sample; and taking the image data points involved in the calculation process of the processing parameters as image characteristic points related to the processing process in the first medical image sample.
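As a concrete and purely hypothetical illustration of this idea for a hip X-ray: if the processing parameter were an angle computed from three anatomical points, those three points are exactly the image feature points to be annotated on the sample. The measurement name and coordinates below are invented for the example.

```python
import math

def neck_shaft_angle(p_head, p_neck, p_shaft):
    """Hypothetical processing parameter: an angle formed at p_neck by the
    directions towards p_head and p_shaft. The data points it uses become
    the image feature points to be marked on the first medical image sample."""
    v1 = (p_head[0] - p_neck[0], p_head[1] - p_neck[1])
    v2 = (p_shaft[0] - p_neck[0], p_shaft[1] - p_neck[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

# The feature points for this sample are the points the parameter is computed from.
feature_points = {"femoral_head": (120.0, 88.0), "neck_center": (150.0, 130.0),
                  "shaft_axis": (170.0, 210.0)}
angle = neck_shaft_angle(*feature_points.values())
```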
In an optional embodiment of the present application, the connection relationship between the image feature points correspondingly marked on the first medical image sample is obtained by: and determining a connection relation related to a processing procedure corresponding to the detection object of the first medical image sample from the connection relation among the image feature points.
In an optional embodiment of the present application, the method further includes: inputting a second medical image sample into the feature point identification model to obtain a corresponding output result; and if the output result and the connection relation between the image characteristic points and the image characteristic points of the corresponding marks of the second medical image sample do not meet the similar condition, continuing to train the characteristic point recognition model.
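A sketch of how such a check on a second (held-out) medical image sample might look, reusing the model sketch above; the per-channel argmax decoding, the tolerances, and the use of pixel distances for the "similar condition" are assumptions made only for illustration.

```python
import torch

def decode_keypoints(heatmaps):
    """Take the per-channel argmax of a (1, K, H, W) heatmap as (x, y) pixel coordinates."""
    _, k, h, w = heatmaps.shape
    flat = heatmaps.view(k, -1).argmax(dim=1)
    xs = (flat % w).float()
    ys = torch.div(flat, w, rounding_mode="floor").float()
    return torch.stack([xs, ys], dim=1)

def similar_condition_met(model, sample, keypoints_gt, edges_gt,
                          point_tol_px=4.0, edge_tol=0.1):
    """Evaluate the model on a second medical image sample; if either the key
    point error or the edge-vector error exceeds its (assumed) tolerance,
    the feature point recognition model keeps training."""
    with torch.no_grad():
        heatmaps, edge_vectors = model(sample)
    pred = decode_keypoints(heatmaps)
    point_err = (pred - keypoints_gt).norm(dim=1).mean()
    edge_err = (edge_vectors - edges_gt).abs().mean()
    return bool(point_err < point_tol_px) and bool(edge_err < edge_tol)
```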
According to the embodiment of the application, a feature point identification model is trained in advance and used for extracting image information related to image feature points and the connection relation between the image feature points, image feature point identification is carried out according to the extracted image information, the identified image feature points are related to a processing process corresponding to a detection object of a target medical image, when the feature point identification is further carried out on the target medical image to be identified, the feature point identification model corresponding to the target medical image is determined firstly, the image feature points in the target medical image are further obtained according to the feature point identification model, and the target medical image and the image feature points are output in a correlated mode. Therefore, the image feature point identification scheme of the embodiment of the application does not need to artificially design the extracted image information, and can directly extract effective image information from the medical image to predict the position of the image feature point, so that higher image feature point positioning accuracy can be obtained.
Compared with the scheme of only predicting the image feature points, the feature point recognition model is also used for predicting the connection relation between the image feature points, so that the extracted image information is not only related to the image feature points, but also related to the connection relation between the image feature points, the image information related to the connection relation between the image feature points is referred to for the recognition of the image feature points, the deviation of the image feature point prediction can be corrected according to the connection relation in the training process, and the detection precision of the image feature points is improved.
Referring to fig. 6, a flowchart of an embodiment of a method for processing a medical image according to a fourth embodiment of the present application is shown, where the method specifically includes the following steps:
step 401, acquiring a target medical image based on a detection object.
Step 402, obtaining image feature points in the target medical image, where the image feature points are determined based on a feature point identification model corresponding to the target medical image, where the feature point identification model is used to extract image information related to image feature points and connection relationships between the image feature points, and perform image feature point identification according to the extracted image information, where the image feature points are related to a processing procedure corresponding to a detection object of the target medical image.
Step 403, the target medical image and the image feature points are displayed in a display interface of the medical image device in a related manner.
According to the embodiment of the application, a feature point identification model is trained in advance and used for extracting image information related to image feature points and the connection relation between the image feature points, image feature point identification is carried out according to the extracted image information, the identified image feature points are related to a processing process corresponding to a detection object of a target medical image, when the feature point identification is further carried out on the target medical image to be identified, the feature point identification model corresponding to the target medical image is determined firstly, the image feature points in the target medical image are further obtained according to the feature point identification model, and the target medical image and the image feature points are output in a correlated mode. Therefore, the image feature point identification scheme of the embodiment of the application does not need to artificially design the extracted image information, and can directly extract effective image information from the medical image to predict the position of the image feature point, so that higher image feature point positioning accuracy can be obtained.
Compared with the scheme of only predicting the image feature points, the feature point recognition model is also used for predicting the connection relation between the image feature points, so that the extracted image information is not only related to the image feature points, but also related to the connection relation between the image feature points, the image information related to the connection relation between the image feature points is referred to for the recognition of the image feature points, the deviation of the image feature point prediction can be corrected according to the connection relation in the training process, and the detection precision of the image feature points is improved.
Referring to fig. 7, a flowchart of an embodiment of a processing method of a medical image recognition model according to a fifth embodiment of the present application is shown, where the method specifically includes the following steps:
step 501, acquiring a first medical image sample and a connection relation between image feature points and image feature points of corresponding marks.
Step 502, training a feature point recognition model according to the labeled medical image sample, wherein the feature point recognition model is used for extracting image information related to image feature points and connection relations among the image feature points, and performing image feature point recognition according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image.
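A minimal training sketch for these two steps is given below, assuming the annotations have been rendered as ground-truth heatmaps and edge vectors; the joint loss, the optimizer, and all hyper-parameters are illustrative assumptions rather than the method's prescribed settings.

```python
import torch
import torch.nn as nn

def train_feature_point_model(model, loader, epochs=50, lr=1e-3):
    """Train the feature point recognition model from labeled medical image
    samples. Each batch provides (image, heatmap ground truth, edge-vector
    ground truth); supervising both heads jointly lets the connection
    relations correct deviations in the key point predictions."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    heat_loss, edge_loss = nn.MSELoss(), nn.MSELoss()
    for _ in range(epochs):
        for image, heat_gt, edge_gt in loader:
            heat_pred, edge_pred = model(image)
            loss = heat_loss(heat_pred, heat_gt) + edge_loss(edge_pred, edge_gt)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```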
According to the embodiment of the application, a feature point identification model is trained in advance and used for extracting image information related to image feature points and the connection relation between the image feature points, image feature point identification is carried out according to the extracted image information, the identified image feature points are related to a processing process corresponding to a detection object of a target medical image, when the feature point identification is further carried out on the target medical image to be identified, the feature point identification model corresponding to the target medical image is determined firstly, the image feature points in the target medical image are further obtained according to the feature point identification model, and the target medical image and the image feature points are output in a correlated mode. Therefore, the image feature point identification scheme of the embodiment of the application does not need to artificially design the extracted image information, and can directly extract effective image information from the medical image to predict the position of the image feature point, so that higher image feature point positioning accuracy can be obtained.
Compared with the scheme of only predicting the image feature points, the feature point recognition model is also used for predicting the connection relation between the image feature points, so that the extracted image information is not only related to the image feature points, but also related to the connection relation between the image feature points, the image information related to the connection relation between the image feature points is referred to for the recognition of the image feature points, the deviation of the image feature point prediction can be corrected according to the connection relation in the training process, and the detection precision of the image feature points is improved.
Referring to fig. 8, a flowchart of an embodiment of an image processing method according to the sixth embodiment of the present application is shown, where the method specifically includes the following steps:
step 601, acquiring a target image to be identified.
Step 602, determining that the target image conforms to the image content rule corresponding to the target detection object.
Step 603, adjusting the pixels of the target image to a target window width and a target window level.
Step 604, determining a feature point identification model corresponding to the target image, where the feature point identification model is used to extract image information related to image feature points and connection relationships between the image feature points, and perform image feature point identification according to the extracted image information, where the image feature points are related to a processing procedure corresponding to a detection object of the target image.
Step 605, acquiring image feature points in the target image according to a feature point identification model;
Step 606, outputting, in a correlated manner, the target image, the image feature points, and a schematic diagram of the processing procedure of the detection object of the target image.
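The six steps above can be read as one small pipeline; the sketch below only shows the ordering, and every callable passed in (the content rule check, the windowing function, the model wrapper, and the schematic generator) is an assumed helper, not an API defined by this application.

```python
import numpy as np

def process_target_image(image, content_rule, window_fn, model_fn, schematic_fn):
    """Orchestration of steps 601-606 for a target image given assumed helpers:
    content_rule(image) -> bool, window_fn(image) -> (width, level),
    model_fn(windowed) -> feature points, schematic_fn(image, points) -> figure."""
    if not content_rule(image):                                   # step 602
        raise ValueError("target image does not match the target detection object")
    width, level = window_fn(image)                               # step 603
    lo, hi = level - width / 2.0, level + width / 2.0
    windowed = np.clip((image - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    feature_points = model_fn(windowed)                           # steps 604-605
    return {                                                      # step 606: correlated output
        "image": image,
        "feature_points": feature_points,
        "schematic": schematic_fn(image, feature_points),
    }
```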
According to the embodiment of the application, a feature point identification model is trained in advance and used for extracting image information related to image feature points and the connection relation between the image feature points, image feature point identification is carried out according to the extracted image information, the identified image feature points are related to a processing process corresponding to a detection object of a target image, when the feature point identification is carried out on the target image to be identified, the feature point identification model corresponding to the target image is determined firstly, the image feature points in the target image are obtained according to the feature point identification model, and the target image and the image feature points are output in a correlated mode. Therefore, according to the image feature point identification scheme, the extracted image information does not need to be designed manually, effective image information is directly extracted from the image to predict the position of the image feature point, and high image feature point positioning accuracy can be obtained.
Compared with the scheme of only predicting the image feature points, the feature point identification model is also used for predicting the connection relationship between the image feature points, so that the extracted image information is not only related to the image feature points, but also related to the connection relationship between the image feature points, the image information related to the connection relationship between the image feature points is referred to for the identification of the image feature points, the predicted deviation of the image feature points can be corrected according to the connection relationship in the training process, and the detection accuracy of the image feature points is improved.
Referring to fig. 9, a block diagram of an embodiment of a medical image processing apparatus according to a seventh embodiment of the present application is shown, which may specifically include:
the image acquiring module 701 is configured to acquire a target medical image to be identified.
A model determining module 702, configured to determine a feature point identification model corresponding to the target medical image, where the feature point identification model is configured to extract image information related to image feature points and connection relationships between the image feature points, and perform image feature point identification according to the extracted image information, where the image feature points are related to a processing procedure corresponding to a detection object of the target medical image.
A feature point obtaining module 703, configured to obtain image feature points in the target medical image according to a feature point identification model.
A feature point output module 704, configured to output the target medical image and the image feature points in a correlated manner.
In an optional embodiment of the present application, the feature point output module is specifically configured to display the target medical image and the image feature points in association with each other on a display interface of a medical image device.
In an optional embodiment of the present application, the image acquisition module includes:
the detection object determining submodule is used for determining a detection object of the target medical image;
and the image screening submodule is used for screening the medical image corresponding to the determined detection object from the medical image database.
In an optional embodiment of the present application, the apparatus further comprises:
and the image judgment module is used for determining that the target medical image to be identified accords with the image content rule corresponding to the detection object after the target medical image to be identified is acquired.
In an optional embodiment of the present application, the apparatus further comprises:
and the first window width and window level adjusting module is used for adjusting the target medical image to a first target window width and a first target window level.
In an optional embodiment of the present application, the apparatus further comprises:
an image threshold determination module, configured to determine an image threshold of the target medical image according to a histogram of the target medical image, where the image threshold includes a maximum pixel value and a minimum pixel value of a medical image pixel;
and the image threshold value calculation module is used for determining a window width boundary value according to the maximum pixel value and the minimum pixel value to obtain a first target window width of the target medical image, and determining a first target window level of the target medical image according to the first target window width.
In an optional embodiment of the present application, the apparatus further comprises:
and the second window width and window level adjusting module is used for adjusting the first medical image sample to a second target window width and a second target window level.
In an optional embodiment of the present application, the feature point identification model includes a hidden layer, a convolutional layer, and a pooling layer, the hidden layer is configured to extract image information related to image feature points and connection relationships between the image feature points from a medical image, the convolutional layer is configured to identify the connection relationships between the image feature points and the image feature points according to the extracted image information, and the pooling layer is configured to generate characterization data of the identified image feature points and connection relationships.
In an optional embodiment of the present application, the convolutional layers include a first convolutional layer and a second convolutional layer, the first convolutional layer is configured to identify the image feature points according to the extracted image information, and the second convolutional layer is configured to identify the connection relationship between the image feature points according to the extracted image information;
the pooling layer comprises a first pooling layer and a second pooling layer, the first pooling layer is used for generating a distribution thermodynamic diagram for representing the distribution condition of the image feature points, and the second pooling layer is used for generating edge vectors for representing the connection relation between the image feature points.
In an optional embodiment of the present application, the apparatus further comprises:
the sample acquisition module is used for acquiring a first medical image sample and the connection relation between the image characteristic points and the image characteristic points which are correspondingly marked;
and the model training module is used for training the feature point recognition model according to the marked medical image sample.
In an optional embodiment of the present application, the apparatus further comprises:
and the image cutting module is used for cutting the first medical image sample and removing the contents except the detected object in the first medical image sample.
In an optional embodiment of the present application, the apparatus further comprises:
a first adjusting module, configured to adjust the first medical image sample to a target resolution before the first medical image sample is cropped and content other than a detected object in the first medical image sample is removed;
the device further comprises:
and the second adjusting module is used for adjusting the cut first medical image sample to a target size after the first medical image sample is cut and the content except the detected object in the first medical image sample is removed.
In an optional embodiment of the present application, the apparatus further comprises:
the parameter recording module is used for recording a first zooming parameter corresponding to the target resolution and a second zooming parameter corresponding to the target size;
and the data updating module is used for updating the first medical image sample and the connection relation between the image characteristic points and the image characteristic points of the corresponding marks according to the first scaling parameter and the second scaling parameter.
In an optional embodiment of the present application, the sample acquiring module comprises:
the parameter determining submodule is used for determining processing parameters related to a processing process corresponding to a detection object of the first medical image sample;
and the characteristic point determining submodule is used for taking the image data points involved in the calculation process of the processing parameters as image characteristic points related to the processing process in the first medical image sample.
In an optional embodiment of the application, the sample obtaining module is specifically configured to determine a connection relation related to a processing procedure corresponding to a detection object of the first medical image sample from connection relations between the image feature points.
In an optional embodiment of the present application, the apparatus further comprises:
the sample input module is used for inputting a second medical image sample into the feature point identification model to obtain a corresponding output result;
and the model correction module is used for continuing training the feature point recognition model if the output result and the connection relation between the image feature points and the image feature points of the corresponding marks of the second medical image sample do not meet the similar condition.
In an optional embodiment of the present application, the feature point obtaining module is specifically configured to send the target medical image to a medical image server, and obtain image feature points identified by the medical image server according to a feature point identification model.
In an optional embodiment of the present application, the apparatus further comprises:
and the schematic diagram generating module is used for generating a processing schematic diagram aiming at the processing process according to the image feature points and the processing process of the detection object of the target medical image.
According to the embodiment of the application, a feature point identification model is trained in advance and used for extracting image information related to image feature points and the connection relation between the image feature points, image feature point identification is carried out according to the extracted image information, the identified image feature points are related to a processing process corresponding to a detection object of a target medical image, when the feature point identification is further carried out on the target medical image to be identified, the feature point identification model corresponding to the target medical image is determined firstly, the image feature points in the target medical image are further obtained according to the feature point identification model, and the target medical image and the image feature points are output in a correlated mode. Therefore, the image feature point identification scheme of the embodiment of the application does not need to artificially design the extracted image information, and can directly extract effective image information from the medical image to predict the position of the image feature point, so that higher image feature point positioning accuracy can be obtained.
Compared with the scheme of only predicting the image feature points, the feature point recognition model is also used for predicting the connection relation between the image feature points, so that the extracted image information is not only related to the image feature points, but also related to the connection relation between the image feature points, the image information related to the connection relation between the image feature points is referred to for the recognition of the image feature points, the deviation of the image feature point prediction can be corrected according to the connection relation in the training process, and the detection precision of the image feature points is improved.
Referring to fig. 10, a block diagram of an embodiment of a medical image processing apparatus according to an eighth embodiment of the present application is shown, which may specifically include:
an image acquisition module 801, configured to acquire a target medical image based on a detected object.
A feature point obtaining module 802, configured to obtain image feature points in the target medical image, where the image feature points are determined based on a feature point identification model corresponding to the target medical image, the feature point identification model is configured to extract image information related to image feature points and connection relationships between the image feature points, and perform image feature point identification according to the extracted image information, where the image feature points are related to a processing procedure corresponding to a detection object of the target medical image.
A feature point display module 803, configured to display the target medical image and the image feature points in association on a display interface of a medical image device.
According to the embodiment of the application, a feature point identification model is trained in advance and used for extracting image information related to image feature points and the connection relation between the image feature points, image feature point identification is carried out according to the extracted image information, the identified image feature points are related to a processing process corresponding to a detection object of a target medical image, when the feature point identification is further carried out on the target medical image to be identified, the feature point identification model corresponding to the target medical image is determined firstly, the image feature points in the target medical image are further obtained according to the feature point identification model, and the target medical image and the image feature points are output in a correlated mode. Therefore, the image feature point identification scheme of the embodiment of the application does not need to artificially design the extracted image information, and can directly extract effective image information from the medical image to predict the position of the image feature point, so that higher image feature point positioning accuracy can be obtained.
Compared with the scheme of only predicting the image feature points, the feature point recognition model is also used for predicting the connection relation between the image feature points, so that the extracted image information is not only related to the image feature points, but also related to the connection relation between the image feature points, the image information related to the connection relation between the image feature points is referred to for the recognition of the image feature points, the deviation of the image feature point prediction can be corrected according to the connection relation in the training process, and the detection precision of the image feature points is improved.
Referring to fig. 11, a block diagram of an embodiment of a processing apparatus for a medical image recognition model according to a ninth embodiment of the present application is shown, which may specifically include:
a sample obtaining module 901, configured to obtain a first medical image sample and a connection relationship between image feature points and image feature points of corresponding marks.
A model training module 902, configured to train a feature point recognition model according to a labeled medical image sample, where the feature point recognition model is configured to extract image information related to image feature points and connection relationships between the image feature points, and perform image feature point recognition according to the extracted image information, where the image feature points are related to a processing procedure corresponding to a detection object of the target medical image.
According to the embodiment of the application, a feature point identification model is trained in advance and used for extracting image information related to image feature points and the connection relation between the image feature points, image feature point identification is carried out according to the extracted image information, the identified image feature points are related to a processing process corresponding to a detection object of a target medical image, when the feature point identification is further carried out on the target medical image to be identified, the feature point identification model corresponding to the target medical image is determined firstly, the image feature points in the target medical image are further obtained according to the feature point identification model, and the target medical image and the image feature points are output in a correlated mode. Therefore, the image feature point identification scheme of the embodiment of the application does not need to artificially design the extracted image information, and can directly extract effective image information from the medical image to predict the position of the image feature point, so that higher image feature point positioning accuracy can be obtained.
Compared with the scheme of only predicting the image feature points, the feature point recognition model is also used for predicting the connection relation between the image feature points, so that the extracted image information is not only related to the image feature points, but also related to the connection relation between the image feature points, the image information related to the connection relation between the image feature points is referred to for the recognition of the image feature points, the deviation of the image feature point prediction can be corrected according to the connection relation in the training process, and the detection precision of the image feature points is improved.
Referring to fig. 12, a block diagram illustrating a structure of an embodiment of a processing apparatus of an image processing model according to a tenth embodiment of the present application may specifically include:
an image acquisition module 1001 configured to acquire a target image to be identified;
a rule determining module 1002, configured to determine that the target image meets an image content rule corresponding to a target detection object;
a window width and window level determining module 1003, configured to adjust pixels of the target image to a target window width and a target window level;
a model determining module 1004, configured to determine a feature point identification model corresponding to the target image, where the feature point identification model is configured to extract image information related to image feature points and a connection relationship between the image feature points, and perform image feature point identification according to the extracted image information, where the image feature points are related to a processing procedure corresponding to a detection object of the target image;
a feature point determining module 1005, configured to obtain an image feature point in the target image according to a feature point identification model;
and a schematic diagram output module 1006, configured to output the target image and the image feature points in association with a schematic diagram of a processing procedure of a detection object of the target image.
According to the embodiment of the application, a feature point identification model is trained in advance and used for extracting image information related to image feature points and the connection relation between the image feature points, image feature point identification is carried out according to the extracted image information, the identified image feature points are related to a processing process corresponding to a detection object of a target image, when the feature point identification is carried out on the target image to be identified, the feature point identification model corresponding to the target image is determined firstly, the image feature points in the target image are obtained according to the feature point identification model, and the target image and the image feature points are output in a correlated mode. Therefore, according to the image feature point identification scheme, the extracted image information does not need to be designed manually, effective image information is directly extracted from the image to predict the position of the image feature point, and high image feature point positioning accuracy can be obtained.
Compared with the scheme of only predicting the image feature points, the feature point identification model is also used for predicting the connection relationship between the image feature points, so that the extracted image information is not only related to the image feature points, but also related to the connection relationship between the image feature points, the image information related to the connection relationship between the image feature points is referred to for the identification of the image feature points, the predicted deviation of the image feature points can be corrected according to the connection relationship in the training process, and the detection accuracy of the image feature points is improved.
Referring to fig. 13, a block diagram of a medical imaging apparatus according to an eleventh embodiment of the present application is shown, which may specifically include:
an image acquisition module 1101, configured to acquire a target medical image to be identified;
a model determining module 1102, configured to determine a feature point identification model corresponding to the target medical image, where the feature point identification model is configured to extract image information related to image feature points and connection relationships between the image feature points, and perform image feature point identification according to the extracted image information, where the image feature points are related to a processing procedure corresponding to a detection object of the target medical image;
a feature point obtaining module 1103, configured to obtain image feature points in the target medical image according to a feature point identification model;
and a feature point output module 1104, configured to output the target medical image and the image feature points in a correlated manner.
According to the embodiment of the application, a feature point identification model is trained in advance and used for extracting image information related to image feature points and the connection relation between the image feature points, image feature point identification is carried out according to the extracted image information, the identified image feature points are related to a processing process corresponding to a detection object of a target medical image, when the feature point identification is further carried out on the target medical image to be identified, the feature point identification model corresponding to the target medical image is determined firstly, the image feature points in the target medical image are further obtained according to the feature point identification model, and the target medical image and the image feature points are output in a correlated mode. Therefore, the image feature point identification scheme of the embodiment of the application does not need to artificially design the extracted image information, and can directly extract effective image information from the medical image to predict the position of the image feature point, so that higher image feature point positioning accuracy can be obtained.
Compared with the scheme of only predicting the image feature points, the feature point recognition model is also used for predicting the connection relation between the image feature points, so that the extracted image information is not only related to the image feature points, but also related to the connection relation between the image feature points, the image information related to the connection relation between the image feature points is referred to for the recognition of the image feature points, the deviation of the image feature point prediction can be corrected according to the connection relation in the training process, and the detection precision of the image feature points is improved.
Referring to fig. 14, a block diagram of an embodiment of a medical imaging apparatus according to a twelfth embodiment of the present application is shown, including an image capturing device 1201, an image processing device 1202, and a display interface 1203;
the image acquisition device 1201 is used for acquiring a target medical image based on a detection object;
the image processing device 1202 is configured to obtain image feature points in the target medical image, where the image feature points are determined based on a feature point identification model corresponding to the target medical image, the feature point identification model is configured to extract image information related to image feature points and connection relationships between the image feature points, and perform image feature point identification according to the extracted image information, where the image feature points are related to a processing procedure corresponding to a detection object of the target medical image;
the display interface 1203 is configured to display the target medical image and the image feature points in an associated manner.
According to the embodiment of the application, a feature point identification model is trained in advance and used for extracting image information related to image feature points and the connection relation between the image feature points, image feature point identification is carried out according to the extracted image information, the identified image feature points are related to a processing process corresponding to a detection object of a target medical image, when the feature point identification is further carried out on the target medical image to be identified, the feature point identification model corresponding to the target medical image is determined firstly, the image feature points in the target medical image are further obtained according to the feature point identification model, and the target medical image and the image feature points are output in a correlated mode. Therefore, the image feature point identification scheme of the embodiment of the application does not need to artificially design the extracted image information, and can directly extract effective image information from the medical image to predict the position of the image feature point, so that higher image feature point positioning accuracy can be obtained.
Compared with the scheme of only predicting the image feature points, the feature point recognition model is also used for predicting the connection relation between the image feature points, so that the extracted image information is not only related to the image feature points, but also related to the connection relation between the image feature points, the image information related to the connection relation between the image feature points is referred to for the recognition of the image feature points, the deviation of the image feature point prediction can be corrected according to the connection relation in the training process, and the detection precision of the image feature points is improved.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively brief; for relevant details, refer to the corresponding parts of the description of the method embodiments.
Embodiments of the disclosure may be implemented as a system using any suitable hardware, firmware, software, or any combination thereof, in a desired configuration. Fig. 15 schematically illustrates an example system (or apparatus) 1300 that can be used to implement various embodiments described in this disclosure.
For one embodiment, fig. 15 illustrates an exemplary system 1300 having one or more processors 1302, a system control module (chipset) 1304 coupled to at least one of the processor(s) 1302, system memory 1306 coupled to the system control module 1304, non-volatile memory (NVM)/storage 1308 coupled to the system control module 1304, one or more input/output devices 1310 coupled to the system control module 1304, and a network interface 1312 coupled to the system control module 1304.
Processor 1302 may include one or more single-core or multi-core processors, and processor 1302 may include any combination of general-purpose or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the system 1300 can function as a browser as described in embodiments herein.
In some embodiments, system 1300 may include one or more computer-readable media (e.g., system memory 1306 or NVM/storage 1308) having instructions and one or more processors 1302, which in conjunction with the one or more computer-readable media, are configured to execute the instructions to implement modules to perform the actions described in this disclosure.
For one embodiment, the system control module 1304 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 1302 and/or any suitable device or component in communication with the system control module 1304.
The system control module 1304 may include a memory controller module to provide an interface to the system memory 1306. The memory controller module may be a hardware module, a software module, and/or a firmware module.
System memory 1306 may be used, for example, to load and store data and/or instructions for system 1300. For one embodiment, system memory 1306 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 1306 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 1304 may include one or more input/output controllers to provide an interface to NVM/storage 1308 and input/output device(s) 1310.
For example, NVM/storage 1308 may be used to store data and/or instructions. NVM/storage 1308 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 1308 may include storage resources that are physically part of the device on which system 1300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 1308 may be accessible over a network via input/output device(s) 1310.
Input/output device(s) 1310 may provide an interface for system 1300 to communicate with any other suitable device; input/output device(s) 1310 may include communication components, audio components, sensor components, and so forth. Network interface 1312 may provide an interface for system 1300 to communicate over one or more networks, and system 1300 may communicate wirelessly with one or more components of a wireless network according to one or more wireless network standards and/or protocols, for example by accessing a wireless network based on a communication standard such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof.
For one embodiment, at least one of the processor(s) 1302 may be packaged together with logic for one or more controllers (e.g., memory controller modules) of the system control module 1304. For one embodiment, at least one of the processor(s) 1302 may be packaged together with logic for one or more controllers of the system control module 1304 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 1302 may be integrated on the same die with logic for one or more controller(s) of the system control module 1304. For one embodiment, at least one of the processor(s) 1302 may be integrated on the same die with logic of one or more controllers of the system control module 1304 to form a system on chip (SoC).
In various embodiments, system 1300 may be, but is not limited to being: a browser, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 1300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 1300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
If the display includes a touch panel, the display screen may be implemented as a touch screen display to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The present application further provides a non-volatile readable storage medium, where one or more modules (programs) are stored in the storage medium, and when the one or more modules are applied to a terminal device, the one or more modules may cause the terminal device to execute the instructions of the method steps in the present application.
In one example, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method according to the embodiments of the present application when executing the computer program.
There is also provided in one example a computer readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements a method according to one or more of the embodiments of the present application.
The embodiment of the application discloses a method for processing a medical image, and example 1 comprises the following steps:
acquiring a target medical image to be identified;
determining a feature point identification model corresponding to the target medical image, wherein the feature point identification model is used for extracting image information related to image feature points and connection relations among the image feature points, and carrying out image feature point identification according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image;
acquiring image feature points in the target medical image according to a feature point identification model;
and outputting the target medical image and the image feature points in a correlated manner.
Example 2 may include the method of example 1, the associating outputting the target medical image and the image feature points including:
and displaying the target medical image and the image feature points in an associated manner on a display interface of the medical image equipment.
Example 3 may include the method of example 1, the acquiring a target medical image to be identified comprising:
determining a detection object of the target medical image;
and screening the medical image corresponding to the determined detection object from a medical image database.
Example 4 may include the method of example 3, further comprising, after acquiring the target medical image to be identified:
and determining that the target medical image to be identified conforms to the image content rule corresponding to the detection object.
Example 5 may include the method of example 1, the method further comprising:
and adjusting the target medical image to a first target window width and a first target window level.
Example 6 may include the method of example 5, the method further comprising:
determining an image threshold of the target medical image according to the histogram of the target medical image, wherein the image threshold comprises a maximum pixel value and a minimum pixel value of medical image pixels;
and determining a window width boundary value according to the maximum pixel value and the minimum pixel value to obtain a first target window width of the target medical image, and determining a first target window level of the target medical image according to the first target window width.
Example 7 may include the method of example 1, the feature point identification model including a hidden layer to extract image information related to image feature points and connection relationships between the image feature points from the medical image, a convolutional layer to identify the image feature points and the connection relationships between the image feature points based on the extracted image information, and a pooling layer to generate characterization data of the identified image feature points and connection relationships.
Example 8 may include the method of example 7, the convolutional layers including a first convolutional layer for identifying image feature points according to the extracted image information and a second convolutional layer for identifying connection relationships between the image feature points according to the extracted image information;
the pooling layer comprises a first pooling layer and a second pooling layer, the first pooling layer is used for generating a distribution thermodynamic diagram for representing the distribution condition of the image feature points, and the second pooling layer is used for generating edge vectors for representing the connection relation between the image feature points.
Example 9 may include the method of example 1, further comprising:
acquiring a first medical image sample and a connection relation between image characteristic points and image characteristic points which are correspondingly marked;
and training the feature point recognition model according to the marked medical image sample.
Example 10 may include the method of example 9, the method further comprising:
adjusting the first medical image sample to a second target window width and a second target window level.
Example 11 may include the method of example 9, further comprising:
and cutting the first medical image sample, and removing contents except the detected object in the first medical image sample.
Example 12 may include the method of example 11, wherein before the cropping the first medical image sample to remove content in the first medical image sample other than the detected object, the method further comprises:
adjusting the first medical image sample to a target resolution;
after the cropping the first medical image sample and removing the content except the detected object in the first medical image sample, the method further comprises:
and adjusting the cut first medical image sample to a target size.
Example 13 may include the method of example 12, further comprising:
recording a first scaling parameter corresponding to the target resolution and a second scaling parameter corresponding to the target size;
and updating the first medical image sample and the connection relation between the image characteristic points and the image characteristic points of the corresponding marks according to the first scaling parameter and the second scaling parameter.
Example 14 may include the method of example 9, the first medical image sample and the corresponding marked image feature points being acquired by:
determining processing parameters related to a processing process corresponding to a detection object of the first medical image sample;
and taking the image data points involved in the calculation process of the processing parameters as image characteristic points related to the processing process in the first medical image sample.
Example 15 may include the method of example 9, wherein the connection relationship between the image feature points correspondingly marked in the first medical image sample is obtained by:
and determining a connection relation related to a processing procedure corresponding to the detection object of the first medical image sample from the connection relation among the image feature points.
Example 16 may include the method of example 9, further comprising:
inputting a second medical image sample into the feature point identification model to obtain a corresponding output result;
and if the output result and the connection relation between the image characteristic points and the image characteristic points of the corresponding marks of the second medical image sample do not meet the similar condition, continuing to train the characteristic point recognition model.
Example 17 may include the method of example 1, wherein the acquiring image feature points in the target medical image according to a feature point identification model comprises:
and sending the target medical image to a medical image server, and acquiring image feature points identified by the medical image server according to a feature point identification model.
Example 18 may include the method of example 1, further comprising:
and generating a processing schematic diagram aiming at the processing process according to the image feature points and the processing process of the detection object of the target medical image.
The embodiment of the present application further discloses a method for processing a medical image, where example 19 includes:
acquiring a target medical image based on a detection object;
acquiring image feature points in the target medical image, wherein the image feature points are determined based on a feature point identification model corresponding to the target medical image, the feature point identification model is used for extracting image information related to the connection relationship between the image feature points and the image feature points, and identifying the image feature points according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image;
and displaying the target medical image and the image feature points in an associated manner on a display interface of the medical image equipment.
The embodiment of the present application further discloses a processing method of a medical image recognition model, where example 20 includes:
acquiring a first medical image sample and a connection relation between image characteristic points and image characteristic points which are correspondingly marked;
training a feature point recognition model according to the marked medical image sample, wherein the feature point recognition model is used for extracting image information related to image feature points and connection relations among the image feature points, and performing image feature point recognition according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image.
The embodiment of the present application further discloses an image processing method, and example 21 includes:
acquiring a target image to be identified;
determining that the target image conforms to an image content rule corresponding to a target detection object;
adjusting pixels of the target image to a target window width and a target window level;
determining a feature point identification model corresponding to the target image, wherein the feature point identification model is used for extracting image information related to image feature points and connection relations among the image feature points, and carrying out image feature point identification according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target image;
acquiring image feature points in the target image according to a feature point identification model;
and outputting the target image, the image characteristic points and a schematic diagram of a processing process of a detection object of the target image in a correlated manner.
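Read end to end, example 21 can be sketched as the following flow; every helper passed in here is a placeholder for the corresponding step rather than an API defined by the disclosure:

    # Sketch of the overall flow: content-rule check, window adjustment, feature
    # point identification, and associated output with a processing schematic.
    def process_target_image(image, detection_object, content_rule, window_fn,
                             model, schematic_fn):
        if not content_rule(image, detection_object):        # image content rule
            raise ValueError("image does not match the target detection object")
        windowed = window_fn(image)                           # target window width/level
        points, connections = model(windowed)                 # feature point model
        schematic = schematic_fn(windowed, points, connections)
        return windowed, points, schematic                    # output in association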
An embodiment of the present application further discloses a medical imaging apparatus, and example 22 includes:
the image acquisition module is used for acquiring a target medical image to be identified;
the model determining module is used for determining a feature point identification model corresponding to the target medical image, the feature point identification model is used for extracting image information related to image feature points and the connection relation between the image feature points, image feature point identification is carried out according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image;
the characteristic point acquisition module is used for acquiring image characteristic points in the target medical image according to the characteristic point identification model;
and the characteristic point output module is used for outputting the target medical image and the image characteristic points in a correlation manner.
The embodiment of the present application further discloses a medical imaging device, where example 23 includes an image acquisition device, an image processing device and a display interface;
the image acquisition device is used for acquiring a target medical image based on a detection object;
the image processing device is used for acquiring image feature points in the target medical image, wherein the image feature points are determined based on a feature point identification model corresponding to the target medical image, the feature point identification model is used for extracting image information related to the image feature points and the connection relation between the image feature points, and carrying out image feature point identification according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image;
and the display interface is used for displaying the target medical image and the image feature points in an associated manner.
An embodiment of the present application further discloses an electronic device, and example 24 includes: a processor; and
a memory having executable code stored thereon that, when executed, causes the processor to perform the method of any of examples 1-21.
The embodiment of the present application further discloses one or more machine-readable media having executable code stored thereon that, when executed, causes a processor to perform the method of any of examples 1-21.
Although certain examples have been illustrated and described for purposes of description, a wide variety of alternative and/or equivalent implementations or computations may be substituted to achieve the same objectives without departing from the scope of the present application. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that the embodiments described herein be limited only by the claims and the equivalents thereof.

Claims (25)

1. A method for processing medical images, comprising:
acquiring a target medical image to be identified;
determining a feature point identification model corresponding to the target medical image, wherein the feature point identification model is used for extracting image information related to image feature points and connection relations among the image feature points, and carrying out image feature point identification according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image;
acquiring image feature points in the target medical image according to a feature point identification model;
and outputting the target medical image and the image feature points in a correlated manner.
2. The method of claim 1, wherein the outputting the target medical image and the image feature points in a correlated manner comprises:
and displaying the target medical image and the image feature points in an associated manner on a display interface of the medical image equipment.
3. The method of claim 1, the acquiring a target medical image to be identified comprising:
determining a detection object of the target medical image;
and screening the medical image corresponding to the determined detection object from a medical image database.
4. The method of claim 3, after acquiring the target medical image to be identified, the method further comprising:
and determining that the target medical image to be identified conforms to the image content rule corresponding to the detection object.
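One plausible content rule, assuming DICOM input, is to compare the body part recorded in the file with the detection object; the tag choice and matching rule below are assumptions:

    # Sketch of a content-rule check based on DICOM metadata.
    import pydicom

    def conforms_to_content_rule(dicom_path, detection_object):
        ds = pydicom.dcmread(dicom_path)
        body_part = str(getattr(ds, "BodyPartExamined", "")).upper()
        return detection_object.upper() in body_part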
5. The method of claim 1, further comprising:
and adjusting the target medical image to a first target window width and a first target window level.
6. The method of claim 5, further comprising:
determining an image threshold of the target medical image according to the histogram of the target medical image, wherein the image threshold comprises a maximum pixel value and a minimum pixel value of medical image pixels;
and determining a window width boundary value according to the maximum pixel value and the minimum pixel value to obtain a first target window width of the target medical image, and determining a first target window level of the target medical image according to the first target window width.
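A minimal sketch of this histogram-based windowing, assuming the maximum and minimum pixel values are taken after discarding a small fraction of outlying pixels and that the window level is centered in the window; the percentile choices are assumptions:

    # Sketch: derive a window width and window level from the image histogram,
    # then map pixels into the resulting display range.
    import numpy as np

    def histogram_window(image, low_pct=1.0, high_pct=99.0):
        pix_min = np.percentile(image, low_pct)      # minimum pixel value (image threshold)
        pix_max = np.percentile(image, high_pct)     # maximum pixel value (image threshold)
        window_width = pix_max - pix_min             # first target window width
        window_level = pix_min + window_width / 2    # first target window level
        return window_width, window_level

    def apply_window(image, width, level):
        lo, hi = level - width / 2, level + width / 2
        return np.clip((image - lo) / max(hi - lo, 1e-6), 0.0, 1.0)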
7. The method of claim 1, the feature point identification model comprising a hidden layer for extracting image information related to image feature points and connections between image feature points from the medical image, a convolutional layer for identifying image feature points and connections between image feature points from the extracted image information, and a pooling layer for generating characterization data of the identified image feature points and connections.
8. The method of claim 7, the convolutional layers comprising a first convolutional layer for identifying image feature points according to the extracted image information and a second convolutional layer for identifying connection relationships between the image feature points according to the extracted image information;
the pooling layer comprises a first pooling layer and a second pooling layer, the first pooling layer is used for generating a distribution heatmap representing the distribution of the image feature points, and the second pooling layer is used for generating edge vectors representing the connection relations between the image feature points.
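A loose PyTorch-style reading of claims 7-8, in which a shared stack of layers extracts image information, one branch produces a per-point distribution heatmap, and the other produces edge vectors describing the connections; the layer sizes, activations, and the exact interpretation of the pooling layers are assumptions:

    # Sketch of a two-branch feature point network.
    import torch
    import torch.nn as nn

    class FeaturePointNet(nn.Module):
        def __init__(self, num_points=8, num_edges=7):
            super().__init__()
            self.backbone = nn.Sequential(                   # extracts image information
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            self.point_head = nn.Conv2d(64, num_points, 1)   # first convolutional branch
            self.edge_head = nn.Conv2d(64, num_edges, 1)     # second convolutional branch
            self.edge_pool = nn.AdaptiveAvgPool2d(1)         # pools edge maps into vectors

        def forward(self, x):
            features = self.backbone(x)
            heatmaps = torch.sigmoid(self.point_head(features))   # distribution heatmaps
            edge_vectors = self.edge_pool(self.edge_head(features)).flatten(1)
            return heatmaps, edge_vectors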
9. The method of claim 1, further comprising:
acquiring a first medical image sample and the correspondingly labeled image feature points and connection relations between the image feature points;
and training the feature point recognition model according to the marked medical image sample.
10. The method of claim 8, further comprising:
adjusting the first medical image sample to a second target window width and a second target window level.
11. The method of claim 9, further comprising:
and cropping the first medical image sample to remove content other than the detected object from the first medical image sample.
12. The method of claim 9, prior to said cropping the first medical image sample to remove content in the first medical image sample other than the detected object, the method further comprising:
adjusting the first medical image sample to a target resolution;
after the cropping the first medical image sample and removing the content except the detected object in the first medical image sample, the method further comprises:
and adjusting the cut first medical image sample to a target size.
13. The method of claim 12, further comprising:
recording a first scaling parameter corresponding to the target resolution and a second scaling parameter corresponding to the target size;
and updating the first medical image sample and the correspondingly labeled image feature points and connection relations between the image feature points according to the first scaling parameter and the second scaling parameter.
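A compact sketch of the preprocessing chain in claims 11-13, assuming OpenCV-style arrays, a crop box supplied by an upstream detector, and placeholder target resolution and size; the labeled keypoints are updated with the same scale factors and crop offset so they stay aligned with the sample:

    # Sketch: resize to a target resolution, crop away content outside the detected
    # object, resize to a target size, and record both scaling parameters.
    import cv2

    def preprocess_sample(image, keypoints, crop_box,
                          target_res=(512, 512), target_size=(256, 256)):
        h, w = image.shape[:2]
        s1x, s1y = target_res[0] / w, target_res[1] / h      # first scaling parameter
        image = cv2.resize(image, target_res)
        keypoints = [(x * s1x, y * s1y) for x, y in keypoints]

        x0, y0, x1, y1 = crop_box                            # keep only the detected object
        image = image[y0:y1, x0:x1]
        keypoints = [(x - x0, y - y0) for x, y in keypoints]

        s2x = target_size[0] / (x1 - x0)                     # second scaling parameter
        s2y = target_size[1] / (y1 - y0)
        image = cv2.resize(image, target_size)
        keypoints = [(x * s2x, y * s2y) for x, y in keypoints]
        return image, keypoints, (s1x, s1y), (s2x, s2y)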
14. The method of claim 9, wherein the first medical image sample and the correspondingly labeled image feature points are obtained by:
determining processing parameters related to a processing process corresponding to a detection object of the first medical image sample;
and taking the image data points involved in the calculation process of the processing parameters as image characteristic points related to the processing process in the first medical image sample.
15. The method according to claim 9, wherein the connection relationship between the image feature points marked correspondingly in the first medical image sample is obtained by:
and determining a connection relation related to a processing procedure corresponding to the detection object of the first medical image sample from the connection relation among the image feature points.
16. The method of claim 9, further comprising:
inputting a second medical image sample into the feature point identification model to obtain a corresponding output result;
and if the output result does not satisfy a similarity condition with respect to the image feature points and connection relations correspondingly labeled for the second medical image sample, continuing to train the feature point recognition model.
17. The method according to claim 1, wherein the obtaining of the image feature points corresponding to the target medical image according to the feature point identification model comprises:
and sending the target medical image to a medical image server, and acquiring image feature points identified by the medical image server according to a feature point identification model.
18. The method of claim 1, further comprising:
and generating a processing schematic diagram aiming at the processing process according to the image feature points and the processing process of the detection object of the target medical image.
19. A method for processing medical images, comprising:
acquiring a target medical image based on a detection object;
acquiring image feature points in the target medical image, wherein the image feature points are determined based on a feature point identification model corresponding to the target medical image, the feature point identification model is used for extracting image information related to the image feature points and the connection relations between the image feature points, and identifying the image feature points according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image;
and displaying the target medical image and the image feature points in an associated manner on a display interface of the medical image equipment.
20. A processing method of a medical image recognition model is characterized by comprising the following steps:
acquiring a first medical image sample and the correspondingly labeled image feature points and connection relations between the image feature points;
training a feature point recognition model according to the marked medical image sample, wherein the feature point recognition model is used for extracting image information related to image feature points and connection relations among the image feature points, and performing image feature point recognition according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image.
21. An image processing method, comprising:
acquiring a target image to be identified;
determining that the target image conforms to an image content rule corresponding to a target detection object;
adjusting pixels of the target image to a target window width and a target window level;
determining a feature point identification model corresponding to the target image, wherein the feature point identification model is used for extracting image information related to image feature points and connection relations among the image feature points, and carrying out image feature point identification according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target image;
acquiring image feature points in the target image according to a feature point identification model;
and outputting the target image, the image characteristic points and a schematic diagram of a processing process of a detection object of the target image in a correlated manner.
22. A medical imaging apparatus, comprising:
the image acquisition module is used for acquiring a target medical image to be identified;
the model determining module is used for determining a feature point identification model corresponding to the target medical image, the feature point identification model is used for extracting image information related to image feature points and the connection relation between the image feature points, image feature point identification is carried out according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image;
the characteristic point acquisition module is used for acquiring image characteristic points in the target medical image according to the characteristic point identification model;
and the characteristic point output module is used for outputting the target medical image and the image characteristic points in a correlation manner.
23. A medical imaging device is characterized by comprising an image acquisition device, an image processing device and a display interface;
the image acquisition device is used for acquiring a target medical image based on a detection object;
the image processing device is used for acquiring image feature points in the target medical image, wherein the image feature points are determined based on a feature point identification model corresponding to the target medical image, the feature point identification model is used for extracting image information related to the image feature points and the connection relation between the image feature points, and carrying out image feature point identification according to the extracted image information, and the image feature points are related to a processing process corresponding to a detection object of the target medical image;
and the display interface is used for displaying the target medical image and the image feature points in an associated manner.
24. An electronic device, comprising: a processor; and
a memory having executable code stored thereon that, when executed, causes the processor to perform the method of any of claims 1-21.
25. One or more machine-readable media having executable code stored thereon that, when executed, causes a processor to perform the method of any of claims 1-21.
CN202010754855.9A 2020-07-30 2020-07-30 Medical image processing method, computer device, and storage medium Pending CN114093462A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010754855.9A CN114093462A (en) 2020-07-30 2020-07-30 Medical image processing method, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010754855.9A CN114093462A (en) 2020-07-30 2020-07-30 Medical image processing method, computer device, and storage medium

Publications (1)

Publication Number Publication Date
CN114093462A true CN114093462A (en) 2022-02-25

Family

ID=80295065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010754855.9A Pending CN114093462A (en) 2020-07-30 2020-07-30 Medical image processing method, computer device, and storage medium

Country Status (1)

Country Link
CN (1) CN114093462A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861162A (en) * 2022-08-26 2023-03-28 宁德时代新能源科技股份有限公司 Method, device and computer readable storage medium for positioning target area
CN116386850A (en) * 2023-03-28 2023-07-04 数坤(北京)网络科技股份有限公司 Medical data analysis method, medical data analysis device, computer equipment and storage medium
CN116386850B (en) * 2023-03-28 2023-11-28 数坤科技股份有限公司 Medical data analysis method, medical data analysis device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111671454B (en) Spine bending angle measuring method, device, computer equipment and storage medium
CN108968991B (en) Hand bone X-ray film bone age assessment method, device, computer equipment and storage medium
CN112967236B (en) Image registration method, device, computer equipment and storage medium
JP5417321B2 (en) Semi-automatic contour detection method
CN111210467A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
JP5337845B2 (en) How to perform measurements on digital images
US20140278322A1 (en) Systems and methods for using generic anatomy models in surgical planning
US8285013B2 (en) Method and apparatus for detecting abnormal patterns within diagnosis target image utilizing the past positions of abnormal patterns
CN111080573B (en) Rib image detection method, computer device and storage medium
Allen et al. Validity and reliability of active shape models for the estimation of Cobb angle in patients with adolescent idiopathic scoliosis
WO2007126667A2 (en) Processing and measuring the spine in radiographs
CN114093462A (en) Medical image processing method, computer device, and storage medium
US20190392552A1 (en) Spine image registration method
EP4156096A1 (en) Method, device and system for automated processing of medical images to output alerts for detected dissimilarities
CN111223158B (en) Artifact correction method for heart coronary image and readable storage medium
CN113962957A (en) Medical image processing method, bone image processing method, device and equipment
Schramm et al. Toward fully automatic object detection and segmentation
CN111091504A (en) Image deviation field correction method, computer device, and storage medium
CN116524158A (en) Interventional navigation method, device, equipment and medium based on image registration
CN115345928A (en) Key point acquisition method, computer equipment and storage medium
Zhang et al. VDVM: An automatic vertebrae detection and vertebral segment matching framework for C-arm X-ray image identification
KR102672531B1 (en) Method and apparatus for automatically estimating spine position in medical images using deep learning-based pose estimation
US11664116B2 (en) Medical image data
CN116728420B (en) Mechanical arm regulation and control method and system for spinal surgery
JP7262203B2 (en) IMAGE PROCESSING DEVICE, CONTROL METHOD AND PROGRAM FOR IMAGE PROCESSING DEVICE

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination