CN117115159A - Bone lesion determination device, electronic device, and storage medium

Bone lesion determination device, electronic device, and storage medium

Info

Publication number
CN117115159A
CN117115159A (application CN202311373832.3A; granted as CN117115159B)
Authority
CN
China
Prior art keywords: target, image, gray, determining, matrix
Prior art date
Legal status
Granted
Application number
CN202311373832.3A
Other languages
Chinese (zh)
Other versions
CN117115159B
Inventor
王静芝
Current Assignee
Beijing Yidian Lingdong Technology Co., Ltd.
Original Assignee
Beijing Yidian Lingdong Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Yidian Lingdong Technology Co., Ltd.
Priority to CN202311373832.3A
Publication of CN117115159A
Application granted
Publication of CN117115159B
Legal status: Active (granted)


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The application discloses a bone lesion determination device, an electronic device, and a storage medium. The bone lesion determination device includes: a first determining unit, configured to determine a target image area in a medical image corresponding to a target bone, where the target image area includes at least an image to be processed corresponding to a target voxel in the medical image; a second determining unit, configured to determine K target feature vectors corresponding to the target voxel according to the image to be processed; and a third determining unit, configured to perform a weighted summation calculation on the K target feature vectors and, when the calculation result is greater than a target threshold, determine that the target bone is diseased at the position corresponding to the target voxel. The application solves the technical problem in the prior art that bone lesions are detected inefficiently when the patient's wound surface is small.

Description

Bone lesion determination device, electronic device, and storage medium
Technical Field
The application relates to the field of medical science and technology, in particular to a bone lesion determination device, electronic equipment and a storage medium.
Background
In robot-assisted joint replacement surgery, the registration process of surgical navigation is important, and mainly comprises the steps of acquiring bone surface points of a patient in the surgical process through an optical positioning and tracking system, and unifying a three-dimensional model generated before surgery with the actual bones of the patient through a registration algorithm.
However, registration accuracy can be greatly affected if bone lesions are present in the acquisition region. For example, during hip replacement surgery, bone surface points can only be acquired near the acetabular fossa due to the small surgical wound area, and if a patient has bone cysts in the acetabular fossa, serious registration deviations can occur, thereby affecting the accuracy of the surgery.
Therefore, in the prior art, it is difficult to effectively identify bone lesion areas during procedures with a small wound surface, such as joint replacement surgery, and bone lesions are detected inefficiently.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The application provides a bone lesion determination device, an electronic device, and a storage medium, which at least solve the technical problem in the prior art of low detection efficiency for bone lesions when the patient's wound surface is small.
According to one aspect of the present application, there is provided a bone lesion determination apparatus comprising: a first determining unit, configured to determine a target image area in a medical image corresponding to a target bone, where the target image area includes at least an image to be processed corresponding to a target voxel in the medical image, and the image to be processed includes at least a coronal image, a cross-sectional image, and a sagittal image that include the target voxel; the second determining unit is used for determining K target feature vectors corresponding to the target voxels according to the image to be processed, wherein K is an integer greater than 1, and the K target feature vectors at least represent angular second-order matrix information, entropy information, inverse differential matrix information and contrast information corresponding to the coronal image, the cross-sectional image and the sagittal image in different angular directions; and the third determining unit is used for carrying out weighted summation calculation on the K target feature vectors and determining that the target skeleton is diseased at the position corresponding to the target voxel when the calculation result is larger than a target threshold value, wherein the target threshold value is used for representing a feature difference critical value between the diseased skeleton and the non-diseased skeleton.
Optionally, the first determining unit includes: the three-dimensional reconstruction subunit is used for carrying out three-dimensional reconstruction on the target skeleton according to the medical image corresponding to the target skeleton to obtain a three-dimensional space model corresponding to the target skeleton; the three-dimensional point cloud determining subunit is used for determining a three-dimensional point cloud corresponding to the target skeleton according to the three-dimensional space model; a first processing subunit, configured to use a voxel in the medical image, where the voxel has a correspondence with a spatial point in the three-dimensional point cloud, as the target voxel; and the second processing subunit is used for taking an image area of a preset volume taking the target voxel as a central point in the medical image as the target image area.
Optionally, the second determining unit includes: the normalization processing subunit is used for carrying out normalization processing on the image to be processed to obtain a normalized image; the gray level compression subunit is used for compressing the gray level number of the normalized image into a preset level number to obtain a target gray level image; a gray matrix generation subunit, configured to generate M gray matrices corresponding to the target gray image, where M is an integer greater than 1, and a target angle corresponding to each gray matrix in the M gray matrices is different, where the target angle corresponding to each gray matrix is used to represent a direction angle according to which the gray matrix determines an adjacent element in the target gray image; and the first determining subunit is used for determining K target feature vectors corresponding to the target voxels according to M gray matrixes corresponding to the target gray images.
Optionally, the first determining subunit includes: the first determining module is used for determining a gray level co-occurrence matrix corresponding to each gray level matrix according to each element in each gray level matrix in the M gray level matrices and the sum of all elements in the gray level matrix; the second determining module is used for determining K feature vectors to be processed corresponding to the gray level matrix according to each element in the gray level co-occurrence matrix corresponding to each gray level matrix, wherein the K feature vectors to be processed corresponding to each gray level matrix represent angular second-order matrix information, entropy information, inverse differential matrix information and contrast information corresponding to the coronal image, the cross-sectional image and the sagittal image in the target angle direction corresponding to the gray level matrix; and the third determining module is used for determining K target feature vectors corresponding to the target voxels according to all the feature vectors to be processed corresponding to the M gray matrixes, the target angle corresponding to each gray matrix and the specific image corresponding to each gray matrix.
Optionally, the normalization processing subunit includes: the acquisition module is used for acquiring the minimum gray value and the maximum gray value in the gray values corresponding to all the pixels contained in the coronal image, the cross-section image and the sagittal image; the calculating module is used for calculating an initial gray value corresponding to each pixel point in the image to be processed; a fourth determining module, configured to determine a target gray value corresponding to the pixel according to the initial gray value, the minimum gray value, and the maximum gray value corresponding to each pixel; and the adjusting module is used for adjusting the initial gray value of each pixel point in the image to be processed to be the target gray value corresponding to the pixel point, so as to obtain the normalized image.
Optionally, the gray matrix generating subunit includes: the target angle acquisition module is used for acquiring preset M target angles; the statistics module is used for counting the occurrence frequency of each adjacent element combination in the target gray level graph under the condition that each target angle is taken as the basis of the adjacent direction; and a fifth determining module, configured to determine, according to the elements related to each adjacent element combination and the occurrence frequencies of each adjacent element combination, one gray matrix corresponding to each of the M target angles by using the target gray image, so as to obtain the M gray matrices.
Optionally, the bone lesion determining device further comprises: an image set obtaining unit, configured to obtain a first image set and a second image set, where the first image set includes a plurality of first medical images corresponding to bones known to have lesions, and the second image set includes a plurality of second medical images corresponding to bones known to have no lesions; a fourth determining unit, configured to determine K first feature vectors corresponding to a target voxel in each first medical image in the first image set according to each first medical image in the first image set; a fifth determining unit, configured to determine K second feature vectors corresponding to a target voxel in each second medical image in the second image set according to each second medical image in the second image set; and the sixth determining unit is used for determining a target threshold according to the K first feature vectors corresponding to the target voxels in each first medical image and the K second feature vectors corresponding to the target voxels in each second medical image.
Optionally, the sixth determining unit includes: the first computing subunit is used for carrying out probability distribution computation on all first feature vectors corresponding to the first image set to obtain a first probability distribution curve; the second calculating subunit is used for carrying out probability distribution calculation on all second feature vectors corresponding to the second image set to obtain a second probability distribution curve; and the target threshold determining subunit is used for determining the target threshold according to the intersection point of the first probability distribution curve and the second probability distribution curve.
According to another aspect of the present application, there is further provided a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and wherein, when the computer program runs, the device on which the computer readable storage medium is located is controlled to control the bone lesion determination device according to any one of the above.
According to another aspect of the present application, there is also provided an electronic device, wherein the electronic device comprises one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to control the determination means of skeletal lesions of any one of the above.
In the application, there is provided a bone lesion determination device comprising: a first determination unit, a second determination unit, and a third determination unit. The first determining unit is used for determining a target image area in a medical image corresponding to a target skeleton, wherein the target image area at least comprises an image to be processed corresponding to a target voxel in the medical image, and the image to be processed at least comprises a coronal image, a cross-sectional image and a sagittal image containing the target voxel; the second determining unit is used for determining K target feature vectors corresponding to the target voxels according to the image to be processed, wherein K is an integer greater than 1, and the K target feature vectors at least represent angular second-order matrix information, entropy information, inverse differential matrix information and contrast information corresponding to the coronal image, the cross-sectional image and the sagittal image in different angular directions; and the third determining unit is used for carrying out weighted summation calculation on the K target feature vectors and determining that the target skeleton is diseased at the position corresponding to the target voxel when the calculation result is larger than a target threshold value, wherein the target threshold value is used for representing a feature difference critical value between the diseased skeleton and the non-diseased skeleton.
From the above, the application uses the medical image corresponding to the target bone to perform feature analysis based on the angular second-order matrix information, entropy information, inverse differential matrix information, and contrast information corresponding to the coronal image, the cross-sectional image, and the sagittal image in different angular directions, and determines, according to the feature analysis result, the probability that the target bone is diseased at the position corresponding to the target voxel. In this way, only the medical image corresponding to the target bone needs to be acquired, so the bone lesion condition can be detected without causing any trauma to the patient; this improves detection efficiency and the surgical success rate, and solves the technical problem in the prior art of low detection efficiency for bone lesions when the patient's wound surface is small.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic illustration of an alternative bone lesion determination device according to an embodiment of the present application;
FIG. 2 is a schematic illustration of the directions corresponding to different alternative target angles according to an embodiment of the present application;
FIG. 3 is a schematic representation of the generation of an alternative gray matrix according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative target threshold according to an embodiment of the application.
Detailed Description
In order that those skilled in the art will better understand the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, related information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, electronic medical record data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party. For example, an interface is provided between the system and the relevant user or institution, before acquiring the relevant information, the system needs to send an acquisition request to the user or institution through the interface, and acquire the relevant information after receiving the consent information fed back by the user or institution.
The application is further illustrated below in conjunction with the examples.
Example 1
Currently, mainstream joint replacement robots all use a three-dimensional reconstruction model as data input. Three-dimensional reconstruction of the bone model takes CT images as data input, and there are two common three-dimensional reconstruction approaches for bone: one is to reconstruct and generate an STL model using traditional medical image processing software; the other is to automatically segment the CT images by a deep learning method to generate a three-dimensional model. However, at present, neither three-dimensional reconstruction method can effectively reconstruct bone lesions within the bone.
On the premise that the three-dimensional reconstruction problem cannot be solved, in order to improve surgical accuracy for the patient, the bone lesion area needs to be accurately identified and displayed in an intuitive manner during the operation for the surgeon's reference; at the same time, the optical positioning and tracking system needs to guide the surgeon to avoid the lesion area, so as to ensure the registration accuracy of the surgical robot.
The registration process of surgical navigation comprises acquiring surface points of the patient's bone during the operation through an optical positioning and tracking system, and unifying the three-dimensional model generated before the operation with the patient's actual bone through a registration algorithm. However, if bone lesions are present in the acquisition area, registration accuracy is greatly affected; common bone lesions include, but are not limited to, hyperosteogeny, osteoporosis, bone cysts, destruction of trabecular structure, and various forms of osteoarthritis. For example, in hip replacement surgery, due to the limited intraoperative incision area, registration points can only be acquired near the acetabular fossa; if a patient has a bone cyst in the acetabular fossa, serious registration deviations can occur, thereby affecting surgical accuracy.
Therefore, in the prior art, it is difficult to effectively identify bone lesion areas during procedures with a small wound surface, such as joint replacement surgery, and bone lesions are detected inefficiently.
In order to solve the above problems, an embodiment of the present application provides a bone lesion determination apparatus. Wherein, fig. 1 is a schematic diagram of an alternative bone lesion determination device according to an embodiment of the present application, and as shown in fig. 1, the bone lesion determination device includes: the first determination unit 101, the second determination unit 102, and the third determination unit 103.
In an alternative embodiment, the first determining unit 101 is configured to determine a target image area in a medical image corresponding to a target bone, where the target image area includes at least an image to be processed corresponding to a target voxel in the medical image, and the image to be processed includes at least a coronal image, a transverse image, and a sagittal image including the target voxel.
Alternatively, the medical image may include, but is not limited to, a multi-modality image such as a CT image, an MR image, etc., and in an embodiment of the present application, the medical image may be a three-dimensional image.
Alternatively, N target voxels may be included in the medical image, where N is an integer greater than 1, and the first determining unit 101 may extract, from the medical image, an image of a preset volume centered on each target voxel as the target image region. For example, with a target voxel as the center point, a target image region Q of size L × L × L is cut out from the medical image; the target image region Q contains the coronal image, the cross-sectional image, and the sagittal image passing through the target voxel, which are denoted S(C), S(S), and S(T) and serve as the image to be processed corresponding to the target voxel.
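As an illustration of the region extraction described above, the following sketch crops the L × L coronal, cross-sectional, and sagittal patches centered on a target voxel from a 3D volume; the array layout, padding strategy, and helper names are assumptions for illustration and are not part of the patent text.

```python
# Illustrative sketch only: extract the coronal, cross-sectional and sagittal
# patches of side length L (L assumed odd) around a target voxel. The (z, y, x)
# array layout is an assumption.
import numpy as np

def extract_orthogonal_patches(volume: np.ndarray, voxel: tuple, L: int):
    """Return the L x L coronal, sagittal and cross-sectional patches centred on `voxel`."""
    z, y, x = voxel
    h = L // 2
    # Pad so that voxels near the border still yield full L x L patches.
    padded = np.pad(volume, h, mode="edge")
    z, y, x = z + h, y + h, x + h
    s_t = padded[z, y - h:y + h + 1, x - h:x + h + 1]   # cross-sectional (axial) patch S(T)
    s_c = padded[z - h:z + h + 1, y, x - h:x + h + 1]   # coronal patch S(C)
    s_s = padded[z - h:z + h + 1, y - h:y + h + 1, x]   # sagittal patch S(S)
    return s_c, s_s, s_t

# Example: a random CT-like volume and a voxel of interest.
volume = np.random.randint(-1000, 2000, size=(64, 64, 64)).astype(np.int16)
S_C, S_S, S_T = extract_orthogonal_patches(volume, (32, 30, 28), L=15)
```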
In an alternative embodiment, the second determining unit 102 is configured to determine K target feature vectors corresponding to the target voxels according to the image to be processed, where K is an integer greater than 1, and the K target feature vectors at least represent angular second-order matrix information, entropy information, inverse differential matrix information, and contrast information corresponding to the coronal image, the cross-sectional image, and the sagittal image in different angular directions.
Optionally, the K target feature vectors include at least a first target feature vector, a second target feature vector, a third target feature vector, and a fourth target feature vector, where the first target feature vector is used to represent angular second-order matrix information corresponding to the coronal image, the cross-sectional image, and the sagittal image in different angular directions; the second target feature vector is used for representing entropy information corresponding to the coronal image, the cross-sectional image and the sagittal image in different angle directions; the third target feature vector is used for representing inverse differential matrix information corresponding to the coronal image, the cross-sectional image and the sagittal image in different angle directions; the fourth target feature vector is used for representing contrast information corresponding to the coronal image, the cross-sectional image and the sagittal image in different angular directions.
It should be noted that the number of angular directions may be set to be plural.
In an alternative embodiment, the third determining unit 103 is configured to perform weighted summation calculation on the K target feature vectors, and determine that the target bone is diseased at a location corresponding to the target voxel when the calculation result is greater than a target threshold, where the target threshold is used to characterize a feature difference threshold between the diseased bone and the non-diseased bone.
Optionally, each of the K target feature vectors corresponds to a weight; a weighted summation calculation is performed on the K target feature vectors, and whether the target bone is diseased at the position corresponding to the target voxel is determined according to the calculation result, as sketched below.
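A minimal sketch of this decision rule follows, assuming the K target feature vectors have already been reduced to K scalar values and that the weights and the target threshold were obtained beforehand (for example by the principal component analysis and threshold determination described later); none of the numeric values below come from the patent.

```python
# Hedged sketch of the third determining unit's decision rule.
import numpy as np

def is_lesion(target_features: np.ndarray, weights: np.ndarray, threshold: float) -> bool:
    """Weighted summation of the K target feature values; lesion if the result exceeds the threshold."""
    score = float(np.dot(weights, target_features))
    return score > threshold

# Example with K = 4 (angular second-order matrix, entropy, inverse differential matrix, contrast).
features = np.array([0.12, 3.4, 0.55, 18.0])      # placeholder feature values
weights = np.array([0.3, 0.25, 0.25, 0.2])        # e.g. obtained via principal component analysis
print(is_lesion(features, weights, threshold=5.0))
```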
From the above, the application uses the medical image corresponding to the target bone to perform feature analysis based on the angular second-order matrix information, entropy information, inverse differential matrix information, and contrast information corresponding to the coronal image, the cross-sectional image, and the sagittal image in different angular directions, and determines, according to the feature analysis result, the probability that the target bone is diseased at the position corresponding to the target voxel. In this way, only the medical image corresponding to the target bone needs to be acquired, so the bone lesion condition can be detected without causing any trauma to the patient; this improves detection efficiency and the surgical success rate, and solves the technical problem in the prior art of low detection efficiency for bone lesions when the patient's wound surface is small.
In an alternative embodiment, the first determining unit 101 comprises: the system comprises a three-dimensional reconstruction subunit, a three-dimensional point cloud determination subunit, a first processing subunit and a second processing subunit.
The three-dimensional reconstruction subunit is used for carrying out three-dimensional reconstruction on the target skeleton according to the medical image corresponding to the target skeleton to obtain a three-dimensional space model corresponding to the target skeleton; the three-dimensional point cloud determining subunit is used for determining a three-dimensional point cloud corresponding to the target skeleton according to the three-dimensional space model; a first processing subunit, configured to use a voxel in the medical image, where the voxel has a correspondence with a spatial point in the three-dimensional point cloud, as the target voxel; and the second processing subunit is used for taking an image area of a preset volume taking the target voxel as a central point in the medical image as the target image area.
Optionally, the three-dimensional reconstruction subunit may use a traditional imaging algorithm such as region growing, or use a deep learning method, and use a medical image corresponding to the target bone as data input to perform three-dimensional reconstruction, so as to obtain a three-dimensional space model corresponding to the target bone.
Alternatively, after the three-dimensional space model corresponding to the target bone is obtained, the three-dimensional point cloud determining subunit may extract a three-dimensional point cloud (may be denoted as Pts) corresponding to the three-dimensional space model as the three-dimensional point cloud corresponding to the target bone.
Alternatively, for each spatial point in the three-dimensional point cloud Pts, the corresponding target voxel in the medical image is identified; the spatial points and the target voxels are in one-to-one correspondence.
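A hedged sketch of this point-to-voxel correspondence follows; it assumes the medical image is described by an origin and a voxel spacing, which is typical for CT/MR volumes but is not spelled out in the patent text, and the axis ordering is an assumption.

```python
# Sketch: map a spatial point from the reconstructed point cloud Pts back to
# its voxel index in the medical image (nearest-voxel rounding).
import numpy as np

def point_to_voxel(point_mm: np.ndarray, origin_mm: np.ndarray, spacing_mm: np.ndarray) -> tuple:
    """Convert a physical-space point (mm) to the nearest voxel index (z, y, x)."""
    index = np.round((point_mm - origin_mm) / spacing_mm).astype(int)
    return tuple(index)

origin = np.array([0.0, -120.0, -250.0])     # assumed image origin in mm
spacing = np.array([1.0, 0.7, 0.7])          # assumed slice thickness and in-plane spacing
print(point_to_voxel(np.array([32.0, -99.0, -229.0]), origin, spacing))  # -> (32, 30, 30)
```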
Optionally, with the target voxel as the center point, a target image region Q of size L × L × L is cut out from the medical image. The target image region Q contains the coronal image, the cross-sectional image, and the sagittal image passing through the target voxel, which are denoted S(C), S(S), and S(T) and serve as the image to be processed corresponding to the target voxel.
In an alternative embodiment, the second determining unit 102 comprises: the device comprises a normalization processing subunit, a gray level compression subunit, a gray level matrix generation subunit and a first determination subunit.
The normalization processing subunit is used for carrying out normalization processing on the image to be processed to obtain a normalized image; the gray level compression subunit is used for compressing the gray level number of the normalized image into a preset level number to obtain a target gray level image; a gray matrix generation subunit, configured to generate M gray matrices corresponding to the target gray image, where M is an integer greater than 1, and a target angle corresponding to each gray matrix in the M gray matrices is different, where the target angle corresponding to each gray matrix is used to represent a direction angle according to which the gray matrix determines an adjacent element in the target gray image; and the first determining subunit is used for determining K target feature vectors corresponding to the target voxels according to M gray matrixes corresponding to the target gray images.
Optionally, the normalization processing subunit includes: the acquisition module is used for acquiring the minimum gray value and the maximum gray value in the gray values corresponding to all the pixels contained in the coronal image, the cross-section image and the sagittal image; the calculating module is used for calculating an initial gray value corresponding to each pixel point in the image to be processed; a fourth determining module, configured to determine a target gray value corresponding to the pixel according to the initial gray value, the minimum gray value, and the maximum gray value corresponding to each pixel; and the adjusting module is used for adjusting the initial gray value of each pixel point in the image to be processed to be the target gray value corresponding to the pixel point, so as to obtain the normalized image.
For example, global normalization processing is performed on the coronal image S (C), the transversal image S (S), and the sagittal image S (T) in the image to be processed, so as to obtain a normalized image corresponding to the image to be processed.
Optionally, the global normalization process specifically includes adjusting the gray value of each pixel of the image to be processed to the target gray value corresponding to that pixel, where the target gray value corresponding to each pixel is calculated by the following formula (1):

g'(p) = (g(p) − g_min) / (g_max − g_min)    (1)

where p denotes a pixel of the image to be processed, g(p) denotes the initial gray value of the pixel p, g'(p) denotes the target gray value of the pixel p, and g_min and g_max respectively denote the minimum gray value and the maximum gray value among the gray values corresponding to all pixels contained in the coronal image, the cross-sectional image, and the sagittal image of the image to be processed.
Alternatively, after obtaining the normalized image, the gray-scale compression subunit may compress the gray-scale number of the normalized image to a preset number of levels to obtain the target gray-scale image, for example, compress the gray-scale number of the normalized image to 64 levels.
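The normalization of formula (1) and the gray-level compression to 64 levels can be sketched as follows; the clipping of the top bin and the small epsilon guard against division by zero are implementation assumptions.

```python
# Sketch of the normalization and gray-level compression steps for the three
# patches S(C), S(S), S(T), using one global minimum/maximum as in formula (1).
import numpy as np

def normalize_and_quantize(s_c, s_s, s_t, levels: int = 64):
    """Globally min-max normalize the three patches and compress them to `levels` gray levels."""
    stack = np.stack([s_c, s_s, s_t]).astype(np.float64)
    g_min, g_max = stack.min(), stack.max()
    normalized = (stack - g_min) / (g_max - g_min + 1e-12)                 # values in [0, 1]
    quantized = np.minimum((normalized * levels).astype(int), levels - 1)  # values in {0, ..., levels - 1}
    return quantized[0], quantized[1], quantized[2]                        # D(C), D(S), D(T)
```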
Alternatively, after the target gray-scale image is obtained, M gray-scale matrices corresponding to the target gray-scale image may be generated by the gray-scale matrix generating subunit, for example, the target gray-scale images corresponding to the coronal image S (C), the transversal image S (S), and the sagittal image S (T) are denoted as D (C), D (S), and D (T), where, assuming that the target angle settings are respectively set to 0 degrees, 45 degrees, 90 degrees, and 135 degrees, the target gray-scale image D (C) corresponds to four gray-scale matrices, which are denoted as R (C1), R (C2), R (C3), and R (C4). Wherein, the target angle corresponding to R (C1) is 0 degree, the target angle corresponding to R (C2) is 45 degrees, the target angle corresponding to R (C3) is 90 degrees, and the target angle corresponding to R (C4) is 135 degrees. Similarly, the four gray matrices corresponding to the target gray images D (S) at the four target angles may be denoted as R (S1), R (S2), R (S3), and R (S4), respectively; the four gray matrices corresponding to the target gray image D (T) at the four target angles may be denoted as R (T1), R (T2), R (T3), and R (T4), respectively.
In an alternative embodiment, the gray matrix generation subunit comprises: the system comprises a target angle acquisition module, a statistics module and a fifth determination module.
The target angle acquisition module is used for acquiring preset M target angles; the statistics module is used for counting the occurrence frequency of each adjacent element combination in the target gray level graph under the condition that each target angle is taken as the basis of the adjacent direction; and a fifth determining module, configured to determine, according to the elements related to each adjacent element combination and the occurrence frequencies of each adjacent element combination, one gray matrix corresponding to each of the M target angles by using the target gray image, so as to obtain the M gray matrices.
Alternatively, the M target angles may be preset multiple angles, for example, 0 degrees, 45 degrees, 90 degrees, 135 degrees. Wherein fig. 2 characterizes the directions corresponding to different target angles.
Alternatively, to understand more intuitively how a gray matrix is generated for a given target angle, take a target angle of 0 degrees as an example in conjunction with FIG. 3. The left-hand image in FIG. 3 is an 8-level gray image, and the right-hand image in FIG. 3 is the gray matrix generated from that gray image when the target angle is 0 degrees. Each pair of adjacent elements in the left-hand image corresponds to a row and a column in the right-hand image, and the value at the intersection of a row and a column in the right-hand image represents the frequency with which the element corresponding to that row and the element corresponding to that column appear as adjacent elements. For example, in the left-hand image, (1, 1) appears as a pair of adjacent elements with a frequency of 1, so in the right-hand image the value at the intersection of row 1 and column 1 is 1; in the left-hand image, (1, 2) appears as a pair of adjacent elements with a frequency of 2, so in the right-hand image the value at the intersection of row 1 and column 2 is 2.
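The gray matrix construction of FIG. 3 can be sketched as follows. The (row, column) offsets chosen for the four target angles follow a common convention and are an assumption, and the toy 8-level image is illustrative rather than the exact image of FIG. 3.

```python
# Sketch: for a chosen target angle, count how often each ordered pair of gray
# levels (i, j) occurs as neighbouring pixels in the target gray image.
import numpy as np

OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}  # assumed offsets per angle

def gray_matrix(image: np.ndarray, angle: int, levels: int) -> np.ndarray:
    """Count co-occurrences of gray levels along the direction given by `angle` degrees."""
    dr, dc = OFFSETS[angle]
    counts = np.zeros((levels, levels), dtype=np.int64)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                counts[image[r, c], image[rr, cc]] += 1
    return counts

# Example with a small 8-level image and a 0-degree target angle.
img = np.array([[0, 1, 2, 3],
                [1, 1, 2, 3],
                [2, 3, 4, 5],
                [4, 5, 6, 7]])
print(gray_matrix(img, angle=0, levels=8))
```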
In an alternative embodiment, the first determining subunit comprises: the device comprises a first determining module, a second determining module and a third determining module.
The first determining module is used for determining a gray level co-occurrence matrix corresponding to the gray level matrix according to each element in each gray level matrix in the M gray level matrices and the sum of all elements in the gray level matrix.
Alternatively, the gray level co-occurrence matrix corresponding to each gray matrix may be calculated by the following formula (2):

P(i, j) = R(i, j) / S    (2)

where R(i, j) is the element in the i-th row and j-th column of the gray matrix, P(i, j) is the element in the i-th row and j-th column of the gray level co-occurrence matrix, and S is the sum of all elements in the gray matrix. It should be noted that each gray matrix and its corresponding gray level co-occurrence matrix have the same dimensions.
Optionally, the second determining module is configured to determine K feature vectors to be processed corresponding to the gray matrix according to each element in the gray level co-occurrence matrix corresponding to each gray matrix, where the K feature vectors to be processed corresponding to each gray matrix represent the angular second-order matrix information, entropy information, inverse differential matrix information, and contrast information corresponding to the coronal image, the cross-sectional image, and the sagittal image in the target angular direction corresponding to that gray matrix.
Optionally, when there are four target angles, the coronal image S(C), the cross-sectional image S(S), and the sagittal image S(T) each correspond to gray matrices in four directions, and since each gray matrix corresponds to one gray level co-occurrence matrix, the coronal image S(C), the cross-sectional image S(S), and the sagittal image S(T) also each correspond to gray level co-occurrence matrices in four directions.
Optionally, the K feature vectors to be processed corresponding to each gray matrix are determined through the gray level co-occurrence matrix corresponding to that gray matrix, where the K feature vectors to be processed include the angular second-order matrix F_ASM, the entropy F_ENT, the inverse differential matrix F_IDM, and the contrast F_CON. With P(i, j) denoting the element in the i-th row and j-th column of the gray level co-occurrence matrix, the calculation formulas are the following formula (3), formula (4), formula (5), and formula (6), i.e., the standard gray level co-occurrence texture definitions:

F_ASM = Σ_i Σ_j P(i, j)²    (3)
F_ENT = −Σ_i Σ_j P(i, j) · ln P(i, j)    (4)
F_IDM = Σ_i Σ_j P(i, j) / (1 + (i − j)²)    (5)
F_CON = Σ_i Σ_j (i − j)² · P(i, j)    (6)
It should be noted that the angular second-order matrix F_ASM corresponding to each gray matrix represents the angular second-order matrix information corresponding to the coronal image, the cross-sectional image, and the sagittal image in the target angle direction corresponding to that gray matrix. The entropy F_ENT corresponding to each gray matrix represents the entropy information corresponding to the coronal image, the cross-sectional image, and the sagittal image in the target angle direction corresponding to that gray matrix. The inverse differential matrix F_IDM corresponding to each gray matrix represents the inverse differential matrix information corresponding to the coronal image, the cross-sectional image, and the sagittal image in the target angle direction corresponding to that gray matrix. The contrast F_CON corresponding to each gray matrix represents the contrast information corresponding to the coronal image, the cross-sectional image, and the sagittal image in the target angle direction corresponding to that gray matrix.
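As an illustration of formula (2) together with the texture features of formulas (3) to (6), the sketch below normalizes one gray matrix into its gray level co-occurrence matrix and derives the four features; it is an assumption-laden example, not the patent's reference implementation.

```python
# Sketch: normalize a gray matrix (co-occurrence counts) into P(i, j) and
# compute ASM, entropy, inverse differential matrix and contrast from it.
import numpy as np

def texture_features(counts: np.ndarray) -> dict:
    """Apply formula (2), then derive F_ASM, F_ENT, F_IDM and F_CON from P."""
    p = counts / counts.sum()                 # formula (2): P(i, j) = R(i, j) / S
    i, j = np.indices(p.shape)
    nonzero = p > 0                           # avoid log(0) in the entropy term
    return {
        "F_ASM": float(np.sum(p ** 2)),
        "F_ENT": float(-np.sum(p[nonzero] * np.log(p[nonzero]))),
        "F_IDM": float(np.sum(p / (1.0 + (i - j) ** 2))),
        "F_CON": float(np.sum(((i - j) ** 2) * p)),
    }
```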
Optionally, the third determining module is configured to determine K target feature vectors corresponding to the target voxels according to all feature vectors to be processed corresponding to the M gray matrices, the target angle corresponding to each gray matrix, and the specific image corresponding to each gray matrix.
Alternatively, in the case of 4 target angles, the coronal image S(C), the cross-sectional image S(S), and the sagittal image S(T) each correspond to gray level co-occurrence matrices in four directions, so the three images correspond to 12 gray level co-occurrence matrices in total. On this basis, the K target feature vectors corresponding to the target voxel are the first target feature vector, the second target feature vector, the third target feature vector, and the fourth target feature vector, whose calculation formulas are formulas (7) to (10). In formulas (7) to (10), one set of indices runs over the coronal image S(C), the cross-sectional image S(S), and the sagittal image S(T), and the other set of indices runs over the four target angles, namely 0 degrees, 45 degrees, 90 degrees, and 135 degrees.
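As a rough illustration of how the K = 4 target feature vectors might be assembled, the sketch below reuses the gray_matrix and texture_features helpers from the previous sketches and simply collects, for each texture feature, its values over the three target gray images at the four target angles; the exact aggregation of formulas (7) to (10) is not reproduced here, and this 3 × 4 layout is an assumption.

```python
# Sketch: build one vector per texture feature by iterating over the three
# target gray images D(C), D(S), D(T) and the four target angles.
def target_feature_vectors(d_c, d_s, d_t, angles=(0, 45, 90, 135), levels=64):
    """Collect each texture feature over the three target gray images and four target angles."""
    vectors = {"F_ASM": [], "F_ENT": [], "F_IDM": [], "F_CON": []}
    for image in (d_c, d_s, d_t):                       # D(C), D(S), D(T)
        for angle in angles:                            # 0, 45, 90, 135 degrees
            # gray_matrix and texture_features are the helpers sketched above
            feats = texture_features(gray_matrix(image, angle, levels))
            for name in vectors:
                vectors[name].append(feats[name])
    return vectors                                      # four 12-element target feature vectors
```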
In an alternative embodiment, the bone lesion determination device further comprises: an image set obtaining unit, configured to obtain a first image set and a second image set, where the first image set includes a plurality of first medical images corresponding to bones known to have lesions, and the second image set includes a plurality of second medical images corresponding to bones known to have no lesions; a fourth determining unit, configured to determine K first feature vectors corresponding to a target voxel in each first medical image in the first image set according to each first medical image in the first image set; a fifth determining unit, configured to determine K second feature vectors corresponding to a target voxel in each second medical image in the second image set according to each second medical image in the second image set; and a sixth determining unit, configured to determine a target threshold according to the K first feature vectors corresponding to the target voxels in each first medical image and the K second feature vectors corresponding to the target voxels in each second medical image, where the target threshold is used to characterize a feature difference critical value between the diseased bone and the non-diseased bone.
Optionally, using the same processing manner as determining the K target feature vectors corresponding to the target voxels in the medical image of the target bone, K first feature vectors corresponding to the target voxels in each first medical image are determined according to that first medical image, and K second feature vectors corresponding to the target voxels in each second medical image are determined according to that second medical image.
In an alternative embodiment, the sixth determining unit comprises: a first calculating subunit, a second calculating subunit, and a target threshold determining subunit.
The first computing subunit is used for carrying out probability distribution computation on all first feature vectors corresponding to the first image set to obtain a first probability distribution curve; the second calculating subunit is used for carrying out probability distribution calculation on all second feature vectors corresponding to the second image set to obtain a second probability distribution curve; and the target threshold determining subunit is used for determining the target threshold according to the intersection point of the first probability distribution curve and the second probability distribution curve.
Optionally, the first calculating subunit may perform probability distribution calculation on all the first feature vectors corresponding to the first image set through a statistical method, and represent the probability distribution calculation result by a first probability distribution curve. Similarly, the second calculating subunit may perform probability distribution calculation on all the second feature vectors corresponding to the second image set by the same statistical method, and represent the probability distribution calculation result by a second probability distribution curve.
It should be noted that, in the process of calculating the probability distribution, the weight corresponding to each first feature vector/each second feature vector may be determined by a principal component analysis method.
Optionally, FIG. 4 is a schematic diagram of an optional target threshold according to an embodiment of the present application. As shown in FIG. 4, curve 1 is the first probability distribution curve, curve 2 is the second probability distribution curve, and the abscissa H of the intersection of curve 1 and curve 2 is the target threshold. The abscissa F in FIG. 4 represents the weighted summation result of all first feature vectors/all second feature vectors corresponding to the same target voxel, and the ordinate P in FIG. 4 represents the probability values obtained by the probability distribution calculation.
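A hedged sketch of this threshold determination follows. Histogram-based density estimation and searching for the crossing point between the two class means are assumptions; the patent only requires a statistical method and the intersection of the two probability distribution curves.

```python
# Sketch: estimate the two probability distribution curves from weighted-sum
# scores of lesioned and healthy samples and take their crossing point as H.
import numpy as np

def target_threshold(lesion_scores: np.ndarray, healthy_scores: np.ndarray, bins: int = 200) -> float:
    """Return the score where the two estimated probability curves intersect."""
    lo = float(min(lesion_scores.min(), healthy_scores.min()))
    hi = float(max(lesion_scores.max(), healthy_scores.max()))
    edges = np.linspace(lo, hi, bins + 1)
    p1, _ = np.histogram(lesion_scores, bins=edges, density=True)   # curve 1 (lesioned bone)
    p2, _ = np.histogram(healthy_scores, bins=edges, density=True)  # curve 2 (healthy bone)
    centers = 0.5 * (edges[:-1] + edges[1:])
    m_lo, m_hi = sorted([lesion_scores.mean(), healthy_scores.mean()])
    between = (centers >= m_lo) & (centers <= m_hi)
    if not between.any():                      # degenerate case: nearly identical classes
        return 0.5 * (m_lo + m_hi)
    idx = np.argmin(np.abs(p1[between] - p2[between]))
    return float(centers[between][idx])

# Example with synthetic weighted-sum scores for the two image sets.
rng = np.random.default_rng(0)
H = target_threshold(rng.normal(8.0, 1.5, 1000), rng.normal(4.0, 1.2, 1000))
print(H)
```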
Optionally, when the calculation result of the weighted sum calculation of the K target feature vectors is greater than a target threshold, determining that the target bone has a lesion at a position corresponding to the target voxel, and when the calculation result of the weighted sum calculation of the K target feature vectors is less than or equal to the target threshold, determining that the target bone has no lesion at the position corresponding to the target voxel.
In an alternative embodiment, the bone lesion determination device may further include a visualization module, where the visualization module is configured to take the bone lesion area determined based on the target feature vectors as the input of an image rendering tool, and to achieve different rendering effects by modifying the pixel attributes of the point set of the three-dimensional space model.
Alternatively, the visualization may be divided into two types. The first is a binary visualization of "bone lesion exists" versus "no bone lesion exists"; for example, the region of the three-dimensional space model corresponding to "bone lesion exists" is rendered yellow, and the region corresponding to "no bone lesion exists" is rendered blue.
The other type uses the probability of a bone lesion as the rendering parameter, so that the display shows a gradual change. For example, according to the probability of a lesion at each position in the target bone, the probability value is directly used as the rendering parameter of that position in the corresponding region of the three-dimensional space model. Since the probability lies in the range [0, 1], the range of the color mapping can be set to [0, 1], where 0 is green and 1 is red, and values between 0 and 1 change gradually from green to red.
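The gradient rendering described above can be sketched as a simple probability-to-color mapping; the linear RGB interpolation below is an assumption, and any rendering tool's own color map could be substituted.

```python
# Sketch: map a lesion probability in [0, 1] to a green-to-red RGB colour that
# can be assigned to the corresponding points of the three-dimensional model.
def probability_to_rgb(p: float) -> tuple:
    """0 -> green (0, 255, 0), 1 -> red (255, 0, 0); values in between fade from green to red."""
    p = min(max(p, 0.0), 1.0)
    return (int(round(255 * p)), int(round(255 * (1.0 - p))), 0)

print(probability_to_rgb(0.0), probability_to_rgb(0.5), probability_to_rgb(1.0))
```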
From the above, the application uses the medical image corresponding to the target bone to perform feature analysis based on the angular second-order matrix information, entropy information, inverse differential matrix information, and contrast information corresponding to the coronal image, the cross-sectional image, and the sagittal image in different angular directions, and determines, according to the feature analysis result, the probability that the target bone is diseased at the position corresponding to the target voxel. In this way, only the medical image corresponding to the target bone needs to be acquired, so the bone lesion condition can be detected without causing any trauma to the patient; this improves detection efficiency and the surgical success rate, and solves the technical problem in the prior art of low detection efficiency for bone lesions when the patient's wound surface is small.
According to another aspect of the present application, there is further provided a computer readable storage medium, where the computer readable storage medium stores a computer program, and where, when the computer program runs, the device on which the computer readable storage medium is located is controlled to control the above-mentioned bone lesion determination device.
According to another aspect of the present application, there is also provided an electronic device, wherein the electronic device comprises one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to control the above-mentioned determination means of skeletal lesions.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or various other media capable of storing program code.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application, which are intended to be comprehended within the scope of the present application.

Claims (10)

1. A bone lesion determination device, comprising:
a first determining unit, configured to determine a target image area in a medical image corresponding to a target bone, where the target image area includes at least an image to be processed corresponding to a target voxel in the medical image, and the image to be processed includes at least a coronal image, a cross-sectional image, and a sagittal image that include the target voxel;
the second determining unit is used for determining K target feature vectors corresponding to the target voxels according to the image to be processed, wherein K is an integer greater than 1, and the K target feature vectors at least represent angular second-order matrix information, entropy information, inverse differential matrix information and contrast information corresponding to the coronal image, the cross-sectional image and the sagittal image in different angular directions;
and the third determining unit is used for carrying out weighted summation calculation on the K target feature vectors and determining that the target skeleton is diseased at the position corresponding to the target voxel when the calculation result is larger than a target threshold value, wherein the target threshold value is used for representing a feature difference critical value between the diseased skeleton and the non-diseased skeleton.
2. The bone lesion determination device according to claim 1, wherein the first determination unit comprises:
the three-dimensional reconstruction subunit is used for carrying out three-dimensional reconstruction on the target skeleton according to the medical image corresponding to the target skeleton to obtain a three-dimensional space model corresponding to the target skeleton;
the three-dimensional point cloud determining subunit is used for determining a three-dimensional point cloud corresponding to the target skeleton according to the three-dimensional space model;
a first processing subunit, configured to use a voxel in the medical image, where the voxel has a correspondence with a spatial point in the three-dimensional point cloud, as the target voxel;
and the second processing subunit is used for taking an image area of a preset volume taking the target voxel as a central point in the medical image as the target image area.
3. The bone lesion determination device according to claim 1, wherein the second determination unit comprises:
a normalization processing subunit, configured to normalize the image to be processed to obtain a normalized image;
a gray level compression subunit, configured to compress the number of gray levels of the normalized image to a preset number of levels to obtain a target gray image;
a gray matrix generation subunit, configured to generate M gray matrices corresponding to the target gray image, where M is an integer greater than 1 and the target angle corresponding to each of the M gray matrices is different, the target angle corresponding to each gray matrix representing the direction angle used by that gray matrix to determine adjacent elements in the target gray image;
and a first determining subunit, configured to determine, according to the M gray matrices corresponding to the target gray image, the K target feature vectors corresponding to the target voxel.
4. A bone lesion determination device as claimed in claim 3, wherein the first determination subunit comprises:
a first determining module, configured to determine a gray level co-occurrence matrix corresponding to each of the M gray matrices according to each element in the gray matrix and the sum of all elements in the gray matrix;
a second determining module, configured to determine, according to each element in the gray level co-occurrence matrix corresponding to each gray matrix, K feature vectors to be processed corresponding to the gray matrix, where the K feature vectors to be processed corresponding to each gray matrix represent angular second moment information, entropy information, inverse difference moment information, and contrast information of the coronal image, the cross-sectional image, and the sagittal image in the target angle direction corresponding to the gray matrix;
and a third determining module, configured to determine the K target feature vectors corresponding to the target voxel according to all the feature vectors to be processed corresponding to the M gray matrices, the target angle corresponding to each gray matrix, and the specific image to which each gray matrix corresponds.
5. A bone lesion determination device according to claim 3, wherein the normalization processing subunit comprises:
an acquisition module, configured to acquire the minimum gray value and the maximum gray value among the gray values of all pixels contained in the coronal image, the cross-sectional image, and the sagittal image;
a calculation module, configured to calculate an initial gray value corresponding to each pixel in the image to be processed;
a fourth determining module, configured to determine a target gray value corresponding to each pixel according to the initial gray value corresponding to the pixel, the minimum gray value, and the maximum gray value;
and an adjustment module, configured to adjust the initial gray value of each pixel in the image to be processed to the target gray value corresponding to the pixel, to obtain the normalized image.
6. A bone lesion determination device according to claim 3, wherein the gray matrix generation subunit comprises:
a target angle acquisition module, configured to acquire M preset target angles;
a statistics module, configured to count, with each target angle taken as the adjacency direction, the frequency of occurrence of each combination of adjacent elements in the target gray image;
and a fifth determining module, configured to determine, according to the elements involved in each combination of adjacent elements and the frequency of occurrence of each combination, one gray matrix of the target gray image for each of the M target angles, so as to obtain the M gray matrices.
7. The bone lesion determination device according to claim 1, wherein the bone lesion determination device further comprises:
an image set obtaining unit, configured to obtain a first image set and a second image set, where the first image set includes a plurality of first medical images corresponding to bones known to have lesions, and the second image set includes a plurality of second medical images corresponding to bones known to have no lesions;
a fourth determining unit, configured to determine, according to each first medical image in the first image set, K first feature vectors corresponding to a target voxel in that first medical image;
a fifth determining unit, configured to determine, according to each second medical image in the second image set, K second feature vectors corresponding to a target voxel in that second medical image;
and a sixth determining unit, configured to determine the target threshold according to the K first feature vectors corresponding to the target voxel in each first medical image and the K second feature vectors corresponding to the target voxel in each second medical image.
8. The bone lesion determination device according to claim 7, wherein the sixth determination unit comprises:
a first calculation subunit, configured to perform a probability distribution calculation on all first feature vectors corresponding to the first image set to obtain a first probability distribution curve;
a second calculation subunit, configured to perform a probability distribution calculation on all second feature vectors corresponding to the second image set to obtain a second probability distribution curve;
and a target threshold determining subunit, configured to determine the target threshold according to the intersection point of the first probability distribution curve and the second probability distribution curve.
9. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and wherein the computer program, when run, controls a device in which the computer-readable storage medium is located to implement the bone lesion determination device according to any one of claims 1 to 8.
10. An electronic device comprising one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to control the bone lesion determination device of any of claims 1-8.
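The sketches below are editorial illustrations only and are not part of the claims or the original disclosure. This first Python/NumPy sketch shows one way the normalization of claim 5 and the gray-level compression of claim 3 could be realized: the minimum and maximum gray values are taken jointly over the coronal, cross-sectional, and sagittal images, every pixel is min-max normalized, and the result is quantized to a preset number of gray levels (16 here, an assumed example).

import numpy as np

def normalize_and_compress(coronal, cross_section, sagittal, levels=16):
    # Min-max normalize the three planar images jointly (claim 5), then
    # compress the gray scale to `levels` values (claim 3). The level count
    # of 16 is illustrative; the patent only requires a preset number.
    planes = [np.asarray(p, dtype=np.float64) for p in (coronal, cross_section, sagittal)]
    g_min = min(p.min() for p in planes)      # minimum gray value over all pixels
    g_max = max(p.max() for p in planes)      # maximum gray value over all pixels
    span = (g_max - g_min) or 1.0             # guard against a constant image
    quantized = []
    for p in planes:
        norm = (p - g_min) / span             # target gray value in [0, 1]
        q = np.minimum((norm * levels).astype(np.int64), levels - 1)
        quantized.append(q)
    return quantized                          # one target gray image per plane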
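Claims 4 and 6 build a gray level co-occurrence matrix for each preset target angle and derive angular second moment, entropy, inverse difference moment, and contrast from it. A minimal sketch follows, assuming the conventional 0°, 45°, 90°, and 135° angle set (the claims only require M preset target angles) and operating on one quantized planar image produced by the previous sketch.

import numpy as np

# Row/column offsets for four common direction angles (an assumed angle set).
ANGLE_OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

def gray_matrix(target_gray_image, angle, levels):
    # Count adjacent-element combinations along one direction angle (claim 6).
    dr, dc = ANGLE_OFFSETS[angle]
    mat = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = target_gray_image.shape
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                mat[target_gray_image[r, c], target_gray_image[rr, cc]] += 1
    return mat

def texture_features(mat):
    # Normalize by the sum of all elements to obtain the gray level
    # co-occurrence matrix, then compute the four texture quantities (claim 4).
    p = mat / max(mat.sum(), 1.0)
    i, j = np.indices(p.shape)
    asm = np.sum(p ** 2)                                  # angular second moment
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))        # entropy
    idm = np.sum(p / (1.0 + (i - j) ** 2))                # inverse difference moment
    contrast = np.sum(((i - j) ** 2) * p)                 # contrast
    return np.array([asm, entropy, idm, contrast])

In the wording of claim 4, the output of texture_features for one gray matrix plays the role of the feature vectors to be processed; collecting these over the three planar images and the M angles yields the K target feature vectors of claim 1.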
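The third determining unit of claim 1 performs a weighted summation over the K target feature vectors and compares the result with the target threshold. The claims do not say how the weights are obtained, so the sketch below simply accepts them as a given vector matching the concatenated feature length (an assumption).

import numpy as np

def is_diseased(target_feature_vectors, weights, target_threshold):
    # Weighted summation over the K target feature vectors (claim 1); the
    # voxel position is reported as diseased when the score exceeds the
    # target threshold. The weight vector is assumed, not prescribed.
    features = np.concatenate([np.ravel(v) for v in target_feature_vectors])
    score = float(np.dot(weights, features))
    return score > target_threshold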
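Claim 8 derives the target threshold from the intersection point of the probability distribution curves obtained from the lesioned (first) and non-lesioned (second) image sets. The claims do not state how the curves are estimated, so this sketch uses density histograms over a shared bin grid, applied to scalar scores (for example the weighted sums of the feature vectors), and returns the first crossing point.

import numpy as np

def target_threshold(lesion_scores, healthy_scores, bins=256):
    # Histogram-based estimate of where the two probability distribution
    # curves intersect (claim 8). Histograms and 256 bins are assumptions.
    lesion_scores = np.asarray(lesion_scores, dtype=np.float64)
    healthy_scores = np.asarray(healthy_scores, dtype=np.float64)
    lo = min(lesion_scores.min(), healthy_scores.min())
    hi = max(lesion_scores.max(), healthy_scores.max())
    edges = np.linspace(lo, hi, bins + 1)
    p_lesion, _ = np.histogram(lesion_scores, bins=edges, density=True)
    p_healthy, _ = np.histogram(healthy_scores, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    crossings = np.where(np.diff(np.sign(p_lesion - p_healthy)) != 0)[0]
    if crossings.size:
        return float(centers[crossings[0]])   # first intersection of the curves
    # Fallback when the estimated curves never cross: midpoint of the means.
    return float(0.5 * (lesion_scores.mean() + healthy_scores.mean()))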
CN202311373832.3A 2023-10-23 2023-10-23 Bone lesion determination device, electronic device, and storage medium Active CN117115159B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311373832.3A CN117115159B (en) 2023-10-23 2023-10-23 Bone lesion determination device, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN117115159A true CN117115159A (en) 2023-11-24
CN117115159B CN117115159B (en) 2024-03-15

Family

ID=88800550

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311373832.3A Active CN117115159B (en) 2023-10-23 2023-10-23 Bone lesion determination device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN117115159B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150063667A1 (en) * 2013-08-29 2015-03-05 General Electric Company Methods and systems for evaluating bone lesions
CN106529188A (en) * 2016-11-25 2017-03-22 苏州国科康成医疗科技有限公司 Image processing method applied to surgical navigation
CN109583444A (en) * 2018-11-22 2019-04-05 博志生物科技有限公司 Hole region localization method, device and computer readable storage medium
CN110348457A (en) * 2019-06-25 2019-10-18 北京邮电大学 A kind of image characteristic extracting method, extraction element, electronic equipment and storage medium
CN112053400A (en) * 2020-09-09 2020-12-08 北京柏惠维康科技有限公司 Data processing method and robot navigation system
CN115115813A (en) * 2022-03-03 2022-09-27 中国人民解放军总医院第四医学中心 Intelligent construction method for standard body position of human skeleton

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
武盼盼: "Research on pulmonary nodule detection based on lung CT images", China Doctoral Dissertations Full-text Database, Medicine and Health Sciences, no. 02, pages 1-123 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117854720A (en) * 2023-12-06 2024-04-09 广州达安临床检验中心有限公司 Autism risk prediction device and computer equipment based on fungus genus characteristic

Also Published As

Publication number Publication date
CN117115159B (en) 2024-03-15

Similar Documents

Publication Publication Date Title
CN110599508B (en) Artificial intelligence-based spine image processing method and related equipment
US20200327721A1 (en) Autonomous level identification of anatomical bony structures on 3d medical imagery
US11380084B2 (en) System and method for surgical guidance and intra-operative pathology through endo-microscopic tissue differentiation
Rouet et al. Genetic algorithms for a robust 3-D MR-CT registration
CN117115159B (en) Bone lesion determination device, electronic device, and storage medium
US20200058098A1 (en) Image processing apparatus, image processing method, and image processing program
CN110033465B (en) Real-time three-dimensional reconstruction method applied to binocular endoscopic medical image
CN101256224B (en) Method and magnetic resonance apparatus for setting a shim to homogenize a magnetic field in the apparatus
CN111772792A (en) Endoscopic surgery navigation method, system and readable storage medium based on augmented reality and deep learning
KR20210051141A (en) Method, apparatus and computer program for providing augmented reality based medical information of patient
JP2023139022A (en) Medical image processing method, medical image processing device, medical image processing system, and medical image processing program
JP6824845B2 (en) Image processing systems, equipment, methods and programs
US10078906B2 (en) Device and method for image registration, and non-transitory recording medium
US20180005378A1 (en) Atlas-Based Determination of Tumor Growth Direction
KR102433473B1 (en) Method, apparatus and computer program for providing augmented reality based medical information of patient
CA2778599C (en) Bone imagery segmentation method and apparatus
CN116322899A (en) Method and system for transducer array placement and skin surface condition avoidance
US11580673B1 (en) Methods, systems, and computer readable media for mask embedding for realistic high-resolution image synthesis
KR20170116971A (en) Method for the multisensory representation of an object and a representation system
CN114365188A (en) Analysis method and product based on VRDS AI inferior vena cava image
US20110115785A1 (en) Image processing apparatus, method, and program
Akter et al. Robust initialisation for single-plane 3D CT to 2D fluoroscopy image registration
CN112862975B (en) Bone data processing method, system, readable storage medium and device
WO2021081850A1 (en) Vrds 4d medical image-based spine disease recognition method, and related devices
CN111613300B (en) Tumor and blood vessel Ai processing method and product based on VRDS 4D medical image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant