WO2023169108A1 - Target area positioning method, electronic device, and medium - Google Patents

Target area positioning method, electronic device, and medium

Info

Publication number
WO2023169108A1
WO2023169108A1 (PCT/CN2023/074338, CN2023074338W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
coordinate information
nuclear magnetic
coordinate system
skin surface
Prior art date
Application number
PCT/CN2023/074338
Other languages
English (en)
French (fr)
Inventor
林涛
李聚龙
谭智刚
蒲里鹏
伍小兵
Original Assignee
重庆海扶医疗科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 重庆海扶医疗科技股份有限公司 filed Critical 重庆海扶医疗科技股份有限公司
Publication of WO2023169108A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30008Bone
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30Assessment of water resources

Definitions

  • Embodiments of the present disclosure relate to the field of smart medical technology, and in particular to target area positioning methods, electronic devices, and computer-readable storage media.
  • Minimally invasive and non-invasive treatment technology refers to treatment methods that, under image guidance, use minimally invasive techniques such as focused ultrasound, argon-helium cryoablation, catheter intervention, and radiofrequency ablation to precisely destroy and kill tumors.
  • Minimally invasive treatment has become one of the most active and promising technologies in the field of comprehensive tumor treatment owing to its small trauma, precise efficacy, strong specificity, and rapid recovery.
  • However, because the internal tissue of the patient is complex, a high level of skill is required of the doctor to find the patient's lesions. Inexperienced doctors in particular can take a very long time to find a lesion, which keeps many doctors out of the minimally invasive treatment field.
  • Moreover, when a doctor confirms whether the observed region is the patient's lesion, the judgment is mainly subjective, so problems such as an improper treatment position may arise from errors in that judgment. In addition, if the patient's position changes during treatment, the doctor must rely on experience to reposition the patient before continuing, which again takes a long time.
  • Embodiments of the present disclosure provide a method for locating a target area, an electronic device, and a computer-readable storage medium.
  • In a first aspect, embodiments of the present disclosure provide a method for locating a target area, including:
  • collecting, through a camera, a target image including a skin surface area corresponding to a reactive bone, wherein the reactive bone is a bone with target characteristics; identifying the skin surface area from the target image; determining first device coordinate information of the center position of the skin surface area in a device coordinate system;
  • determining, according to the first device coordinate information and predetermined first positional relationship information, second device coordinate information of the center position of the target area including the lesion in the device coordinate system; wherein the first positional relationship information is the positional relationship information between the center position of the skin surface area and the center position of the target area.
  • identifying the skin surface area from the target image includes:
  • the target image after image enhancement processing is input into the trained classification model to obtain the first pixel coordinate information of the skin surface area in the pixel coordinate system.
  • Before inputting the image-enhanced target image into the trained classification model to obtain the first pixel coordinate information of the skin surface area in the pixel coordinate system, the method further includes:
  • collecting, through the camera, sample images including the skin surface area; performing image enhancement processing on the sample images; and obtaining the classification model by performing model training on the image-enhanced sample images.
  • determining the first device coordinate information of the center position of the skin surface area in the device coordinate system includes: determining second pixel coordinate information of the center position of the skin surface area in the pixel coordinate system;
  • determining, according to the second pixel coordinate information and a first conversion relationship, camera coordinate information of the center position of the skin surface area in the camera coordinate system; wherein the first conversion relationship is the conversion relationship between the pixel coordinate system and the camera coordinate system;
  • the first device coordinate information is determined according to the camera coordinate information and a second conversion relationship; wherein the second conversion relationship is a conversion relationship between the camera coordinate system and the device coordinate system.
  • the first positional relationship information is the difference between the first device coordinate information and fourth device coordinate information of the center position of the target area in the device coordinate system.
  • Determining, based on the first device coordinate information and the predetermined first positional relationship information, the second device coordinate information of the center position of the target area including the lesion in the device coordinate system includes:
  • determining the second device coordinate information as the difference between the first device coordinate information and the first positional relationship information.
  • Before collecting, through the camera, the first image including the skin surface area corresponding to the reactive bone, the method further includes:
  • the first positional relationship information is obtained in advance based on the nuclear magnetic image.
  • obtaining the first positional relationship information in advance based on the nuclear magnetic image includes: determining, based on the nuclear magnetic image, first nuclear magnetic coordinate information of the center position of the target area in the nuclear magnetic coordinate system, and second nuclear magnetic coordinate information of the target position of the reactive bone in the nuclear magnetic coordinate system; determining, according to the second nuclear magnetic coordinate information and the nuclear magnetic image, third nuclear magnetic coordinate information of the center position of the skin surface area in the nuclear magnetic coordinate system;
  • determining the first positional relationship information based on the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information.
  • determining the first positional relationship information based on the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information includes:
  • determining the first positional relationship information as the difference between the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information;
  • or, determining the difference between the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information, and determining the first positional relationship information as the product of the difference and a third conversion relationship; wherein the third conversion relationship is the conversion relationship between the nuclear magnetic coordinate system and the device coordinate system.
  • an electronic device including:
  • a memory on which at least one program is stored, which, when executed by the at least one processor, causes the at least one processor to implement any one of the above target area positioning methods.
  • embodiments of the present disclosure provide a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium.
  • when the program is executed by a processor, any one of the above target area positioning methods is implemented.
  • The target area positioning method provided by the embodiments of the present disclosure realizes intelligent identification and intelligent positioning of the patient's lesion location during a surgical operation, improves the positioning accuracy of the patient's lesion location, and requires no pasted markers, reducing the workload of medical staff.
  • Figure 1 is a flow chart of a target area positioning method provided by an embodiment of the present disclosure
  • Figure 2 is a schematic diagram of the conversion between the camera coordinate system and the image physical coordinate system according to an embodiment of the present disclosure
  • FIG. 3 is a block diagram of a target area positioning device provided by another embodiment of the present disclosure.
  • Figure 1 is a flow chart of a target area positioning method provided by an embodiment of the present disclosure.
  • an embodiment of the present disclosure provides a method for locating a target area, including:
  • Step 100: Collect, through a camera, a target image including a skin surface area corresponding to a reactive bone; wherein the reactive bone is a bone with target characteristics.
  • the camera may be any one of a monocular camera, a binocular camera, a multi-camera array, and a 3D structured-light camera.
  • the reactive bone may be a bone whose spatial positional relationship with the human skin surface does not change in a natural state.
  • for example, the reactive bone may be the nasal bridge bone, the sacrococcyx, or the like.
  • the skin surface area corresponding to the reactive bone refers to an area on the human skin surface that is at the same location as the reactive bone but at a different depth.
  • when the reactive bone is the nasal bridge bone, the skin surface area may be the area including the nose; when the reactive bone is the sacrococcyx, the skin surface area may be the sacrococcygeal triangle.
  • Step 101 Identify the skin surface area from the target image.
  • identifying the skin surface area from the target image includes: performing image enhancement processing on the target image; and inputting the image-enhanced target image into a trained classification model to obtain first pixel coordinate information of the skin surface area in a pixel coordinate system.
  • alternatively, identifying the skin surface area from the target image includes: inputting the target image directly into the trained classification model to obtain the first pixel coordinate information of the skin surface area in the pixel coordinate system.
  • the pixel coordinate system is a two-dimensional coordinate system established on the target image.
  • the origin of the pixel coordinate system can be any point on the target image or any point outside it, for example the upper-left corner of the target image; one axis of the pixel coordinate system is parallel to the rows of the target image and the other axis is parallel to its columns; alternatively, one axis is parallel to the columns and the other to the rows.
  • the pixel coordinate information of a certain point on the target image in the pixel coordinate system is discrete, expressed in pixels, and takes only integer values.
  • the target image can be enhanced using methods well known to those skilled in the art.
  • for example, the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm can be used to perform image enhancement processing on the target image.
  • before the image-enhanced target image, or the raw target image, is input into the trained classification model to obtain the first pixel coordinate information of the skin surface area in the pixel coordinate system, the method may further include: collecting, through the camera, sample images including the skin surface area; performing image enhancement processing on the sample images; and performing model training on the image-enhanced sample images to obtain the classification model.
  • alternatively, the method may further include: collecting, through the camera, sample images including the skin surface area; and performing model training directly on the sample images to obtain the classification model.
  • a model well known to those skilled in the art can be used for training to obtain a classification model.
  • a Mask R-CNN neural network model can be used for training to obtain a classification model.
  • the implementation process of the Mask R-CNN neural network model roughly includes: labeling the skin surface area in the sample images, or in the image-enhanced sample images, to generate a mask label data set; filtering and preprocessing the mask label data set, and dividing the filtered, preprocessed data set into data sets of different posture-image combinations; inputting the data sets of different posture images into a pre-trained neural network (such as ResNet) to obtain corresponding body-surface feature maps; obtaining candidate boxes from a region of interest (ROI) for each point in the body-surface feature map; performing binary classification and bounding-box regression (BB regression) on the candidate boxes to filter out the points corresponding to low-score ROIs; performing an ROI Align operation on the remaining points in the candidate boxes; and classifying the points after the ROI Align operation.
  • Step 102 Determine the first device coordinate information of the center position of the skin surface area in the device coordinate system.
  • the device may be any device that performs surgical operations, such as a robotic arm or the like.
  • the device coordinate system is a three-dimensional coordinate system established based on the device.
  • determining the first device coordinate information of the center position of the skin surface area in the device coordinate system includes: determining second pixel coordinate information of the center position of the skin surface area in the pixel coordinate system; determining, according to the second pixel coordinate information and a first conversion relationship, camera coordinate information of the center position of the skin surface area in a camera coordinate system, wherein the first conversion relationship is the conversion relationship between the pixel coordinate system and the camera coordinate system; and determining the first device coordinate information according to the camera coordinate information and a second conversion relationship, wherein the second conversion relationship is the conversion relationship between the camera coordinate system and the device coordinate system.
  • the camera coordinate system is a three-dimensional coordinate system established based on the camera.
  • the first transformation relationship may be represented by a first transformation matrix.
  • the camera coordinate system and the pixel coordinate system are related through the image physical coordinate system, so the first conversion relationship can be obtained from the conversion relationship between the camera coordinate system and the image physical coordinate system together with the conversion relationship between the image physical coordinate system and the pixel coordinate system.
  • the conversion relationship between the camera coordinate system and the image physical coordinate system can be represented by a third transformation matrix, and the conversion relationship between the image physical coordinate system and the pixel coordinate system by a fourth transformation matrix; the first transformation matrix can then be determined from the third and fourth transformation matrices.
  • the image physical coordinate system is a two-dimensional coordinate system established on the image sensor.
  • the origin of the image physical coordinate system is the intersection of the camera optical axis and the imaging plane; one axis of the image physical coordinate system is parallel to the rows of the image sensor and the other axis is parallel to its columns; alternatively, one axis is parallel to the columns and the other to the rows.
  • the image physical coordinate information of a certain point on the image sensor in the image physical coordinate system is discrete and expressed in units of length.
  • the camera coordinate system is a three-dimensional coordinate system and the image physical coordinate system is a two-dimensional coordinate system; therefore, the third transformation matrix is a transformation matrix between a three-dimensional and a two-dimensional coordinate system.
  • Specifically, suppose there is a point P on the skin surface area whose camera coordinate information in the camera coordinate system is (Xc, Yc, Zc). The line OcP connecting the camera optical center Oc with the point P intersects the camera imaging plane at the point p, which is the projection of P onto the imaging plane, as shown in Figure 2.
  • Let the image physical coordinate information of p in the image physical coordinate system be (x, y) and let f be the focal length of the camera. Then, by the principle of similar triangles:

    $$x = f\frac{X_c}{Z_c}, \qquad y = f\frac{Y_c}{Z_c} \tag{1}$$

  • The third transformation matrix can therefore be expressed as:

    $$Z_c\begin{bmatrix}x\\y\\1\end{bmatrix}=\begin{bmatrix}f&0&0&0\\0&f&0&0\\0&0&1&0\end{bmatrix}\begin{bmatrix}X_c\\Y_c\\Z_c\\1\end{bmatrix} \tag{2}$$

  • The fourth transformation matrix is a transformation matrix between two two-dimensional coordinate systems, namely:

    $$\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}\alpha&0&u_0\\0&\beta&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}x\\y\\1\end{bmatrix} \tag{3}$$

  • where α is the number of pixels per unit length in the x direction, β is the number of pixels per unit length in the y direction, (u, v) is the pixel coordinate information of the point p, (x, y) is the image physical coordinate information of the point p, and (u0, v0) is the pixel coordinate information of the origin of the image physical coordinate system in the pixel coordinate system.
  • The first transformation matrix can then be expressed as:

    $$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}\alpha f&0&u_0\\0&\beta f&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}X_c\\Y_c\\Z_c\end{bmatrix}=K\begin{bmatrix}X_c\\Y_c\\Z_c\end{bmatrix} \tag{4}$$

  • where K is the intrinsic parameter matrix of the camera, i.e., the first transformation matrix.
  • determining the camera coordinate information of the center position of the skin surface area in the camera coordinate system according to the second pixel coordinate information and the first transformation relationship includes: determining the camera coordinate information according to formula (4).
  • the second transformation relationship may be represented by a second transformation matrix Le.
  • Both the camera coordinate system and the device coordinate system are three-dimensional coordinate systems; therefore, the second transformation matrix is a transformation matrix between two three-dimensional coordinate systems. At a given moment, the skin surface area differs between the two coordinate systems only in spatial position and orientation, not in shape; the second transformation matrix Le can therefore be represented by a rotation matrix R and a translation matrix T. Specifically, suppose there is a point P on the skin surface area whose camera coordinate information in the camera coordinate system is (Xc, Yc, Zc) and whose device coordinate information in the device coordinate system is (Xe, Ye, Ze); the conversion between the two sets of coordinate information is shown in formula (5):

    $$\begin{bmatrix}X_e\\Y_e\\Z_e\end{bmatrix}=R\begin{bmatrix}X_c\\Y_c\\Z_c\end{bmatrix}+T=L_e\begin{bmatrix}X_c\\Y_c\\Z_c\\1\end{bmatrix},\qquad L_e=\begin{bmatrix}R&T\end{bmatrix} \tag{5}$$

  • where R is a 3×3 matrix, T is the translation vector, and Le is the extrinsic parameter matrix reflecting the pose of the camera in the device coordinate system.
  • determining the first device coordinate information according to the camera coordinate information and the second transformation relationship includes: determining the first device coordinate information according to formula (5).
  • Step 103: Determine, based on the first device coordinate information and the predetermined first positional relationship information, the second device coordinate information of the center position of the target area including the lesion in the device coordinate system; wherein the first positional relationship information is the positional relationship information between the center position of the skin surface area and the center position of the target area.
  • the lesion may be a tumor, such as uterine fibroids, etc.
  • In one case, the first positional relationship information is the difference between the first device coordinate information and fourth device coordinate information of the center position of the target area in the device coordinate system; determining, according to the first device coordinate information and the predetermined first positional relationship information, the second device coordinate information of the center position of the target area including the lesion in the device coordinate system then includes: determining the second device coordinate information as the difference between the first device coordinate information and the first positional relationship information.
  • In another case, the first positional relationship information is the difference between the third nuclear magnetic coordinate information of the center position of the skin surface area in the nuclear magnetic coordinate system and the first nuclear magnetic coordinate information of the center position of the target area in the nuclear magnetic coordinate system.
  • Determining the second device coordinate information in the device coordinate system then includes: determining, according to the third conversion relationship and the difference between the third nuclear magnetic coordinate information and the first nuclear magnetic coordinate information, the difference between the first device coordinate information and the fourth device coordinate information; and determining the second device coordinate information as the first device coordinate information minus that difference.
  • the third conversion relationship is a conversion relationship between two three-dimensional coordinate systems, the nuclear magnetic coordinate system and the device coordinate system.
  • the third conversion relationship can be represented by a fifth transformation matrix, which is similar to the second transformation matrix Le and is not described again here.
  • before collecting, through the camera, the first image including the skin surface area corresponding to the reactive bone, the method further includes: obtaining the first positional relationship information in advance based on the nuclear magnetic image.
  • obtaining the first positional relationship information in advance based on the nuclear magnetic image includes: determining, based on the nuclear magnetic image, the first nuclear magnetic coordinate information of the center position of the target area in the nuclear magnetic coordinate system, and the second nuclear magnetic coordinate information of the target position of the reactive bone in the nuclear magnetic coordinate system;
  • determining, according to the second nuclear magnetic coordinate information and the nuclear magnetic image, the third nuclear magnetic coordinate information of the center position of the skin surface area in the nuclear magnetic coordinate system; and determining the first positional relationship information based on the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information.
  • the target position of the reactive bone may be the sacrococcygeal junction.
  • the center position of the skin surface area is at the same location as the sacrococcygeal junction but at a different depth.
  • determining the first positional relationship information based on the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information includes: determining the first positional relationship information as the difference between the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information; or, determining the difference between the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information, and determining the first positional relationship information as the product of the difference and the third conversion relationship; wherein the third conversion relationship is the conversion relationship between the nuclear magnetic coordinate system and the device coordinate system.
  • The target area positioning method provided by the embodiments of the present disclosure realizes intelligent identification and intelligent positioning of the patient's lesion location during a surgical operation, improves the positioning accuracy of the patient's lesion location, and requires no pasted markers, reducing the workload of medical staff.
  • an electronic device including:
  • the memory stores at least one program.
  • when the at least one program is executed by the at least one processor, the at least one processor implements any of the above target area positioning methods.
  • the processor is a device with data processing capabilities, including but not limited to a central processing unit (CPU), etc.
  • the memory is a device with data storage capabilities, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH).
  • the processor and the memory are connected to each other through a bus and are further connected to other components of the computing device.
  • the electronic device further includes: a camera, configured to collect a target image including a skin surface area corresponding to a reactive bone; wherein the reactive bone is a bone with target characteristics.
  • another embodiment of the present disclosure provides a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium.
  • when the program is executed by a processor, any one of the above target area positioning methods is implemented.
  • FIG. 3 is a block diagram of a target area positioning device provided by another embodiment of the present disclosure.
  • a target area positioning device, including: an acquisition module 301, configured to collect, through a camera, a target image including a skin surface area corresponding to a reactive bone, wherein the reactive bone is a bone with target characteristics;
  • an identification module 302, configured to identify the skin surface area from the target image;
  • a coordinate information determination module 303, configured to determine the first device coordinate information of the center position of the skin surface area in the device coordinate system, and to determine, according to the first device coordinate information and the predetermined first positional relationship information, the second device coordinate information of the center position of the target area including the lesion in the device coordinate system;
  • wherein the first positional relationship information is the positional relationship information between the center position of the skin surface area and the center position of the target area.
  • the identification module 302 is specifically configured to: perform image enhancement processing on the target image; and input the image-enhanced target image into the trained classification model to obtain the first pixel coordinate information of the skin surface area in the pixel coordinate system.
  • the acquisition module 301 is further configured to: collect, through the camera, sample images including the skin surface area; and the identification module 302 is further configured to: perform image enhancement processing on the sample images, and perform model training on the image-enhanced sample images to obtain the classification model.
  • the coordinate information determination module 303 is specifically configured to determine the first device coordinate information of the center position of the skin surface area in the device coordinate system in the following manner: determining the second pixel coordinate information of the center position of the skin surface area in the pixel coordinate system;
  • determining, according to the second pixel coordinate information and the first conversion relationship, the camera coordinate information of the center position of the skin surface area in the camera coordinate system, wherein the first conversion relationship is the conversion relationship between the pixel coordinate system and the camera coordinate system;
  • and determining the first device coordinate information according to the camera coordinate information and the second conversion relationship, wherein the second conversion relationship is the conversion relationship between the camera coordinate system and the device coordinate system.
  • the first positional relationship information is the difference between the first device coordinate information and the fourth device coordinate information of the center position of the target area in the device coordinate system;
  • the coordinate information determination module 303 is specifically configured to determine, based on the first device coordinate information and the predetermined first positional relationship information, the second device coordinate information of the center position of the target area including the lesion in the device coordinate system in the following manner:
  • determining the second device coordinate information as the difference between the first device coordinate information and the first positional relationship information.
  • the acquisition module 301 is further configured to: acquire the first position relationship information in advance based on the nuclear magnetic image.
  • the acquisition module 301 is specifically configured to obtain the first positional relationship information in advance based on the nuclear magnetic image in the following manner: determining, based on the nuclear magnetic image, the first nuclear magnetic coordinate information of the center position of the target area in the nuclear magnetic coordinate system, and the second nuclear magnetic coordinate information of the target position of the reactive bone in the nuclear magnetic coordinate system; determining, according to the second nuclear magnetic coordinate information and the nuclear magnetic image, the third nuclear magnetic coordinate information of the center position of the skin surface area in the nuclear magnetic coordinate system; and determining the first positional relationship information based on the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information.
  • the acquisition module 301 is specifically configured to determine the first positional relationship information based on the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information in the following manner: determining the first positional relationship information as the difference between the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information; or, determining the difference between the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information, and determining the first positional relationship information as the product of the difference and the third conversion relationship, wherein the third conversion relationship is the conversion relationship between the nuclear magnetic coordinate system and the device coordinate system.
  • the specific implementation process of the above target area positioning device is the same as the specific implementation process of the target area positioning method in the previous embodiment, and will not be described again here.
  • Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data.
  • computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage, or any other medium that can be used to store the desired information and can be accessed by a computer.
  • communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
  • Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, it will be apparent to those skilled in the art that, unless expressly stated otherwise, features, characteristics, and/or elements described in connection with a particular embodiment may be used alone or in combination with features, characteristics, and/or elements described in connection with other embodiments. Accordingly, those skilled in the art will understand that various changes in form and detail may be made without departing from the scope of the present disclosure as set forth in the appended claims.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Surgery (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a target area positioning method, an electronic device, and a computer-readable storage medium. The target area positioning method includes: collecting, through a camera, a target image including a skin surface area corresponding to a reactive bone, where the reactive bone is a bone with target characteristics; identifying the skin surface area from the target image; determining first device coordinate information of the center position of the skin surface area in a device coordinate system; and determining, according to the first device coordinate information and predetermined first positional relationship information, second device coordinate information of the center position of a target area including a lesion in the device coordinate system, where the first positional relationship information is the positional relationship information between the center position of the skin surface area and the center position of the target area.

Description

Target area positioning method, electronic device, and medium

Technical Field

Embodiments of the present disclosure relate to the field of smart medical technology, and in particular to a target area positioning method, an electronic device, and a computer-readable storage medium.

Background Art

Minimally invasive and non-invasive treatment technology refers to treatment methods that, under image guidance, use minimally invasive techniques such as focused ultrasound, argon-helium cryoablation, catheter intervention, and radiofrequency ablation to precisely destroy and kill tumors. With the continuous development of medical technology, minimally invasive treatment has become one of the most active and promising technologies in the field of comprehensive tumor treatment owing to its small trauma, precise efficacy, strong specificity, and rapid recovery. During treatment, however, the internal tissue of the patient is complex, so a high level of skill is required of the doctor to find the patient's lesions; inexperienced doctors in particular can take a very long time to find a lesion, which keeps many doctors out of the minimally invasive treatment field. Meanwhile, when a doctor confirms whether the observed region is the patient's lesion, the judgment is mainly subjective, so problems such as an improper treatment position may arise from errors in that judgment. In addition, if the patient's position changes during treatment, the doctor must rely on experience to reposition the patient before continuing, which again takes a long time.
To reduce doctors' workload and the dependence of surgical operations on doctor experience, the patient's lesion position needs to be identified intelligently. Current intelligent identification technology relies on pasted markers, which increase the patient's discomfort; in addition, issues such as the selection of marker material and training on marker placement further increase the workload of medical staff.

Summary

Embodiments of the present disclosure provide a target area positioning method, an electronic device, and a computer-readable storage medium.

In a first aspect, embodiments of the present disclosure provide a target area positioning method, including:

collecting, through a camera, a target image including a skin surface area corresponding to a reactive bone; wherein the reactive bone is a bone with target characteristics;

identifying the skin surface area from the target image;

determining first device coordinate information of the center position of the skin surface area in a device coordinate system;

determining, according to the first device coordinate information and predetermined first positional relationship information, second device coordinate information of the center position of a target area including a lesion in the device coordinate system; wherein the first positional relationship information is the positional relationship information between the center position of the skin surface area and the center position of the target area.
In some exemplary embodiments, identifying the skin surface area from the target image includes:

performing image enhancement processing on the target image;

inputting the image-enhanced target image into a trained classification model to obtain first pixel coordinate information of the skin surface area in a pixel coordinate system.

In some exemplary embodiments, before inputting the image-enhanced target image into the trained classification model to obtain the first pixel coordinate information of the skin surface area in the pixel coordinate system, the method further includes:

collecting, through the camera, sample images including the skin surface area;

performing image enhancement processing on the sample images;

performing model training on the image-enhanced sample images to obtain the classification model.

In some exemplary embodiments, determining the first device coordinate information of the center position of the skin surface area in the device coordinate system includes:

determining second pixel coordinate information of the center position of the skin surface area in the pixel coordinate system;

determining, according to the second pixel coordinate information and a first conversion relationship, camera coordinate information of the center position of the skin surface area in a camera coordinate system; wherein the first conversion relationship is the conversion relationship between the pixel coordinate system and the camera coordinate system;

determining the first device coordinate information according to the camera coordinate information and a second conversion relationship; wherein the second conversion relationship is the conversion relationship between the camera coordinate system and the device coordinate system.

In some exemplary embodiments, the first positional relationship information is the difference between the first device coordinate information and fourth device coordinate information of the center position of the target area in the device coordinate system;

determining, according to the first device coordinate information and the predetermined first positional relationship information, the second device coordinate information of the center position of the target area including the lesion in the device coordinate system includes:

determining the second device coordinate information as the difference between the first device coordinate information and the first positional relationship information.

In some exemplary embodiments, before collecting, through the camera, the first image including the skin surface area corresponding to the reactive bone, the method further includes:

obtaining the first positional relationship information in advance based on a nuclear magnetic image.

In some exemplary embodiments, obtaining the first positional relationship information in advance based on the nuclear magnetic image includes:

determining, based on the nuclear magnetic image, first nuclear magnetic coordinate information of the center position of the target area in a nuclear magnetic coordinate system, and second nuclear magnetic coordinate information of the target position of the reactive bone in the nuclear magnetic coordinate system;

determining, according to the second nuclear magnetic coordinate information and the nuclear magnetic image, third nuclear magnetic coordinate information of the center position of the skin surface area in the nuclear magnetic coordinate system;

determining the first positional relationship information based on the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information.

In some exemplary embodiments, determining the first positional relationship information based on the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information includes:

determining the first positional relationship information as the difference between the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information;

or, determining the difference between the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information, and determining the first positional relationship information as the product of the difference and a third conversion relationship; wherein the third conversion relationship is the conversion relationship between the nuclear magnetic coordinate system and the device coordinate system.

In a second aspect, embodiments of the present disclosure provide an electronic device, including:

at least one processor;

a memory on which at least one program is stored, which, when executed by the at least one processor, causes the at least one processor to implement any one of the above target area positioning methods.

In a third aspect, embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, any one of the above target area positioning methods is implemented.

The target area positioning method provided by the embodiments of the present disclosure realizes intelligent identification and intelligent positioning of the patient's lesion position during a surgical operation and improves the positioning accuracy of the patient's lesion position; moreover, it requires no pasted markers, reducing the workload of medical staff.
Brief Description of the Drawings

The accompanying drawings are provided for a further understanding of the embodiments of the present disclosure and constitute a part of the specification; together with the embodiments of the present disclosure, they serve to explain the present disclosure and do not limit it. Detailed exemplary embodiments are described with reference to the drawings, in which:

Figure 1 is a flowchart of a target area positioning method provided by an embodiment of the present disclosure;

Figure 2 is a schematic diagram of the conversion between the camera coordinate system and the image physical coordinate system according to an embodiment of the present disclosure;

Figure 3 is a block diagram of a target area positioning device provided by another embodiment of the present disclosure.

Detailed Description

To enable those skilled in the art to better understand the technical solutions of the present disclosure, the target area positioning method, the electronic device, and the computer-readable storage medium provided by the present disclosure are described in detail below with reference to the accompanying drawings.

Example embodiments will be described more fully hereinafter with reference to the accompanying drawings, but the example embodiments may be embodied in different forms and should not be construed as limited to those set forth herein. Rather, these embodiments are provided so that the present disclosure will be thorough and complete and will fully convey its scope to those skilled in the art.

The embodiments of the present disclosure and the features in the embodiments may be combined with one another in the absence of conflict.

As used herein, the term "and/or" includes any and all combinations of at least one of the associated listed items.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used herein, the singular forms "a" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will further be understood that the terms "include" and/or "made of", when used in this specification, specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of at least one other feature, integer, step, operation, element, component, and/or group thereof.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will further be understood that terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the related art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Figure 1 is a flowchart of a target area positioning method provided by an embodiment of the present disclosure.

In a first aspect, referring to Figure 1, an embodiment of the present disclosure provides a target area positioning method, including:

Step 100: Collect, through a camera, a target image including a skin surface area corresponding to a reactive bone; wherein the reactive bone is a bone with target characteristics.

In some exemplary embodiments, the camera may be any one of a monocular camera, a binocular camera, a multi-camera array, and a 3D structured-light camera.

In some exemplary embodiments, the reactive bone may be a bone whose spatial positional relationship with the human skin surface does not change in a natural state. For example, the reactive bone may be the nasal bridge bone, the sacrococcyx, or the like.

In some exemplary embodiments, the skin surface area corresponding to the reactive bone refers to an area on the human skin surface that is at the same location as the reactive bone but at a different depth. For example, when the reactive bone is the nasal bridge bone, the skin surface area may be the area including the nose; when the reactive bone is the sacrococcyx, the skin surface area may be the sacrococcygeal triangle.

Step 101: Identify the skin surface area from the target image.

In some exemplary embodiments, identifying the skin surface area from the target image includes: performing image enhancement processing on the target image; and inputting the image-enhanced target image into a trained classification model to obtain first pixel coordinate information of the skin surface area in a pixel coordinate system.

In some exemplary embodiments, identifying the skin surface area from the target image includes: inputting the target image into the trained classification model to obtain the first pixel coordinate information of the skin surface area in the pixel coordinate system.

In some exemplary embodiments, the pixel coordinate system is a two-dimensional coordinate system established on the target image. The origin of the pixel coordinate system may be any point on the target image or any point outside it, for example the upper-left corner of the target image; one axis of the pixel coordinate system is parallel to the rows of the target image and the other axis is parallel to its columns, or vice versa. The pixel coordinate information of a point on the target image in the pixel coordinate system is discrete, expressed in pixels, and takes only integer values.

In some exemplary embodiments, because the skin surface area in the target image is darker than the surrounding areas, image enhancement processing is performed on the target image to increase the image contrast of the skin surface area, prevent the excessive amplification of noise caused by close gray levels, and reduce the influence of lighting conditions on image features. Specifically, the target image may be enhanced in a manner well known to those skilled in the art; for example, the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm may be used to perform image enhancement processing on the target image, as sketched below.
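As a concrete illustration only (the patent provides no code), the CLAHE step might be implemented with OpenCV roughly as follows; the function name and the clipLimit/tileGridSize values are illustrative assumptions rather than details from the disclosure:

```python
# A minimal sketch, assuming OpenCV is available; parameters are illustrative.
import cv2

def enhance_target_image(bgr_image):
    """Apply CLAHE to the luminance channel so that color information is preserved."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # tune per camera
    l_eq = clahe.apply(l)  # local equalization; clipping limits noise amplification
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

enhanced = enhance_target_image(cv2.imread("target.png"))
```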
In some exemplary embodiments, before the image-enhanced target image is input into the trained classification model to obtain the first pixel coordinate information of the skin surface area in the pixel coordinate system, or before the raw target image is input into the trained classification model to obtain the first pixel coordinate information, the method further includes: collecting, through the camera, sample images including the skin surface area; performing image enhancement processing on the sample images; and performing model training on the image-enhanced sample images to obtain the classification model.

In some exemplary embodiments, the method may instead further include: collecting, through the camera, sample images including the skin surface area; and performing model training directly on the sample images to obtain the classification model.

In some exemplary embodiments, a model well known to those skilled in the art may be trained to obtain the classification model; for example, a Mask R-CNN neural network model may be used. Specifically, the implementation process of the Mask R-CNN neural network model roughly includes: labeling the skin surface area in the sample images, or in the image-enhanced sample images, to generate a mask label data set; filtering and preprocessing the mask label data set, and dividing the filtered, preprocessed data set into data sets of different posture-image combinations; inputting the data sets of different posture-image combinations into a pre-trained neural network (such as ResNet) to obtain corresponding body-surface feature maps; obtaining candidate boxes from a region of interest (ROI) for each point of the body-surface feature map; performing binary classification and bounding-box (BB) regression on the candidate boxes to filter out the points corresponding to low-score ROIs; performing an ROI Align operation on the remaining points in the candidate boxes; and classifying the points after the ROI Align operation.
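For illustration, inference with such a classification model might look roughly like the sketch below, which uses the stock Mask R-CNN from torchvision; the checkpoint file name, the two-class setup (background plus skin area), and taking the mask centroid as the center pixel are assumptions made for the sketch, not details stated in the patent:

```python
# A hedged sketch: segment the skin surface area with a fine-tuned Mask R-CNN
# and return the centroid of the best mask as the center pixel coordinate.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2)
model.load_state_dict(torch.load("skin_area_maskrcnn.pt"))  # hypothetical checkpoint
model.eval()

def center_pixel(image_rgb):
    """Return the (u, v) centroid of the highest-scoring skin-area mask."""
    with torch.no_grad():
        out = model([to_tensor(image_rgb)])[0]  # detections, sorted by score
    mask = out["masks"][0, 0] > 0.5             # binarize the best mask
    ys, xs = torch.nonzero(mask, as_tuple=True)
    return xs.float().mean().item(), ys.float().mean().item()
```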
Step 102: Determine the first device coordinate information of the center position of the skin surface area in the device coordinate system.

In some exemplary embodiments, the device may be any device that performs surgical operations, such as a robotic arm.

In some exemplary embodiments, the device coordinate system is a three-dimensional coordinate system established on the basis of the device.

In some exemplary embodiments, determining the first device coordinate information of the center position of the skin surface area in the device coordinate system includes: determining second pixel coordinate information of the center position of the skin surface area in the pixel coordinate system; determining, according to the second pixel coordinate information and a first conversion relationship, camera coordinate information of the center position of the skin surface area in a camera coordinate system, wherein the first conversion relationship is the conversion relationship between the pixel coordinate system and the camera coordinate system; and determining the first device coordinate information according to the camera coordinate information and a second conversion relationship, wherein the second conversion relationship is the conversion relationship between the camera coordinate system and the device coordinate system.

In some exemplary embodiments, the camera coordinate system is a three-dimensional coordinate system established on the basis of the camera.

In some exemplary embodiments, the first conversion relationship may be represented by a first transformation matrix.

In some exemplary embodiments, the camera coordinate system and the pixel coordinate system are related through the image physical coordinate system; the first conversion relationship can be obtained from the conversion relationship between the camera coordinate system and the image physical coordinate system together with the conversion relationship between the image physical coordinate system and the pixel coordinate system.

In some exemplary embodiments, the conversion relationship between the camera coordinate system and the image physical coordinate system can be represented by a third transformation matrix, and the conversion relationship between the image physical coordinate system and the pixel coordinate system can be represented by a fourth transformation matrix; the first transformation matrix can then be determined from the third transformation matrix and the fourth transformation matrix.

In some exemplary embodiments, the image physical coordinate system is a two-dimensional coordinate system established on the image sensor. The origin of the image physical coordinate system is the intersection of the camera optical axis and the imaging plane; one axis of the image physical coordinate system is parallel to the rows of the image sensor and the other axis is parallel to its columns, or vice versa. The image physical coordinate information of a point on the image sensor in the image physical coordinate system is discrete and expressed in units of length.

In some exemplary embodiments, the camera coordinate system is a three-dimensional coordinate system and the image physical coordinate system is a two-dimensional coordinate system; therefore, the third transformation matrix is a transformation matrix between a three-dimensional and a two-dimensional coordinate system. Specifically, suppose there is a point P on the skin surface area whose camera coordinate information in the camera coordinate system is (Xc, Yc, Zc); the line OcP connecting the camera optical center Oc with the point P intersects the camera imaging plane at the point p, i.e., the projection of P onto the imaging plane, as shown in Figure 2. Let the image physical coordinate information of p in the image physical coordinate system be (x, y) and let f be the focal length of the camera; then, by the principle of similar triangles:

$$x = f\frac{X_c}{Z_c}, \qquad y = f\frac{Y_c}{Z_c} \tag{1}$$

The third transformation matrix can therefore be expressed as:

$$Z_c\begin{bmatrix}x\\y\\1\end{bmatrix}=\begin{bmatrix}f&0&0&0\\0&f&0&0\\0&0&1&0\end{bmatrix}\begin{bmatrix}X_c\\Y_c\\Z_c\\1\end{bmatrix} \tag{2}$$

The fourth transformation matrix is a transformation matrix between two two-dimensional coordinate systems, namely:

$$\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}\alpha&0&u_0\\0&\beta&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}x\\y\\1\end{bmatrix} \tag{3}$$

where α is the number of pixels per unit length in the x direction, β is the number of pixels per unit length in the y direction, (u, v) is the pixel coordinate information of the point p, (x, y) is the image physical coordinate information of the point p, and (u0, v0) is the pixel coordinate information of the origin of the image physical coordinate system in the pixel coordinate system.

The first transformation matrix can then be expressed as:

$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}\alpha f&0&u_0\\0&\beta f&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}X_c\\Y_c\\Z_c\end{bmatrix}=K\begin{bmatrix}X_c\\Y_c\\Z_c\end{bmatrix} \tag{4}$$

where K is the intrinsic parameter matrix of the camera, i.e., the first transformation matrix.

In some exemplary embodiments, determining the camera coordinate information of the center position of the skin surface area in the camera coordinate system according to the second pixel coordinate information and the first conversion relationship includes: determining the camera coordinate information according to formula (4).
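As an illustrative sketch only, inverting formula (4) recovers the camera coordinates of the center pixel once a depth value is available; here the depth Zc is assumed to come from the depth-capable camera (e.g. binocular or structured light), and the intrinsic values are placeholders:

```python
# A minimal numpy sketch of formula (4) inverted: Zc*[u, v, 1]^T = K*[Xc, Yc, Zc]^T.
import numpy as np

def pixel_to_camera(u, v, z_c, K):
    """Back-project the pixel (u, v) at depth z_c into the camera frame."""
    return z_c * np.linalg.inv(K) @ np.array([u, v, 1.0])

K = np.array([[1000.0,    0.0, 640.0],   # [alpha*f, 0, u0] -- placeholder values
              [   0.0, 1000.0, 360.0],   # [0, beta*f, v0]
              [   0.0,    0.0,   1.0]])
P_cam = pixel_to_camera(512.0, 300.0, 0.85, K)  # depth in meters, assumed known
```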
In some exemplary embodiments, the second conversion relationship may be represented by a second transformation matrix Le.

In some exemplary embodiments, both the camera coordinate system and the device coordinate system are three-dimensional coordinate systems; therefore, the second transformation matrix is a transformation matrix between two three-dimensional coordinate systems. At a given moment, the skin surface area differs between the two coordinate systems only in spatial position and orientation, not in shape; the second transformation matrix Le can therefore be represented by a rotation matrix R and a translation matrix T. Specifically, suppose there is a point P on the skin surface area whose camera coordinate information in the camera coordinate system is (Xc, Yc, Zc) and whose device coordinate information in the device coordinate system is (Xe, Ye, Ze); the conversion between the two sets of coordinate information is shown in formula (5):

$$\begin{bmatrix}X_e\\Y_e\\Z_e\end{bmatrix}=R\begin{bmatrix}X_c\\Y_c\\Z_c\end{bmatrix}+T=L_e\begin{bmatrix}X_c\\Y_c\\Z_c\\1\end{bmatrix},\qquad L_e=\begin{bmatrix}R&T\end{bmatrix} \tag{5}$$

where R is a 3×3 matrix, T is the translation vector, and Le is the extrinsic parameter matrix reflecting the pose of the camera in the device coordinate system.

In some exemplary embodiments, determining the first device coordinate information according to the camera coordinate information and the second conversion relationship includes: determining the first device coordinate information according to formula (5).
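Continuing the sketch, formula (5) maps the camera-frame point into the device (e.g. robotic-arm) frame; in practice R and T would come from a calibration between camera and device, and the values below are placeholders:

```python
# A minimal sketch of formula (5): [Xe, Ye, Ze]^T = R @ [Xc, Yc, Zc]^T + T.
import numpy as np

def camera_to_device(p_cam, R, T):
    """Apply the extrinsic rotation R and translation T to a camera-frame point."""
    return R @ np.asarray(p_cam) + np.asarray(T)

R = np.eye(3)                       # placeholder rotation (identity)
T = np.array([0.10, -0.05, 0.30])   # placeholder translation, meters
p_dev = camera_to_device([0.02, 0.01, 0.85], R, T)  # first device coordinates
```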
Step 103: Determine, based on the first device coordinate information and the predetermined first positional relationship information, the second device coordinate information of the center position of the target area including the lesion in the device coordinate system; wherein the first positional relationship information is the positional relationship information between the center position of the skin surface area and the center position of the target area.

In some exemplary embodiments, the lesion may be a tumor, such as a uterine fibroid.

In some exemplary embodiments, the first positional relationship information is the difference between the first device coordinate information and fourth device coordinate information of the center position of the target area in the device coordinate system; determining, according to the first device coordinate information and the predetermined first positional relationship information, the second device coordinate information of the center position of the target area including the lesion in the device coordinate system includes: determining the second device coordinate information as the difference between the first device coordinate information and the first positional relationship information.

In some exemplary embodiments, the first positional relationship information is the difference between the third nuclear magnetic coordinate information of the center position of the skin surface area in the nuclear magnetic coordinate system and the first nuclear magnetic coordinate information of the center position of the target area in the nuclear magnetic coordinate system; determining, according to the first device coordinate information and the predetermined first positional relationship information, the second device coordinate information of the center position of the target area including the lesion in the device coordinate system includes: determining, according to the third conversion relationship and the difference between the third nuclear magnetic coordinate information and the first nuclear magnetic coordinate information, the difference between the first device coordinate information and the fourth device coordinate information; and determining the second device coordinate information as the first device coordinate information minus that difference.
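Putting Step 103 into a short sketch (again illustrative, with hypothetical names): the target-area center in the device frame is the skin-area center minus the stored positional relationship; when that relationship was stored as a nuclear-magnetic-coordinate difference, it is first mapped into the device frame with the third conversion matrix:

```python
# A hedged sketch of Step 103. p1_dev: skin-area center (device frame);
# relationship: precomputed offset; M_nm2dev: optional third conversion matrix.
import numpy as np

def locate_target_center(p1_dev, relationship, M_nm2dev=None):
    delta = np.asarray(relationship, dtype=float)
    if M_nm2dev is not None:      # offset was stored in nuclear magnetic coordinates
        delta = M_nm2dev @ delta
    return np.asarray(p1_dev) - delta  # second device coordinate information

p2_dev = locate_target_center([0.42, 0.10, 0.55], [0.03, -0.02, 0.07])
```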
In some exemplary embodiments, the third conversion relationship is the conversion relationship between two three-dimensional coordinate systems, the nuclear magnetic coordinate system and the device coordinate system; the third conversion relationship can be represented by a fifth transformation matrix, which is similar to the second transformation matrix Le and is not described again here.

In some exemplary embodiments, before collecting, through the camera, the first image including the skin surface area corresponding to the reactive bone, the method further includes: obtaining the first positional relationship information in advance based on the nuclear magnetic image.

In some exemplary embodiments, obtaining the first positional relationship information in advance based on the nuclear magnetic image includes: determining, based on the nuclear magnetic image, the first nuclear magnetic coordinate information of the center position of the target area in the nuclear magnetic coordinate system, and the second nuclear magnetic coordinate information of the target position of the reactive bone in the nuclear magnetic coordinate system; determining, according to the second nuclear magnetic coordinate information and the nuclear magnetic image, the third nuclear magnetic coordinate information of the center position of the skin surface area in the nuclear magnetic coordinate system; and determining the first positional relationship information based on the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information.

In some exemplary embodiments, the target position of the reactive bone may be the sacrococcygeal junction.

In some exemplary embodiments, the center position of the skin surface area is at the same location as the sacrococcygeal junction but at a different depth.

In some exemplary embodiments, determining the first positional relationship information based on the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information includes: determining the first positional relationship information as the difference between the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information; or, determining the difference between the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information, and determining the first positional relationship information as the product of the difference and the third conversion relationship; wherein the third conversion relationship is the conversion relationship between the nuclear magnetic coordinate system and the device coordinate system.
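For completeness, the pre-computation of the first positional relationship from the nuclear magnetic image reduces to simple vector arithmetic once the two center coordinates have been read off; a minimal sketch, assuming the coordinates are already extracted (the parameter order reflects one possible sign convention, consistent with the Step 103 sketch above):

```python
# A minimal sketch of the two options for the first positional relationship:
# the raw difference in nuclear magnetic coordinates, or that difference
# pre-multiplied by the third conversion matrix into the device frame.
import numpy as np

def first_position_relationship(nm3_skin_center, nm1_target_center, M_nm2dev=None):
    diff = np.asarray(nm3_skin_center) - np.asarray(nm1_target_center)
    return diff if M_nm2dev is None else M_nm2dev @ diff
```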
The target area positioning method provided by the embodiments of the present disclosure realizes intelligent identification and intelligent positioning of the patient's lesion position during a surgical operation and improves the positioning accuracy of the patient's lesion position; moreover, it requires no pasted markers, reducing the workload of medical staff.

In a second aspect, another embodiment of the present disclosure provides an electronic device, including:

at least one processor;

a memory on which at least one program is stored, which, when executed by the at least one processor, causes the at least one processor to implement any one of the above target area positioning methods.

The processor is a device with data processing capabilities, including but not limited to a central processing unit (CPU); the memory is a device with data storage capabilities, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH).

In some exemplary embodiments, the processor and the memory are connected to each other through a bus and are further connected to other components of the computing device.

In some exemplary embodiments, the electronic device further includes: a camera, configured to collect a target image including a skin surface area corresponding to a reactive bone; wherein the reactive bone is a bone with target characteristics.

In a third aspect, another embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, any one of the above target area positioning methods is implemented.

Figure 3 is a block diagram of a target area positioning device provided by another embodiment of the present disclosure.

In a fourth aspect, another embodiment of the present disclosure provides a target area positioning device, including: an acquisition module 301, configured to collect, through a camera, a target image including a skin surface area corresponding to a reactive bone, wherein the reactive bone is a bone with target characteristics; an identification module 302, configured to identify the skin surface area from the target image; and a coordinate information determination module 303, configured to determine the first device coordinate information of the center position of the skin surface area in the device coordinate system, and to determine, according to the first device coordinate information and the predetermined first positional relationship information, the second device coordinate information of the center position of the target area including the lesion in the device coordinate system; wherein the first positional relationship information is the positional relationship information between the center position of the skin surface area and the center position of the target area.

In some exemplary embodiments, the identification module 302 is specifically configured to: perform image enhancement processing on the target image; and input the image-enhanced target image into the trained classification model to obtain the first pixel coordinate information of the skin surface area in the pixel coordinate system.

In some exemplary embodiments, the acquisition module 301 is further configured to: collect, through the camera, sample images including the skin surface area; and the identification module 302 is further configured to: perform image enhancement processing on the sample images, and perform model training on the image-enhanced sample images to obtain the classification model.

In some exemplary embodiments, the coordinate information determination module 303 is specifically configured to determine the first device coordinate information of the center position of the skin surface area in the device coordinate system in the following manner: determining the second pixel coordinate information of the center position of the skin surface area in the pixel coordinate system; determining, according to the second pixel coordinate information and the first conversion relationship, the camera coordinate information of the center position of the skin surface area in the camera coordinate system, wherein the first conversion relationship is the conversion relationship between the pixel coordinate system and the camera coordinate system; and determining the first device coordinate information according to the camera coordinate information and the second conversion relationship, wherein the second conversion relationship is the conversion relationship between the camera coordinate system and the device coordinate system.

In some exemplary embodiments, the first positional relationship information is the difference between the first device coordinate information and the fourth device coordinate information of the center position of the target area in the device coordinate system; the coordinate information determination module 303 is specifically configured to determine, according to the first device coordinate information and the predetermined first positional relationship information, the second device coordinate information of the center position of the target area including the lesion in the device coordinate system in the following manner: determining the second device coordinate information as the difference between the first device coordinate information and the first positional relationship information.

In some exemplary embodiments, the acquisition module 301 is further configured to: obtain the first positional relationship information in advance based on the nuclear magnetic image.

In some exemplary embodiments, the acquisition module 301 is specifically configured to obtain the first positional relationship information in advance based on the nuclear magnetic image in the following manner: determining, based on the nuclear magnetic image, the first nuclear magnetic coordinate information of the center position of the target area in the nuclear magnetic coordinate system, and the second nuclear magnetic coordinate information of the target position of the reactive bone in the nuclear magnetic coordinate system; determining, according to the second nuclear magnetic coordinate information and the nuclear magnetic image, the third nuclear magnetic coordinate information of the center position of the skin surface area in the nuclear magnetic coordinate system; and determining the first positional relationship information based on the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information.

In some exemplary embodiments, the acquisition module 301 is specifically configured to determine the first positional relationship information based on the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information in the following manner: determining the first positional relationship information as the difference between the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information; or, determining the difference between the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information, and determining the first positional relationship information as the product of the difference and the third conversion relationship, wherein the third conversion relationship is the conversion relationship between the nuclear magnetic coordinate system and the device coordinate system.

The specific implementation process of the above target area positioning device is the same as that of the target area positioning method in the foregoing embodiments and is not repeated here.

Those of ordinary skill in the art can understand that all or some of the steps in the methods disclosed above, and the functional modules/units in the systems and devices, may be implemented as software, firmware, hardware, and appropriate combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor, or a microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage, or any other medium that can be used to store the desired information and can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, it will be apparent to those skilled in the art that, unless expressly stated otherwise, features, characteristics, and/or elements described in connection with a particular embodiment may be used alone or in combination with features, characteristics, and/or elements described in connection with other embodiments. Accordingly, those skilled in the art will understand that various changes in form and detail may be made without departing from the scope of the present disclosure as set forth in the appended claims.

Claims (10)

  1. A target area positioning method, comprising:
    collecting, through a camera, a target image including a skin surface area corresponding to a reactive bone; wherein the reactive bone is a bone with target characteristics;
    identifying the skin surface area from the target image;
    determining first device coordinate information of a center position of the skin surface area in a device coordinate system;
    determining, according to the first device coordinate information and predetermined first positional relationship information, second device coordinate information of a center position of a target area including a lesion in the device coordinate system; wherein the first positional relationship information is positional relationship information between the center position of the skin surface area and the center position of the target area.
  2. The target area positioning method according to claim 1, wherein identifying the skin surface area from the target image comprises:
    performing image enhancement processing on the target image;
    inputting the image-enhanced target image into a trained classification model to obtain first pixel coordinate information of the skin surface area in a pixel coordinate system.
  3. The target area positioning method according to claim 2, wherein before inputting the image-enhanced target image into the trained classification model to obtain the first pixel coordinate information of the skin surface area in the pixel coordinate system, the method further comprises:
    collecting, through the camera, sample images including the skin surface area;
    performing image enhancement processing on the sample images;
    performing model training on the image-enhanced sample images to obtain the classification model.
  4. The target area positioning method according to claim 1, wherein determining the first device coordinate information of the center position of the skin surface area in the device coordinate system comprises:
    determining second pixel coordinate information of the center position of the skin surface area in a pixel coordinate system;
    determining, according to the second pixel coordinate information and a first conversion relationship, camera coordinate information of the center position of the skin surface area in a camera coordinate system; wherein the first conversion relationship is a conversion relationship between the pixel coordinate system and the camera coordinate system;
    determining the first device coordinate information according to the camera coordinate information and a second conversion relationship; wherein the second conversion relationship is a conversion relationship between the camera coordinate system and the device coordinate system.
  5. The target area positioning method according to claim 1, wherein the first positional relationship information is a difference between the first device coordinate information and fourth device coordinate information of the center position of the target area in the device coordinate system;
    determining, according to the first device coordinate information and the predetermined first positional relationship information, the second device coordinate information of the center position of the target area including the lesion in the device coordinate system comprises:
    determining the second device coordinate information as the difference between the first device coordinate information and the first positional relationship information.
  6. The target area positioning method according to any one of claims 1-5, wherein before collecting, through the camera, a first image including the skin surface area corresponding to the reactive bone, the method further comprises:
    obtaining the first positional relationship information in advance based on a nuclear magnetic image.
  7. The target area positioning method according to claim 6, wherein obtaining the first positional relationship information in advance based on the nuclear magnetic image comprises:
    determining, based on the nuclear magnetic image, first nuclear magnetic coordinate information of the center position of the target area in a nuclear magnetic coordinate system, and second nuclear magnetic coordinate information of a target position of the reactive bone in the nuclear magnetic coordinate system;
    determining, according to the second nuclear magnetic coordinate information and the nuclear magnetic image, third nuclear magnetic coordinate information of the center position of the skin surface area in the nuclear magnetic coordinate system;
    determining the first positional relationship information based on the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information.
  8. The target area positioning method according to claim 7, wherein determining the first positional relationship information based on the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information comprises:
    determining the first positional relationship information as a difference between the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information;
    or, determining the difference between the first nuclear magnetic coordinate information and the third nuclear magnetic coordinate information, and determining the first positional relationship information as a product of the difference and a third conversion relationship; wherein the third conversion relationship is a conversion relationship between the nuclear magnetic coordinate system and the device coordinate system.
  9. An electronic device, comprising:
    at least one processor;
    a memory on which at least one program is stored, wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the target area positioning method according to any one of claims 1-8.
  10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the target area positioning method according to any one of claims 1-8.
PCT/CN2023/074338 2022-03-10 2023-02-03 Target area positioning method, electronic device, and medium WO2023169108A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210234627.8 2022-03-10
CN202210234627.8A CN114638798A (zh) 2022-03-10 2022-03-10 Target area positioning method, electronic device, and medium

Publications (1)

Publication Number Publication Date
WO2023169108A1 true WO2023169108A1 (zh) 2023-09-14

Family

ID=81947625

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/074338 WO2023169108A1 (zh) 2022-03-10 2023-02-03 Target area positioning method, electronic device, and medium

Country Status (2)

Country Link
CN (1) CN114638798A (zh)
WO (1) WO2023169108A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114638798A (zh) * 2022-03-10 2022-06-17 重庆海扶医疗科技股份有限公司 目标区域的定位方法、电子设备、介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040143243A1 (en) * 2000-06-28 2004-07-22 Jurgen Wahrburg Apparatus for positioning a surgical instrument
CN112258494A (zh) * 2020-10-30 2021-01-22 北京柏惠维康科技有限公司 A lesion position determination method and apparatus, and an electronic device
CN113041519A (zh) * 2019-12-27 2021-06-29 重庆海扶医疗科技股份有限公司 An intelligent spatial positioning method
CN113274130A (zh) * 2021-05-14 2021-08-20 上海大学 Marker-free surgical registration method for an optical surgical navigation system
CN113397704A (zh) * 2021-05-10 2021-09-17 武汉联影智融医疗科技有限公司 Robot positioning method, apparatus, system, and computer device
CN114638798A (zh) * 2022-03-10 2022-06-17 重庆海扶医疗科技股份有限公司 Target area positioning method, electronic device, and medium

Also Published As

Publication number Publication date
CN114638798A (zh) 2022-06-17

Similar Documents

Publication Publication Date Title
WO2021213508A1 (zh) Capsule endoscope image stitching method, electronic device, and readable storage medium
WO2021017297A1 (zh) Artificial intelligence-based spinal image processing method and related device
JP5797352B1 (ja) Method for tracking a three-dimensional object
Zhang et al. A markerless automatic deformable registration framework for augmented reality navigation of laparoscopy partial nephrectomy
Zhu et al. Automatic segmentation of the left atrium from MR images via variational region growing with a moments-based shape prior
KR20210051141A (ko) 환자의 증강 현실 기반의 의료 정보를 제공하는 방법, 장치 및 컴퓨터 프로그램
EP3788596B1 (en) Lower to higher resolution image fusion
WO2023169108A1 (zh) Target area positioning method, electronic device, and medium
CN110123453B (zh) A surgical navigation system based on markerless augmented reality
Yang et al. Improving catheter segmentation & localization in 3d cardiac ultrasound using direction-fused fcn
CN115590623A (zh) Puncture path planning method and system
Niri et al. Multi-view data augmentation to improve wound segmentation on 3D surface model by deep learning
Barbosa et al. Accurate chronic wound area measurement using structure from motion
KR20210052270A (ko) 환자의 증강 현실 기반의 의료 정보를 제공하는 방법, 장치 및 컴퓨터 프로그램
CN109816665B (zh) A fast segmentation method and apparatus for optical coherence tomography images
Sabri et al. 2d photogrammetry image of scoliosis lenke type classification using deep learning
CN113693739B (zh) Tumor navigation correction method and apparatus, and portable fluorescence image navigation device
CN111743628A (zh) A computer-vision-based path planning method for an automatic puncture robotic arm
CN116385756B (zh) Medical image recognition method based on enhanced annotation and deep learning, and related apparatus
CN114515395B (zh) Binocular-vision-based swallowing detection method and apparatus, device, and storage medium
US11922621B2 (en) Automatic frame selection for 3D model construction
CN114757953B (zh) Medical ultrasound image recognition method, device, and storage medium
Liu et al. CT-ultrasound registration for electromagnetic navigation of cardiac intervention
WO2022120714A1 (zh) Image segmentation method and apparatus, image guidance system, and radiotherapy system
WO2022198866A1 (zh) Image processing method and apparatus, computer device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23765687

Country of ref document: EP

Kind code of ref document: A1