CN112258494B - Focal position determination method and device and electronic equipment - Google Patents


Info

Publication number
CN112258494B
Authority
CN
China
Prior art keywords
image
laser
target object
determining
cloud data
Prior art date
Legal status
Active
Application number
CN202011198101.6A
Other languages
Chinese (zh)
Other versions
CN112258494A (en)
Inventor
Inventor not announced (不公告发明人)
Current Assignee
Beijing Baihui Weikang Technology Co Ltd
Original Assignee
Beijing Baihui Weikang Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baihui Weikang Technology Co Ltd filed Critical Beijing Baihui Weikang Technology Co Ltd
Priority to CN202011198101.6A priority Critical patent/CN112258494B/en
Publication of CN112258494A publication Critical patent/CN112258494A/en
Application granted granted Critical
Publication of CN112258494B publication Critical patent/CN112258494B/en

Classifications

    • G06T7/0012 Biomedical image inspection
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Optics & Photonics (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method and an apparatus for determining a focus position, and an electronic device. The method for determining the focus position comprises: collecting an image formed when the surface of a target object is irradiated by laser; generating laser point cloud data of the surface of the target object according to the image; carrying out registration processing on the laser point cloud data and CT image point cloud data of the target object; and determining the position of the focus in an image coordinate system according to the registration result and the position of the focus in the CT image.

Description

Focal position determination method and device and electronic equipment
Technical Field
The present application relates to the field of medical imaging technologies, and in particular, to a method and an apparatus for determining a lesion position, and an electronic device.
Background
With the development of medical technology, it is increasingly common to use surgical robots to assist in performing surgery. Before the surgery is performed, the actual lesion position of the patient and the lesion position in a captured CT image need to be determined. The traditional lesion position determination process is complex and tedious, and is easily disturbed when the patient moves and thereby shifts the position of the positioning marker, so that the determination result is unstable, the accuracy of the determined lesion position is poor, and the therapeutic effect of image-guided surgery is degraded.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a method for determining a lesion position, which can effectively reduce the complexity of a lesion position determination process and improve the accuracy of a lesion position determination result.
In a first aspect, an embodiment of the present application provides a lesion position determination method, including:
acquiring an image formed when the surface of a target object is irradiated by laser, and generating laser point cloud data of the surface of the target object according to the image;
registering the laser point cloud data and the CT image point cloud data of the target object, and determining the position of a focus in an image coordinate system according to a registration result and the focus position in the CT image;
wherein the image coordinate system is a three-dimensional coordinate system determined according to the irradiation source of the laser and the target object.
Optionally, in an embodiment of the present application, the acquiring an image formed when a surface of a target object is irradiated by laser light, and generating laser point cloud data of the surface of the target object according to the image includes:
collecting an image formed when the surface of the target object is irradiated by laser, and generating a binary image according to the image;
determining a connected component in the binary image according to a connected component determination mechanism;
and generating laser point cloud data of the surface of the target object according to the connected components.
Optionally, in an embodiment of the present application, the acquiring an image formed when the surface of the target object is irradiated by laser, and generating a binarized image according to the image includes:
acquiring at least one image of the surface of the target object to generate a background image;
and acquiring an image formed when the surface of the target object is irradiated by laser, and generating the binary image according to the image and the background image.
Optionally, in an embodiment of the present application, the acquiring at least one image of the surface of the target object and generating a background image includes: when the number of the collected images of the surface of the target object is greater than or equal to 2, performing gray averaging on the images of the surface of the target object to generate the background image.
Optionally, in an embodiment of the present application, the generating the binarized image according to the image and the background image includes:
comparing the pixel values of the background image and the other images to obtain a comparison result;
and generating the binary image according to the comparison result and a set pixel threshold value.
Optionally, in an embodiment of the present application, the determining the connected components in the binarized image according to a connected component determining mechanism includes:
constructing a transitional binary image which has the same size with the binary image and has zero pixel values of all pixel points;
determining the position of a pixel point with a pixel value of 1 in the binary image according to a connected component determination mechanism, and setting the pixel value of the pixel point at the corresponding position in the transitional binary image to be 1;
using a communicating structure, taking a pixel corresponding to a pixel point position with a pixel value of 1 in the transition binary image as a starting point, and performing expansion operation on the transition binary image to obtain an expanded image;
and performing intersection processing on the expanded image and the binarized image, and determining a connected component in the binarized image according to an intersection processing result.
Optionally, in an embodiment of the present application, the intersecting the expanded image and the binarized image, and determining a connected component in the binarized image according to a result of the intersecting, includes:
performing intersection processing on the expanded image and the binary image to obtain an intersection result, and updating the transitional binary image according to the intersection result;
performing expansion operation on the updated transitional binary image to obtain an updated expanded image;
performing intersection processing on the updated expanded image and the binary image again until the intersection processing result obtained by the intersection processing is the same as the expanded image subjected to the intersection processing last time;
and determining connected components in the binary image according to the intersection processing result.
Optionally, in an embodiment of the present application, the performing registration processing on the laser point cloud data and the CT image point cloud data of the target object, and determining a position of a lesion in an image coordinate system according to a registration result and a lesion position in the CT image includes:
performing registration processing on the laser point cloud data of the target object and the CT image point cloud data of the target object by adopting an iterative closest point algorithm to obtain a rigid body change matrix of the position relationship between the laser point cloud data and the CT image point cloud data;
and determining the position of the focus in an image coordinate system according to the rigid body change matrix and the focus position in the CT image.
Optionally, in an embodiment of the present application, the generating laser point cloud data of the target object surface according to the connected component includes:
determining a set of laser points irradiated by the laser according to the number of pixels contained in the connected component;
and generating laser point cloud data of the surface of the target object according to the set of the laser points.
Optionally, in an embodiment of the present application, the determining the set of laser points irradiated by the laser according to the number of pixels included in the connected component includes:
determining a region corresponding to a first connected component with the number of pixels in a preset threshold range in the connected components as a bright spot region irradiated by the laser;
determining a laser point irradiated by the laser according to the position of the bright spot area;
carrying out iterative updating on the binary image, and determining the laser point of the binary image after iterative updating;
determining all of the determined laser spots as a set of laser spots to which the laser is irradiated.
Optionally, in an embodiment of the present application, the generating laser point cloud data of the target object surface according to the set of laser points includes:
and generating laser point cloud data of the surface of the target object according to the three-dimensional space position of the laser point in the set of the laser points.
In a second aspect, based on the lesion position determination method according to the first aspect of the embodiments of the present application, an embodiment of the present application further provides a lesion position determination apparatus, including: the system comprises a laser point cloud data generation module and a positioning module;
the laser point cloud data generation module is used for acquiring an image formed when the surface of a target object is irradiated by laser and generating laser point cloud data of the surface of the target object according to the image;
and the positioning module is used for carrying out registration processing on the laser point cloud data and the CT image point cloud data of the target object, and determining the position of the focus in an image coordinate system according to the registration result and the focus position in the CT image.
In a third aspect, based on the lesion position determination method in the first aspect of the present application, the present application further provides an electronic device for lesion position determination, the electronic device including: the system comprises a processor, a memory and a bus, wherein the processor and the memory are communicated with each other through the bus;
the memory is configured to store at least one executable instruction that causes the processor to perform any of the lesion location determination methods according to the first aspect of the present application.
According to the method for determining the position of the focus, the laser point cloud data of the surface of the target object is generated according to the image by collecting the image formed when the surface of the target object is irradiated by laser, the laser point cloud data and the point cloud data of the CT image of the target object are subjected to registration processing, and the position of the focus in an image coordinate system is determined according to the registration result and the position of the focus in the CT image.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
fig. 1 is a flowchart of a lesion location determination method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a lesion position determination apparatus according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device for lesion location determination according to an embodiment of the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the embodiments of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application, and the described embodiments are only a part of the embodiments of the present application, but not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application shall fall within the scope of the embodiments in the present application.
The first embodiment,
A method for determining a lesion position according to an embodiment of the present application is provided, as shown in fig. 1, where fig. 1 is a flowchart of the method for determining a lesion position according to an embodiment of the present application, and the method for determining a lesion position includes:
s101, collecting an image formed when the surface of the target object is irradiated by laser, and generating laser point cloud data of the surface of the target object according to the image.
In one implementation manner of this embodiment, acquiring an image formed when the surface of the target object is irradiated by the laser includes acquiring images with a multi-view camera such as a binocular camera or a trinocular camera. The image acquired by each view forms a separate two-dimensional image of the target surface, so that the laser point cloud data of the target surface generated according to the plurality of images is more accurate.
In this embodiment, the laser irradiation is laser spot irradiation. For example, a point laser generator projects a laser spot onto the surface of the target object, and a bright spot is formed in each image of the multi-view camera. As the laser spot moves, the imaging track of the bright spot is the only object that changes and moves in the images acquired by the multi-view camera, so point cloud data containing more complete and accurate information of the surface of the target object can be generated according to the images acquired while the bright spot moves over the surface of the target object.
Optionally, in an implementation manner of this embodiment, acquiring an image formed when the surface of the target object is irradiated by laser light, and generating laser point cloud data of the surface of the target object according to the image includes:
collecting an image formed when the surface of a target object is irradiated by laser, and generating a binary image according to the image;
determining a connected component in the binary image according to a connected component determination mechanism;
and generating laser point cloud data of the surface of the target object according to the connected components.
Optionally, in an implementation manner of this embodiment, generating a binary image according to an image formed when the surface of the target object is irradiated by the laser includes:
acquiring at least one image of the surface of a target object to generate a background image;
and acquiring other images formed when the surface of the target object is irradiated by the laser, and generating a binary image according to the other images and the background image.
In this embodiment, the other acquired images include any image acquired after the background image is generated.
Optionally, in an implementation manner of this embodiment, acquiring at least one image of a surface of the target object, and generating the background image includes: and when the number of the collected images on the surface of the target object is more than or equal to 2, carrying out gray average processing on the images on the surface of the target object to generate a background image.
Optionally, in an implementation manner of this embodiment, generating a binarized image according to the background image and another image formed when the surface of the target object is irradiated by the laser includes:
and comparing the pixel values of the background image and the other image to obtain a comparison result, and generating a binary image according to the comparison result and a preset pixel threshold value.
The present embodiment illustrates how to generate a background image and a binarized image, taking capture of the target object surface by a trinocular camera as an example:
the laser is projected to the target object through the point laser emitter, if the target object is the face of a patient, at the moment, a bright spot can be formed in an image of the face of the patient, which is shot by the camera, due to the fact that the head of the patient and the camera are fixed in relative spatial positions, the image of the face of the patient is collected through the three-eye camera, the three-eye camera can collect three images of the same target at the same time at three different angles, in the process of moving the point laser emitter, the bright spot can move along with the laser emitter, and therefore the imaging area of the bright spot can move only in the image collected through the multi-eye camera.
Optionally, 5 frames of gray-scale images of the patient's face are acquired by each eye of the trinocular camera, and a background image of the target object can be generated separately for each eye channel. The example here generates a background image from the five gray-scale images acquired by any one eye of the trinocular camera. Denote the 5 frames of gray-scale images of the patient's face acquired by that eye as I_r0, I_r1, I_r2, I_r3, I_r4. The average gray values of the five frames are synthesized into one image used as the background image I_b0, that is, for each pixel of the background image I_b0:

I_b0(x, y) = (I_r0(x, y) + I_r1(x, y) + I_r2(x, y) + I_r3(x, y) + I_r4(x, y)) / 5

where x is the abscissa and y is the ordinate of the gray-scale image. At least 1 image is selected when generating the background image, and 3-5 frames are preferable. In a specific implementation, the more images are selected, the more complete and accurate the information contained in the generated laser point cloud data, which eliminates the influence of an inaccurate background image on locating the lesion position; however, selecting more images also slows down the generation of the laser point cloud data, so 3-5 frames are preferable, and the specific number can be set as required.
In this example, the collected gray-scale images of the patient's face may or may not contain the imaged bright spot of the laser projection. Preferably, when the background image is generated, images that do not contain the laser bright spot are selected, so as to improve the accuracy of the generated background image.
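As an illustrative sketch only (not taken from the patent), the gray-average background generation above could look like the following, assuming NumPy is available and that the five frames of one eye are given as equally sized arrays; the function name is hypothetical.

```python
# Minimal sketch of synthesizing the background image I_b0 as the pixel-wise
# average of the five frames I_r0..I_r4 of one camera eye.
import numpy as np


def make_background(frames):
    """frames: list of 2-D grayscale images (e.g. the 5 frames of one eye).
    Returns the pixel-wise mean image used as the background I_b0."""
    stack = np.stack([f.astype(np.float32) for f in frames])  # shape (5, H, W)
    return stack.mean(axis=0)  # I_b0(x, y) = (1/5) * sum_i I_ri(x, y)
```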
Let I_c denote an image of the surface of the target object irradiated by the laser that is newly acquired by the trinocular camera. Using the background image I_b0 determined for that eye and the newly acquired image I_c, a binarized image I_binary is calculated, and the pixel values of the pixel points in the binarized image are:

I_binary(x, y) = 1 if |I_c(x, y) - I_b0(x, y)| > threshold, otherwise I_binary(x, y) = 0
where threshold is a set gray-value threshold; for example, it may be 4 or another value, and the user may set it according to experience, which is not limited in this embodiment. The resulting binarized image therefore contains only the imaged bright spot formed by the point-laser projection.
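A corresponding sketch of the binarization step, again only illustrative and assuming NumPy, with a hypothetical function name:

```python
# Illustrative binarization of a newly acquired frame I_c against the
# background I_b0 with a set gray-value threshold (e.g. 4), per the formula above.
import numpy as np


def binarize(current, background, threshold=4):
    """Returns I_binary: 1 where |I_c - I_b0| > threshold, else 0."""
    diff = np.abs(current.astype(np.float32) - background.astype(np.float32))
    return (diff > threshold).astype(np.uint8)
```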
The present embodiment illustrates the manner of generating the background image and the binary image by the above specific implementation example, and does not represent that the present application is limited thereto.
Optionally, in an implementation manner of this embodiment, determining a connected component in the binarized image according to a connected component determining mechanism includes:
constructing a transitional binary image which has the same size as the binary image and has zero pixel values of all pixel points;
determining the pixel point position with the pixel value of 1 in the binary image according to a connected component determination mechanism, and setting the pixel value of the pixel point at the corresponding position in the transitional binary image to be 1;
and (3) using a communication structure, taking a pixel corresponding to the pixel point position with the pixel value of 1 in the transition binary image as a starting point, and performing expansion operation on the transition binary image to obtain an expanded image.
And performing intersection processing on the expanded image and the binary image, and determining a connected component in the binary image according to an intersection processing result.
Optionally, in an implementation manner of this embodiment, performing intersection processing on the expanded image and the binarized image, and determining a connected component in the binarized image according to a result of the intersection processing includes:
performing intersection processing on the expanded image and the binary image to obtain an intersection result, and updating the transitional binary image according to the intersection result;
performing expansion operation on the updated transitional binary image to obtain an updated expanded image;
and performing intersection processing on the updated expanded image and the binary image again until the intersection processing result obtained by the intersection processing at this time is the same as the intersection processing result obtained by participating in the intersection processing at the last time, stopping performing the intersection processing, and determining the connected component in the binary image according to the intersection processing result at this time.
In this way, all connected components meeting the requirements in the binarized image can be determined, ensuring the accuracy and completeness of the image processing result.
In this embodiment, taking an image of the surface of the target object acquired by any one eye of the trinocular camera as an example, a specific implementation of determining connected components in the binarized image according to a connected component determination mechanism is described below:

traversing the binarized image I_binary, for example by raster scanning, to find the first pixel point in I_binary whose pixel value is 1, and newly building a transitional binary image X_0; the size of the transitional binary image X_0 is the same as that of the binarized image I_binary, that is, the two images have pixel points at corresponding coordinate positions, and the pixel values of all pixel points of the transitional binary image X_0 are 0;

determining, according to the connected component determination mechanism, the position of a pixel point with pixel value 1 in the binarized image I_binary, and setting the pixel value of the pixel point at the corresponding position in the transitional binary image X_0 to 1;

using the connected structure, taking the pixel point in the transitional binary image X_0 corresponding to the position of the first pixel point with value 1 in the binarized image I_binary as a starting point, and using a 4-connected structuring element B (the pixel area occupied by the imaged bright spot in the acquired image of the target object surface, which by default has a 4-connected structure), an expansion operation (denoted by ⊕) is performed on the image X_k, which is then intersected with the binarized image I_binary, iterating with the following formula:

X_k = (X_{k-1} ⊕ B) ∩ I_binary,  k = 1, 2, 3, ...

when X_k = X_{k-1}, the iteration ends, and X_k then contains one connected component of the binarized image I_binary; the coordinates of the pixel points of the connected component (corresponding to the edge pixel points of the bright spot imaging area) are recorded, the pixel values of the pixel points of I_binary corresponding to this connected component are set to 0, and scanning continues to find a new connected component, until the last pixel point of the binarized image I_binary has been scanned and all connected components in I_binary have been determined.
In the example listed above, the imaging area formed by the moving track of the bright spot produced by the laser irradiation can be detected from the images collected while the spot moves over the target object surface.
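The dilation-and-intersection iteration described above is a form of morphological reconstruction; a minimal sketch, assuming NumPy and SciPy and using illustrative function names (not the patented implementation), is:

```python
# Connected components by iterative dilation + intersection,
# X_k = (X_{k-1} dilated by B) AND I_binary, until X_k == X_{k-1}.
import numpy as np
from scipy import ndimage


def extract_connected_components(binary_img):
    structure = np.array([[0, 1, 0],
                          [1, 1, 1],
                          [0, 1, 0]], dtype=bool)   # 4-connected structuring element B
    remaining = binary_img.astype(bool).copy()
    components = []
    while remaining.any():
        seed = np.argwhere(remaining)[0]             # first 1-pixel in raster-scan order
        x_prev = np.zeros_like(remaining)
        x_prev[tuple(seed)] = True                   # transitional binary image X_0
        while True:
            x_next = ndimage.binary_dilation(x_prev, structure=structure) & remaining
            if np.array_equal(x_next, x_prev):       # X_k == X_{k-1}: iteration ends
                break
            x_prev = x_next
        components.append(np.argwhere(x_prev))       # pixel coordinates of this component
        remaining &= ~x_prev                          # zero the found component, keep scanning
    return components
```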
Optionally, in an implementation manner of this embodiment, generating laser point cloud data of a target object surface according to the connected components includes:
determining a set of laser points irradiated by the laser according to the number of pixels contained in the connected components;
and generating laser point cloud data of the surface of the target object according to the set of the laser points.
Optionally, in an implementation manner of this embodiment, determining a set of laser points irradiated by laser according to the number of pixels included in the connected component includes:
determining an area corresponding to a first connected component with the number of pixels in a set threshold range in the determined connected components as a bright spot area irradiated by laser;
determining laser points irradiated by the laser according to the position of the bright spot area;
carrying out iterative updating on the binary image, and determining the laser points of the binary image after iterative updating;
all the determined laser points are determined as a set of laser points to be irradiated by the laser.
Optionally, in an implementation manner of this embodiment, generating laser point cloud data of a target object surface according to a set of laser points includes:
and generating laser point cloud data of the surface of the target object according to the three-dimensional space position of the laser points in the set of the laser points.
In this embodiment, still taking the images collected by the trinocular camera as an example, a specific implementation of generating laser point cloud data of the target object surface according to the connected components is described:
counting the number of pixels in each connected component determined in the binarized image, selecting the first connected component containing 20-100 pixels as the determined laser bright spot area, and determining the edge pixels of that connected component, where an edge pixel I(x, y) is defined by:

I(x-1, y) = 0, I(x, y) = 1  or  I(x, y) = 1, I(x+1, y) = 0
fitting an ellipse to the determined coordinate positions of the edge pixels, for example by a least-squares fit of the general conic equation

a·x² + b·x·y + c·y² + d·x + e·y + f = 0,

wherein the center of the fitted elliptical area is taken as the imaging pixel point of the laser point in the newly acquired image I_c. Of course, in the present embodiment, the position of the imaging pixel point of the laser point in the newly acquired image I_c of the target object surface may also be determined in other ways, which is not limited herein.
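A hedged sketch of locating the spot center from one connected component follows; it assumes OpenCV and NumPy, substitutes cv2.fitEllipse for the fitting formula above, and uses illustrative names throughout.

```python
# Locate the laser spot center of a connected component whose pixel count lies
# in the 20-100 range; falls back to the centroid when an ellipse cannot be fitted.
import cv2
import numpy as np


def spot_center(component_pixels, min_px=20, max_px=100):
    """component_pixels: (N, 2) array of (row, col) coordinates of one component.
    Returns the (x, y) spot center, or None if the component size is out of range."""
    if not (min_px <= len(component_pixels) <= max_px):
        return None
    pts = component_pixels[:, ::-1].astype(np.float32)   # (row, col) -> (x, y)
    if len(pts) >= 5:                                     # fitEllipse needs >= 5 points
        (cx, cy), _axes, _angle = cv2.fitEllipse(pts)
        return cx, cy
    return tuple(pts.mean(axis=0))                        # centroid fallback
```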
Taking a multi-view camera as an example, this embodiment illustrates how the set of laser points formed by the laser irradiating the target object is determined by iteratively updating the binarized image and determining the laser point of each iteratively updated binarized image:
tracking the determined laser point using a camera, such as a trinocular camera, and finding the three-dimensional coordinates of the laser point in space using triangulation, wherein the triangulation method is as follows:
In practice, when the surface of the target object is captured by the trinocular camera, the current laser point may be seen by one or more eyes. If any two eyes can see the laser point, the three-dimensional coordinates of the laser point are calculated according to the laser point positions determined in the images respectively acquired by those two eyes; if all three eyes can see the laser point, the laser point positions determined in the images acquired by the left eye and the right eye are selected to calculate the three-dimensional coordinates of the laser point; and if no eye, or only one eye, can see the current laser point, the images currently acquired by the three eyes are skipped.
Let the projection matrices of the two selected eyes be P_l and P_r, let the three-dimensional homogeneous coordinate of the laser point in space be X, and let the homogeneous coordinates of the laser point in the two images be x_l and x_r. Then the mapping relations x_l = P_l·X and x_r = P_r·X hold, from which one can deduce x_l × P_l·X = 0 and x_r × P_r·X = 0. The two equations above can be arranged in the form A·X = 0, where

A = [ x_l·P_l^(3T) - P_l^(1T);  y_l·P_l^(3T) - P_l^(2T);  x_r·P_r^(3T) - P_r^(1T);  y_r·P_r^(3T) - P_r^(2T) ],

P^(iT) denotes the i-th row of P, and (x_l, y_l) and (x_r, y_r) are the pixel coordinates of the laser point in the two views. Solving A·X = 0 for X by the least square method gives the three-dimensional coordinates of the laser point in space.
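The triangulation above is the standard linear (DLT) construction; a minimal sketch, assuming NumPy and 3x4 projection matrices, is given below. It is an illustration of the equations, not the patented code.

```python
# Triangulate the 3-D laser point from its pixel coordinates in two views.
import numpy as np


def triangulate(P_l, P_r, pt_l, pt_r):
    """P_l, P_r: 3x4 projection matrices of the two selected eyes.
    pt_l, pt_r: (x, y) pixel coordinates of the laser point in each view.
    Returns the 3-D point in inhomogeneous coordinates."""
    xl, yl = pt_l
    xr, yr = pt_r
    A = np.vstack([
        xl * P_l[2] - P_l[0],   # x_l * P_l^(3T) - P_l^(1T)
        yl * P_l[2] - P_l[1],   # y_l * P_l^(3T) - P_l^(2T)
        xr * P_r[2] - P_r[0],
        yr * P_r[2] - P_r[1],
    ])
    # Least-squares solution of A X = 0: right singular vector of the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```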
The background image is then updated as I_r0 = I_r1, I_r1 = I_r2, I_r2 = I_r3, I_r3 = I_r4, I_r4 = I_c and a new background image I_b0 is synthesized; then another image I_c formed when the surface of the target object is irradiated by the laser is acquired again, and the position of a new laser point is determined;
and repeating the method for determining the position of the laser point until a sufficient set of laser points is collected, and generating laser point cloud data of the target object according to the positions of the laser points contained in the set of laser points.
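The overall acquisition loop (rolling background update, spot detection, triangulation, point accumulation) could be sketched as follows; grab_frame, detect_spot and triangulate_pair are hypothetical placeholders standing in for the steps described above, not APIs defined in the patent.

```python
# Illustrative acquisition loop for building the laser point cloud.
from collections import deque
import numpy as np


def collect_point_cloud(grab_frame, detect_spot, triangulate_pair, n_points=2000):
    frames = deque([grab_frame() for _ in range(5)], maxlen=5)   # I_r0 ... I_r4
    cloud = []
    while len(cloud) < n_points:
        background = np.mean(np.stack(list(frames)), axis=0)     # new background I_b0
        current = grab_frame()                                   # newly acquired I_c
        spot = detect_spot(current, background)                  # binarize + connected components
        if spot is not None:
            cloud.append(triangulate_pair(spot))                 # 3-D position via triangulation
        frames.append(current)   # rolling update: I_r0=I_r1, ..., I_r4=I_c
    return np.asarray(cloud)                                     # laser point cloud data
```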
S102, carrying out registration processing on the laser point cloud data and the CT image point cloud data of the target object, and determining the position of a focus in an image coordinate system according to a registration result and the focus position in the CT image;
wherein the image coordinate system is a three-dimensional coordinate system determined according to the irradiation source of the laser and the target object.
Optionally, in an implementation manner of this embodiment, performing registration processing on the laser point cloud data and the CT image point cloud data of the target object, and determining a position of a lesion in an image coordinate system according to a registration result and a lesion position in the CT image includes:
registering the laser point cloud data of the target object and the CT image point cloud data of the target object by an iterative closest point algorithm to obtain a rigid body change matrix of the position relationship between the laser point cloud data of the target object and the CT image point cloud data of the target object;
and determining the position of the focus in the image coordinate system according to the determined rigid body change matrix and the position of the focus in the CT image.
This embodiment exemplifies a specific implementation manner, that is, performing registration processing on the laser point cloud data and the CT image point cloud data of the target object, and determining the position of the lesion in the image coordinate system according to the registration result and the lesion position in the CT image:
Let a point in the laser point cloud data of the target object be s_i = (s_ix, s_iy, s_iz, 1), let the point in the CT image point cloud data of the target object nearest to s_i be c_i = (c_ix, c_iy, c_iz, 1), and let the unit normal direction at that point be n_i = (n_ix, n_iy, n_iz, 0). The rigid body transformation between the laser point cloud data of the target object and the CT image point cloud data is denoted by M, and the optimal M_opt is obtained in each iteration, where

M_opt = argmin_M Σ_i ((M·s_i - c_i) · n_i)²
When the difference between the M obtained in two successive iterations is less than a certain threshold, the iteration terminates, so that a rigid body transformation matrix from the laser point cloud data of the target object to the CT image point cloud data is determined. The position of the target object in the image coordinate system (a three-dimensional coordinate system constructed according to the multi-view camera, such as the trinocular camera, and the positional relationship between the multi-view camera and the target object) is determined according to the rigid body transformation matrix, and the registration is completed; the position of the focus in the image coordinate system is then determined according to the focus position in the CT image.
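As an illustrative sketch (not the patent's solver), a single linearized point-to-plane ICP update corresponding to the objective above could look like this; it assumes NumPy and that the nearest points c_i and unit normals n_i on the CT point cloud have already been found, e.g. with a KD-tree. The lesion position in the image coordinate system can then be obtained by applying the final M to the homogeneous lesion coordinates taken from the CT image.

```python
# One linearized point-to-plane ICP step: minimize sum(((M s_i - c_i) . n_i)^2)
# with a small-angle approximation of the rotation.
import numpy as np


def icp_point_to_plane_step(src, dst, normals):
    """src: (N, 3) laser points s_i; dst: (N, 3) nearest CT points c_i;
    normals: (N, 3) unit normals n_i at c_i. Returns a 4x4 rigid transform M."""
    A = np.hstack([np.cross(src, normals), normals])      # rows: [s_i x n_i, n_i]
    b = np.einsum('ij,ij->i', dst - src, normals)         # (c_i - s_i) . n_i
    x, *_ = np.linalg.lstsq(A, b, rcond=None)             # least-squares solution
    a, be, g, tx, ty, tz = x                               # small rotation angles + translation
    Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(be), 0, np.sin(be)], [0, 1, 0], [-np.sin(be), 0, np.cos(be)]])
    Rz = np.array([[np.cos(g), -np.sin(g), 0], [np.sin(g), np.cos(g), 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx
    M[:3, 3] = [tx, ty, tz]
    return M
```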
According to the method for determining the position of the focus, the laser point cloud data of the surface of the target object is generated according to the image by collecting the image formed when the surface of the target object is irradiated by laser, the laser point cloud data and the point cloud data of the CT image of the target object are subjected to registration processing, and the position of the focus in an image coordinate system is determined according to the registration result and the position of the focus in the CT image.
The second embodiment,
Based on the method for determining a lesion position provided in the first embodiment of the present application, a second embodiment of the present application provides a device for determining a lesion position, as shown in fig. 2. Fig. 2 is a schematic structural diagram of the device 20 for determining a lesion position provided in the second embodiment of the present application, where the device 20 for determining a lesion position includes: a laser point cloud data generation module 201 and a positioning module 202;
a laser point cloud data generating module 201, configured to collect an image formed when the surface of the target object is irradiated by laser, and generate laser point cloud data of the surface of the target object according to the image;
and the positioning module 202 is configured to perform registration processing on the laser point cloud data and the CT image point cloud data of the target object, and determine a position of a focus in an image coordinate system according to a registration result and a focus position in the CT image.
Optionally, in an implementation manner of this embodiment, the laser point cloud data generating module 201 is further configured to collect an image formed when the surface of the target object is irradiated by laser, generate a binary image according to the image, determine a connected component in the binary image according to a connected component determining mechanism, and generate laser point cloud data of the surface of the target object according to the connected component.
Optionally, in an implementation manner of this embodiment, the laser point cloud data generating module 201 is further configured to perform gray average processing on the image of the surface of the target object to generate a background image when the number of the images of the surface of the target object is greater than or equal to 2.
Optionally, in an implementation manner of this embodiment, the laser point cloud data generating module 201 is further configured to perform gray average processing on at least two images formed when the surface of the target object is irradiated by the laser light, so as to generate a background image.
Optionally, in an implementation manner of this embodiment, the laser point cloud data generating module 201 is further configured to compare pixel values of a background image and other images to obtain a comparison result, and generate a binary image according to the comparison result and a set pixel threshold.
Optionally, in an implementation manner of this embodiment, the laser point cloud data generating module 201 is further configured to construct a transitional binary image that has the same size as the binarized image and in which the pixel values of all pixel points are zero; determine, according to the connected component determination mechanism, the position of a pixel point with pixel value 1 in the binarized image, and set the pixel value of the pixel point at the corresponding position in the transitional binary image to 1; using the connected structure and taking the pixel corresponding to the position of a pixel point with pixel value 1 in the transitional binary image as a starting point, perform an expansion operation on the transitional binary image to obtain an expanded image; and perform intersection processing on the expanded image and the binarized image, and determine a connected component in the binarized image according to the intersection processing result.
Optionally, in an implementation manner of this embodiment, the laser point cloud data generating module 201 is further configured to perform intersection processing on the expanded image and the binarized image to obtain an intersection result, and update the transitional binarized image according to the intersection result;
performing an expansion operation on the updated transitional binary image to obtain an updated expanded image;
and performing intersection processing on the updated expanded image and the binary image again until the intersection processing result obtained by the intersection processing at this time is the same as the intersection processing result obtained by the intersection processing at the last time, and determining a connected component in the binary image according to the intersection processing result at this time.
Optionally, in an implementation manner of this embodiment, the positioning module 202 is further configured to perform registration processing on the laser point cloud data of the target object and the CT image point cloud data of the target object, obtain a rigid change matrix of a position relationship between the laser point cloud data and the CT image point cloud data, and determine a position of the lesion in the image coordinate system according to the rigid change matrix and a lesion position in the CT image.
Optionally, in an implementation manner of this embodiment, the laser point cloud data generating module 201 is further configured to determine a set of laser points irradiated by laser according to the number of pixels included in the connected component, and generate laser point cloud data of the target object surface according to the set of laser points.
Optionally, in an implementation manner of this embodiment, the laser point cloud data generating module 201 is further configured to determine, as a laser-irradiated bright spot region, a region corresponding to a first connected component, where the number of pixels included in the connected component is within a preset threshold range, determine a laser point irradiated by the laser according to a position of the bright spot region, iteratively update the binarized image, determine a laser point of the iteratively updated binarized image, and determine a set of laser points irradiated by the laser according to all the determined laser points.
Optionally, in an implementation manner of this embodiment, the laser point cloud data generating module 201 is further configured to generate laser point cloud data of the surface of the target object according to a three-dimensional spatial position of a laser point in the set of laser points; wherein the image coordinate system is a three-dimensional coordinate system determined according to the irradiation source of the laser and the target object.
The third embodiment,
Based on the method for determining a lesion position provided in the first embodiment, a third embodiment of the present application provides an electronic device 30 for determining a lesion position, as shown in fig. 3. Fig. 3 is a hardware structure diagram of the electronic device 30 provided in the third embodiment of the present application, where the electronic device 30 includes:
one or more processors 301;
a storage medium 302, the storage medium 302 configured to store one or more readable programs 312;
the one or more readable programs 312, when executed by the one or more processors 301, cause the one or more processors 301 to implement the lesion position determination method according to any of the embodiments above.
The electronic device further comprises a communication interface 303 and a communication bus 304;
wherein the processor 301, the storage medium 302 and the communication interface 303 of the device communicate with each other via the communication bus 304.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement to circuit structures such as diodes, transistors, and switches) or an improvement in software (an improvement to a method flow). However, as technology has advanced, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Thus, it cannot be said that an improvement of a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by a user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is now mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development, and the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing the logical method flow can easily be obtained merely by slightly programming the method flow into an integrated circuit using the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller; examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer-readable program code, the same functionality can be implemented by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be regarded as a hardware component, and the means included therein for performing the various functions may also be regarded as structures within the hardware component. Or even the means for performing the functions may be regarded both as software modules for performing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (8)

1. A method of lesion location determination, comprising:
acquiring an image formed when the surface of a target object is irradiated by laser, and generating laser point cloud data of the surface of the target object according to the image;
registering the laser point cloud data and the CT image point cloud data of the target object, and determining the position of a focus in an image coordinate system according to a registration result and the focus position in the CT image;
wherein the image coordinate system is a three-dimensional coordinate system determined according to the irradiation source of the laser and the target object;
the collecting of the image formed when the surface of the target object is irradiated by the laser and the generation of the laser point cloud data of the surface of the target object according to the image comprise: collecting at least one image of the surface of the target object, generating a background image, collecting an image formed when the surface of the target object is irradiated by laser, and generating a binary image according to the image and the background image; determining a connected component in the binary image according to a connected component determination mechanism; determining an area corresponding to a first connected component with the number of pixels in a set threshold range in the connected components as a laser-irradiated bright spot area, determining a laser point irradiated by laser according to the position of the bright spot area, performing iterative update on the binary image, and determining the laser point in the iteratively updated binary image; and determining a set of laser points irradiated by the laser according to all the determined laser points, and generating laser point cloud data of the surface of the target object according to the three-dimensional space positions of the laser points in the set of the laser points.
2. The lesion position determination method of claim 1, wherein the acquiring at least one image of the surface of the target object, generating a background image, comprises: and when the number of the collected images on the surface of the target object is more than or equal to 2, carrying out gray average processing on the images on the surface of the target object to generate the background image.
3. The lesion position determination method according to claim 1, wherein the generating a binarized image from the image and the background image comprises:
comparing the background image with the image to obtain a comparison result;
and generating a binary image according to the comparison result and a set pixel threshold value.
4. The lesion position determination method according to claim 1, wherein the determining the connected components in the binarized image according to a connected component determination mechanism comprises:
constructing a transitional binary image which has the same size with the binary image and has zero pixel values of all pixel points;
determining the position of a pixel point with a pixel value of 1 in the binary image according to a connected component determination mechanism, and setting the pixel value of the pixel point at the corresponding position in the transitional binary image to be 1;
using a communicating structure, taking a pixel corresponding to a pixel point position with a pixel value of 1 in the transition binary image as a starting point, and performing expansion operation on the transition binary image to obtain an expanded image;
and performing intersection processing on the expanded image and the binarized image, and determining a connected component in the binarized image according to an intersection processing result.
5. The lesion position determination method according to claim 4, wherein the performing an intersection operation on the dilated image and the binarized image and determining the connected components in the binarized image according to the intersection result comprises:
performing the intersection operation on the dilated image and the binarized image to obtain an intersection result, and updating the transitional binarized image according to the intersection result;
performing a dilation operation on the updated transitional binarized image to obtain an updated dilated image;
performing the intersection operation on the updated dilated image and the binarized image again, and repeating until the current intersection result is identical to the previous intersection result;
and determining the connected components in the binarized image according to the final intersection result.
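Claims 4 and 5 together describe growing the marker pixels inside the binarized image until the result stabilizes, i.e. morphological reconstruction by dilation. A minimal, self-contained sketch under the same 3×3 structuring-element assumption as above:

```python
import numpy as np
from scipy import ndimage

def grow_connected_component(marker, binarized, structure=np.ones((3, 3), dtype=bool)):
    """Repeat dilation of the transitional image followed by intersection with
    the binarized image until two successive intersection results are identical;
    the stable result is the connected component grown from the marker pixels."""
    mask = binarized.astype(bool)
    current = np.logical_and(marker.astype(bool), mask)
    while True:
        dilated = ndimage.binary_dilation(current, structure=structure)
        nxt = np.logical_and(dilated, mask)
        if np.array_equal(nxt, current):
            return nxt
        current = nxt
```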
6. The lesion position determination method according to claim 1, wherein the registering the laser point cloud data with the CT image point cloud data of the target object and determining the position of the lesion in the image coordinate system according to the registration result and the lesion position in the CT image comprises:
registering the laser point cloud data of the target object with the CT image point cloud data of the target object using an iterative closest point algorithm, to obtain a rigid-body transformation matrix describing the positional relationship between the laser point cloud data and the CT image point cloud data;
and determining the position of the lesion in the image coordinate system according to the rigid-body transformation matrix and the lesion position in the CT image.
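Claim 6 leaves the registration to an iterative closest point algorithm. The sketch below is a bare-bones ICP with a closed-form rigid fit, not the patented implementation; a production system would more likely call a library routine (for example Open3D's ICP) and add convergence checks and outlier rejection. Here ct_points and laser_points are assumed to be N×3 NumPy arrays, and the CT cloud is registered onto the laser cloud so that the resulting transform maps the CT-space lesion position into the image coordinate system.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form (Kabsch) least-squares rotation R and translation t with dst ~= R @ src + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, iterations=50):
    """Minimal ICP: pair each source point with its nearest target point,
    fit a rigid transform, apply it, and repeat a fixed number of times."""
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    moved = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(moved)
        R, t = best_rigid_transform(moved, target[idx])
        moved = moved @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Register the CT point cloud onto the laser point cloud, then map the
# CT-space lesion position into the image coordinate system:
# R, t = icp(ct_points, laser_points)
# lesion_in_image = R @ lesion_in_ct + t
```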
7. A lesion position determination device, comprising a laser point cloud data generation module and a positioning module;
wherein the laser point cloud data generation module is configured to acquire an image formed when the surface of a target object is irradiated by laser and to generate laser point cloud data of the surface of the target object from the image; the acquiring an image formed when the surface of the target object is irradiated by laser and generating laser point cloud data of the surface of the target object from the image comprises:
acquiring at least one image of the surface of the target object and generating a background image; acquiring an image formed when the surface of the target object is irradiated by the laser, and generating a binarized image from that image and the background image;
determining connected components in the binarized image according to a connected component determination mechanism;
determining the region corresponding to a first connected component whose pixel count falls within a set threshold range as a laser-irradiated bright-spot region, determining a laser point from the position of the bright-spot region, iteratively updating the binarized image, and determining the laser points in the iteratively updated binarized image;
determining the set of laser-irradiated points from all of the determined laser points, and generating the laser point cloud data of the surface of the target object from the three-dimensional spatial positions of the laser points in the set;
and the positioning module is configured to register the laser point cloud data with the CT image point cloud data of the target object, and to determine the position of the lesion in an image coordinate system according to the registration result and the lesion position in the CT image.
8. An electronic device for lesion position determination, comprising a processor, a memory and a bus, wherein the processor and the memory communicate with each other via the bus;
the memory is configured to store at least one executable instruction which causes the processor to perform the lesion position determination method of any one of claims 1 to 6.
CN202011198101.6A 2020-10-30 2020-10-30 Focal position determination method and device and electronic equipment Active CN112258494B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011198101.6A CN112258494B (en) 2020-10-30 2020-10-30 Focal position determination method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112258494A CN112258494A (en) 2021-01-22
CN112258494B (en) 2021-10-22

Family

ID=74267182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011198101.6A Active CN112258494B (en) 2020-10-30 2020-10-30 Focal position determination method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112258494B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362247B (en) * 2021-06-11 2023-08-15 山东大学 Semantic real scene three-dimensional reconstruction method and system for laser fusion multi-view camera
CN114638798A (en) * 2022-03-10 2022-06-17 重庆海扶医疗科技股份有限公司 Target area positioning method, electronic device, and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651752A (en) * 2016-09-27 2017-05-10 深圳市速腾聚创科技有限公司 Three-dimensional point cloud data registration method and stitching method
CN109146931A (en) * 2018-11-12 2019-01-04 深圳安科高技术股份有限公司 A kind of three dimensional image processing method, system, device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080310757A1 (en) * 2007-06-15 2008-12-18 George Wolberg System and related methods for automatically aligning 2D images of a scene to a 3D model of the scene
US10527711B2 (en) * 2017-07-10 2020-01-07 Aurora Flight Sciences Corporation Laser speckle system and method for an aircraft
CN109965979A (en) * 2017-12-27 2019-07-05 上海复旦数字医疗科技股份有限公司 A kind of steady Use of Neuronavigation automatic registration method without index point
CN109464196B (en) * 2019-01-07 2021-04-20 北京和华瑞博医疗科技有限公司 Surgical navigation system adopting structured light image registration and registration signal acquisition method
CN109674536A (en) * 2019-01-25 2019-04-26 上海交通大学医学院附属第九人民医院 Operation guiding system and its equipment, method and storage medium based on laser
CN110946659A (en) * 2019-12-25 2020-04-03 武汉中科医疗科技工业技术研究院有限公司 Registration method and system for image space and actual space
CN111260702B (en) * 2020-02-13 2022-05-17 北京航空航天大学 Laser three-dimensional point cloud and CT three-dimensional point cloud registration method

Similar Documents

Publication Publication Date Title
CN109857254B (en) Pupil positioning method and device, VR/AR equipment and computer readable medium
CN112258494B (en) Focal position determination method and device and electronic equipment
KR102450931B1 (en) Image registration method and associated model training method, apparatus, apparatus
CN111080776B (en) Human body action three-dimensional data acquisition and reproduction processing method and system
US11348216B2 (en) Technologies for determining the accuracy of three-dimensional models for use in an orthopaedic surgical procedure
JP2008264341A (en) Eye movement measurement method and eye movement measuring instrument
US11633235B2 (en) Hybrid hardware and computer vision-based tracking system and method
WO2024011943A1 (en) Deep learning-based knee joint patella resurfacing three-dimensional preoperative planning method and system
JP2019036346A (en) Image processing apparatus, image processing method, and program
US10078906B2 (en) Device and method for image registration, and non-transitory recording medium
JP2018113021A (en) Information processing apparatus and method for controlling the same, and program
Furukawa et al. Fully auto-calibrated active-stereo-based 3d endoscopic system using correspondence estimation with graph convolutional network
AU2020217368A1 (en) Technologies for determining the accuracy of three-dimensional models for use in an orthopaedic surgical procedure
CN110348351B (en) Image semantic segmentation method, terminal and readable storage medium
CN110544278A (en) rigid body motion capture method and device and AGV pose capture system
CN114612352A (en) Multi-focus image fusion method, storage medium and computer
EP3843038B1 (en) Image processing method and system
JP7498404B2 (en) Apparatus, method and program for estimating three-dimensional posture of subject
CN114782537A (en) Human carotid artery positioning method and device based on 3D vision
WO2020025001A1 (en) Method and system of tracking patient position in operation
US20240135519A1 (en) Technologies for determining the accuracy of three-dimensional models for use in an orthopaedic surgical procedure
CN115880338B (en) Labeling method, labeling device and computer readable storage medium
CN113597288B (en) Method and system for determining operation path based on image matching
CN110910393B (en) Data processing method and device, electronic equipment and storage medium
US20230342994A1 (en) Storage medium, image identification method, image identification device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 100191 Room 501, floor 5, building 9, No. 35 Huayuan North Road, Haidian District, Beijing

Patentee after: Beijing Baihui Weikang Technology Co.,Ltd.

Address before: 100191 Room 608, 6 / F, building 9, 35 Huayuan North Road, Haidian District, Beijing

Patentee before: Beijing Baihui Wei Kang Technology Co.,Ltd.
