CN116740768B - Navigation visualization method, system, equipment and storage medium based on nasoscope - Google Patents


Info

Publication number
CN116740768B
CN116740768B (application CN202311008249.2A)
Authority
CN
China
Prior art keywords
tumor
image
module
picture
shot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311008249.2A
Other languages
Chinese (zh)
Other versions
CN116740768A (en)
Inventor
蔡惠明
李长流
朱淳
潘洁
胡学山
卢露
倪轲娜
王玉叶
张岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Nuoyuan Medical Devices Co Ltd
Original Assignee
Nanjing Nuoyuan Medical Devices Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Nuoyuan Medical Devices Co Ltd filed Critical Nanjing Nuoyuan Medical Devices Co Ltd
Priority to CN202311008249.2A
Publication of CN116740768A
Application granted
Publication of CN116740768B
Active legal status
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/15Biometric patterns based on physiological signals, e.g. heartbeat, blood flow
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/313Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for introducing through surgical openings, e.g. laparoscopes
    • A61B1/3135Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for introducing through surgical openings, e.g. laparoscopes for examination of the epidural or the spinal space
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/54Browsing; Visualisation therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/107Visualisation of planned trajectories or target regions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The application discloses a nasoscope-based navigation visualization method, system, device, and storage medium in the field of image data processing. A camera module photographs the tumor site to obtain a captured image, which is fed into an image processing strategy for sharpening to produce a clear picture of the tumor site. The clear picture is passed to an image recognition strategy, which extracts the tumor image from the picture and identifies and extracts the tumor contour. The contour is imported into data fitting software for contour curve fitting, yielding a tumor contour curve formula with the camera module's position as the origin. A path keeping a safe distance from the tumor contour is then planned from this formula, and the data acquisition device collects tumor data along the planned path.

Description

Navigation visualization method, system, equipment and storage medium based on nasoscope
Technical Field
The application belongs to the field of image data processing, and in particular relates to a nasoscope-based navigation visualization method, system, device, and storage medium.
Background
In brain tumor detection, the approximate position of a tumor is usually identified in advance by CT imaging, and a data acquisition device is inserted through the patient's nasal cavity to a position at a safe distance from the tumor to record video of the tumor. However, the video captures only part of the tumor. When a panoramic picture of the tumor is needed, the trajectory of the data acquisition module is controlled manually and cannot be planned or predicted in advance, so the module can easily press against the tumor while moving and damage it, and the quality of the recorded video also tends to be low.
For example, Chinese patent publication No. CN108921843B provides a system for detecting the performance of a vehicle-mounted navigator. It includes an image acquisition device for acquiring and transmitting a standard picture, and a processor for receiving the standard picture, displaying it on the navigator's screen, acquiring the picture actually displayed by the navigator, and comparing the displayed picture with the standard picture; if the comparison satisfies a preset condition, the navigator is judged qualified, otherwise unqualified. The system detects navigator performance automatically, without manual inspection, saving time and labor; and since the result is obtained against a set standard, it is reliable and improves detection accuracy. That application also discloses a corresponding detection method with the same benefits.
Another prior-art method acquires a picture of a mobile carrier's current pose from a camera and determines a path planning area in the picture according to the carrier's movable range; it then generates a navigation path from the carrier to the target position within that area, based on the target's coordinates in the picture coordinate system and a path planning algorithm. Limiting the user's path planning distance to the planning area keeps the carrier's actual movement within an acceptable error range and avoids excessive yaw caused by large errors; generating the navigation path directly within the planning area reduces the amount of computation, speeds up path calculation, and makes the path easier to display.
Neither of the above patents resolves the problems stated in the background art.
Disclosure of Invention
To address the shortcomings of the prior art, the application provides a nasoscope-based navigation visualization method, system, device, and storage medium. A data acquisition device is inserted through the nasal cavity to the intracranial tumor site; a camera module photographs the tumor site to obtain a captured image; the image is fed into an image processing strategy for sharpening to produce a clear picture of the tumor site; the clear picture is passed to an image recognition strategy, which extracts the tumor image and identifies and extracts the tumor contour; the contour is imported into data fitting software for contour curve fitting, yielding a tumor contour curve formula with the camera module's position as the origin; a path keeping a safe distance from the tumor contour is planned from this formula; and the data acquisition device collects tumor data along the planned path. This improves shooting quality while avoiding damage to the tumor during data acquisition.
In order to achieve the above purpose, the present application provides the following technical solutions:
A nasoscope-based navigation visualization method comprises the following steps:
S1, inserting the data acquisition device through the nasal cavity to the intracranial tumor site, and photographing the tumor site with the camera module to obtain a captured image;
S2, feeding the captured image into an image processing strategy for sharpening to obtain a clear picture of the tumor site;
S3, passing the clear picture to an image recognition strategy, extracting the tumor image from the picture, and identifying and extracting the tumor contour;
S4, importing the tumor contour into data fitting software to fit a contour curve, obtaining a tumor contour curve formula with the camera module's position as the origin;
S5, planning a path that keeps a safe distance from the tumor contour according to the tumor contour curve formula;
S6, the data acquisition device collecting tumor data along the planned path.
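The six steps above can be chained as a data pipeline. The following is a minimal, runnable sketch in which every function body is a trivial stand-in (clamping, thresholding, a fixed parabola) rather than the patent's actual algorithms; it only illustrates how stages S2–S5 feed into one another:

```python
def dehaze(pixels):
    # S2 stand-in: "sharpening" reduced to clamping intensities into [0, 255]
    return [max(0, min(255, p)) for p in pixels]

def find_contour_xs(pixels, threshold=200):
    # S3 stand-in: indices of bright pixels stand in for contour x-coordinates
    return [i for i, p in enumerate(pixels) if p >= threshold]

def fit_curve(_contour):
    # S4 stand-in: a fixed parabola plays the role of the fitted contour formula
    return lambda x: 0.01 * x * x

def plan_path(curve, xs, d):
    # S5 stand-in: keep a vertical clearance d above the fitted contour
    return [(x, curve(x) + d) for x in xs]

pixels = [120, 260, 210, -5, 230]  # a fake one-row "captured image"
path = plan_path(fit_curve(None), find_contour_xs(dehaze(pixels)), d=5.0)
```

The real system would replace each stand-in with the strategy described in S2–S5 below; the value of the skeleton is only the staged data flow.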
Specifically, S1 comprises:
S101, identifying the approximate position of the tumor in advance by CT imaging, and inserting the data acquisition device through the patient's nasal cavity to a position at a safe distance from the tumor;
S102, photographing the tumor site with the camera module from that safe distance to obtain a captured picture of the tumor site.
Specifically, the image processing strategy of S2 comprises:
S201, extracting the intensity of at least one color channel from each pixel of the captured picture of the tumor site to form a foggy color intensity set; acquiring the humidity at the shooting position with a humidity detection assembly, the temperature at the shooting position with a temperature detection assembly, and the distance from the camera module to the tumor with a distance acquisition module;
S202, feeding the foggy color intensity set, the camera-to-tumor distance, and the temperature and humidity at the shooting position into the constructed machine learning model, which outputs a defogged clear picture.
Specifically, the machine learning model in S202 is trained as follows:
S2021, capturing tumor images with the camera module in both foggy and fog-free environments to obtain at least one group of foggy captured pictures and fog-free clear pictures, while also recording, for each group, the humidity at the shooting position, the temperature at the shooting position, and the distance from the camera module to the tumor;
S2022, combining each foggy captured picture with the humidity, temperature, and camera-to-tumor distance into a feature vector; the set of all feature vectors is the model's input, the fog-free clear picture predicted from each feature-vector group is its output, the actual fog-free clear picture corresponding to each group is the prediction target, and the sum of the prediction accuracies of all predicted fog-free clear pictures is the training target;
S2023, computing the prediction accuracy of each group (the accuracy formula itself appears as an image in the original publication). In its notation, the subscript i indexes the group of feature vectors, the superscript j indexes the j-th pixel of the picture, M is the predicted fog-free clear picture, m is the photographed fog-free clear picture, and n is the total number of pixels; the accuracy of group i measures the pixel-wise agreement between the predicted and photographed fog-free pictures. The model is trained until the sum of the prediction accuracies converges;
The machine learning model is either a deep neural network model or a deep belief network model.
Specifically, the image recognition strategy of S3 comprises:
S301, importing the sharpened image into grayscale processing software and converting it to a grayscale image;
S302, for each pixel of the grayscale image, computing the difference between its gray value and that of the pixel above it to obtain the pixel's vertical gradient, and the difference between its gray value and that of the pixel to its left to obtain the pixel's horizontal gradient;
S303, comparing the vertical and horizontal gradients against a gradient threshold; the connected pixels whose gradients exceed the threshold are taken as boundaries dividing the grayscale image into one or more regions. The color and shape of each region are extracted and compared, via a similarity comparison formula, with the stored tumor color and shape; the similarity between each region and the tumor image locates the tumor, and the tumor contour is extracted from the image.
Specifically, S4 comprises:
S401, importing the captured tumor contour picture and the contour-to-camera distances into three-dimensional modeling software to build a three-dimensional tumor image corresponding to the contour picture, and extracting the three-dimensional image together with the corresponding camera module position;
S402, taking the line between the camera module and the closest point of the three-dimensional tumor image as the x-axis and the line perpendicular to it in the same horizontal plane as the y-axis, then importing the three-dimensional tumor image and the camera module position into data fitting software to fit the contour curve, yielding a tumor contour curve formula (written here generically as y = f(x), since the explicit formula appears as an image in the original publication) with the camera module's position as the origin, where (x_0, y_0) denotes the position of the root of the three-dimensional tumor image relative to the camera module.
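Since the fitted contour formula appears only as an image in the source, the sketch below illustrates the kind of least-squares fit S402 describes, using a quadratic y = a·x² + b·x + c in the camera-origin frame; the quadratic form and the pure-Python normal-equation solver are assumptions, not the patent's data fitting software:

```python
def fit_quadratic(pts):
    """Least-squares fit of y = a*x^2 + b*x + c to (x, y) contour samples."""
    def S(f):
        return sum(f(x, y) for x, y in pts)
    # 3x3 normal equations A @ [a, b, c] = v
    A = [[S(lambda x, y: x**4), S(lambda x, y: x**3), S(lambda x, y: x**2)],
         [S(lambda x, y: x**3), S(lambda x, y: x**2), S(lambda x, y: x)],
         [S(lambda x, y: x**2), S(lambda x, y: x),    float(len(pts))]]
    v = [S(lambda x, y: y * x**2), S(lambda x, y: y * x), S(lambda x, y: y)]
    # forward elimination (no pivoting; adequate for this well-conditioned demo)
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            v[j] -= f * v[i]
    # back substitution
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coeffs[i] = (v[i] - sum(A[i][k] * coeffs[k] for k in range(i + 1, 3))) / A[i][i]
    return coeffs  # [a, b, c]
```

A production system would likely use a dedicated fitting library and a curve family chosen to match tumor shapes; the point here is only the camera-origin coordinate convention.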
Specifically, S5 comprises:
S501, taking the tumor contour curve and, at each contour point (x_0, f(x_0)), forming the tangent line y = f'(x_0)(x − x_0) + f(x_0), where f'(x_0) is the derivative of the curve y = f(x) at that point, and then the line through (x_0, f(x_0)) perpendicular to that tangent, i.e. the normal y = −(x − x_0)/f'(x_0) + f(x_0) (the original equations appear as images; the standard tangent and normal forms are reproduced here);
S502, computing, on each normal line, the two points at the set safety distance d from the tangent point, namely (x_0 ∓ d·f'(x_0)/√(1 + f'(x_0)²), f(x_0) ± d/√(1 + f'(x_0)²)), which together trace the path planning trajectory; after the trajectory is computed, the trajectory points lying inside the tumor are removed.
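The offset construction in S501–S502 reduces to stepping a distance d along the normal of the contour curve at each point. Below is a sketch under the assumption that the contour is given as y = f(x) with derivative f′(x), and that the out-of-tumor candidate is the one with the larger y (one of the two normal-direction points per Fig. 4):

```python
import math

def offset_path(f, df, xs, d):
    # f: contour curve y = f(x); df: its derivative; d: the set safety distance.
    # Each contour point yields two candidates at distance d along the normal;
    # keeping the larger-y candidate as "outside the tumor" is an assumption.
    path = []
    for x0 in xs:
        y0, s = f(x0), df(x0)
        n = math.hypot(1.0, s)                 # length of tangent vector (1, f')
        cand = [(x0 - d * s / n, y0 + d / n),  # the two points at distance d
                (x0 + d * s / n, y0 - d / n)]  # on the normal line
        path.append(max(cand, key=lambda p: p[1]))
    return path
```

For a flat contour (f ≡ 0) this yields a path at constant height d above the tumor, matching the intent of a constant safety clearance.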
A nasoscope-based navigation visualization system, implemented on the basis of the above nasoscope-based navigation visualization method, comprises a control module, a camera module, an image processing module, a contour fitting module, a path planning module, and a display module. The control module controls the data acquisition device to move along the planned path; the camera module photographs the tumor site to obtain a captured image; the image processing module feeds the captured image into the image processing strategy for sharpening to obtain a clear picture of the tumor site; the contour fitting module imports the tumor contour into data fitting software for contour curve fitting, obtaining a tumor contour curve formula with the camera module's position as the origin; the path planning module plans a path that keeps a safe distance from the tumor contour according to that formula; and the display module displays the camera module's tumor video in real time.
An electronic device comprises a server, an acquisition terminal, and a memory storing a computer program callable by a processor; the acquisition terminal acquires image data, and the server executes the above nasoscope-based navigation visualization method by calling the computer program stored in the memory.
A computer-readable storage medium stores instructions that, when executed on a computer, cause the computer to perform the nasoscope-based navigation visualization method described above.
Compared with the prior art, the application has the following beneficial effects:
The data acquisition device is inserted through the nasal cavity to the intracranial tumor site; the camera module photographs the tumor site to obtain a captured image; the image is fed into the image processing strategy for sharpening to produce a clear picture of the tumor site; the clear picture is passed to the image recognition strategy, which extracts the tumor image and identifies and extracts the tumor contour; the contour is imported into data fitting software for contour curve fitting, yielding a tumor contour curve formula with the camera module's position as the origin; a path keeping a safe distance from the tumor contour is planned from this formula; and the data acquisition device collects tumor data along the planned path, improving shooting quality while avoiding damage to the tumor during data acquisition.
Drawings
FIG. 1 is a flow chart of the nasoscope-based navigation visualization method of the present application;
FIG. 2 is a schematic diagram of the components of a nasoscope-based navigation visualization system of the present application;
FIG. 3 is a schematic diagram of data transmission between a server and an acquisition terminal according to the present application;
fig. 4 is a schematic diagram of an acquisition trajectory of a data acquisition device of a nasoscope-based navigation visualization system of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present application.
Example 1
Referring to fig. 1, an embodiment of the present application provides a nasoscope-based navigation visualization method comprising the following steps:
S1, inserting the data acquisition device through the nasal cavity to the intracranial tumor site, and photographing the tumor site with the camera module to obtain a captured image;
In this step, the specific sub-steps are:
S101, identifying the approximate position of the tumor in advance by CT imaging, and inserting the data acquisition device through the patient's nasal cavity to a position at a safe distance from the tumor;
S102, photographing the tumor site with the camera module from that safe distance to obtain a captured picture of the tumor site;
It should be noted that identifying the approximate tumor position by CT imaging is a conventional technique; the application is not limited to CT, and any detection means capable of locating the tumor may be used.
S2, feeding the captured image into the image processing strategy for sharpening to obtain a clear picture of the tumor site;
In this step, the image processing strategy of S2 comprises:
S201, extracting the intensity of at least one color channel from each pixel of the captured picture of the tumor site to form a foggy color intensity set; acquiring the humidity at the shooting position with a humidity detection assembly, the temperature at the shooting position with a temperature detection assembly, and the distance from the camera module to the tumor with a distance acquisition module;
S202, feeding the foggy color intensity set, the camera-to-tumor distance, and the temperature and humidity at the shooting position into the constructed machine learning model, which outputs a defogged clear picture;
It should be noted that the humidity inside each patient's skull differs, so fog readily condenses on the lens surface and blurs the video;
The machine learning model in S202 is trained as follows:
S2021, capturing tumor images with the camera module in both foggy and fog-free environments to obtain at least one group of foggy captured pictures and fog-free clear pictures, while also recording, for each group, the humidity at the shooting position, the temperature at the shooting position, and the distance from the camera module to the tumor;
S2022, combining each foggy captured picture with the humidity, temperature, and camera-to-tumor distance into a feature vector; the set of all feature vectors is the model's input, the fog-free clear picture predicted from each feature-vector group is its output, the actual fog-free clear picture corresponding to each group is the prediction target, and the sum of the prediction accuracies of all predicted fog-free clear pictures is the training target;
S2023, computing the prediction accuracy of each group (the accuracy formula itself appears as an image in the original publication). In its notation, the subscript i indexes the group of feature vectors, the superscript j indexes the j-th pixel of the picture, M is the predicted fog-free clear picture, m is the photographed fog-free clear picture, and n is the total number of pixels; the accuracy of group i measures the pixel-wise agreement between the predicted and photographed fog-free pictures. The model is trained until the sum of the prediction accuracies converges;
The machine learning model is either a deep neural network model or a deep belief network model;
S3, passing the clear picture to the image recognition strategy, extracting the tumor image from the picture, and identifying and extracting the tumor contour;
In this step, the image recognition strategy of S3 comprises:
S301, importing the sharpened image into grayscale processing software and converting it to a grayscale image;
S302, for each pixel of the grayscale image, computing the difference between its gray value and that of the pixel above it to obtain the pixel's vertical gradient, and the difference between its gray value and that of the pixel to its left to obtain the pixel's horizontal gradient;
S303, comparing the vertical and horizontal gradients against a gradient threshold; the connected pixels whose gradients exceed the threshold are taken as boundaries dividing the grayscale image into one or more regions. The color and shape of each region are extracted and compared, via a similarity comparison formula, with the stored tumor color and shape; the similarity between each region and the tumor image locates the tumor, and the tumor contour is extracted from the image;
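The similarity comparison formula in S303 is not spelled out in the text. A common stand-in is cosine similarity between a candidate region's feature vector (e.g. a color histogram plus simple shape descriptors) and the stored tumor template; the feature layout, function names, and 0.9 threshold below are assumptions:

```python
import math

def cosine_similarity(a, b):
    # cosine of the angle between two feature vectors; 1.0 = identical direction
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def locate_tumor(regions, template, threshold=0.9):
    # regions: list of (region_id, feature_vector); template: stored tumor features
    score, rid = max((cosine_similarity(feat, template), rid)
                     for rid, feat in regions)
    return rid if score >= threshold else None
```

The region whose features best match the stored tumor features (above the threshold) is taken as the tumor location, mirroring the role of the similarity comparison in S303.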
s4, importing the tumor contour into data fitting software to fit a contour curve, and obtaining a tumor contour curve formula taking the position of the camera module as an origin;
in this step, the specific steps of S4 are:
s401, importing the shot tumor outline picture and the distance from the outline to the camera module into three-dimensional model construction software to construct a tumor three-dimensional image corresponding to the tumor outline picture, and extracting the tumor three-dimensional image and the position of the corresponding camera module;
s402, taking a connecting line between the tumor three-dimensional image and the closest point of the corresponding image pickup module as an x-axis, taking a line perpendicular to the x-axis in a horizontal plane where the x-axis is located as a y-axis, and importing the positions of the tumor three-dimensional image and the corresponding image pickup module into data fitting software to perform data fittingFitting the line profile curve to obtain a tumor profile curve formula taking the position of the camera module as an originWherein->The position coordinates of the root of the tumor three-dimensional image relative to the camera module are obtained;
the coordinate construction mode is shown in fig. 4;
s5, planning a path of a safe distance from the tumor contour according to a tumor contour curve formula;
in this step, the specific steps of S5 are:
s501, collecting a tumor contour curve, and solving a tangent equation of each point of the tumor contour curveWherein->Is in +.>Derivative of the point, wherein->For tumor root image +.>The point on the tangent of the point is found the equation of +.>,/>The equation expression of (2) is +.>Simplifying to obtainWherein->To be in the +.>The tangent to the dot is perpendicular and passes +>Equation of line (2)>Is a point of (2);
s502, calculatingThe track equation of the point which is different from the tangent point by a set safety distance d is calculated by the following steps:,/>substituting it into +.>Obtaining a path planning track equation->After the track is calculated, removing the track points positioned in the tumor;
the set safety distance d is obtained by means of expert evaluation;
in this case, as shown in fig. 4, since there are two points at the set safety distance d from each tangent point, computing the specific trajectory requires removing the track points located inside the tumor after the track is calculated.
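The normal-offset construction of S501–S502 can be sketched numerically as follows; the assumption that the tumor interior lies below the curve (so the upper candidate is the safe one) is illustrative, and derivatives are estimated by central differences rather than symbolically:

```python
def plan_safe_path(f, xs, d):
    """Offset each contour point (x0, f(x0)) by the safety distance d along
    the normal to the contour curve y = f(x). Of the two candidate points
    per tangent point (cf. fig. 4), keep the upper one; the lower one lies
    inside the tumor under the illustrative interior-below-curve assumption."""
    eps = 1e-6
    path = []
    for x0 in xs:
        y0 = f(x0)
        k = (f(x0 + eps) - f(x0 - eps)) / (2 * eps)   # numerical f'(x0)
        norm = (1.0 + k * k) ** 0.5
        # upper of the two points at distance d along the normal direction
        path.append((x0 - d * k / norm, y0 + d / norm))
    return path

# Flat contour y = 0 with d = 1: the safe path is the line y = 1
flat = plan_safe_path(lambda x: 0.0, [0.0, 1.0, 2.0], 1.0)
print(flat[0])  # (0.0, 1.0)
```

For a general closed tumor contour the interior test would replace the fixed sign choice, matching the patent's rule of discarding track points inside the tumor.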
And S6, the data acquisition equipment acquires data of the tumor according to the planned path.
Through this embodiment: the data acquisition device is inserted through the nasal cavity to the tumor site of the nasal cranium; the camera module photographs the tumor site to obtain a shot image; the shot image is imported into the image processing strategy for sharpening to obtain a clear picture of the tumor site; the clear picture is substituted into the image recognition strategy, the tumor image in the picture is extracted, and the tumor contour is recognized and extracted; the tumor contour is imported into data fitting software to fit the contour curve, obtaining a tumor contour curve formula taking the position of the camera module as the origin; a path at a safe distance from the tumor contour is planned according to the tumor contour curve formula; and the data acquisition device acquires data of the tumor along the planned path, which improves shooting quality while avoiding damage to the tumor during data acquisition.
Example 2
Referring to fig. 2, in a second embodiment of the present application, a navigation visualization system based on a nasoscope is disclosed, which is implemented based on the above-mentioned navigation visualization method based on a nasoscope, and includes a control module, a camera module, an image processing module, a contour fitting module, a path planning module and a display module, where the control module is used to control the data acquisition device to run according to the planned path, the camera module is used to photograph a tumor site to obtain a photographed image, the image processing module is used to introduce the photographed image into an image processing strategy to perform a sharpening process to obtain a clear picture of the tumor site, the contour fitting module is used to introduce a tumor contour into data fitting software to perform fitting of a contour curve to obtain a tumor contour curve formula with the position of the camera module as an origin, the path planning module is used to plan a path away from a tumor contour safety distance according to the tumor contour curve formula, and the display module is used to display the photographed tumor video of the camera module in real time.
It should be noted that the control module is connected with the camera module, the image processing module, the contour fitting module, the path planning module and the display module in a wired or wireless manner; the connection relationship among the camera module, the image processing module, the contour fitting module, the path planning module and the display module corresponds to the information transmission relationship; and the camera module is not limited to the camera at the end of the nasoscope.
Example 3
As shown in fig. 3, the present embodiment provides an electronic device, including: a processor and a memory, wherein the memory stores a computer program for the processor to call;
the processor performs the nasoscope-based navigation visualization method described above by invoking the computer program stored in the memory.
The electronic device may vary greatly in configuration or performance and can include one or more processors (Central Processing Units, CPUs) and one or more memories, where the memories store at least one computer program that is loaded and executed by the processors to implement the nasoscope-based navigation visualization method provided by the above method embodiments. The electronic device can also include other components for implementing the functions of the device; for example, it can have wired or wireless network interfaces, input/output interfaces, and the like for inputting and outputting data, which are not described in detail in the present embodiment.
Example 4
The present embodiment proposes a computer-readable storage medium having stored thereon an erasable computer program;
the computer program, when run on a computer device, causes the computer device to perform the nasoscope-based navigation visualization method described above.
For example, the computer readable storage medium can be Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), compact disk Read-Only Memory (Compact Disc Read-Only Memory, CD-ROM), magnetic tape, floppy disk, optical data storage device, etc.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
It should be understood that determining B from A does not mean determining B from A alone; B can also be determined from A and/or other information.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer programs are loaded or executed on a computer, the processes or functions in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by way of wired and/or wireless networks from one website, computer, server, or data center to another. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more collections of available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid state disk).
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of the units is merely a logical division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via some interfaces, devices, or units, which may be in electrical, mechanical, or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
In the description of the present specification, the descriptions of the terms "one embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The preferred embodiments of the application disclosed above are intended only to assist in the explanation of the application. The preferred embodiments are not intended to be exhaustive or to limit the application to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and the practical application, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and the full scope and equivalents thereof.

Claims (7)

1. The navigation visualization method based on the nasoscope is characterized by comprising the following specific steps of:
s1, inserting the data acquisition equipment into a tumor position of the nasal cranium through a nasal cavity, and photographing the tumor position by a camera module to obtain a photographed image;
s2, importing the shot image into an image processing strategy for sharpness processing to obtain a sharp image of the tumor part;
s3, substituting the clear picture into an image recognition strategy, extracting a tumor image in the picture, and recognizing and extracting a tumor contour;
s4, importing the tumor contour into data fitting software to fit a contour curve, and obtaining a tumor contour curve formula taking the position of the camera module as an origin;
s5, planning a path of a safe distance from the tumor contour according to a tumor contour curve formula;
s6, the data acquisition equipment acquires data of the tumor according to the planned path; the specific steps of the S1 are as follows:
s101, identifying the approximate position of the tumor in advance through CT scanning, and inserting the data acquisition equipment through the nasal cavity of the patient to a position at a safe distance from the tumor;
s102, photographing a tumor part at a position which is at a safe distance from the tumor by using a photographing module to obtain a photographed picture of the tumor part; the image processing strategy of S2 comprises the following specific steps:
s201, extracting the intensity of at least one color channel of each pixel of a shot picture of a shot tumor part to form a foggy color intensity set, acquiring the humidity of a shot position through a humidity detection assembly, acquiring the temperature of the shot position through a temperature detection assembly, and acquiring the distance data from a shooting module to the tumor position through a distance acquisition module;
s202, taking the foggy color intensity set, the distance data from the imaging module to the tumor position, the temperature data of the imaging position and the humidity data of the imaging position as inputs of the constructed machine learning model, and obtaining the haze-free clear picture output by the machine learning model; the training manner of the machine learning model in S202 is as follows:
s2021, acquiring tumor images in foggy and non-foggy environments by using an image pickup module to obtain at least one group of foggy shooting pictures and non-foggy clear pictures, and simultaneously obtaining humidity of at least one group of shooting positions, temperature of the shooting positions and distance data from the image pickup module to the tumor positions;
s2022, combining the foggy shot picture, the humidity of the shot position, the temperature of the shot position and the distance data from the shooting module to the tumor position into feature vectors; the set of all feature vectors is used as the input of the machine learning model, which takes the haze-free clear picture predicted from each group of feature vectors as the output, takes the actual haze-free clear picture corresponding to each group of feature vectors as the prediction target, and takes the sum of the prediction accuracies of all predicted haze-free clear pictures as the training target;
s2023, the calculation formula of the prediction accuracy is: P_i = 1 - (1/n)·Σ_{j=1}^{n} |M_i^j - m_i^j| / m_i^j, wherein the subscript i is the group number of the feature vectors, the superscript j denotes the j-th pixel in the shot picture, M is the predicted haze-free clear picture, m is the shot haze-free clear picture, n is the total number of pixels of the shot picture, and P_i is the prediction accuracy between the haze-free clear picture predicted from the i-th group of feature vectors and the photographed haze-free clear picture; the machine learning model is trained until the sum of the prediction accuracies reaches convergence, and training is then stopped; the specific steps of the image recognition strategy in S3 are:
s301, importing an image obtained after the definition processing into gray processing software, and performing gray processing on the image to obtain a gray processing image;
s302, dividing the gray processing image into pixels, calculating the difference value between the gray value of each pixel and the gray value of the adjacent pixel above to obtain a vertical gradient value of the pixel, and calculating the difference value between the gray value of each pixel and the gray value of the adjacent pixel on the left to obtain a horizontal gradient value of the pixel;
s303, respectively comparing the vertical gradient value and the horizontal gradient value with a gradient threshold value, taking the corresponding pixel point connecting line of which the vertical gradient value and the horizontal gradient value are larger than the gradient threshold value as a boundary, dividing a graying treatment image into at least one region, acquiring the image after the sharpening treatment of the region, extracting the color and the shape of the region image, importing the color and the shape of the region image and the stored tumor color and shape into a similarity comparison formula, calculating the similarity of the region image and the tumor image, positioning the tumor position, and extracting the tumor contour in the image; the specific steps of the S4 are as follows:
s401, importing the shot tumor outline picture and the distance from the outline to the camera module into three-dimensional model construction software to construct a tumor three-dimensional image corresponding to the tumor outline picture, and extracting the tumor three-dimensional image and the position of the corresponding camera module;
s402, taking a connecting line between the tumor three-dimensional image and the closest point of the corresponding camera module as an x-axis, taking a line which is vertical to the x-axis in a horizontal plane where the x-axis is located as a y-axis, and guiding the positions of the tumor three-dimensional image and the corresponding camera module into data fitting software to perform fitting of contour curves, so as to obtain a tumor contour curve formula taking the position of the camera module as an origin.
2. The nasoscope-based navigation visualization method according to claim 1, wherein the specific steps of S5 are:
s501, collecting a tumor contour curve, solving a tangent equation at each point of the tumor contour curve, and solving the equation of the line perpendicular to the tangent at each point of the tumor contour curve and passing through each corresponding tangent point;

S502, calculating the track equation of the points at the set safety distance d from each tangent point along that perpendicular line, and removing the track points located inside the tumor after the track is calculated.
3. The nasoscope-based navigation visualization method of claim 2, wherein the machine learning model is any one of a deep neural network model or a deep belief network model.
4. A nasoscope-based navigation visualization system, which is realized based on the nasoscope-based navigation visualization method according to any one of claims 1 to 3, and is characterized by comprising a control module, a camera module, an image processing module, a contour fitting module, a path planning module and a display module, wherein the control module is used for controlling a data acquisition device to run according to the planned path, the camera module is used for photographing a tumor part to obtain a photographed image, and the image processing module is used for importing the photographed image into an image processing strategy to perform sharpness processing to obtain a sharp picture of the tumor part.
5. The nasoscope-based navigation visualization system of claim 4, wherein the contour fitting module is configured to import a tumor contour into data fitting software to perform contour curve fitting, obtaining a tumor contour curve formula with the position of the camera module as the origin; the path planning module is configured to plan a path at a safe distance from the tumor contour according to the tumor contour curve formula; and the display module is configured to display the tumor video captured by the camera module in real time.
6. An electronic device, comprising: a processor and a memory, wherein the memory stores a computer program for the processor to call;
the processor performs the nasoscope-based navigation visualization method of any of claims 1-3 by invoking the computer program stored in the memory.
7. A computer-readable storage medium, characterized in that instructions are stored thereon which, when executed on a computer, cause the computer to perform the nasoscope-based navigation visualization method according to any one of claims 1 to 3.
CN202311008249.2A 2023-08-11 2023-08-11 Navigation visualization method, system, equipment and storage medium based on nasoscope Active CN116740768B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311008249.2A CN116740768B (en) 2023-08-11 2023-08-11 Navigation visualization method, system, equipment and storage medium based on nasoscope

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311008249.2A CN116740768B (en) 2023-08-11 2023-08-11 Navigation visualization method, system, equipment and storage medium based on nasoscope

Publications (2)

Publication Number Publication Date
CN116740768A CN116740768A (en) 2023-09-12
CN116740768B true CN116740768B (en) 2023-10-20

Family

ID=87901537

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311008249.2A Active CN116740768B (en) 2023-08-11 2023-08-11 Navigation visualization method, system, equipment and storage medium based on nasoscope

Country Status (1)

Country Link
CN (1) CN116740768B (en)

Citations (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6530873B1 (en) * 1999-08-17 2003-03-11 Georgia Tech Research Corporation Brachytherapy treatment planning method and apparatus
WO2005035061A3 (en) * 2003-10-07 2005-07-07 Nomos Corp Planning system, method and apparatus for conformal radiation therapy
WO2007063980A1 (en) * 2005-12-03 2007-06-07 Atsushi Takahashi Intraoral panoramic image pickup device and intraoral panoramic image pickup system
EP2006802A1 (en) * 2007-06-19 2008-12-24 Agfa HealthCare NV Method of constructing a grey value model and/or a geometric model of an anatomic entity in a 3D digital medical image
CN102737382A (en) * 2012-06-22 2012-10-17 刘怡光 Automatic precise partition method for prostate ultrasonic image
CN103236065A (en) * 2013-05-09 2013-08-07 中南大学 Biochip analysis method based on active contour model and cell neural network
WO2013034462A3 (en) * 2011-09-05 2013-10-17 Materialise Dental N.V. A method and system for 3d root canal treatment planning
CN103425986A (en) * 2013-08-31 2013-12-04 西安电子科技大学 Breast lump image feature extraction method based on edge neighborhood weighing
WO2014055923A2 (en) * 2012-10-05 2014-04-10 Elizabeth Begin System and method for instant and automatic border detection
WO2014201035A1 (en) * 2013-06-10 2014-12-18 Chandler Jr Howard C Method and system for intraoperative imaging of soft tissue in the dorsal cavity
JP2015045595A (en) * 2013-08-29 2015-03-12 日本メジフィジックス株式会社 Visualization of heart muscle motion
CN104751457A (en) * 2015-03-19 2015-07-01 浙江德尚韵兴图像科技有限公司 Novel variational energy based liver partition method
CN105160660A (en) * 2015-08-17 2015-12-16 中国科学院苏州生物医学工程技术研究所 Active contour blood vessel extraction method and system based on multi-feature Gaussian fitting
JP2016039874A (en) * 2014-08-13 2016-03-24 富士フイルム株式会社 Endoscopic image diagnosis support device, system, method, and program
JP2016142666A (en) * 2015-02-04 2016-08-08 日本メジフィジックス株式会社 Technique for extracting tumor contours from nuclear medicine image
CN106682633A (en) * 2016-12-30 2017-05-17 四川沃文特生物技术有限公司 Method for classifying and identifying visible components of microscopic excrement examination images based on machine vision
JP2017109074A (en) * 2015-12-15 2017-06-22 コニカミノルタ株式会社 Ultrasonic diagnostic imaging apparatus
CN106997596A (en) * 2017-04-01 2017-08-01 太原理工大学 A kind of Lung neoplasm dividing method of the LBF movable contour models based on comentropy and joint vector
CN107106096A (en) * 2014-10-10 2017-08-29 皇家飞利浦有限公司 TACE navigation guides based on tumor survival power and blood vessel geometry
RU2649474C1 (en) * 2017-03-27 2018-04-03 Николай Сергеевич Грачев Method of visualizing results of surgical treatment of juvenile angiophibromas of nasopharinx and skull basis
CN108108649A (en) * 2016-11-24 2018-06-01 腾讯科技(深圳)有限公司 Auth method and device
CN108416792A (en) * 2018-01-16 2018-08-17 辽宁师范大学 Medical computer tomoscan image dividing method based on movable contour model
CN109285142A (en) * 2018-08-07 2019-01-29 广州智能装备研究院有限公司 A kind of head and neck neoplasm detection method, device and computer readable storage medium
CN109801360A (en) * 2018-12-24 2019-05-24 北京理工大学 Stomach and intestine three-dimensionalreconstruction and method for visualizing based on image
WO2019120032A1 (en) * 2017-12-21 2019-06-27 Oppo广东移动通信有限公司 Model construction method, photographing method, device, storage medium, and terminal
WO2019148265A1 (en) * 2018-02-02 2019-08-08 Moleculight Inc. Wound imaging and analysis
CN110141772A (en) * 2019-04-02 2019-08-20 成都真实维度科技有限公司 Radioactive particle source total quantity acquisition methods needed for tumour
CN110599501A (en) * 2019-09-05 2019-12-20 北京理工大学 Real scale three-dimensional reconstruction and visualization method for gastrointestinal structure
CN110889896A (en) * 2019-11-11 2020-03-17 苏州润迈德医疗科技有限公司 Method, device and system for obtaining angiostenosis lesion interval and three-dimensional synthesis
CN110969619A (en) * 2019-12-19 2020-04-07 广州柏视医疗科技有限公司 Method and device for automatically identifying primary tumor of nasopharyngeal carcinoma
CN111145142A (en) * 2019-11-26 2020-05-12 昆明理工大学 Uneven-gray cyst image segmentation method based on level set algorithm
CN111340937A (en) * 2020-02-17 2020-06-26 四川大学华西医院 Brain tumor medical image three-dimensional reconstruction display interaction method and system
CN111477298A (en) * 2020-04-03 2020-07-31 北京易康医疗科技有限公司 Method for tracking tumor position change in radiotherapy process
CN111785349A (en) * 2020-07-27 2020-10-16 山东省肿瘤防治研究院(山东省肿瘤医院) Method for tracking tumor position change in radiotherapy process
EP3832597A1 (en) * 2019-12-06 2021-06-09 Microsoft Technology Licensing, LLC Refinement of image segmentation
CN112991314A (en) * 2021-03-30 2021-06-18 昆明同心医联科技有限公司 Blood vessel segmentation method, device and storage medium
CN113421272A (en) * 2021-06-22 2021-09-21 厦门理工学院 Method, device and equipment for monitoring tumor infiltration depth and storage medium
CN113610054A (en) * 2021-08-27 2021-11-05 广州慧瞳科技有限公司 Underwater structure disease depth detection method, system and device and storage medium
CN113679417A (en) * 2021-08-23 2021-11-23 高小翎 Model-guided optimization parallel ultrasonic image 3D reconstruction method
WO2021238438A1 (en) * 2020-05-29 2021-12-02 京东方科技集团股份有限公司 Tumor image processing method and apparatus, electronic device, and storage medium
CN114170134A (en) * 2021-11-03 2022-03-11 杭州脉流科技有限公司 Stenosis assessment method and device based on intracranial DSA image
CN114529802A (en) * 2022-01-26 2022-05-24 扬州大学 Goose egg identification and positioning method and system based on machine vision
CN114820855A (en) * 2022-04-28 2022-07-29 浙江大学滨江研究院 Lung respiration process image reconstruction method and device based on patient 4D-CT
WO2022182681A2 (en) * 2021-02-26 2022-09-01 Reflexion Medical, Inc. Methods for automatic target identification, tracking, and safety evaluation for radiotherapy
WO2022198553A1 (en) * 2021-03-25 2022-09-29 中国科学院近代物理研究所 Three-dimensional image-guided positioning method and system, and storage medium
CN115265545A (en) * 2022-08-02 2022-11-01 杭州视图智航科技有限公司 Map matching navigation method, device, equipment and storage medium based on decision analysis
WO2022261134A1 (en) * 2021-06-10 2022-12-15 Photogauge, Inc. System and method for digital-representation-based flight path planning for object imaging
CN115496771A (en) * 2022-09-22 2022-12-20 安徽医科大学 Brain tumor segmentation method based on brain three-dimensional MRI image design
CN115741717A (en) * 2022-12-07 2023-03-07 同济大学 Three-dimensional reconstruction and path planning method, device, equipment and storage medium
CN115797448A (en) * 2022-11-08 2023-03-14 中国科学院深圳先进技术研究院 Digestive endoscopy visual reconstruction navigation system and method
CN115943305A (en) * 2020-06-24 2023-04-07 索尼集团公司 Information processing apparatus, information processing method, program, and information processing system
CN116540710A (en) * 2023-05-12 2023-08-04 河海大学常州校区 Path planning method of glass cleaning robot based on image recognition
CN116549107A (en) * 2023-04-23 2023-08-08 北京师范大学 Virtual nose endoscope system and roaming path planning method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7907990B2 (en) * 2006-03-24 2011-03-15 General Electric Company Systems, methods and apparatus for oncology workflow integration
EP2131325B1 (en) * 2008-05-08 2013-01-30 Agfa Healthcare Method for mass candidate detection and segmentation in digital mammograms
WO2010064249A1 (en) * 2008-12-04 2010-06-10 Real Imaging Ltd. Method apparatus and system for determining a thermal signature
US20130109963A1 (en) * 2011-10-31 2013-05-02 The University Of Connecticut Method and apparatus for medical imaging using combined near-infrared optical tomography, fluorescent tomography and ultrasound
US10535434B2 (en) * 2017-04-28 2020-01-14 4D Path Inc. Apparatus, systems, and methods for rapid cancer detection
US20210304395A1 (en) * 2018-06-29 2021-09-30 Photogauge, Inc. System and method for digital-representation-based flight path planning for object imaging
US20230055979A1 (en) * 2021-08-17 2023-02-23 California Institute Of Technology Three-dimensional contoured scanning photoacoustic imaging and virtual staining

Patent Citations (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6530873B1 (en) * 1999-08-17 2003-03-11 Georgia Tech Research Corporation Brachytherapy treatment planning method and apparatus
WO2005035061A3 (en) * 2003-10-07 2005-07-07 Nomos Corp Planning system, method and apparatus for conformal radiation therapy
WO2007063980A1 (en) * 2005-12-03 2007-06-07 Atsushi Takahashi Intraoral panoramic image pickup device and intraoral panoramic image pickup system
EP2006802A1 (en) * 2007-06-19 2008-12-24 Agfa HealthCare NV Method of constructing a grey value model and/or a geometric model of an anatomic entity in a 3D digital medical image
WO2013034462A3 (en) * 2011-09-05 2013-10-17 Materialise Dental N.V. A method and system for 3d root canal treatment planning
CN102737382A (en) * 2012-06-22 2012-10-17 刘怡光 Automatic precise partition method for prostate ultrasonic image
WO2014055923A2 (en) * 2012-10-05 2014-04-10 Elizabeth Begin System and method for instant and automatic border detection
CN103236065A (en) * 2013-05-09 2013-08-07 中南大学 Biochip analysis method based on active contour model and cell neural network
WO2014201035A1 (en) * 2013-06-10 2014-12-18 Chandler Jr Howard C Method and system for intraoperative imaging of soft tissue in the dorsal cavity
JP2015045595A (en) * 2013-08-29 2015-03-12 日本メジフィジックス株式会社 Visualization of heart muscle motion
CN103425986A (en) * 2013-08-31 2013-12-04 西安电子科技大学 Breast lump image feature extraction method based on edge neighborhood weighing
JP2016039874A (en) * 2014-08-13 2016-03-24 富士フイルム株式会社 Endoscopic image diagnosis support device, system, method, and program
CN107106096A (en) * 2014-10-10 2017-08-29 皇家飞利浦有限公司 TACE navigation guides based on tumor survival power and blood vessel geometry
JP2016142666A (en) * 2015-02-04 2016-08-08 日本メジフィジックス株式会社 Technique for extracting tumor contours from nuclear medicine image
CN104751457A (en) * 2015-03-19 2015-07-01 浙江德尚韵兴图像科技有限公司 Novel variational energy based liver partition method
CN105160660A (en) * 2015-08-17 2015-12-16 中国科学院苏州生物医学工程技术研究所 Active contour blood vessel extraction method and system based on multi-feature Gaussian fitting
JP2017109074A (en) * 2015-12-15 2017-06-22 コニカミノルタ株式会社 Ultrasonic diagnostic imaging apparatus
CN108108649A (en) * 2016-11-24 2018-06-01 腾讯科技(深圳)有限公司 Auth method and device
CN106682633A (en) * 2016-12-30 2017-05-17 四川沃文特生物技术有限公司 Method for classifying and identifying visible components of microscopic excrement examination images based on machine vision
RU2649474C1 (en) * 2017-03-27 2018-04-03 Николай Сергеевич Грачев Method of visualizing results of surgical treatment of juvenile angiophibromas of nasopharinx and skull basis
CN106997596A (en) * 2017-04-01 2017-08-01 太原理工大学 A kind of Lung neoplasm dividing method of the LBF movable contour models based on comentropy and joint vector
WO2019120032A1 (en) * 2017-12-21 2019-06-27 Oppo广东移动通信有限公司 Model construction method, photographing method, device, storage medium, and terminal
CN108416792A (en) * 2018-01-16 2018-08-17 辽宁师范大学 Medical computer tomoscan image dividing method based on movable contour model
CN112005312A (en) * 2018-02-02 2020-11-27 莫勒库莱特股份有限公司 Wound imaging and analysis
WO2019148265A1 (en) * 2018-02-02 2019-08-08 Moleculight Inc. Wound imaging and analysis
CN109285142A (en) * 2018-08-07 2019-01-29 广州智能装备研究院有限公司 A kind of head and neck neoplasm detection method, device and computer readable storage medium
CN109801360A (en) * 2018-12-24 2019-05-24 北京理工大学 Stomach and intestine three-dimensionalreconstruction and method for visualizing based on image
CN110141772A (en) * 2019-04-02 2019-08-20 成都真实维度科技有限公司 Method for obtaining the total quantity of radioactive particle sources required for a tumor
CN110599501A (en) * 2019-09-05 2019-12-20 北京理工大学 Real scale three-dimensional reconstruction and visualization method for gastrointestinal structure
CN110889896A (en) * 2019-11-11 2020-03-17 苏州润迈德医疗科技有限公司 Method, device and system for obtaining angiostenosis lesion interval and three-dimensional synthesis
CN111145142A (en) * 2019-11-26 2020-05-12 昆明理工大学 Segmentation method for cyst images with uneven gray levels based on a level set algorithm
EP3832597A1 (en) * 2019-12-06 2021-06-09 Microsoft Technology Licensing, LLC Refinement of image segmentation
CN110969619A (en) * 2019-12-19 2020-04-07 广州柏视医疗科技有限公司 Method and device for automatically identifying primary tumor of nasopharyngeal carcinoma
CN111340937A (en) * 2020-02-17 2020-06-26 四川大学华西医院 Brain tumor medical image three-dimensional reconstruction display interaction method and system
CN111477298A (en) * 2020-04-03 2020-07-31 北京易康医疗科技有限公司 Method for tracking tumor position change in radiotherapy process
WO2021238438A1 (en) * 2020-05-29 2021-12-02 京东方科技集团股份有限公司 Tumor image processing method and apparatus, electronic device, and storage medium
CN115943305A (en) * 2020-06-24 2023-04-07 索尼集团公司 Information processing apparatus, information processing method, program, and information processing system
CN111785349A (en) * 2020-07-27 2020-10-16 山东省肿瘤防治研究院(山东省肿瘤医院) Method for tracking tumor position change in radiotherapy process
WO2022182681A2 (en) * 2021-02-26 2022-09-01 Reflexion Medical, Inc. Methods for automatic target identification, tracking, and safety evaluation for radiotherapy
WO2022198553A1 (en) * 2021-03-25 2022-09-29 中国科学院近代物理研究所 Three-dimensional image-guided positioning method and system, and storage medium
CN112991314A (en) * 2021-03-30 2021-06-18 昆明同心医联科技有限公司 Blood vessel segmentation method, device and storage medium
WO2022261134A1 (en) * 2021-06-10 2022-12-15 Photogauge, Inc. System and method for digital-representation-based flight path planning for object imaging
CN113421272A (en) * 2021-06-22 2021-09-21 厦门理工学院 Method, device and equipment for monitoring tumor infiltration depth and storage medium
CN113679417A (en) * 2021-08-23 2021-11-23 高小翎 Model-guided optimization parallel ultrasonic image 3D reconstruction method
CN113610054A (en) * 2021-08-27 2021-11-05 广州慧瞳科技有限公司 Underwater structure disease depth detection method, system and device and storage medium
CN114170134A (en) * 2021-11-03 2022-03-11 杭州脉流科技有限公司 Stenosis assessment method and device based on intracranial DSA image
CN114529802A (en) * 2022-01-26 2022-05-24 扬州大学 Goose egg identification and positioning method and system based on machine vision
CN114820855A (en) * 2022-04-28 2022-07-29 浙江大学滨江研究院 Lung respiration process image reconstruction method and device based on patient 4D-CT
CN115265545A (en) * 2022-08-02 2022-11-01 杭州视图智航科技有限公司 Map matching navigation method, device, equipment and storage medium based on decision analysis
CN115496771A (en) * 2022-09-22 2022-12-20 安徽医科大学 Brain tumor segmentation method based on brain three-dimensional MRI image design
CN115797448A (en) * 2022-11-08 2023-03-14 中国科学院深圳先进技术研究院 Digestive endoscopy visual reconstruction navigation system and method
CN115741717A (en) * 2022-12-07 2023-03-07 同济大学 Three-dimensional reconstruction and path planning method, device, equipment and storage medium
CN116549107A (en) * 2023-04-23 2023-08-08 北京师范大学 Virtual nasal endoscope system and roaming path planning method
CN116540710A (en) * 2023-05-12 2023-08-04 河海大学常州校区 Path planning method of glass cleaning robot based on image recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Contour data acquisition system for electric vehicle distance estimation method; Chew, Kuew Wai et al.; Applied Mechanics and Materials; Vol. 479; 503-507 *
Research on trajectory planning and control of unmanned vehicles on structured roads; Gong Cheng; China Master's Theses Full-text Database, Engineering Science and Technology II (No. 1); C035-884 *

Also Published As

Publication number Publication date
CN116740768A (en) 2023-09-12

Similar Documents

Publication Publication Date Title
CN111310731B (en) Video recommendation method, device, equipment and storage medium based on artificial intelligence
CN108764071B (en) Real face detection method and device based on infrared and visible light images
WO2019056988A1 (en) Face recognition method and apparatus, and computer device
CN110956114A (en) Face living body detection method, device, detection system and storage medium
CN106372629A (en) Living body detection method and device
CN110543848B (en) Driver action recognition method and device based on three-dimensional convolutional neural network
CN109948439B (en) Living body detection method, living body detection system and terminal equipment
CN112200056B (en) Face living body detection method and device, electronic equipment and storage medium
CN113887387A (en) Ski field target image generation method, system and server
CN114894337B (en) Temperature measurement method and device for outdoor face recognition
US9858471B2 (en) Identification apparatus and authentication system
CN112561986A (en) Secondary alignment method, device, equipment and storage medium for inspection robot pan-tilt unit
CN112446254A (en) Face tracking method and related device
CN113902932A (en) Feature extraction method, visual positioning method and device, medium and electronic equipment
CN110175553A (en) The method and device of feature database is established based on Gait Recognition and recognition of face
CN111738241B (en) Pupil detection method and device based on double cameras
CN116740768B (en) Navigation visualization method, system, equipment and storage medium based on nasoscope
CN110909617B (en) Living body face detection method and device based on binocular vision
CN109993090B (en) Iris center positioning method based on cascade regression forest and image gray scale features
CN111160233A (en) Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance
CN116012609A (en) Multi-target tracking method, device, electronic equipment and medium for looking around fish eyes
CN110751163A (en) Target positioning method and device, computer readable storage medium and electronic equipment
CN115471671A (en) Network model training method, target recognition method and related equipment
CN110781712B (en) Human head space positioning method based on human face detection and recognition
CN113887279A (en) Pedestrian re-identification method with half-length being blocked and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant