CN115700759A - Medical image display method, medical image processing method, and image display system


Info

Publication number
CN115700759A
CN115700759A
Authority
CN
China
Prior art keywords
target
medical image
processed
image
recognition model
Prior art date
Legal status
Pending
Application number
CN202211418663.6A
Other languages
Chinese (zh)
Inventor
Name withheld at the inventor's request
Current Assignee
Shanghai Microport Medbot Group Co Ltd
Original Assignee
Shanghai Microport Medbot Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Microport Medbot Group Co Ltd filed Critical Shanghai Microport Medbot Group Co Ltd
Priority to CN202211418663.6A priority Critical patent/CN115700759A/en
Publication of CN115700759A publication Critical patent/CN115700759A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The present application relates to a medical image display method, a medical image processing method, an image display system, a computer device, a storage medium, and a computer program product. The method comprises the following steps: acquiring an initial medical image captured by an augmented reality device; performing image enhancement processing on the initial medical image to obtain a medical image to be processed; recognizing the medical image to be processed with at least one pre-trained recognition model to obtain the targets in the image and their states; fusing the targets and target states obtained by the recognition models; and mapping the fused targets and target states into the virtual space corresponding to the augmented reality device for display. With this method the image is enhanced so that blurring is avoided, recognition accuracy is improved for patient wounds and medical instrument states that would otherwise go unrecognized or unlabeled, and the display becomes more accurate.

Description

Medical image display method, medical image processing method, and image display system
Technical Field
The present application relates to the field of medical image processing technology, and in particular, to a medical image display method, a medical image processing method, an image display system, a computer device, a storage medium, and a computer program product.
Background
Augmented Reality (AR), first proposed in 1990, is a technology that computes the position and angle of a camera image in real time and adds corresponding imagery. It uses various technical means to superimpose computer-generated virtual objects, or non-geometric information about real objects, onto a scene of the real world, thereby augmenting the real world.
At present, AR technology has been applied to medical surgery, where image processing for scene perception mainly relies on deep learning methods and conventional image processing.
However, because AR glasses must fuse real and virtual content, the display of real objects is blurred and accuracy is reduced when current AR devices are worn.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a medical image display method, a medical image processing method, an image display system, a computer device, a computer-readable storage medium, and a computer program product that are capable of improving display accuracy.
In a first aspect, the present application provides a medical image display method, the method comprising:
acquiring an initial medical image acquired by augmented reality equipment;
performing image enhancement processing on the initial medical image to obtain a medical image to be processed;
identifying the medical image to be processed through at least one identification model obtained by pre-training to obtain a target and a target state in the medical image to be processed;
fusing the target and the target state obtained by each recognition model;
and mapping the fused target and the target state to a virtual space corresponding to the augmented reality equipment for display.
In one embodiment, the performing image enhancement processing on the initial medical image to obtain a medical image to be processed includes:
converting the initial medical image to a grayscale image;
calculating a gradient histogram and a gray target value of the gray image;
obtaining a brightness coefficient to be processed corresponding to the initial medical image based on the gradient histogram and the gray target value; calculating to obtain a corresponding color integral image to be processed according to the initial medical image;
and carrying out image enhancement processing on the initial medical image according to the color integral image to be processed and the brightness coefficient to be processed to obtain a medical image to be processed.
In one embodiment, the performing image enhancement processing on the initial medical image according to the color integral map to be processed and the brightness coefficient to be processed to obtain a medical image to be processed includes:
matching the color integral map to be processed with a standard color integral map to determine a standard brightness coefficient corresponding to the brightness coefficient to be processed;
and updating the pixel value in the initial medical image through the standard brightness coefficient to obtain the medical image to be processed.
In one embodiment, the fusing the target and the target state obtained by each of the recognition models includes:
acquiring the weight corresponding to each recognition model;
and fusing the target and the target state obtained by each recognition model based on the weight corresponding to the recognition model.
In one embodiment, before fusing the target and the target state obtained by each recognition model based on the weight corresponding to the recognition model, the method further includes:
acquiring processing time corresponding to the first recognition model and the second recognition model and recognition accuracy of a target;
when the difference value between the processing time of the first recognition model and the processing time of the second recognition model is larger than a threshold value, taking the target and the target state corresponding to the second recognition model as a target and a target state obtained by fusion;
when the difference value between the processing time of the first recognition model and the processing time of the second recognition model is smaller than or equal to the threshold value and the recognition accuracy of the first recognition model is greater than that of the second recognition model, continuing to fuse the targets and the target states obtained by the recognition models based on the weights corresponding to the recognition models;
and when the difference value between the processing time of the first recognition model and the processing time of the second recognition model is smaller than or equal to the threshold value and the recognition accuracy of the first recognition model is smaller than or equal to the recognition accuracy of the second recognition model, taking the target and the target state corresponding to the second recognition model as the target and the target state obtained by fusion.
In one embodiment, the mapping the fused target and the target state to the virtual space corresponding to the augmented reality device for display includes:
mapping the fused target and the target state to a three-dimensional space of the augmented reality equipment according to the conversion relation of a plurality of acquisition devices of the augmented reality equipment;
and mapping the target and the target state in the three-dimensional space to a virtual space corresponding to the augmented reality equipment for display.
In a second aspect, the present application further provides a medical image processing method, including:
acquiring a target and a target state based on the medical image display method in any one of the embodiments;
matching the target and the target state with a preset scene;
and generating alarm information according to the matching result.
In one embodiment, the matching the target and the target state with a preset scene includes at least one of:
matching the acquired operating equipment and its state with the standard operating equipment and its state in a preset scene;
matching the acquired tissue site and its state with the standard tissue site and its state in a preset scene; and
determining the surgical procedure from the acquired targets and target states, and matching it with the standard procedure in the preset scene.
In a third aspect, the present application further provides an image display system, the system comprising:
the augmented reality equipment is used for acquiring the initial medical image and displaying the display information obtained by the processing of the processor;
a processor, configured to perform the steps of the method in any of the above embodiments to obtain the display information for the virtual space corresponding to the augmented reality device.
In a fourth aspect, the present application further provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of the method as claimed in any one of the embodiments when executing the computer program.
In a fifth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method described in any one of the embodiments.
According to the medical image display method, the medical image processing method, the image display system, the computer device, the storage medium and the computer program product, after the initial medical image captured by the augmented reality device is acquired, image enhancement processing is performed on it to obtain the medical image to be processed, so that image blurring is avoided; at least one pre-trained recognition model then recognizes the medical image to be processed to obtain the targets and target states in it; the targets and target states obtained by the recognition models are fused; and the fused targets and target states are mapped into the virtual space corresponding to the augmented reality device for display. Fusing the results of the recognition models improves recognition accuracy for the states of patient wounds and medical instruments that would otherwise go unrecognized or unlabeled, so the display is more accurate.
Drawings
FIG. 1 is a system diagram of an image display system in one embodiment;
FIG. 2 is a hardware schematic of an image display system in one embodiment;
FIG. 3 is a functional diagram of an augmented reality device in one embodiment;
FIG. 4 is a functional diagram of a terminal or server in one embodiment;
FIG. 5 is a flow chart illustrating a method of displaying a medical image according to an embodiment;
FIG. 6 is a schematic flow chart of the image enhancement step in one embodiment;
FIG. 7 is a diagram illustrating the processing steps of two recognition models, in one embodiment;
FIG. 8 is a flowchart illustrating the training steps of the first recognition model in one embodiment;
FIG. 9 is a flowchart illustrating the training step of a second recognition model in one embodiment;
FIG. 10 is a schematic flow chart diagram illustrating the fusion step in one embodiment;
FIG. 11 is a flow diagram of the display steps of an augmented reality device in one embodiment;
FIG. 12 is a flow diagram illustrating a method of medical image processing according to one embodiment;
FIG. 13 is a flowchart of alert information generation steps in one embodiment;
FIG. 14 is a start-up prompt in one embodiment;
FIG. 15 is a surgery-type selection interface in one embodiment;
FIG. 16 is a schematic illustration of object recognition in one embodiment;
FIG. 17 is a schematic diagram of a reminder in one embodiment;
FIG. 18 is a diagram of an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, fig. 1 is a system diagram of an image display system in an embodiment, where the image display system includes an augmented reality device 100 and a processor 200. The processor 200 may be integrated in the augmented reality device 100, or integrated in a separate terminal or server, and communicate with the augmented reality device.
Specifically, referring to fig. 2, fig. 2 is a hardware schematic diagram of an image display system in an embodiment, in which the processor resides in a terminal or server. The augmented reality device communicates with the terminal or server through a data interface: it captures an initial medical image during the operation and sends it through the data interface so that the processor of the terminal or server can process it.
Specifically, the augmented reality device may be AR glasses; in other embodiments it may be another kind of augmented reality device. Referring to fig. 3, fig. 3 is a functional schematic diagram of an augmented reality device in an embodiment, in which the processor of the augmented reality device includes an image acquisition module configured to capture an initial medical image during an operation, and the augmented reality device captures the initial medical image through this module. The initial medical image is obtained during the operation, for example from an intraoperative scan, and may be an image containing a medical instrument and a wound of the patient. The augmented reality device sends the image to the terminal or server through the data interface module so that the processor of the terminal or server can process it, and receives the processing result back through the same interface. The model generation module of the augmented reality device then retrieves the corresponding model from the local model library module according to the returned result, the model registration module generates a target medical image based on the result and the model, and the holographic projection module holographically projects the target medical image onto the instruments and the patient's wound.
Specifically, referring to fig. 4, fig. 4 is a functional diagram of a terminal or a server in an embodiment, where the terminal or server receives the initial medical image sent by the augmented reality device through the data interface. A preprocessing module in the processor of the terminal or server performs image enhancement processing on the initial medical image to obtain the medical image to be processed. A model processing module in the processor recognizes the medical image to be processed through at least one pre-trained recognition model to obtain the targets and target states in it, for example a medical instrument, the patient's wound, or an object in the environment, together with the corresponding target state. An adaptive fusion module in the processor fuses the targets and target states obtained by the recognition models, and the data interface sends the fused targets and target states to the augmented reality device, whose processor maps them into the corresponding virtual space for display. The displayed content can include surgical reminders, alarms, and navigation, enabling the doctor to operate more intelligently and safely.
When the operation starts, the terminal or server communicates with the augmented reality device and checks whether the connection has succeeded: both sides are started, the corresponding IP addresses are set, and communication proceeds over a network protocol; if no connection is made, i.e. a blocked state is encountered, a different IP address is tried and communication continues.
If the communication connection between the terminal or server and the augmented reality device succeeds, the terminal or server issues an instruction in the specified data format; upon receiving the acquisition command, the augmented reality device opens its camera and stores the image at the corresponding address. After acquiring the image, the augmented reality device transmits the data to the terminal or server over the network protocol, completing the whole image acquisition process.
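A minimal sketch of this connect-and-acquire flow, assuming a plain TCP transport; the candidate addresses, port, command string, and length-prefixed framing are all illustrative assumptions, not details from the patent:

```python
import socket

CANDIDATE_IPS = ["192.168.1.10", "192.168.1.11"]   # hypothetical AR-device addresses
PORT = 9000                                         # hypothetical service port

def connect_to_ar_device(timeout_s: float = 2.0) -> socket.socket:
    """Try each candidate IP in turn until a connection succeeds."""
    for ip in CANDIDATE_IPS:
        try:
            return socket.create_connection((ip, PORT), timeout=timeout_s)
        except OSError:        # blocked or unreachable: try the next address
            continue
    raise ConnectionError("no AR device reachable on any candidate IP")

def request_image(sock: socket.socket) -> bytes:
    """Issue an acquisition command and read back the image bytes."""
    sock.sendall(b"ACQ")                            # illustrative command format
    size = int.from_bytes(sock.recv(4), "big")      # assumed length-prefixed framing
    buf = b""
    while len(buf) < size:
        chunk = sock.recv(min(4096, size - len(buf)))
        if not chunk:
            raise ConnectionError("connection closed mid-transfer")
        buf += chunk
    return buf
```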
In this way, after the initial medical image captured by the augmented reality device is acquired, image enhancement processing is performed on it to obtain the medical image to be processed, so that image blurring is avoided; at least one pre-trained recognition model then recognizes the medical image to be processed to obtain the targets and target states in it, which are fused and displayed.
In one embodiment, as shown in fig. 5, a medical image display method is provided, which is exemplified by the application of the method to the processor in fig. 1, and includes the following steps:
s502: an initial medical image acquired by augmented reality equipment is acquired.
Specifically, the initial medical image is obtained during an operation, for example from an intraoperative scan, and may be an image containing a medical instrument and a wound of the patient. During the operation the initial medical image is captured by an image acquisition device of the augmented reality device, for example by one of its cameras. Optionally, the augmented reality device may include a plurality of cameras; the initial medical images captured by the cameras are all acquired, so that in subsequent processing the images can either be processed in parallel, or fused in advance so that only the fused initial medical image is processed.
S504: and carrying out image enhancement processing on the initial medical image to obtain a medical image to be processed.
Specifically, the image enhancement processing may refer to brightness enhancement of the image: weak light and backlight in the initial medical image are recognized and intelligently compensated, yielding a medical image to be processed whose brightness meets the requirement.
S506: and identifying the medical image to be processed through at least one identification model obtained by pre-training to obtain a target and a target state in the medical image to be processed.
In particular, the recognition model may be pre-trained, which may include deep learning models as well as traditional recognition models. In other embodiments, the recognition model may also include other algorithmic models, which are not specifically limited herein.
The deep learning model can infer the state and position of the patient's wound during the operation by altering the branches of the deep learning network and adding attention mechanisms at layers of different scales, and it makes intraoperative work safer and more intelligent by recognizing instruments and their usage. The conventional recognition model applies prior knowledge to improve processing efficiency; for example, the medical image to be processed is handled with feature vectors and a classifier to obtain the targets and target states.
A target is an object in the medical image to be processed, including but not limited to the various devices, medical instruments, tissue sites, and wounds in the surgical scene; the devices may include the surgical robot system, the carts, and the operating table. Each kind of target has a corresponding set of preset states: for example, the states of a medical instrument include unused, in use, and used, and wound states include unstitched and stitched. Those skilled in the art can set the target states for each target as needed.
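As a small illustration, the preset state sets named above can be carried as enumerations in code; the pairing of a target category with a state is an assumed format, not the patent's data layout:

```python
from enum import Enum

class InstrumentState(Enum):   # instrument states named in the text
    UNUSED = 0
    IN_USE = 1
    USED = 2

class WoundState(Enum):        # wound states named in the text
    UNSTITCHED = 0
    STITCHED = 1

# A recognition result can then be carried as a (category, state) pair, e.g.:
detection = ("scalpel", InstrumentState.IN_USE)   # "scalpel" is illustrative
```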
In the above embodiment, recognition can be unstable because of the varying distances and angles of the surgical instruments and the patient from the AR imaging device and the varying brightness; the initial medical image and the network structure are therefore optimized appropriately for the actual conditions, improving recognition accuracy and stability.
S508: and fusing the targets and the target states obtained by the recognition models.
Specifically, in order to improve accuracy, in this embodiment, the targets and the target states obtained by the respective recognition models are fused, so that the recognition results of the multiple recognition models are integrated, and accuracy is increased. For example, threshold weight assignment is performed on the targets and target states obtained by the recognition models to obtain fused targets and target states.
S510: and mapping the fused target and the target state to a virtual space corresponding to the augmented reality equipment for display.
Specifically, the processor maps the fused target and the target state to a virtual space corresponding to the augmented reality device for display, for example, first maps the target and the target state to a three-dimensional space of the augmented reality device, and then maps the target and the target state in the three-dimensional space to the virtual space corresponding to the augmented reality device for display.
According to this medical image display method, after the initial medical image captured by the augmented reality device is acquired, image enhancement processing is performed on it to obtain the medical image to be processed, so that image blurring is avoided; at least one pre-trained recognition model recognizes the medical image to be processed to obtain the targets and target states in it; the results of the recognition models are fused; and the fused targets and target states are mapped into the virtual space corresponding to the augmented reality device for display, which makes the display more accurate.
In one embodiment, referring to fig. 6, fig. 6 is a flowchart illustrating an image enhancement step in an embodiment, where the image enhancement step is to perform image enhancement processing on an initial medical image to obtain a medical image to be processed, and includes: converting the initial medical image into a grayscale image; calculating a gradient histogram and a gray target value of the grayscale image; obtaining a brightness coefficient to be processed corresponding to the initial medical image based on the gradient histogram and the gray target value; calculating a corresponding color integral map to be processed from the initial medical image; and performing image enhancement processing on the initial medical image according to the color integral map to be processed and the brightness coefficient to be processed to obtain the medical image to be processed.
Specifically, the initial medical image is a color image. On the one hand, its grayscale information is extracted, removing the influence of color on the overall brightness information, in order to compute the brightness coefficient to be processed. On the other hand, a color integral map is computed from the colored initial medical image. These two steps can be processed in parallel to improve efficiency; finally, the brightness of the initial medical image is adjusted based on the brightness coefficient to be processed and the color integral map, giving the medical image to be processed.
Optionally, the obtaining, according to the gray-scale image, a to-be-processed brightness coefficient corresponding to the initial medical image by calculation includes: calculating a gradient histogram and a gray target value of the gray image; and obtaining a brightness coefficient to be processed corresponding to the initial medical image based on the gradient histogram and the gray target value.
After the grayscale image is obtained, its gradient histogram, i.e. the brightness differences between adjacent pixels, is computed by scanning each pixel of the grayscale image in a fixed order. During the scan a gray target value is also computed, which may be the gray median: as the pixels of the grayscale image are traversed, their gray values can be sorted to obtain the median. Finally, the brightness coefficient of the initial medical image is obtained from the gradient-histogram distribution combined with the gray median, so that bright light and dim light are recognized. A dark or weakly lit image is dark overall, its gradient texture is small, and its gray median is also small, so the overall brightness of the image can be recognized: if the gradient is small and the gray median is small, the brightness coefficient is low, and vice versa. In other words, a gradient below a certain value indicates that the initial medical image is either brighter overall or darker overall, and the gray median decides which of the two it is, yielding the brightness coefficient.
For convenience, a relation table of the brightness coefficient against the gradient and gray median can be preset, so that after the gradient and gray median are computed, the corresponding brightness coefficient is obtained by table lookup.
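As an illustration of this step, a minimal sketch follows; the gradient definition, the thresholds, and the coefficient values stand in for the preset relation table and are assumptions, not values from the patent:

```python
import numpy as np

def brightness_coefficient(gray: np.ndarray) -> float:
    """Derive a brightness coefficient from gradient strength and gray median."""
    # Gradient source: absolute brightness differences between adjacent pixels.
    grad = np.abs(np.diff(gray.astype(np.int16), axis=1))
    mean_grad = float(grad.mean())
    gray_median = float(np.median(gray))     # the "gray target value"
    # Stand-in for the preset relation table (all numbers are assumptions):
    if mean_grad < 5.0:                      # little texture: uniformly bright or dark
        return 0.4 if gray_median < 80 else 1.6
    return 1.0                               # normal lighting
```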
Optionally, the color integral map may be obtained from the RGB three-channel pixel values of the initial medical image: the whole image is scanned and the three-channel pixel values are counted to obtain a color histogram; each gray level of the color histogram is then normalized by dividing by the maximum gray value 255, and the histogram is accumulated over the gray levels, yielding the color integral map.
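A minimal sketch of this computation, following the text's normalization by the maximum gray value 255 (a more common choice would be the pixel count, but the text is followed here):

```python
import numpy as np

def color_integral_map(img_rgb: np.ndarray) -> np.ndarray:
    """Per-channel accumulated color histogram (3 x 256), as described above."""
    integral = np.empty((3, 256), dtype=np.float64)
    for c in range(3):
        hist, _ = np.histogram(img_rgb[..., c], bins=256, range=(0, 256))
        integral[c] = np.cumsum(hist / 255.0)   # normalize, then accumulate
    return integral
```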
Optionally, the image enhancement processing is performed on the initial medical image according to the color integral map to be processed and the brightness coefficient to be processed, so as to obtain a medical image to be processed, and includes: matching the color integral map to be processed with a standard color integral map to determine a standard brightness coefficient corresponding to the brightness coefficient to be processed; and updating the pixel values in the initial medical image with the standard brightness coefficient to obtain the medical image to be processed.
Specifically, when the three-channel pixel mapping table is inferred back from the integral map, the brightness coefficient is combined to update the pixel mapping table, so that the gray values of the integral map are corrected and the brightness is ultimately raised.
The standard color integral map is a pre-stored integral map whose brightness is uniform and meets the requirement. The color integral map to be processed is matched against the standard color integral maps, and the standard brightness coefficient of the successfully matched standard map is taken, so that the pixel values of the initial medical image are updated with this standard brightness coefficient to obtain the medical image to be processed, achieving the goal of raising the brightness.
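Putting the matching and the pixel update together, a minimal sketch follows; the L1 matching rule and the gain formula are assumptions, since the patent does not specify them:

```python
import numpy as np

def enhance_image(img_rgb: np.ndarray, coeff: float, integral: np.ndarray,
                  standards: list) -> np.ndarray:
    """Match the image's integral map (3 x 256) to the closest pre-stored
    standard map, take that standard's brightness coefficient, and update
    the pixel values. `standards` is a list of (std_map, std_coeff) pairs."""
    dists = [np.abs(integral - std_map).sum() for std_map, _ in standards]
    std_coeff = standards[int(np.argmin(dists))][1]
    gain = std_coeff / max(coeff, 1e-6)     # raise brightness toward the standard
    out = img_rgb.astype(np.float64) * gain
    return np.clip(out, 0, 255).astype(np.uint8)
```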
In the embodiment, the medical image to be processed is obtained by performing image enhancement processing on the initial medical image, so that the brightness of the image is improved, the image blurring is avoided, and a foundation is laid for the subsequent identification accuracy.
In one embodiment, as shown in fig. 7, identifying the medical image to be processed by using at least one identification model obtained through pre-training to obtain the target and the target state in the medical image to be processed includes: identifying a medical image to be processed through a first identification model obtained through pre-training to obtain a target and a target state corresponding to the first identification model in the medical image to be processed, wherein the first identification model is a deep learning model comprising an attention mechanism; and identifying the medical image to be processed through a second identification model obtained through pre-training to obtain a target and a target state corresponding to the second identification model in the medical image to be processed, wherein the second identification model is an identification model generated through a feature vector and a classifier.
The recognition models in this embodiment include two types, though the number of recognition models is not specifically limited. The first recognition model is a deep learning model including an attention mechanism, which reduces the influence of the distances and angles of the surgical instruments and the patient from the AR imaging device; the second recognition model is a recognition model generated from feature vectors and a classifier, which improves processing efficiency.
Specifically, as shown in fig. 8, training the first recognition model may include: receiving and labeling a sample data set, for example receiving sample images carrying instruments and wounds and labeling them; and preprocessing the labeled sample images, for example dividing them into data of conventional scenes and data of complex scenes and then subdividing the complex-scene data. Images that went unrecognized or unlabeled in earlier virtual display are labeled, and a large amount of instrument and wound data is collected from surgical procedures. Video-coding problems can occur during collection, and special conditions arise during surgery: trailing shadows caused by excessive movement, images that are alternately clear and blurred as the AR glasses move far and near, a field of view that is alternately small and large, smoke and specular reflection, surgical instruments covered by heavy bleeding, and so on. The sample images are therefore divided into conventional-scene data and complex-scene data, and the complex-scene data is subdivided. A neural network model is then built and trained, yielding the trained first recognition model.
When training the first recognition model, such as a neural network recognition model, conventional scenes are classified coarsely first and complex scenes are classified finely afterwards, much as a neural network model learns common knowledge before learning rarer cases.
Correspondingly, recognizing the medical image to be processed with the pre-trained first recognition model to obtain the targets and target states comprises: recognizing the medical image sequentially through at least two layers of network structures of the pre-trained first recognition model, where the input of each later network structure is the target output by the adjacent earlier network structure, and the outputs of each layer that are not fed into a next network structure are the targets and target states in the medical image to be processed. The two network layers correspond to the two classifications above: the first layer coarsely separates normal scenes from complex scenes, and the second layer finely classifies the complex scenes. In other embodiments further network layers may be added for still finer classification; again, the outputs of each layer that are not fed into a next network structure are the targets and target states in the medical image to be processed.
In observed surgical scenes, conventional situations account for a relatively large proportion. Optionally, when building the neural network, convolution kernels of different scales are added under a mature base model framework and are overlapped and fused layer by layer; that is, the attention mechanism repeatedly learns image content of different sizes to resist the influence of the varying scales of instrument and wound images. Training then runs with multiple processes and multiple GPUs to speed up model output. Finally, the neural network model infers the targets and target states, for example recognizing the patient's wound and the surgical instruments.
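As one way to realize this multi-scale, attention-based fusion, the following PyTorch sketch runs parallel convolutions with different kernel sizes and fuses them with a learned channel weighting; the kernel sizes, the squeeze-and-excite-style attention, and the fusion rule are assumptions rather than the patent's exact architecture:

```python
import torch
import torch.nn as nn

class MultiScaleAttentionBlock(nn.Module):
    """Parallel convolutions at different scales, fused layer by layer
    under a learned (attention-like) channel weighting."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)
        ])
        self.attn = nn.Sequential(            # channel attention over all branches
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(3 * out_ch, 3 * out_ch, 1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(feats * self.attn(feats))   # re-weight, then fuse
```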
According to the embodiment, the problem of unbalanced samples in the model training process is solved, and the recognition and classification precision of the wound and the instrument condition of the patient in the operation is improved.
In one embodiment, recognizing the medical image to be processed by the second recognition model obtained by pre-training to obtain the target and the target state in the medical image to be processed includes: extracting image information of a medical image to be processed; extracting a feature vector of the image information to obtain an initial feature vector; carrying out dimension mapping on the initial characteristic vector to obtain a target characteristic vector; and obtaining a target and a target state in the medical image to be processed according to the target feature vector by using a classifier obtained by pre-training.
Specifically, for the second recognition model, a conventional model, accurate classification comes mainly from training the dimension mapping matrix applied to the target feature vector. In practical application, as shown in fig. 9, the training of the second recognition model includes: receiving and labeling a sample data set, for example receiving sample images carrying instruments and wounds and labeling them; the labeling may follow the labeling of the sample images for the first recognition model and is not repeated here.
The sample image is then preprocessed to extract image information, for example its RGB three-channel matrix information, whose dimensions match the width and height of the image. A search box is set and traversed over the whole image to extract regional image blocks; the HOG feature operator is then applied, mapping the three-dimensional matrix information into a one-dimensional vector, i.e. each image block is represented by a one-dimensional vector.
Similarly, the standard images are processed in the same way as the sample images. The one-dimensional vector of a sample image is compared with that of a standard image, and the recognition result follows from the distance between the two vectors: for example, if the distance is below a threshold, the category of the matched standard image is taken as the category of the sample image.
The distance between the two vectors can be computed in an SVM manner: the one-dimensional vectors are mapped to a higher dimension by the dimension mapping matrix, the distances between the mapped higher-dimensional vectors are compared, and the dimension mapping matrix is updated according to the predicted category of the sample image and its labeled category until the two agree, yielding the trained dimension mapping matrix.
In practical application, image information of the medical image to be processed is extracted; extracting a feature vector of the image information to obtain an initial feature vector; carrying out dimension mapping on the initial characteristic vector to obtain a target characteristic vector; and obtaining a target and a target state in the medical image to be processed according to the target feature vector by using a classifier obtained by pre-training. The method for obtaining the target feature vector by carrying out dimension mapping on the initial feature vector comprises the following steps: and carrying out dimension mapping on the initial characteristic vector through a dimension mapping matrix obtained by pre-training to obtain a target characteristic vector.
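A minimal sketch of this inference path, assuming scikit-image's HOG implementation; the HOG parameters, the shape of the dimension mapping matrix, and the nearest-standard-vector threshold rule are assumptions for illustration:

```python
from typing import Optional

import numpy as np
from skimage.feature import hog

def classify_patch(patch_gray: np.ndarray,
                   mapping: np.ndarray,          # trained dimension-mapping matrix
                   standards: dict,              # category -> mapped standard vector
                   threshold: float) -> Optional[str]:
    """HOG feature -> dimension mapping -> distance to standard vectors."""
    vec = hog(patch_gray, orientations=9,
              pixels_per_cell=(8, 8), cells_per_block=(2, 2))  # 1-D feature
    mapped = mapping @ vec                       # map to the higher dimension
    best, best_d = None, np.inf
    for label, std_vec in standards.items():
        d = np.linalg.norm(mapped - std_vec)
        if d < best_d:
            best, best_d = label, d
    return best if best_d < threshold else None  # below threshold: category assigned
```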
In this embodiment, the medical image to be processed is handled by the conventional recognition model generated from feature vectors and a classifier, which improves processing efficiency, while integrating the results of several recognition models in the end also guarantees accuracy.
In one embodiment, fusing the target and the target state obtained by each recognition model, includes: acquiring weights corresponding to the recognition models; and fusing the targets and the target states obtained by the recognition models based on the weights corresponding to the recognition models.
The weight of each model may be generated in advance: after training, the confidence of each recognition model is computed on test samples and normalized to give the weight of each model. In actual processing, the targets and target states obtained by the recognition models are then fused using these weights, for example by weighting each model's result by its weight to obtain the fused targets and target states.
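A minimal sketch of this weight-based fusion, where the score-dictionary format of each model's output is an assumption, not the patent's data layout:

```python
import numpy as np

def fuse_predictions(results: list, confidences: list) -> dict:
    """Weight each model's (target, state) scores by its normalized
    test-set confidence and sum them."""
    weights = np.asarray(confidences, dtype=float)
    weights = weights / weights.sum()          # normalize confidences into weights
    fused: dict = {}
    for model_result, w in zip(results, weights):
        for key, score in model_result.items():
            fused[key] = fused.get(key, 0.0) + w * score
    return fused

# Example with two models scoring the hypothetical target "scalpel":
fused = fuse_predictions(
    [{("scalpel", "in_use"): 0.9}, {("scalpel", "in_use"): 0.7}],
    confidences=[0.8, 0.6],
)
# The highest-scoring (target, state) per target is taken as the fused result.
```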
Specifically, as shown in fig. 10, before fusing the targets and the target states obtained by the recognition models based on the weights corresponding to the recognition models, the method further includes: acquiring processing time corresponding to the first recognition model and the second recognition model and recognition accuracy of the target; when the difference value between the processing time of the first recognition model and the processing time of the second recognition model is larger than the threshold value, taking the target and the target state corresponding to the second recognition model as the target and the target state obtained by fusion; when the difference value between the processing time of the first recognition model and the processing time of the second recognition model is smaller than or equal to the threshold value and the recognition accuracy of the first recognition model is greater than that of the second recognition model, continuing to fuse the targets and the target states obtained by the recognition models based on the weights corresponding to the recognition models; and when the difference value between the processing time of the first recognition model and the processing time of the second recognition model is smaller than or equal to the threshold value and the recognition accuracy of the first recognition model is smaller than or equal to the recognition accuracy of the second recognition model, taking the target and the target state corresponding to the second recognition model as the target and the target state obtained by fusion.
Specifically, the processor first obtains the processing time, the recognition accuracy of the target, and the weight corresponding to the first recognition model and the second recognition model. And then calculating whether the difference value of the processing time of the first recognition model and the processing time of the second recognition model is larger than a threshold value or not, and when the difference value of the processing time of the first recognition model and the processing time of the second recognition model is larger than the threshold value, taking the target and the target state corresponding to the second recognition model as the target and the target state obtained by fusion.
When the difference value between the processing time of the first recognition model and the processing time of the second recognition model is smaller than or equal to the threshold value, continuously judging whether the recognition accuracy of the target of the first recognition model is larger than that of the target of the second recognition model; when the recognition accuracy of the first recognition model is greater than that of the second recognition model, continuously fusing the targets and the target states obtained by the recognition models based on the weights corresponding to the recognition models; and when the recognition accuracy of the first recognition model is smaller than or equal to that of the second recognition model, taking the target and the target state corresponding to the second recognition model as the target and the target state obtained by fusion.
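The branching just described can be sketched as follows; the argument names are illustrative, and `weighted_fuse` stands in for the weight-based fusion sketched above:

```python
def select_recognition_result(time1: float, time2: float,
                              acc1: float, acc2: float,
                              result1: dict, result2: dict,
                              threshold: float, weighted_fuse) -> dict:
    """Decision logic of fig. 10: fall back to the second (conventional) model
    when the first model is much slower or no more accurate; otherwise fuse."""
    if time1 - time2 > threshold:
        # First model too slow: take the second model's result as the fusion.
        return result2
    if acc1 > acc2:
        # Comparable processing time and the first model is more accurate: fuse.
        return weighted_fuse(result1, result2)
    # Second model at least as accurate: take its result as the fusion.
    return result2
```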
In the embodiment, the processing time of each recognition model, the recognition accuracy of the target and the weight are combined to fuse the results of each recognition model, and the processing efficiency is improved on the premise of ensuring the accuracy.
In one embodiment, mapping the fused target and the target state to a virtual space corresponding to the augmented reality device for display includes: mapping the fused target and the target state to a three-dimensional space of the augmented reality equipment according to the conversion relation of a plurality of acquisition devices of the augmented reality equipment; and mapping the target and the target state in the three-dimensional space to a virtual space corresponding to the augmented reality equipment for display.
Specifically, referring to fig. 11, fig. 11 is a flowchart of the display step of the augmented reality device in an embodiment. The recognized fusion result is obtained first; since the multi-camera, multi-sensor conversion relationship, i.e. the mapping matrix, was calibrated when the augmented reality device left the factory, the recognition result in the two-dimensional image, i.e. the coordinates, categories, and states, is mapped through this mapping matrix into the AR space, i.e. the three-dimensional image.
Consistency conversion is then performed between the AR space and the virtual space scanned from the three-dimensional image; that is, the recognition result is fused into the AR virtual space. Finally, registration display is performed according to the recognition result, i.e. the coordinates and the recognized image blocks: the coordinates give the virtual display position, the category selects the corresponding display model, and the state indicates the severity of a wound or whether the surgical instruments match the surgical procedure, so that reminders and alarms can be issued, for example an alarm upon recognizing heavy bleeding. The display models are pre-stored in a model library, and different display models are used to display different categories of targets.
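The 2-D-to-3-D step above can be sketched as follows; the pinhole back-projection with a depth value and the matrix names are assumptions for illustration, since the patent only states that a factory-calibrated mapping matrix is applied:

```python
import numpy as np

def map_detection_to_ar(u: float, v: float, depth: float,
                        K_inv: np.ndarray, T_cam_to_ar: np.ndarray) -> np.ndarray:
    """Map a 2-D recognition coordinate into the AR three-dimensional space
    through calibrated matrices (K_inv: inverse intrinsics; T_cam_to_ar:
    4x4 camera-to-AR transform)."""
    p_cam = depth * (K_inv @ np.array([u, v, 1.0]))   # back-project the pixel
    p_hom = T_cam_to_ar @ np.append(p_cam, 1.0)       # camera -> AR space
    return p_hom[:3]
```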
In one embodiment, as shown in fig. 12, a medical image processing method is provided, which is exemplified by the application of the method to the processor in fig. 1, and includes the following steps:
s1202: the target and the target state are acquired based on the medical image display method in any one of the above embodiments.
Specifically, the targets and the manner of obtaining them are described above and are not repeated here.
S1204: and matching the target and the target state with a preset scene.
S1206: and generating alarm information according to the matching result.
Specifically, the preset scene may be a scene corresponding to a standard operation, set according to the surgical phase. For example, when the operation starts, the preset scene is a scene containing each standard operating device and its state; during the operation, the preset scene is a scene of a standard tissue site and its state, or a scene of the standard procedure.
Optionally, matching the targets and target states with the preset scene includes at least one of the following: matching the acquired operating equipment and its state with the standard operating equipment and its state in the preset scene; matching the acquired tissue site and its state with the standard tissue site and its state in the preset scene; and determining the surgical procedure from the acquired targets and target states and matching it with the standard procedure in the preset scene.
Specifically, referring to fig. 13, fig. 13 is a flowchart of the alarm-information generation step in an embodiment. In this embodiment the doctor, wearing the AR auxiliary glasses, enters the surgical-scene inspection stage and observes the whole operating room from multiple angles. Multi-target recognition is performed on the video captured in real time from the real scene; the targets may include the surgical robot system, the number of carts, various devices, and the operating table, and the multi-target recognition method may be the medical image display method described above. The recognized targets are compared with the robot system type, cart count, device shapes, and positioning manner specified for the preset surgery type in the preoperative plan, to decide whether a "surgical scene does not match surgical instruments" alarm is triggered; if so, alarms of different levels or types are generated according to the comparison result. Otherwise the doctor, wearing the AR glasses, approaches the operating table and carefully observes the patient's preoperative preparation from multiple angles; the exposed external tissue site or wound state is recognized from the initial medical images captured by the AR glasses and compared with the surgery type preset in the preoperative plan, to decide whether a "surgery type does not match wound site" alarm is triggered. If so, alarms of different levels or types are generated. Otherwise the preoperative environment inspection ends, the doctor starts the operation, the processor automatically starts the "surgical procedure monitoring" program, and a "surgical procedure error" alarm is triggered for out-of-order, missing, or extra steps during the operation.
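The inspection-and-alarm flow above can be sketched as follows; all field names and the plan format are illustrative, not taken from the patent:

```python
def check_surgical_scene(recognized: dict, plan: dict) -> list:
    """Compare recognized targets/states against the preoperative plan and
    collect alarm messages."""
    alarms = []
    # Scene inspection: robot system type, cart count, devices, operating table.
    if (recognized.get("robot_system") != plan.get("robot_system")
            or recognized.get("cart_count") != plan.get("cart_count")):
        alarms.append("surgical scene does not match surgical instruments")
    # Patient inspection: exposed tissue site / wound vs. planned surgery type.
    if recognized.get("wound_site") != plan.get("expected_wound_site"):
        alarms.append("surgery type does not match wound site")
    # An empty list means inspection passes and procedure monitoring can begin.
    return alarms
```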
In this embodiment, targets and target states are recognized before and during the operation to trigger different types of alarms, assisting the doctor in completing the operation safely, improving surgical accuracy at a relatively low computational cost, and laying a foundation for later intelligent surgery.
To make the present application fully understandable to those skilled in the art, refer to fig. 14 to 17. Before the operation starts the doctor puts on the AR glasses and sees the picture of fig. 14; through the glasses, the real world and the virtual world projected by the glasses are visible at the same time. First the system platform is started and a software interface is displayed in the glasses; the user only needs to tap lightly with a finger.
On the surgery-type selection interface, the operation to be performed is already known before the patient enters the operating room, so the surgery type is selected first; as shown in fig. 15, the surgery-type list can be scrolled up and down with a finger.
At the beginning of or during surgery, two-dimensional images, i.e. initial medical images, are captured through the AR glasses, and the processor then recognizes the targets and their states; for example, the instruments and the wound are recognized in fig. 16 in preparation for subsequent positioning.
Optionally, the surgical reminder menu may be displayed at the top of each interface, above the field of view, without obstructing the doctor's operating area. For example, in fig. 17 the upper left shows the previous surgical action, the middle the ongoing surgical action, and the right the next surgical action, reminding the doctor and facilitating the whole operation.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages that are not necessarily performed at the same moment but may run at different moments; their order is not necessarily sequential, and they may run in turn or alternately with other steps or with sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides a medical image display apparatus and a medical image processing apparatus for implementing the medical image display method and the medical image processing method mentioned above. The implementation scheme for solving the problem provided by the apparatus is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the medical image display apparatus and the medical image processing apparatus provided below can be referred to the limitations of the medical image display method and the medical image processing method in the foregoing, and are not described herein again.
In one embodiment, there is provided a medical image display apparatus including:
the system comprises an initial medical image acquisition module, an image enhancement module, a model processing module, a fusion module, and a display module, wherein the initial medical image acquisition module is used for acquiring an initial medical image captured by the augmented reality equipment;
the image enhancement module is used for carrying out image enhancement processing on the initial medical image to obtain a medical image to be processed;
the model processing module is used for identifying the medical image to be processed through at least one identification model obtained by pre-training to obtain a target and a target state in the medical image to be processed;
the fusion module is used for fusing the targets and the target states obtained by the recognition models;
and the display module is used for mapping the fused target and the target state to a virtual space corresponding to the augmented reality equipment for display.
In one embodiment, the image enhancement module is further configured to convert the initial medical image into a grayscale image; calculating a gradient histogram and a gray target value of the gray image; obtaining a brightness coefficient to be processed corresponding to the initial medical image based on the gradient histogram and the gray target value; calculating according to the initial medical image to obtain a corresponding color integral image to be processed; and carrying out image enhancement processing on the initial medical image according to the color integral image to be processed and the brightness coefficient to be processed to obtain the medical image to be processed.
In one embodiment, the image enhancement module is further configured to calculate a gradient histogram of the grayscale image and a grayscale target value; and obtaining a brightness coefficient to be processed corresponding to the initial medical image based on the gradient histogram and the gray target value.
In one embodiment, the image enhancement module is further configured to match the color integral map to be processed with a standard color integral map to determine the standard brightness coefficient corresponding to the brightness coefficient to be processed, and to update the pixel values in the initial medical image with the standard brightness coefficient to obtain the medical image to be processed.
In one embodiment, the model processing module is further configured to identify a medical image to be processed through a first identification model obtained through pre-training, so as to obtain a target and a target state corresponding to the first identification model in the medical image to be processed, where the first identification model is a deep learning model including an attention mechanism; and identifying the medical image to be processed through a second identification model obtained through pre-training to obtain a target and a target state corresponding to the second identification model in the medical image to be processed, wherein the second identification model is an identification model generated through the feature vector and the classifier.
In one embodiment, the model processing module is further configured to sequentially recognize the medical image to be processed through at least two layers of network structures of the first recognition model obtained through pre-training, so as to obtain a target and a target state in the medical image to be processed, where an input of a next network structure in the at least two layers of network structures is a target output by an adjacent previous network structure, and an output of each layer of network structure, which is not input to the next network structure, is the target and the target state in the medical image to be processed.
In one embodiment, the model processing module is further configured to extract image information of the medical image to be processed; extracting a feature vector of the image information to obtain an initial feature vector; carrying out dimension mapping on the initial characteristic vector to obtain a target characteristic vector; and obtaining a target and a target state in the medical image to be processed according to the target feature vector by using a classifier obtained by pre-training.
In one embodiment, the model processing module is further configured to perform dimension mapping on the initial feature vector through a dimension mapping matrix obtained through pre-training to obtain a target feature vector.
In one embodiment, the fusion module is further configured to obtain the weight corresponding to each recognition model, and to fuse the targets and target states obtained by the recognition models based on the weights corresponding to the recognition models.
In one embodiment, the fusion module is further configured to obtain the processing time of the first recognition model and of the second recognition model, together with each model's target recognition accuracy. When the difference between the processing time of the first recognition model and that of the second recognition model is greater than a threshold, the target and target state corresponding to the second recognition model are taken as the fused target and target state. When the difference is less than or equal to the threshold and the recognition accuracy of the first recognition model is greater than that of the second recognition model, the targets and target states obtained by the recognition models are fused based on the weights corresponding to the recognition models. When the difference is less than or equal to the threshold and the recognition accuracy of the first recognition model is less than or equal to that of the second recognition model, the target and target state corresponding to the second recognition model are taken as the fused target and target state.
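These fusion rules can be summarized in a short decision function. The result-dict format, the default weights, and the threshold value below are assumptions for illustration; the text specifies only the branching logic itself.

```python
def fuse(res1, res2, t1, t2, acc1, acc2, w1=0.6, w2=0.4, threshold=0.1):
    """Fuse the outputs of the first (deep, attention-based) and second
    (feature-vector + classifier) recognition models. res1/res2 are
    assumed dicts like {"target": "forceps", "scores": {state: conf}};
    t1/t2 are processing times, acc1/acc2 recognition accuracies."""
    if t1 - t2 > threshold:
        # The first model is too slow relative to the second:
        # fall back to the second model's result outright.
        return res2
    if acc1 <= acc2:
        # No accuracy advantage for the first model: use the second.
        return res2
    # Otherwise blend the per-state confidence scores by model weight.
    states = set(res1["scores"]) | set(res2["scores"])
    scores = {s: w1 * res1["scores"].get(s, 0.0) + w2 * res2["scores"].get(s, 0.0)
              for s in states}
    return {"target": res1["target"],
            "state": max(scores, key=scores.get),
            "scores": scores}
```

In this reading, the weighted branch only runs when the slower deep model has both acceptable latency and a demonstrable accuracy edge; otherwise the faster second model wins outright.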
In one embodiment, the display module is further configured to map the fused target and target state to a three-dimensional space of the augmented reality device according to the conversion relationship among the plurality of acquisition devices of the augmented reality device, and to map the target and target state in the three-dimensional space to a virtual space corresponding to the augmented reality device for display.
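A sketch of this mapping step, assuming a standard pinhole back-projection followed by a homogeneous camera-to-virtual-space transform. The intrinsic matrix K and extrinsic transform T are placeholder calibration values; the actual conversion relationship among the acquisition devices is not disclosed.

```python
import numpy as np

def to_virtual_space(point_px, depth, K, T_cam_to_virtual):
    """Back-project a detected target's pixel (u, v) at a given depth into
    the camera frame, then apply the acquisition-device-to-virtual-space
    transform obtained from device calibration."""
    u, v = point_px
    # Pixel -> 3-D camera coordinates via the intrinsic matrix K.
    xyz_cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Camera frame -> virtual space via a homogeneous 4x4 transform.
    xyz1 = np.append(xyz_cam, 1.0)
    return (T_cam_to_virtual @ xyz1)[:3]

# Example (hypothetical) calibration values.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
T[:3, 3] = [0.0, 0.0, -0.5]   # example extrinsic offset
print(to_virtual_space((350, 260), 0.4, K, T))
```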
In one embodiment, there is provided a medical image processing apparatus including:
the identification module is used for acquiring a target and a target state based on the medical image display device in any one of the embodiments;
the matching module is used for matching the target and the target state with a preset scene;
and the alarm module is used for generating alarm information according to the matching result.
In one embodiment, the matching module matches the target and the target state with a preset scene in at least one of the following ways: matching the acquired operating equipment and the state of the operating equipment with the standard operating equipment and the state of the standard operating equipment in the preset scene; matching the acquired tissue part and the state of the tissue part with the standard tissue part and the state of the standard tissue part in the preset scene; and determining an operation process from the acquired target and target state and matching the operation process with the standard process in the preset scene.
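A minimal sketch of scene matching and alarm generation, assuming the preset scene is a table of standard target states; the dictionary PRESET_SCENE, the function match_and_alarm, and the target/state names are hypothetical.

```python
PRESET_SCENE = {            # hypothetical standard scene
    "scalpel": "idle",
    "forceps": "clamped",
    "incision": "closed",
}

def match_and_alarm(observations: dict) -> list:
    """Compare each recognized target/state pair against the preset
    scene and emit an alarm message for every mismatch."""
    alarms = []
    for target, state in observations.items():
        expected = PRESET_SCENE.get(target)
        if expected is None:
            alarms.append(f"unexpected target in scene: {target}")
        elif state != expected:
            alarms.append(f"{target}: state '{state}' deviates from "
                          f"standard state '{expected}'")
    return alarms

print(match_and_alarm({"scalpel": "active", "incision": "closed"}))
```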
The modules in the medical image display apparatus and the medical image processing apparatus described above may be implemented wholly or partially in software, hardware, or a combination thereof. Each module may be embedded in, or independent of, a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke it and execute the operations corresponding to the module.
In one embodiment, a computer device is provided, which may be a terminal and whose internal structure may be as shown in fig. 18. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected via a system bus, while the communication interface, the display unit, and the input device are connected to the system bus via the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used for exchanging information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication may be realized through Wi-Fi, a mobile cellular network, NFC (Near Field Communication), or other technologies. The computer program, when executed by the processor, implements a medical image display method and a medical image processing method. The display unit of the computer device is used for forming a visible picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display or an electronic ink display. The input device of the computer device may be a touch layer covering the display screen, a key, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the structure shown in fig. 18 is merely a block diagram of a portion of the structure relevant to the disclosed aspects and does not limit the computer devices to which those aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, carries out the steps in the method embodiments described above.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase-Change Memory (PCM), graphene memory, and the like. The volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any combination of them that contains no contradiction should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (12)

1. A method of medical image display, the method comprising:
acquiring an initial medical image acquired by augmented reality equipment;
performing image enhancement processing on the initial medical image to obtain a medical image to be processed;
recognizing the medical image to be processed through at least one recognition model obtained by pre-training to obtain a target and a target state in the medical image to be processed;
fusing the target and the target state obtained by each recognition model;
and mapping the fused target and the target state to a virtual space corresponding to the augmented reality equipment for display.
2. The method according to claim 1, wherein the image enhancement processing on the initial medical image to obtain a medical image to be processed comprises:
converting the initial medical image into a grayscale image; calculating a gradient histogram and a grayscale target value of the grayscale image;
obtaining a luminance coefficient to be processed corresponding to the initial medical image based on the gradient histogram and the grayscale target value;
calculating a corresponding color integral map to be processed from the initial medical image;
and performing image enhancement processing on the initial medical image according to the color integral map to be processed and the luminance coefficient to be processed to obtain a medical image to be processed.
3. The method according to claim 2, wherein the performing image enhancement processing on the initial medical image according to the color integral map to be processed and the luminance coefficient to be processed to obtain a medical image to be processed comprises:
matching the color integral map to be processed with a standard color integral map to determine a standard luminance coefficient corresponding to the luminance coefficient to be processed;
and updating the pixel values in the initial medical image with the standard luminance coefficient to obtain the medical image to be processed.
4. The method according to any one of claims 1 to 3, wherein the recognizing the medical image to be processed through at least one recognition model obtained through pre-training to obtain the target and the target state in the medical image to be processed comprises:
recognizing the medical image to be processed through a first recognition model obtained through pre-training to obtain a target and a target state corresponding to the first recognition model in the medical image to be processed, wherein the first recognition model is a deep learning model comprising an attention mechanism;
and recognizing the medical image to be processed through a second recognition model obtained through pre-training to obtain a target and a target state corresponding to the second recognition model in the medical image to be processed, wherein the second recognition model is a recognition model generated from a feature vector and a classifier.
5. The method according to claim 4, wherein the fusing the target and the target state obtained by each of the recognition models comprises:
acquiring the weight corresponding to each recognition model;
and fusing the target and the target state obtained by each recognition model based on the weight corresponding to the recognition model.
6. The method according to claim 4, wherein before fusing the target and the target state obtained by each recognition model based on the weight corresponding to the recognition model, the method further comprises:
acquiring processing time corresponding to the first recognition model and the second recognition model and recognition accuracy of a target;
when the difference value between the processing time of the first recognition model and the processing time of the second recognition model is larger than a threshold value, taking the target and the target state corresponding to the second recognition model as a target and a target state obtained by fusion;
when the difference value between the processing time of the first recognition model and the processing time of the second recognition model is smaller than or equal to the threshold value and the recognition accuracy of the first recognition model is greater than that of the second recognition model, continuing to fuse the target and the target state obtained by each recognition model based on the weight corresponding to the recognition model;
and when the difference value between the processing time of the first recognition model and the processing time of the second recognition model is smaller than or equal to the threshold value, and the recognition accuracy of the first recognition model is smaller than or equal to the recognition accuracy of the second recognition model, taking the target and the target state corresponding to the second recognition model as the target and the target state obtained by fusion.
7. The method according to claim 1, wherein the mapping the fused target and the target state to be displayed in a virtual space corresponding to the augmented reality device comprises:
mapping the fused target and the target state to a three-dimensional space of the augmented reality equipment according to the conversion relation of a plurality of acquisition devices of the augmented reality equipment;
and mapping the target and the target state in the three-dimensional space to a virtual space corresponding to the augmented reality equipment for display.
8. A medical image processing method, characterized in that the medical image processing method comprises:
acquiring a target and a target state based on the medical image display method according to any one of claims 1 to 7;
matching the target and the target state with a preset scene;
and generating alarm information according to the matching result.
9. The medical image processing method according to claim 8, wherein the matching the target and the target state with a preset scene comprises at least one of:
matching the acquired operating equipment and the state of the operating equipment with standard operating equipment and the state of the standard operating equipment in a preset scene;
matching the acquired tissue part and the state of the tissue part with a standard tissue part and the state of the standard tissue part in a preset scene; and
determining an operation process according to the acquired target and the target state, and matching the operation process with a standard process in a preset scene.
10. An image display system, characterized in that the system comprises:
the augmented reality equipment is used for acquiring an initial medical image and displaying display information obtained by processing of the processor;
a processor configured to perform the steps of the method of any one of claims 1 to 7 or claims 8 to 9 to obtain display information of the virtual space corresponding to the augmented reality device.
11. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 7, or of claim 8 or 9.
12. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, carries out the steps of the method of any one of claims 1 to 7, or of claim 8 or 9.
CN202211418663.6A 2022-11-14 2022-11-14 Medical image display method, medical image processing method, and image display system Pending CN115700759A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211418663.6A CN115700759A (en) 2022-11-14 2022-11-14 Medical image display method, medical image processing method, and image display system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211418663.6A CN115700759A (en) 2022-11-14 2022-11-14 Medical image display method, medical image processing method, and image display system

Publications (1)

Publication Number Publication Date
CN115700759A (en) 2023-02-07

Family

ID=85121045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211418663.6A Pending CN115700759A (en) 2022-11-14 2022-11-14 Medical image display method, medical image processing method, and image display system

Country Status (1)

Country Link
CN (1) CN115700759A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116563299A (en) * 2023-07-12 2023-08-08 之江实验室 Medical image screening method, device, electronic device and storage medium
CN116563299B (en) * 2023-07-12 2023-09-26 之江实验室 Medical image screening method, device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination