WO2024104388A1 - Ultrasound image processing method and apparatus, electronic device and storage medium - Google Patents

Ultrasound image processing method and apparatus, electronic device and storage medium

Info

Publication number
WO2024104388A1
Authority
WO
WIPO (PCT)
Prior art keywords
contour
operation execution
execution target
image
target
Prior art date
Application number
PCT/CN2023/131822
Other languages
English (en)
French (fr)
Inventor
刘恩毅
贺光琳
Original Assignee
杭州海康慧影科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康慧影科技有限公司
Publication of WO2024104388A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/12: Edge-based segmentation
    • G06T7/13: Edge detection

Definitions

  • the present application relates to the field of ultrasonic images, and in particular to an ultrasonic image processing method, device, electronic equipment and storage medium.
  • Ultrasonic images are formed from the signals reflected and scattered by the detected target: through analog transmit/receive, beamforming and other processing, the signal amplitude is represented by different grayscale values in time sequence to form an image. Ultrasonic images are widely used, for example in product flaw detection and surgical assistance.
  • ultrasound images need to be viewed and identified manually, or the parts inside the objects need to be identified manually.
  • the ultrasound images include human body parts and surgical instruments, and doctors need to use their naked eyes to distinguish the human body parts and surgical instruments in the ultrasound images in order to perform accurate operations.
  • the accuracy of human eyes in identifying objects in ultrasound images is low.
  • the purpose of the embodiments of the present application is to provide an ultrasonic image processing method, device, electronic device and storage medium to accurately identify targets in ultrasonic images.
  • the specific technical solution is as follows:
  • an ultrasound image processing method comprising:
  • the ultrasonic image includes an operated target and an operation execution target, and an entity corresponding to the operation execution target performs a preset operation on the entity corresponding to the operated target;
  • the outline of the operation execution target is displayed in the ultrasound image based on the outline position information.
  • the step of performing contour detection on the operation execution target in the ultrasound image to obtain contour position information of the operation execution target includes:
  • the ultrasonic image is binarized to obtain a binarized ultrasonic image; and the binarized ultrasonic image is morphologically processed to obtain contour position information of the operation execution target.
  • the step of displaying the outline of the operation execution target in the ultrasound image based on the outline position information includes:
  • the outline of the operation execution target is displayed in the outline area of the operation execution target after the enhancement process.
  • the step of performing image enhancement processing on the contour area to obtain the contour area of the operation execution target after the enhancement processing includes:
  • grayscale enhancement processing is performed on the contour area to obtain an enhanced contour area of the operation execution target.
  • the step of performing image enhancement processing on the contour area to obtain the contour area of the operation execution target after the enhancement processing includes:
  • mapping the ultrasound image to a preset type of color space to obtain a color ultrasound image; or, based on the grayscale value of the contour area, performing grayscale enhancement processing on the contour area, and mapping the grayscale-enhanced ultrasound image to a preset type of color space to obtain a color ultrasound image;
  • a color enhancement process is performed on a color image region corresponding to the contour region in the color ultrasound image to obtain the contour region of the operation execution target after the enhancement process.
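  • The enhancement paths described above (grayscale enhancement of the contour area, mapping into a color space, and color enhancement of the corresponding color image region) can be illustrated with a minimal pure-Python sketch. The linear contrast stretch, the neutral-gray color mapping, and the red-channel boost are illustrative assumptions, not the patent's specific transforms:

```python
def stretch_gray(region, lo=0, hi=255):
    """Linearly stretch the grayscale values of a contour region to [lo, hi]."""
    mn, mx = min(region), max(region)
    if mx == mn:
        return [lo for _ in region]
    return [lo + (v - mn) * (hi - lo) // (mx - mn) for v in region]

def gray_to_color(v):
    """Map one grayscale value into an RGB color space (neutral gray)."""
    return (v, v, v)

def color_enhance(pixel, boost=80):
    """Color-enhance a contour pixel, here by boosting the red channel."""
    r, g, b = pixel
    return (min(255, r + boost), g, b)
```

  For example, `stretch_gray([10, 10, 200])` expands a low-contrast contour region to the full grayscale range, and `color_enhance` then makes those pixels stand out once the image has been mapped to color.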
  • the step of displaying the outline of the operation execution target in the outline area of the operation execution target after the enhancement processing includes:
  • the outline of the operation execution target is highlighted in the outline area of the operation execution target after enhancement processing.
  • the step of displaying the outline of the operation execution target in the ultrasound image based on the outline position information includes:
  • the contour of the operation execution target is highlighted in the ultrasound image.
  • the step of acquiring an ultrasound image includes:
  • the ultrasound video stream is parsed to obtain video frames as ultrasound images.
  • an ultrasonic image processing device comprising:
  • An ultrasonic image acquisition module used to acquire an ultrasonic image, wherein the ultrasonic image includes an operated target and an operation execution target, and an entity corresponding to the operation execution target performs a preset operation on the entity corresponding to the operated target;
  • a contour position information acquisition module used to perform contour detection on the operation execution target in the ultrasound image to obtain contour position information of the operation execution target
  • a contour display module is used to display the contour of the operation execution target in the ultrasound image based on the contour position information.
  • the contour position information acquisition module includes:
  • a contour position information acquisition submodule configured to input the ultrasound image into a pre-trained contour segmentation model, perform contour segmentation on the ultrasound image based on image features of the ultrasound image, and output contour position information of the operation execution target;
  • or configured to perform binarization processing on the ultrasonic image to obtain a binarized ultrasonic image, and perform morphological processing on the binarized ultrasonic image to obtain contour position information of the operation execution target.
  • the outline display module includes:
  • a contour area determination submodule configured to determine a contour area of the operation execution target in the ultrasound image based on the contour position information
  • a contour area display submodule used for performing image enhancement processing on the contour area to obtain the contour area of the operation execution target after the enhancement processing
  • the first display submodule is used to display the outline of the operation execution target in the outline area of the operation execution target after the enhancement processing.
  • the outline area display submodule includes:
  • the first contour region acquisition unit is used to perform grayscale enhancement processing on the contour region based on the grayscale value of the contour region to obtain the enhanced contour region of the operation execution target.
  • the outline area display submodule includes:
  • a color ultrasound image acquisition unit configured to map the ultrasound image to a preset type of color space to obtain a color ultrasound image; or, based on the grayscale value of the contour area, perform grayscale enhancement processing on the contour area, and map the ultrasound image after the grayscale enhancement processing to a preset type of color space to obtain a color ultrasound image;
  • the second contour area acquisition unit is used to perform color enhancement processing on the color image area corresponding to the contour area in the color ultrasound image to obtain the contour area of the operation execution target after the enhancement processing.
  • the outline display submodule includes:
  • the outline display unit is used to highlight the outline of the operation execution target in the outline area of the operation execution target after enhancement processing based on the outline position information.
  • the outline display module includes:
  • the second display submodule is used to highlight the outline of the operation execution target in the ultrasound image based on the outline position information.
  • the ultrasound image acquisition module includes:
  • An ultrasound video stream acquisition submodule is used to acquire an ultrasound video stream collected by an ultrasound device
  • the ultrasound image acquisition submodule is used to parse the ultrasound video stream to obtain video frames as ultrasound images.
  • an electronic device including:
  • a memory, used to store a computer program;
  • a processor, used to implement any method described in the first aspect when executing the program stored in the memory.
  • an embodiment of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, any method described in the first aspect is implemented.
  • an embodiment of the present application provides a computer program product comprising instructions, and when the computer program product is executed by a computer, it implements any method described in the first aspect above.
  • the electronic device can obtain an ultrasonic image, wherein the ultrasonic image includes an operated target and an operation execution target, the entity corresponding to the operation execution target performs a preset operation on the entity corresponding to the operated target, and the contour of the operation execution target in the ultrasonic image is detected to obtain the contour position information of the operation execution target, and the contour of the operation execution target is displayed in the ultrasonic image based on the contour position information. Since the contour of the operation execution target in the ultrasonic image is detected in the case where the operated target and the operation execution target are included in the ultrasonic image, the contour of the operation execution target in the ultrasonic image can be displayed in the ultrasonic image based on the obtained contour position information of the operation execution target.
  • the operation execution target can be distinguished according to the contour of the operation execution target, which can improve the accuracy of the operator's identification of the operation execution target, and then the operator can accurately use the entity corresponding to the operation execution target to operate the entity corresponding to the operated target.
  • displaying the contour of the operation execution target can improve the accuracy of the operator's identification of the operation execution target, and then the operator can accurately use the entity corresponding to the operation execution target to operate the entity corresponding to the operated target.
  • FIG1 is a flow chart of an ultrasonic image processing method provided in an embodiment of the present application.
  • FIG2 is a schematic diagram of a structure of an ultrasonic diagnostic system provided in an embodiment of the present application.
  • FIG3 is a functional structure diagram of an ultrasonic diagnostic system provided in an embodiment of the present application.
  • FIG4 is a schematic diagram of a structure of an endoscope system provided in an embodiment of the present application.
  • FIG5 is a schematic diagram of a process of training and testing an ultrasound image contour segmentation model based on the embodiment shown in FIG1 ;
  • FIG6 is a schematic diagram of a structure of a target segmentation network based on the operation of the embodiment shown in FIG1 ;
  • FIG7 is a schematic diagram of a flow chart of obtaining contour position information of an operation execution target based on the embodiment shown in FIG1 ;
  • FIG8 is a specific flow chart of obtaining a contour segmentation model based on the embodiment shown in FIG1 ;
  • FIG9(a) is a schematic diagram of an ultrasound image including an operation execution target based on the embodiment shown in FIG1 ;
  • FIG9(b) is another schematic diagram of an ultrasound image including an operation execution target based on the embodiment shown in FIG1 ;
  • FIG10 is a specific flow chart of step S802 in the embodiment shown in FIG8 ;
  • FIG11 is a flow chart showing an outline of an operation execution target based on the embodiment shown in FIG1 ;
  • FIG12 is a flow chart of performing color enhancement processing on a contour area based on the embodiment shown in FIG1 ;
  • FIG13 is a specific flow chart of image enhancement processing based on the embodiment shown in FIG1 ;
  • FIG14 is a schematic diagram of a method for displaying an ultrasound image based on the embodiment shown in FIG1 ;
  • FIG15 is a specific flow chart of step S101 in the embodiment shown in FIG1 ;
  • FIG16 is a schematic diagram of a structure of an ultrasonic image processing system provided in an embodiment of the present application.
  • FIG17 is a specific flow chart of the ultrasound image processing method provided in an embodiment of the present application.
  • FIG18 is a schematic diagram of the structure of an ultrasonic image processing device provided in an embodiment of the present application.
  • FIG19 is a schematic diagram of the structure of an electronic device provided in an embodiment of the present application.
  • embodiments of the present application provide an ultrasound image processing method, device, system, electronic device, computer readable storage medium, and computer program product.
  • an ultrasound image processing method provided by embodiments of the present application is introduced.
  • An ultrasonic image processing method provided in an embodiment of the present application can be applied to any electronic device that needs to perform ultrasonic image processing, for example an ultrasonic diagnostic system or other ultrasonic image processing equipment, which is not specifically limited here. For clarity of description, it is hereinafter referred to as the electronic device.
  • an ultrasonic image processing method includes:
  • the ultrasound image includes an operated target and an operation execution target, and the entity corresponding to the operation execution target performs a preset operation on the entity corresponding to the operated target.
  • the electronic device can obtain an ultrasonic image, wherein the ultrasonic image includes an operated target and an operation execution target, the entity corresponding to the operation execution target performs a preset operation on the entity corresponding to the operated target, and the contour of the operation execution target in the ultrasonic image is detected to obtain the contour position information of the operation execution target, and the contour of the operation execution target is displayed in the ultrasonic image based on the contour position information. Since the contour of the operation execution target in the ultrasonic image can be detected in the case where the operated target and the operation execution target are included in the ultrasonic image, the contour of the operation execution target can be displayed in the ultrasonic image based on the obtained contour position information of the operation execution target.
  • the operation execution target can be distinguished according to the contour of the operation execution target, which can improve the accuracy of the operator's identification of the operation execution target, and thus the operator can accurately use the entity corresponding to the operation execution target to operate the entity corresponding to the operated target.
  • Ultrasound images can display various targets detected by ultrasound. During medical operations, operators can use ultrasound images to identify various targets and determine their relative positions based on the positions of those targets. Ultrasound images are therefore increasingly widely used.
  • when the operator uses an endoscopic device to perform surgery, he can only see information on the surface of the tissue through the display, while lesions, important nerves or target locations are often buried deep beneath fascia and tissue; the operator cannot see this deep information, which creates a risk of accidentally injuring key anatomical structures when surgical instruments are used to separate tissue. Therefore, ultrasound images can help the operator locate target locations that cannot be seen with conventional visible-light vision, as well as the relative position between the surgical instrument and the target location, thereby providing the operator with real-time surgical navigation.
  • the present application provides an ultrasound image processing method that can improve the accuracy of the operator in identifying surgical instrument targets in ultrasound images.
  • the electronic device may acquire an ultrasound image, wherein the ultrasound image may be any video frame in an ultrasound video stream collected by the ultrasound device.
  • the ultrasound image may include an operated target and an operation execution target, and the entity corresponding to the operation execution target performs a preset operation on the entity corresponding to the operated target.
  • the preset operation may include a detection operation, a cutting operation, a suturing operation, etc., which are not specifically limited here.
  • the electronic device may be an ultrasound diagnostic system, which may acquire ultrasound images.
  • the ultrasound diagnostic system may include an ultrasound probe, an ultrasound system host, an operating device, a display device, and a storage device.
  • the ultrasound probe can image the observed part of the subject, either from the body surface or by being inserted into the subject according to the application scenario, and generate ultrasound image data.
  • the ultrasound system host performs the prescribed processing on the ultrasound image data signal transmitted by the ultrasound probe, and centrally controls the overall operation of the ultrasound diagnostic system.
  • the display device processes and displays the ultrasound image and related status information corresponding to the ultrasound image data of the ultrasound system host.
  • the storage device stores the ultrasound image corresponding to the ultrasound image data of the ultrasound system host.
  • the ultrasound diagnosis system includes an ultrasound probe 201, an ultrasound system host 202, an operation interface 203, and an ultrasound display screen 204.
  • the operator can use the ultrasound probe 201 to take ultrasound images, either on the surface of the subject or inserted into the subject, and then the ultrasound diagnosis system can obtain the ultrasound image, output it to the ultrasound display screen 204, or store it in a storage device.
  • the operated target included in the ultrasound image can be a lesion site of the subject's detection site, and the operation execution target can be a detection device for detecting the subject's detection site.
  • the electronic device acquires an ultrasonic image, and in step S102, it can perform contour detection on the operation execution target in the ultrasonic image, and then obtain contour position information of the operation execution target.
  • the contour position information is used to indicate the contour position of the operation execution target.
  • Contour detection is the process of extracting the contour of a target from an image containing both the target and the background, using certain techniques and methods while ignoring the texture of the background and target and the influence of noise. Therefore, when an electronic device acquires an ultrasonic image, it can extract the contour of the operation execution target in the ultrasonic image through contour detection.
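  • The core idea of contour detection on a binary mask can be sketched in a few lines of Python. This is a generic illustration (a foreground pixel is on the contour if any 4-neighbour is background), not the patent's specific algorithm:

```python
def contour_pixels(mask):
    """Return (row, col) positions of foreground pixels that touch the
    background, i.e. the contour of each object in a binary mask."""
    h, w = len(mask), len(mask[0])
    contour = []
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue
            # A pixel is on the contour if any 4-neighbour is background;
            # pixels on the image border count as touching background.
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(not (0 <= nr < h and 0 <= nc < w) or not mask[nr][nc]
                   for nr, nc in neighbours):
                contour.append((r, c))
    return contour
```

  On a solid 3x3 foreground block, only the eight border pixels are reported as contour; the interior pixel is not.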
  • the electronic device may use a conventional algorithm to perform contour detection on the operation execution target in the ultrasound image to obtain the contour position information of the operation execution target.
  • the traditional algorithm is a method that can perform contour detection based on the position where the gray value in the ultrasound image changes sharply.
  • the electronic device can use a deep learning method to perform contour detection on the operation execution target in the ultrasound image to obtain contour position information of the operation execution target.
  • the deep learning method is a method that can use a convolutional neural network to learn image features and then train a contour segmentation model to perform contour detection on the operation execution target in the ultrasound image.
  • the ultrasound system host can process the image data received from the ultrasound probe, perform contour detection on the operation execution target in the ultrasound image, and obtain contour position information of the operation execution target.
  • the ultrasound probe includes a signal transceiver unit, a processing unit, and an operation unit
  • the ultrasound system host includes an image input unit, an ultrasound image processing unit, an ultrasound intelligent processing unit, a video encoding unit, a control unit, and an operation unit.
  • the display device and the storage device are external devices.
  • the image input unit in the ultrasound system host can receive the signal sent by the ultrasound probe, perform analog transmit/receive, beamforming, signal conversion and other processing on the received signal, and transmit the result to the ultrasound image processing unit.
  • the ultrasound image processing unit performs ISP (Image Signal Processor) operations on the ultrasound image of the image input unit, which may include brightness conversion, sharpening, contrast enhancement, etc.
  • the ultrasound image processing unit can transmit the processed ultrasound image to the ultrasound intelligent processing unit, the video encoding unit or the display device.
  • the ultrasonic intelligent processing unit performs intelligent analysis on the ultrasonic image processed by the ultrasonic image processing unit, which may include target recognition, detection, and segmentation based on deep learning.
  • the ultrasonic intelligent processing unit may transmit the processed ultrasonic image to the ultrasonic image processing unit or the video encoding unit.
  • the ultrasonic image processing unit may process the ultrasonic image processed by the ultrasonic intelligent processing unit in a manner including contour enhancement, brightness conversion, frame overlap and scaling.
  • the video encoding unit encodes and compresses the ultrasonic image processed by the ultrasonic image processing unit or the ultrasonic image processed by the ultrasonic intelligent processing unit, and transmits it to the storage device.
  • the control unit controls various modules of the ultrasound system, which may include interface operation mode, image processing mode, ultrasound measurement mode and video encoding mode.
  • the operation unit may include switches, buttons and touch panels, etc., receive external indication signals, and output the received indication signals to the control unit.
  • when inspecting a subject, an operator may use an endoscope as a detection instrument, and the ultrasound image acquired by the ultrasound diagnosis system then includes an endoscope target.
  • the endoscope may be inserted into the body of the subject to photograph the inspected part of the subject, and the photographed in-vivo image may be output to an external display device and storage device.
  • the structure of the endoscope system may include an endoscope 401, a light source 402, a system host 403, a display device 404, and a storage device 405.
  • the endoscope 401 is inserted into a subject to photograph the detection part of the subject and generate image data.
  • the light source 402 may provide illumination light emitted from the front end of the endoscope device.
  • the system host 403 performs prescribed image-related operations on the image data generated by the endoscope and centrally controls the overall operation of the endoscope system.
  • the display device displays an image corresponding to the image data of the endoscope system host.
  • the storage device stores images corresponding to the image data of the endoscope system host. However, it is difficult to accurately and clearly identify the endoscope target in the image.
  • the ultrasonic diagnostic system may perform contour detection on the endoscopic target in the ultrasonic image, and further obtain the contour position information of the endoscopic target.
  • step S103 when the electronic device obtains the contour position information, in step S103, the contour of the operation execution target may be displayed in the ultrasound image based on the contour position information.
  • the ultrasonic image is composed of pixels, and the contour position information of the operation execution target can be composed of contour pixel information of the operation execution target. Then, the electronic device can display the contour of the operation execution target in the ultrasonic image based on the pixel information.
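  • Displaying the contour based on pixel information amounts to painting the contour pixels over the frame. A minimal sketch, assuming a grayscale frame and a green highlight colour (the colour choice is an assumption, not specified by the patent):

```python
HIGHLIGHT = (0, 255, 0)  # assumed highlight colour (green)

def draw_contour(gray_image, contour_positions, colour=HIGHLIGHT):
    """Convert a grayscale frame to RGB and paint the contour pixels of the
    operation execution target in a highlight colour."""
    rgb = [[(v, v, v) for v in row] for row in gray_image]
    for r, c in contour_positions:
        rgb[r][c] = colour
    return rgb
```

  The non-contour pixels keep their original grayscale appearance, so the operator still sees the underlying ultrasound image around the highlighted outline.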
  • the ultrasonic diagnostic system can perform contour detection on the surgical instrument target in the ultrasonic image, obtain the contour pixel information of the surgical instrument target, and display the contour of the surgical instrument target in the ultrasonic image based on the contour pixel information of the surgical instrument target, so that the operator can view the relative position relationship between the surgical instrument target and the uterine fundus target, operate the surgical instrument to separate the adhesion, and reduce the risk of accidental injury during surgery.
  • the contour position information of the operation execution target can be obtained, and based on the contour position information the contour of the operation execution target can be displayed in the ultrasound image, so that each target in the ultrasound image can be accurately identified and the operator can accurately use the operation execution target to operate the operated target.
  • the step of performing contour detection on the operation execution target in the ultrasound image to obtain contour position information of the operation execution target may include:
  • the ultrasonic image is binarized to obtain a binarized ultrasonic image; and the binarized ultrasonic image is morphologically processed to obtain contour position information of the operation execution target.
  • the electronic device may perform contour detection on the operation execution target in the ultrasonic image to obtain contour position information of the operation execution target.
  • the electronic device can input the ultrasound image into a pre-trained contour segmentation model based on a deep learning method, perform contour segmentation on the ultrasound image based on the image features of the ultrasound image, and then output the contour position information of the operation execution target.
  • the contour segmentation model can include a forward reasoning framework, which is not specifically limited here. After the ultrasound image is segmented, the segmented results can be subjected to morphological processing, interference removal processing, etc., which are all reasonable.
  • the contour segmentation of ultrasound images can be achieved in two stages: training stage and testing stage.
  • the training stage is used to obtain the contour segmentation model
  • the testing stage uses the contour segmentation model to perform contour segmentation on the ultrasound image.
  • network training can be performed based on the training image and label, loss function, and network structure.
  • a segmentation model i.e., a contour segmentation model
  • the contour segmentation model testing process can be performed.
  • the contour segmentation model obtained in the training process can be used to perform network reasoning on the test image to obtain a contour segmentation result.
  • the contour segmentation result can be subjected to morphological processing and interference removal processing to obtain the contour position information of the operation execution target in the ultrasound image.
  • the network structure is a segmentation network structure based on deep learning, which can be composed of an encoding network and a decoding network.
  • the encoding network can be composed of convolution and downsampling
  • the decoding network can be composed of convolution and upsampling.
  • it can be a Unet network structure, which is not specifically limited here.
  • the training image is an endoscopic image, that is, an ultrasound image including an endoscopic target. Then the endoscopic image can be input into the network structure, and after being processed by the encoding network and the decoding network, the segmentation result of the endoscopic target is obtained.
  • an ultrasound image includes a surgical instrument target.
  • the ultrasound image can be input into a pre-trained contour segmentation model.
  • the electronic device can perform contour segmentation on the ultrasound image based on the image features of the ultrasound image to obtain a contour segmentation result, and then output the contour position information of the surgical instrument target.
  • the electronic device can use the grayscale value changes in the ultrasonic image to perform contour detection on the operation execution target.
  • the ultrasonic image can be binarized to obtain a binary ultrasonic image, and then the binarized ultrasonic image can be morphologically processed to obtain the contour position information of the operation execution target.
  • if the ultrasound image is a color image, it first needs to be converted into a grayscale image.
  • the grayscale image can be converted into a binary image according to a preset threshold: the grayscale of pixels greater than the preset threshold is set to the grayscale maximum, and the grayscale of pixels less than or equal to the preset threshold is set to the grayscale minimum, thereby obtaining a binary ultrasound image.
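  • The grayscale conversion and threshold binarization described above can be sketched as follows; the BT.601 luma weights for the grayscale conversion are a common convention assumed here, not mandated by the patent:

```python
def to_gray(rgb_image):
    """Convert a colour image to grayscale using integer BT.601 luma
    weights (299, 587, 114 out of 1000)."""
    return [[(299 * r + 587 * g + 114 * b) // 1000 for r, g, b in row]
            for row in rgb_image]

def binarize(gray_image, threshold=128, lo=0, hi=255):
    """Set pixels above the threshold to the grayscale maximum and the
    rest to the minimum, yielding a binary ultrasound image."""
    return [[hi if v > threshold else lo for v in row] for row in gray_image]
```

  A fixed global threshold is shown for simplicity; as the text notes, a local-mean binarization method could be substituted without changing the downstream morphological steps.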
  • the binarization process can adopt the local mean binarization method, which is reasonable.
  • the electronic device obtains a binary ultrasonic image, and can perform morphological processing on the binary ultrasonic image to make the contour of the obtained operation execution target clearer, thereby obtaining contour position information of the operation execution target.
  • the morphological processing may include erosion, dilation, top-hat transformation and bottom-hat (black-hat) transformation, which are not specifically limited here.
  • the ultrasonic diagnostic system can binarize the ultrasound image to obtain a binary ultrasound image, erode the binarized ultrasound image based on a 9x9 filter kernel to obtain an eroded binary ultrasound image, and dilate the eroded binary ultrasound image based on a 9x9 filter kernel to obtain the contour position information of the operation execution target.
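The binarize-erode-dilate pipeline above can be sketched in plain numpy. This is an illustrative sketch, not part of the patent disclosure: the function names are made up, and the demo uses a 3x3 kernel on a tiny image for brevity in place of the 9x9 kernel of the example.

```python
import numpy as np

def binarize(image, threshold=128, vmax=255):
    """Pixels above the threshold become the grayscale maximum, the rest 0."""
    return np.where(image > threshold, vmax, 0).astype(np.uint8)

def erode(binary, k=9):
    """Morphological erosion with a k x k square structuring element (min filter)."""
    pad = k // 2
    padded = np.pad(binary, pad, mode="edge")
    out = np.empty_like(binary)
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def dilate(binary, k=9):
    """Morphological dilation with a k x k square structuring element (max filter)."""
    pad = k // 2
    padded = np.pad(binary, pad, mode="edge")
    out = np.empty_like(binary)
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

# Erosion followed by dilation (an "opening") removes speckle smaller than
# the kernel while roughly preserving the instrument contour.
img = np.zeros((20, 20), dtype=np.uint8)
img[5:15, 5:15] = 200          # bright instrument-like region
img[2, 2] = 200                # isolated speckle pixel
binary = binarize(img)
opened = dilate(erode(binary, k=3), k=3)
```

Note the speckle pixel survives binarization but is removed by the erode/dilate pair, while the large region keeps its extent.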
  • the electronic device can input the ultrasound image into a pre-trained contour segmentation model, perform contour segmentation on the ultrasound image based on the image features of the ultrasound image, and output the contour position information of the operation execution target, or perform binarization processing on the ultrasound image to obtain a binarized ultrasound image, perform morphological processing on the binarized ultrasound image, and obtain the contour position information of the operation execution target.
  • contour detection can be performed on the ultrasound image in either of the above two ways to obtain the contour position information of the operation execution target.
  • the contour of the operation execution target can be displayed in the ultrasound image based on the contour position information, so that the operator can identify the operation execution target in the ultrasound image and accurately use the entity corresponding to the operation execution target to operate the entity corresponding to the operated target.
  • the training method of the above-mentioned contour segmentation model may include:
  • the sample ultrasound image includes an operated target and an operation execution target
  • the calibration tag is used to identify the contour position information of the operation execution target in the corresponding sample ultrasound image
  • the electronic device can obtain the sample ultrasound images and the calibration labels corresponding to the sample ultrasound images.
  • the calibration labels in the sample ultrasound image can be manually annotated, and the calibration labels can identify the contour position information of the operation execution target.
  • the ultrasound image with the calibration labels can be a binary image, in which the value inside the contour is marked as 1 and the value outside the contour is marked as 0.
  • the operation execution target in the sample ultrasound image is the surgical instrument target 901.
  • an experienced doctor can annotate the contour position information of the surgical instrument target 901 in the ultrasound image and output a binary image.
  • the inner value of the contour of the surgical instrument target 903 in the binary image is 1, and the outer value of the contour is 0.
  • the electronic device can obtain the sample ultrasound image and the calibration label corresponding to the sample ultrasound image.
  • the electronic device can use the sample ultrasound images and the calibration labels corresponding to the sample ultrasound images to train a preset contour segmentation model to obtain a contour segmentation model.
  • each sample ultrasound image has a surgical instrument target, and each sample ultrasound image is labeled to obtain its corresponding calibration label. Then, based on each sample ultrasound image and its corresponding calibration label, the preset contour segmentation model of the surgical instrument target can be trained to obtain the contour segmentation model of the surgical instrument target.
  • a sample ultrasonic image and a calibration label corresponding to the sample ultrasonic image are obtained, wherein the sample ultrasonic image includes an operated target and an operation execution target, and the calibration label is used to identify the contour position information of the operation execution target in the corresponding sample ultrasonic image, and based on the sample ultrasonic image and its corresponding calibration label, a preset contour segmentation model is trained to obtain a contour segmentation model.
  • contour detection of the operation execution target in the ultrasonic image can be performed, so that the operator can distinguish the operation execution target according to the contour of the operation execution target, and then the operator can accurately operate the entity corresponding to the operation execution target to operate the entity corresponding to the operated target.
  • the step of training a preset contour segmentation model based on the sample ultrasound image and its corresponding label to obtain the contour segmentation model may include:
  • the sample ultrasound image is input into the preset contour segmentation model, and the preset contour segmentation model can determine the contour position information of the operation execution target in the sample ultrasound image based on the image features of the sample ultrasound image. Then, the electronic device can obtain the contour position information of the operation execution target and use the contour position information as a prediction label.
  • the image features can be color features, texture features, shape features, etc. of the image, which are not specifically limited here.
  • the operation execution target in the sample ultrasound image is an endoscopic target
  • the sample ultrasound image is input into a preset contour segmentation model.
  • the preset contour segmentation model can determine the contour position information of the endoscopic target in the sample ultrasound image based on the image features of the sample ultrasound image. Then, the electronic device can obtain the contour position information of the endoscopic target and use the contour position information as a prediction label.
  • the electronic device can adjust the model parameters of the preset contour segmentation model based on the difference between the predicted label and the corresponding calibration label and the preset loss function until the preset loss function converges, and the contour segmentation model can be obtained.
  • the preset loss function may adopt a cross entropy loss function.
  • cross entropy can measure the difference between two probability distributions over the same random variable; the smaller the cross-entropy value, the better the model prediction effect.
  • the preset loss function may be the cross-entropy formula L = -∑_{c=1}^{M} y_c · log(p_c), where M represents the number of categories, c represents the category, p_c represents the predicted distribution of the sample, and y_c represents the true distribution of the sample. For example, when M is 2, c takes the categories 0 and 1, p_c is the probability that the sample belongs to category c, and y_c is 0 or 1: y_c is 1 if category c matches the sample's true label, and 0 otherwise.
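The cross-entropy formula can be checked numerically. The helper below is purely illustrative (the function name and example probabilities are not from the patent); it evaluates L = -∑_c y_c·log(p_c) for a one-hot true distribution:

```python
import numpy as np

def cross_entropy(y, p, eps=1e-12):
    """L = -sum_c y_c * log(p_c): y is the one-hot true distribution,
    p is the predicted distribution over the M categories."""
    p = np.clip(p, eps, 1.0)   # guard against log(0)
    return float(-np.sum(y * np.log(p)))

# Two categories (M = 2): the closer the prediction is to the label,
# the smaller the cross entropy.
good = cross_entropy(np.array([0.0, 1.0]), np.array([0.1, 0.9]))
bad = cross_entropy(np.array([0.0, 1.0]), np.array([0.9, 0.1]))
```

Here `good` is -log(0.9) ≈ 0.105 and `bad` is -log(0.1) ≈ 2.303, matching the rule that a smaller value means a better prediction.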
  • the operation execution target in the sample ultrasound image is a surgical instrument target
  • the preset loss function adopts the cross-entropy loss function.
  • the electronic device obtains the sample ultrasound image and its corresponding calibration label, and inputs the sample ultrasound image into the preset contour segmentation model to obtain the predicted label. Then, the electronic device can adjust the model parameters of the preset contour segmentation model based on the difference between the predicted label and the corresponding calibration label and the cross-entropy loss function until the cross-entropy loss function converges to obtain the contour segmentation model.
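The patent does not specify the architecture of the contour segmentation model. As a minimal stand-in only, the sketch below trains a per-pixel logistic classifier with the cross-entropy loss by gradient descent, to illustrate "adjust the model parameters until the loss converges"; all names and values here are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(w, b, x, y, lr=0.1):
    """One gradient step under binary cross-entropy for a logistic model;
    a toy stand-in for adjusting the model parameters of the preset
    contour segmentation model."""
    p = sigmoid(w * x + b)                       # predicted label per pixel
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = p - y                                 # dL/dz for sigmoid + cross-entropy
    w -= lr * np.mean(grad * x)
    b -= lr * np.mean(grad)
    return w, b, loss

x = np.array([0.1, 0.2, 0.8, 0.9])   # toy pixel features
y = np.array([0.0, 0.0, 1.0, 1.0])   # calibration label (outside/inside contour)
w, b = 0.0, 0.0
losses = []
for _ in range(200):
    w, b, loss = train_step(w, b, x, y)
    losses.append(loss)
```

Training stops in practice when the loss stops decreasing; in this toy run the loss falls steadily from its initial value of ln 2.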
  • the sample ultrasound image is input into the preset contour segmentation model, and the contour position information of the operation execution target in the sample ultrasound image determined by the preset contour segmentation model based on the image features of the sample ultrasound image is obtained as a predicted label.
  • the model parameters of the preset contour segmentation model are adjusted until the preset loss function converges to obtain the contour segmentation model.
  • the contour segmentation model is obtained by adjusting the model parameters of the contour segmentation model under the condition that the preset loss function converges, the contour detection of the operation execution target in the ultrasound image can be accurately performed based on the contour segmentation model, so that the operator can distinguish the operation execution target according to the contour of the operation execution target, and then the operator can accurately use the entity corresponding to the operation execution target to operate the entity corresponding to the operated target.
  • the step of displaying the outline of the operation execution target in the ultrasound image based on the outline position information may include:
  • the electronic device may determine the contour area of the operation execution target in the ultrasound image based on the contour position information.
  • the electronic device may preprocess the ultrasound image, and after the preprocessing, determine the contour area of the operation execution target in the ultrasound image based on the contour position information.
  • the preprocessing may include filtering noise reduction, morphological processing, regional growth connectivity processing, etc., which are not specifically limited here.
  • the ultrasound image is a preprocessed ultrasound image
  • the operation execution target in the ultrasound image is a surgical instrument target 901 .
  • the electronic device can determine the contour area 902 of the surgical instrument target 901 in the ultrasound image based on the contour position information of the surgical instrument target 901 .
  • S1103 Displaying the outline of the operation execution target in the outline area of the operation execution target after the enhancement processing.
  • after the electronic device determines the contour area of the operation execution target in the ultrasound image, it can perform image enhancement processing on the contour area of the operation execution target.
  • the image enhancement processing may include grayscale enhancement processing and color enhancement processing, which are not specifically limited here.
  • the image of the contour area of the operation execution target may include the area inside the contour area, or may not include the area inside the contour area, which is not specifically limited here.
  • the electronic device may display the contour of the operation execution target in the contour area of the operation execution target after the enhancement processing.
  • the operation execution target in the ultrasound image is a surgical instrument target 901, and the electronic device can perform color enhancement processing on a contour area 902 of the surgical instrument target 901. Then, the contour of the operation execution target can be displayed in the contour area 902 of the surgical instrument target 901 after the color enhancement processing.
  • the electronic device can determine the contour area of the operation execution target in the ultrasound image; perform image enhancement processing on the contour area to obtain the contour area of the operation execution target, and display the contour of the operation execution target in the contour area of the operation execution target after the enhancement processing. Since after the contour detection of the operation execution target in the ultrasound image, the contour area can be further image enhanced to make the contour of the operation execution target more obvious, the operator can distinguish the operation execution target according to the contour of the operation execution target, and then the operator can accurately operate the entity corresponding to the operation execution target on the entity corresponding to the operated target.
  • the step of performing image enhancement processing on the contour area to obtain the contour area of the operation execution target after enhancement processing may include:
  • grayscale enhancement processing is performed on the contour area to obtain an enhanced contour area of the operation execution target.
  • the ultrasound image is composed of pixels
  • the ultrasound image may be a grayscale image, that is, an image in which each pixel has only one sampled color, so that the contour area of the operation execution target in the ultrasound image may be more obvious.
  • the electronic device may perform grayscale enhancement processing on the contour area based on the grayscale value of the contour area, thereby obtaining the contour area of the enhanced operation execution target.
  • Grayscale enhancement processing refers to the enhancement processing of the ultrasound image based on the grayscale value in the grayscale space, which may include grayscale nonlinear stretching, grayscale contrast enhancement, grayscale gain enhancement, etc., which are not specifically limited here.
  • the ultrasound image is a grayscale image
  • the operation execution target is the surgical instrument target 901
  • the contour area 902 of the surgical instrument target 901 is determined.
  • the electronic device can perform grayscale enhancement processing on the contour area 902 based on the grayscale value of the contour area 902, for example, grayscale contrast enhancement processing, and thereby obtain the enhanced contour area of the operation execution target, so that the contour of the surgical instrument target 901 displayed in the ultrasound image is more obvious.
  • the grayscale enhancement processing is performed on the contour area based on the grayscale value of the contour area to obtain the contour area of the enhanced operation execution target. Since the contour area can be enhanced based on the grayscale value of the contour area when the ultrasound image is a grayscale image, the contour area can be enhanced to make the contour of the operation execution target more obvious, so that the operator can distinguish the operation execution target according to the contour of the operation execution target, and then the operator can accurately operate the entity corresponding to the operation execution target to operate the entity corresponding to the operated target.
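One possible grayscale enhancement restricted to the contour area is a min-max contrast stretch. The sketch below is an illustrative assumption, not the patent's method; `stretch_region` and the sample values are invented for the demo.

```python
import numpy as np

def stretch_region(image, mask, vmax=255.0):
    """Min-max contrast stretch applied only inside the contour area
    given by the boolean `mask`; pixels outside are left unchanged."""
    out = image.astype(np.float64).copy()
    region = out[mask]
    lo, hi = region.min(), region.max()
    if hi > lo:
        out[mask] = (region - lo) / (hi - lo) * vmax
    return out.astype(np.uint8)

img = np.full((8, 8), 100, dtype=np.uint8)
img[2:6, 2:6] = 120                  # slightly brighter instrument-like region
mask = np.zeros((8, 8), dtype=bool)
mask[1:7, 1:7] = True                # contour area to enhance
enhanced = stretch_region(img, mask)
```

Inside the mask, the narrow 100-120 range is stretched to the full 0-255 range, so the instrument stands out sharply; the background pixel at (0, 0) keeps its original value.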
  • the step of performing image enhancement processing on the contour area to obtain the contour area of the operation execution target after enhancement processing may include:
  • mapping the ultrasound image to a preset type of color space to obtain a color ultrasound image; or, based on the grayscale value of the contour area, performing grayscale enhancement processing on the contour area and mapping the grayscale-enhanced ultrasound image to a preset type of color space to obtain a color ultrasound image;
  • the electronic device may map the ultrasound image to a preset type of color space to obtain a color ultrasound image.
  • the color type of the preset color space supports full color gamut selection.
  • the color space can include RGB (Red Green Blue) space, HSV (Hue Saturation Value) space, YUV (Luminance Chrominance Chroma) space, LAB (Lab color space) space, etc., without specific limitation here.
  • the ultrasound image is a grayscale image
  • the electronic device can perform grayscale enhancement processing on the contour area based on the grayscale value of the contour area, and map the ultrasound image after the grayscale enhancement processing to a preset type of color space to obtain a color ultrasound image.
  • the ultrasound image is a grayscale image.
  • the electronic device can perform grayscale enhancement processing on the contour area based on the grayscale value of the contour area 902 to obtain an ultrasound image after grayscale enhancement processing, and map the ultrasound image to the YUV space to obtain a color ultrasound image of the ultrasound image.
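Mapping a grayscale ultrasound image into a color space can be sketched as a simple pseudo-color transform. The blue-to-red ramp below is an arbitrary illustrative choice standing in for the preset color space of the embodiment (RGB, YUV, etc.); it is not the patent's mapping.

```python
import numpy as np

def gray_to_pseudocolor(gray):
    """Map a grayscale image into RGB with a blue-to-red ramp:
    dark pixels become blue, bright pixels become red."""
    g = gray.astype(np.float64) / 255.0
    rgb = np.empty(gray.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = (g * 255).astype(np.uint8)          # red grows with intensity
    rgb[..., 1] = 0                                   # no green in this ramp
    rgb[..., 2] = ((1.0 - g) * 255).astype(np.uint8)  # blue fades with intensity
    return rgb

color = gray_to_pseudocolor(np.array([[0, 255]], dtype=np.uint8))
```

A black pixel maps to pure blue and a white pixel to pure red, so grayscale differences inside the contour area become color differences.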
  • S1202 Perform color enhancement processing on the color image area corresponding to the contour area in the color ultrasound image to obtain the contour area of the operation execution target after the enhancement processing.
  • the electronic device may perform color enhancement processing on the color image area corresponding to the contour area in the color ultrasound image to make the contour area of the operation execution target more obvious, so as to obtain the contour area of the operation execution target after enhancement processing.
  • the color enhancement processing is an enhancement processing performed on the ultrasound image based on the color channel value in the color space, which may include saturation enhancement, contrast enhancement, etc., which are not specifically limited here.
  • an ultrasound image is mapped to an RGB space to obtain a color ultrasound image, wherein the operation execution target is a surgical instrument target 901, and a contour area 902 of the surgical instrument target 901 is determined.
  • the electronic device can perform contrast enhancement processing on the color image area corresponding to the contour area 902 in the color ultrasound image to obtain an enhanced color ultrasound image, thereby making the contour area of the surgical instrument target 901 more obvious.
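Saturation enhancement of a pixel in the contour area can be illustrated with Python's standard `colorsys` module: convert RGB to HSV, scale the S channel, and convert back. The gain value and helper name are assumptions for the demo.

```python
import colorsys

def boost_saturation(rgb, gain=1.5):
    """Increase the saturation of one RGB pixel (components in 0..1)
    by scaling the S channel in HSV space."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, min(1.0, s * gain), v)

pale_red = (0.8, 0.6, 0.6)          # low-saturation pixel in the contour area
vivid_red = boost_saturation(pale_red)
```

The hue and brightness of the pixel are preserved while its red becomes more vivid, which is the kind of effect that makes the contour area more obvious.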
  • FIG13 is a specific flow chart of image enhancement processing provided by an embodiment of the present application. As shown in FIG13 , in order to improve the accuracy of the operator in identifying the operation execution target, the electronic device can enhance the contour area of the operation execution target in the ultrasound image.
  • the specific steps may include:
  • enhancement mode may include a grayscale enhancement mode and a color enhancement mode.
  • the enhancement mode can be determined to be a grayscale enhancement mode or a color enhancement mode. If it is determined to be a grayscale enhancement mode, step S1302 can be executed. If it is a color enhancement mode, step S1307 can be executed.
  • grayscale enhancement mode that is, when the grayscale enhancement mode is adopted, the electronic device may perform grayscale enhancement processing on the contour area of the operation execution target in the ultrasound image;
  • the electronic device may perform image preprocessing on the ultrasound image, such as smoothing processing, denoising processing, etc.;
  • grayscale value enhancement processing that is, the electronic device can perform grayscale enhancement processing on the contour area of the operation execution target to be enhanced in the ultrasound image based on the grayscale value to obtain the grayscale-enhanced ultrasound image, and then directly output the enhanced result, that is, execute step S1306, or further process the enhanced result, that is, execute step S1309;
  • color enhancement mode that is, performing color enhancement processing on the contour area of the operation execution target in the ultrasound image
  • obtaining the color type to be mapped that is, the electronic device can obtain the user preset category, such as the preset color space is RGB, YUV, HSV, etc.;
  • mapping the grayscale space to the color space that is, after determining the preset type of color space, the electronic device can map the ultrasound image or the ultrasound image after grayscale enhancement processing to the preset type of color space to obtain a color ultrasound image;
  • the electronic device can perform color enhancement processing on the color image area corresponding to the contour area in the color ultrasound image to obtain the contour area of the operation execution target after enhancement processing.
  • the color enhancement processing may include saturation enhancement, contrast enhancement, etc.
  • the ultrasound image is mapped to a preset type of color space to obtain a color ultrasound image, or, based on the grayscale value of the contour area, the contour area is grayscale-enhanced and the grayscale-enhanced ultrasound image is mapped to a preset type of color space to obtain a color ultrasound image; the color image area corresponding to the contour area in the color ultrasound image is then color-enhanced to obtain the contour area of the operation execution target after the enhancement.
  • the ultrasound image or the ultrasound image after the grayscale enhancement is mapped to a preset type of color space, a color ultrasound image with bright colors can be obtained, and the contour area of the operation execution target is color-enhanced, the contour area of the operation execution target can be made more obvious, so that the operator can distinguish the operation execution target according to the contour of the operation execution target, and then the operator can accurately operate the entity corresponding to the operation execution target to operate the entity corresponding to the operated target.
  • the step of displaying the outline of the operation execution target in the outline area of the operation execution target after the enhancement processing may include:
  • the outline of the operation execution target is highlighted in the outline area of the operation execution target after enhancement processing.
  • the contour of the operation execution target may be highlighted in the contour area of the operation execution target after enhancement processing based on the contour position information.
  • the highlighting may include solid line display, dotted line display, highlight display, contour band display, etc., which are not specifically limited here.
  • the operation execution target in the ultrasound image is a surgical instrument target 901
  • the contour area 902 of the surgical instrument target 901 is grayscale enhanced.
  • the contour of the surgical instrument target 901 is highlighted in the contour area 902 after the enhancement.
  • the electronic device may display the contour of the surgical instrument target in the ultrasound image based on the contour position information of the surgical instrument target in the ultrasound image.
  • the display mode may include "native display” or "stacked frame display”. It may be a single display mode or a combination of multiple display modes, as shown in FIG14 :
  • Native display may include original image display (ultrasound image display), grayscale enhancement display (image display after grayscale enhancement processing), and color enhancement display (ultrasound image display after color enhancement processing), which is not specifically limited here.
  • the stacked frame display selects the outline display mode of the specific operation execution target based on the native display, including non-stacked frame display, solid line outline display (enhanced display by superimposing a solid line on the outline edge), dotted line outline display (enhanced display by superimposing a dotted line on the outline edge), and band outline display (since the outline is sometimes thicker, the band outline is superimposed on the outline edge for enhanced display), etc., without specific limitations here.
  • Electronic devices can use a combination of "native display" and "stacked frame display" to display the surgical instrument target in the ultrasound image more clearly during surgery.
  • the contour of the operation execution target is highlighted in the contour area of the operation execution target after enhancement processing, so that the contour of the operation execution target can be made more obvious. Therefore, the operator can distinguish the operation execution target according to the contour of the operation execution target, and then the operator can use the entity corresponding to the operation execution target to operate the entity corresponding to the operated target.
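The solid-line "stacked frame" display can be sketched as overlaying the edge of the target's binary mask onto the image. `contour_edge` and `overlay_solid` are hypothetical helper names; the edge is computed as the mask minus its 4-neighbour erosion.

```python
import numpy as np

def contour_edge(mask):
    """Boundary of a binary mask: mask pixels whose 4-neighbourhood
    leaves the mask (the mask minus its erosion)."""
    eroded = np.zeros_like(mask)
    eroded[1:-1, 1:-1] = (mask[1:-1, 1:-1]
                          & mask[:-2, 1:-1] & mask[2:, 1:-1]
                          & mask[1:-1, :-2] & mask[1:-1, 2:])
    return mask & ~eroded

def overlay_solid(gray, mask, value=255):
    """Solid-line 'stacked frame' display: draw the contour edge over
    the grayscale image at the given intensity."""
    out = gray.copy()
    out[contour_edge(mask)] = value
    return out

img = np.full((10, 10), 80, dtype=np.uint8)   # flat toy ultrasound frame
mask = np.zeros((10, 10), dtype=bool)
mask[3:7, 3:7] = True                         # instrument contour area
shown = overlay_solid(img, mask)
```

Only the one-pixel ring of the mask is brightened; interior and background pixels keep their original value. A dotted-line or band display would follow the same pattern, changing only which edge pixels are drawn.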
  • the step of displaying the outline of the operation execution target in the ultrasound image based on the outline position information may include:
  • the contour of the operation execution target is highlighted in the ultrasound image.
  • the electronic device may highlight the contour of the operation execution target in the ultrasound image based on the contour position information.
  • the operation execution target in the ultrasound image is a surgical instrument target 901, and the contour area 902 of the surgical instrument target 901 has not been subjected to grayscale enhancement processing. Then, the electronic device can highlight the contour of the surgical instrument target 901 in the ultrasound image based on the contour position information of the surgical instrument target 901.
  • the contour of the operation execution target is highlighted in the ultrasound image, which can make the contour of the operation execution target more obvious. Therefore, the operator can distinguish the operation execution target according to the contour of the operation execution target, and then the operator can use the entity corresponding to the operation execution target to operate the entity corresponding to the operated target.
  • the above-mentioned step of acquiring an ultrasound image may include:
  • S1502 Parse the ultrasound video stream to obtain video frames as ultrasound images.
  • the ultrasound image can be updated in real time for the operator to view.
  • the ultrasound image can be any video frame in the video stream collected by the ultrasound device.
  • the video stream collected in real time by the ultrasound equipment includes video frames of surgical instruments and detected parts of the subject.
  • the electronic device can obtain the ultrasound video stream collected by the ultrasound equipment, parse the ultrasound video stream to obtain video frames, and use the parsed video frames as ultrasound images, so that the operator can view the target position of the surgical instrument in the ultrasound image.
  • the electronic device parses the acquired ultrasound video stream to obtain a video frame A, and uses the video frame A as an ultrasound image, as shown in FIG9( a ).
  • the ultrasound image includes a surgical instrument target 901 , and the operator can determine the position of the surgical instrument target 901 .
  • the electronic device can obtain the ultrasound video stream collected by the ultrasound device, parse the ultrasound video stream to obtain video frames as ultrasound images, and then perform contour detection on the operation execution target in the ultrasound image, so that the operator can distinguish the operation execution target according to the contour of the operation execution target and determine the position of the surgical instrument target, so that the operator can use the entity corresponding to the operation execution target to operate the entity corresponding to the operated target.
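The frame-by-frame processing above can be sketched with a generator standing in for the parsed video stream. `frame_stream` and `process_stream` are hypothetical names, and the random frames and threshold are demo assumptions; in a real system each yielded array would be a decoded video frame.

```python
import numpy as np

def frame_stream(num_frames, shape=(16, 16)):
    """Stand-in for parsing an ultrasound video stream: yields one
    grayscale frame (numpy array) per parsed video frame."""
    rng = np.random.default_rng(0)
    for _ in range(num_frames):
        yield rng.integers(0, 256, size=shape, dtype=np.uint8)

def process_stream(stream, threshold=128):
    """Treat each parsed frame as the current ultrasound image and run
    the same contour step on it (here just binarization), so the
    displayed result updates frame by frame."""
    results = []
    for frame in stream:
        binary = np.where(frame > threshold, 255, 0).astype(np.uint8)
        results.append(binary)
    return results

frames = process_stream(frame_stream(3))
```

Each parsed frame goes through the same detection step, which is what lets the operator track the instrument position in real time.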
  • FIG16 is a schematic diagram of the structure of an ultrasonic image processing system provided in an embodiment of the present application.
  • the ultrasonic image processing system may include an image acquisition unit 1601, an image processing unit 1602, and an image display unit 1603.
  • FIG17 is a specific flow chart of an ultrasonic image processing method provided in an embodiment of the present application.
  • the ultrasonic image processing method provided in an embodiment of the present application is introduced by way of example in conjunction with FIG16 and FIG17.
  • the ultrasonic image processing method provided in an embodiment of the present application may include the following steps:
  • the image acquisition unit 1601 can obtain a real-time ultrasound video stream and capture an ultrasound image of a frame to be processed from the video stream.
  • the intelligent processing unit / image processing unit identifies the boundary contour information of the surgical instrument / endoscope lens;
  • the image processing unit 1602 can perform contour detection on the input ultrasound image. It can detect the boundary contour information of the surgical instrument by calling the intelligent processing unit to determine the contour area of the surgical instrument. It can also use traditional algorithms through the image processing unit to detect the boundary contour information of the surgical instrument to determine the contour area of the surgical instrument.
  • an intelligent processing unit/image processing unit performs target image enhancement processing based on the recognition information
  • the image processing unit 1602 may perform image enhancement processing on the contour area based on the detection result of the intelligent processing unit by calling the intelligent processing unit; or may perform image enhancement processing on the contour area based on the detection result of the image processing unit by the image processing unit.
  • the image display unit 1603 can display the enhanced boundary contour of the surgical instrument in the ultrasound image for the operator to use.
  • the electronic device can obtain an ultrasound image, wherein the ultrasound image includes an operated target and an operation execution target, and the entity corresponding to the operation execution target performs a preset operation on the entity corresponding to the operated target. Contour detection is performed on the operation execution target in the ultrasound image to obtain the contour position information of the operation execution target, and the contour of the operation execution target is displayed in the ultrasound image based on the contour position information. Since the ultrasound image includes the operated target and the operation execution target, contour detection can be performed on the operation execution target in the ultrasound image, and the contour of the operation execution target can be displayed in the ultrasound image based on the obtained contour position information.
  • the operation execution target can be distinguished according to the contour of the operation execution target, which can improve the accuracy of the operator's identification of the operation execution target, and thus the operator can accurately use the entity corresponding to the operation execution target to operate the entity corresponding to the operated target.
  • the contours of surgical instrument targets or endoscope lens targets in the ultrasound image are detected, and the contour edges are subjected to multi-mode image enhancement processing to improve the distinction and visibility of the target area and display it on the ultrasound display. This can help doctors reduce the difficulty of distinguishing targets and improve the efficiency of ultrasound intraoperative navigation.
  • an embodiment of the present application further provides an ultrasonic image processing device.
  • the ultrasonic image processing device provided by the embodiment of the present application is introduced below.
  • an ultrasonic image processing device includes:
  • the ultrasound image acquisition module 1810 is used to acquire an ultrasound image, wherein the ultrasound image includes an operated target and an operation execution target, and the entity corresponding to the operation execution target performs a preset operation on the entity corresponding to the operated target;
  • the contour position information acquisition module 1820 is used to perform contour detection on the operation execution target in the ultrasound image to obtain contour position information of the operation execution target;
  • the contour display module 1830 is configured to display the contour of the operation execution target in the ultrasound image based on the contour position information.
  • the electronic device can obtain an ultrasound image, wherein the ultrasound image includes an operated target and an operation execution target, and the entity corresponding to the operation execution target performs a preset operation on the entity corresponding to the operated target. Contour detection is performed on the operation execution target in the ultrasound image to obtain the contour position information of the operation execution target, and the contour of the operation execution target is displayed in the ultrasound image based on the contour position information. Since the ultrasound image includes the operated target and the operation execution target, contour detection can be performed on the operation execution target in the ultrasound image, and the contour of the operation execution target can be displayed in the ultrasound image based on the obtained contour position information.
  • the operation execution target can be distinguished according to the contour of the operation execution target, which can improve the accuracy of the operator's identification of the operation execution target, and thus the operator can accurately use the entity corresponding to the operation execution target to operate the entity corresponding to the operated target.
  • the above-mentioned contour position information acquisition module 1820 may include:
  • a contour position information acquisition submodule configured to input the ultrasound image into a pre-trained contour segmentation model, perform contour segmentation on the ultrasound image based on image features of the ultrasound image, and output contour position information of the operation execution target;
  • or, a submodule used to perform binarization processing on the ultrasound image to obtain a binarized ultrasound image, and to perform morphological processing on the binarized ultrasound image to obtain the contour position information of the operation execution target.
  • the above-mentioned outline display module 1830 may include:
  • a contour area determination submodule configured to determine a contour area of the operation execution target in the ultrasound image based on the contour position information;
  • a contour area display submodule used for performing image enhancement processing on the contour area to obtain the contour area of the operation execution target after the enhancement processing;
  • the first display submodule is used to display the outline of the operation execution target in the outline area of the operation execution target after the enhancement processing.
  • the above outline area display submodule may include:
  • the first contour region acquisition unit is used to perform grayscale enhancement processing on the contour region based on the grayscale value of the contour region to obtain the enhanced contour region of the operation execution target.
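Region-limited grayscale enhancement might look like the following. This is a hedged sketch: the description does not fix a particular stretch, so the contrast-about-the-mean rule and the gain value are illustrative assumptions.

```python
import numpy as np

def enhance_region_gray(image: np.ndarray, mask: np.ndarray,
                        gain: float = 1.5) -> np.ndarray:
    """Stretch gray levels inside the contour region (mask > 0) around the
    region's mean gray value, leaving the rest of the image untouched."""
    out = image.astype(np.float32)
    region = mask > 0
    mean = out[region].mean()
    out[region] = np.clip((out[region] - mean) * gain + mean, 0.0, 255.0)
    return out.astype(np.uint8)
```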
  • the outline area display submodule may include:
  • a color ultrasound image acquisition unit used for mapping the ultrasound image to a preset type of color space to obtain a color ultrasound image; or, for performing grayscale enhancement processing on the contour area based on the grayscale value of the contour area, and mapping the grayscale-enhanced ultrasound image to a preset type of color space to obtain a color ultrasound image;
  • the second contour area acquisition unit is used to perform color enhancement processing on the color image area corresponding to the contour area in the color ultrasound image to obtain the contour area of the operation execution target after the enhancement processing.
  • the above-mentioned outline display submodule may include:
  • the outline display unit is used to highlight the outline of the operation execution target in the outline area of the operation execution target after enhancement processing based on the outline position information.
  • the outline display module 1830 may include:
  • the second display submodule is used to highlight the outline of the operation execution target in the ultrasound image based on the outline position information.
  • the ultrasound image acquisition module 1810 may include:
  • An ultrasound video stream acquisition submodule is used to acquire an ultrasound video stream collected by an ultrasound device;
  • the ultrasound image acquisition submodule is used to parse the ultrasound video stream to obtain video frames as ultrasound images.
  • the embodiment of the present application further provides an electronic device, as shown in FIG19 , including:
  • the memory 1901 is used to store a computer program;
  • the processor 1902 is used to implement an ultrasonic image processing method described in any of the above embodiments when executing the program stored in the memory 1901.
  • the electronic device may further include a communication bus and/or a communication interface, and the processor 1902, the communication interface, and the memory 1901 communicate with each other via the communication bus.
  • the electronic device can obtain an ultrasonic image, wherein the ultrasonic image includes an operated target and an operation execution target, the entity corresponding to the operation execution target performs a preset operation on the entity corresponding to the operated target, and the contour of the operation execution target in the ultrasonic image is detected to obtain the contour position information of the operation execution target, and the contour of the operation execution target is displayed in the ultrasonic image based on the contour position information. Since the ultrasonic image includes the operated target and the operation execution target, the contour of the operation execution target in the ultrasonic image can be detected, and the contour of the operation execution target can be displayed in the ultrasonic image based on the obtained contour position information of the operation execution target.
  • the operation execution target can be distinguished according to the contour of the operation execution target, which can improve the accuracy of the operator's identification of the operation execution target, and thus the operator can accurately use the entity corresponding to the operation execution target to operate the entity corresponding to the operated target.
  • the communication bus mentioned in the above electronic device can be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus.
  • the communication bus can be divided into an address bus, a data bus, a control bus, etc. For ease of representation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the above electronic device and other devices.
  • the memory may include a random access memory (RAM) or a non-volatile memory (NVM), such as at least one disk storage.
  • the memory may also be at least one storage device located remotely from the aforementioned processor.
  • processors can be general-purpose processors, including central processing units (CPU), network processors (NP), etc.; they can also be digital signal processors (DSP), application specific integrated circuits (ASIC), field programmable gate arrays (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
  • a computer-readable storage medium in which a computer program is stored.
  • when the computer program is executed by a processor, the steps of the ultrasound image processing method described in any of the above embodiments are implemented.
  • a computer program product including instructions is also provided, which, when executed on a computer, enables the computer to execute the ultrasound image processing method described in any of the above embodiments.
  • all or part of the embodiments may be implemented by software, hardware, firmware, or any combination thereof.
  • all or part of the embodiments may be implemented in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) means.
  • the computer-readable storage medium may be any available medium that a computer can access or a data storage device such as a server or data center that includes one or more available media integrated therein.
  • the available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application provide an ultrasound image processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring an ultrasound image, wherein the ultrasound image includes an operated target and an operation execution target, and an entity corresponding to the operation execution target performs a preset operation on an entity corresponding to the operated target; performing contour detection on the operation execution target in the ultrasound image to obtain contour position information of the operation execution target; and displaying the contour of the operation execution target in the ultrasound image based on the contour position information. Since the contour of the operation execution target can be displayed in the ultrasound image, when an operator identifies the operation execution target in the ultrasound image, the operator can distinguish it by its contour, which improves the accuracy of identifying the operation execution target and in turn allows the operator to accurately use the entity corresponding to the operation execution target to operate on the entity corresponding to the operated target.

Description

一种超声图像处理方法、装置、电子设备及存储介质
本申请要求于2022年11月18日提交中国专利局、申请号为202211447127.9、发明名称为“一种超声图像处理方法、装置、电子设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及超声图像领域,特别是涉及一种超声图像处理方法、装置、电子设备及存储介质。
背景技术
超声图像是基于被探测目标对超声信号的反射、散射而回的信号,通过模拟收发、波束合成等处理,将信号幅度按时间先后用不同灰阶值来表示所形成的图像。超声图像的应用非常广泛,例如,可以用于产品探伤、手术辅助等。
目前对于超声图像,需要人工观看并识别其中的各个目标,或者是识别目标内部的各部分。例如,在利用超声图像进行手术辅助时,超声图像中包括人体部位和手术器械,需要医生用肉眼分辨超声图像中的人体部位和手术器械,才能进行准确的操作。然而,人眼识别超声图像中的目标准确度是较低的。
发明内容
本申请实施例的目的在于提供一种超声图像处理方法、装置、电子设备及存储介质,以准确识别超声图像中的目标。具体技术方案如下:
第一方面,本申请实施例提供了一种超声图像处理方法,所述方法包括:
获取超声图像,其中,所述超声图像包括被操作目标和操作执行目标,所述操作执行目标对应的实体对所述被操作目标对应的实体进行预设操作;
对所述超声图像中的操作执行目标进行轮廓检测,得到所述操作执行目标的轮廓位置信息;
基于所述轮廓位置信息在所述超声图像中显示所述操作执行目标的轮廓。
可选的,所述对所述超声图像中的操作执行目标进行轮廓检测,得到所述操作执行目标的轮廓位置信息的步骤,包括:
将所述超声图像输入预先训练的轮廓分割模型,基于所述超声图像的图像特征对所述超声图像进行轮廓分割,输出所述操作执行目标的轮廓位置信息;或,
对所述超声图像进行二值化处理,得到二值化超声图像;对所述二值化超声图像进行形态学处理,得到所述操作执行目标的轮廓位置信息。
可选的,所述基于所述轮廓位置信息在所述超声图像中显示所述操作执行目标的轮廓的步骤,包括:
基于所述轮廓位置信息,确定所述超声图像中所述操作执行目标的轮廓区域;
对所述轮廓区域进行图像增强处理,得到增强处理后的所述操作执行目标的轮廓区域;
在所述增强处理后的所述操作执行目标的轮廓区域中显示所述操作执行目标的轮廓。
可选的,所述对所述轮廓区域进行图像增强处理,得到增强处理后的所述操作执行目标的轮廓区域的步骤,包括:
基于所述轮廓区域的灰度值,对所述轮廓区域进行灰度增强处理,得到增强后的所述操作执行目标的轮廓区域。
可选的,所述对所述轮廓区域进行图像增强处理,得到增强处理后的所述操作执行目标的轮廓区域的步骤,包括:
将所述超声图像向预设类型的彩色空间进行映射,得到彩色超声图像;或,基于所述轮廓区域的灰度值,对所述轮廓区域进行灰度增强处理,并将灰度增强处理后的超声图像向预设类型的彩色空间进行映射,得到彩色超声图像;
对所述彩色超声图像中与所述轮廓区域对应的彩色图像区域进行彩色增强处理,得到增强处理后的所述操作执行目标的轮廓区域。
可选的,所述在所述增强处理后的所述操作执行目标的轮廓区域中显示所述操作执行目标的轮廓的步骤,包括:
基于所述轮廓位置信息,在增强处理后的所述操作执行目标的轮廓区域中突出显示所述操作执行目标的轮廓。
可选的,所述基于所述轮廓位置信息在所述超声图像中显示所述操作执行目标的轮廓的步骤,包括:
基于所述轮廓位置信息,在所述超声图像中突出显示所述操作执行目标的轮廓。
可选的,所述获取超声图像的步骤,包括:
获取超声设备采集的超声视频流;
从所述超声视频流中解析得到视频帧,作为超声图像。
第二方面,本申请实施例提供了一种超声图像处理装置,所述装置包括:
超声图像获取模块,用于获取超声图像,其中,所述超声图像包括被操作目标和操作执行目标,所述操作执行目标对应的实体对所述被操作目标对应的实体进行预设操作;
轮廓位置信息获取模块,用于对所述超声图像中的操作执行目标进行轮廓检测,得到所述操作执行目标的轮廓位置信息;
轮廓显示模块,用于基于所述轮廓位置信息在所述超声图像中显示所述操作执行目标的轮廓。
可选的,所述轮廓位置信息获取模块,包括:
轮廓位置信息获取子模块,用于将所述超声图像输入预先训练的轮廓分割模型,基于所述超声图像的图像特征对所述超声图像进行轮廓分割,输出所述操作执行目标的轮廓位置信息;或,
用于对所述超声图像进行二值化处理,得到二值化超声图像;对所述二值化超声图像进行形态学处理,得到所述操作执行目标的轮廓位置信息。
可选的,所述轮廓显示模块,包括:
轮廓区域确定子模块,用于基于所述轮廓位置信息,确定所述超声图像中所述操作执行目标的轮廓区域;
轮廓区域显示子模块,用于对所述轮廓区域进行图像增强处理,得到增强处理后的所述操作执行目标的轮廓区域;
第一显示子模块,用于在所述增强处理后的所述操作执行目标的轮廓区域中显示所述操作执行目标的轮廓。
可选的,所述轮廓区域显示子模块,包括:
第一轮廓区域获取单元,用于基于所述轮廓区域的灰度值,对所述轮廓区域进行灰度增强处理,得到增强后的所述操作执行目标的轮廓区域。
可选的,所述轮廓区域显示子模块,包括:
彩色超声图像获取单元,用于将所述超声图像向预设类型的彩色空间进行映射,得到彩色超声图像;或,基于所述轮廓区域的灰度值,对所述轮廓区域进行灰度增强处理,并将灰度增强处理后的超声图像向预设类型的彩色空间进行映射,得到彩色超声图像;
第二轮廓区域获取单元,用于对所述彩色超声图像中与所述轮廓区域对应的彩色图像区域进行彩色增强处理,得到增强处理后的所述操作执行目标的轮廓区域。
可选的,所述轮廓显示子模块,包括:
轮廓显示单元,用于基于所述轮廓位置信息,在增强处理后的所述操作执行目标的轮廓区域中突出显示所述操作执行目标的轮廓。
可选的,所述轮廓显示模块,包括:
第二显示子模块,用于基于所述轮廓位置信息,在所述超声图像中突出显示所述操作执行目标的轮廓。
可选的,所述超声图像获取模块,包括:
超声视频流获取子模块,用于获取超声设备采集的超声视频流;
超声图像获取子模块,用于从所述超声视频流中解析得到视频帧,作为超声图像。
第三方面,本申请实施例提供了一种电子设备,包括:
存储器,用于存放计算机程序;
处理器,用于执行存储器上所存放的程序时,实现上述第一方面任一所述的方法。
第四方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述第一方面任一所述的方法。
第五方面,本申请实施例提供了一种包含指令的计算机程序产品,所述计算机程序产品被计算机执行时实现上述第一方面任一所述的方法。
本申请实施例有益效果:
本申请实施例提供的方案中,电子设备可以获取超声图像,其中,超声图像包括被操作目标和操作执行目标,操作执行目标对应的实体对被操作目标对应的实体进行预设操作,对超声图像中的操作执行目标进行轮廓检测,得到操作执行目标的轮廓位置信息,基于轮廓位置信息在超声图像中显示操作执行目标的轮廓。由于在超声图像中包括被操作目标和操作执行目标的情况下,可以对超声图像中的操作执行目标的轮廓进行检测,基于得到的操作执行目标的轮廓位置信息,在超声图像中显示操作执行目标的轮廓,因此,在操作者识别超声图像中的操作执行目标时,可以根据操作执行目标的轮廓分辨操作执行目标,可以提高操作者识别操作执行目标的准确度,进而使操作者可以准确使用操作执行目标对应的实体对被操作目标对应的实体进行操作。当然,实施本申请的任一产品或方法并不一定需要同时达到以上所述的所有优点。
附图说明
此处所说明的附图用来提供对本申请的进一步理解,构成本申请的一部分,本申请的示意性实施例及其说明用于解释本申请,并不构成对本申请的不当限定。
图1为本申请实施例所提供的一种超声图像处理方法的流程图;
图2为本申请实施例所提供的超声诊断系统的一种结构示意图;
图3为本申请实施例所提供的超声诊断系统的一种功能结构示意图;
图4为本申请实施例所提供的内窥镜系统的一种结构示意图;
图5为基于图1所示实施例的超声图像轮廓分割模型训练和测试的一种流程示意图;
图6为基于图1所示实施例的操作执行目标分割网络的一种结构示意图;
图7为基于图1所示实施例的得到操作执行目标的轮廓位置信息的一种流程示意图;
图8为基于图1所示实施例的得到轮廓分割模型的一种具体流程图;
图9(a)为基于图1所示实施例的包括操作执行目标的超声图像的一种示意图;
图9(b)为基于图1所示实施例的包括操作执行目标的超声图像的另一种示意图;
图10为图8所示实施例中步骤S802的一种具体流程图;
图11为基于图1所示实施例显示操作执行目标的轮廓的一种流程图;
图12为基于图1所示实施例对轮廓区域进行彩色增强处理的一种流程图;
图13为基于图1所示实施例的图像增强处理的一种具体流程图;
图14为基于图1所示实施例的超声图像显示方式的一种示意图;
图15为图1所示实施例中步骤S101的一种具体流程图;
图16为本申请实施例所提供的超声图像处理系统的一种结构示意图;
图17为本申请实施例所提供的超声图像处理方法的一种具体流程图;
图18为本申请实施例所提供的一种超声图像处理装置的结构示意图;
图19为本申请实施例所提供的一种电子设备的结构示意图。
具体实施方式
为使本申请的目的、技术方案、及优点更加清楚明白,以下参照附图并举实施例,对本申请进一步详细说明。显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员所获得的所有其他实施例,都属于本申请保护的范围。
为了准确识别超声图像中的目标,本申请实施例提供了一种超声图像处理方法、装置、系统、电子设备、计算机可读存储介质以及计算机程序产品。下面首先对本申请实施例所提供的一种超声图像处理方法进行介绍。
本申请实施例所提供的一种超声图像处理方法,可以应用于任意需要进行超声图像处理的电子设备,例如,可以为超声诊断系统或者其他超声图像处理设备,在此不做具体限定,为了描述清楚,以下称为 电子设备。
如图1所示,一种超声图像处理方法,所述方法包括:
S101,获取超声图像;
其中,所述超声图像包括被操作目标和操作执行目标,所述操作执行目标对应的实体对所述被操作目标对应的实体进行预设操作。
S102,对所述超声图像中的操作执行目标进行轮廓检测,得到所述操作执行目标的轮廓位置信息;
S103,基于所述轮廓位置信息在所述超声图像中显示所述操作执行目标的轮廓。
可见,在本申请实施例所提供的方案中,电子设备可以获取超声图像,其中,超声图像包括被操作目标和操作执行目标,操作执行目标对应的实体对被操作目标对应的实体进行预设操作,对超声图像中的操作执行目标进行轮廓检测,得到操作执行目标的轮廓位置信息,基于轮廓位置信息在超声图像中显示操作执行目标的轮廓。由于在超声图像中包括被操作目标和操作执行目标的情况下,可以对超声图像中的操作执行目标的轮廓进行检测,基于得到的操作执行目标的轮廓位置信息,在超声图像中显示操作执行目标的轮廓,因此,在操作者识别超声图像中的操作执行目标时,可以根据操作执行目标的轮廓分辨操作执行目标,可以提高操作者识别操作执行目标的准确度,进而使操作者可以准确使用操作执行目标对应的实体对被操作目标对应的实体进行操作。
由于超声图像可以显示超声探测到的各个目标,因此,在医疗手术中,操作者可以利用超声图像识别各个目标,并基于各个目标的位置,确定各个目标之间的相对位置,其应用越来越广泛。
例如,在医疗手术中,操作者使用内窥镜设备进行手术时,只能通过显示器看到组织表面的信息,而病灶、重要神经或目标位置往往被筋膜和组织覆盖在深处,操作者无法看到深层的信息,导致在使用手术器械分离组织的情况下,有产生误伤关键人体结构的风险。因此,使用超声图像可以帮助操作者定位常规可见光视觉无法看到的目标位置,以及手术器械和目标位置的相对位置关系,进而为操作者提供实时的手术导航。
但是，手术器械在超声图像中的回声往往不稳定，因此，操作者用肉眼分辨超声图像中的手术器械目标，准确度是较低的。本申请提供了一种超声图像处理方法，可以提高操作者识别超声图像中的手术器械目标的准确度。
在步骤S101中,电子设备可以获取超声图像,其中,超声图像可以为超声设备采集的超声视频流中的任一视频帧。操作图像中可以包括被操作目标和操作执行目标,操作执行目标对应的实体对被操作目标对应的实体进行预设操作。预设操作可以包括检测操作、切割操作、缝合操作等,在此不做具体限定。
在一种实施方式中,电子设备可以是超声诊断系统,超声诊断系统可以获取超声图像。
超声诊断系统可以包括超声探头、超声系统主机、操作装置、显示装置和存储装置。其中,超声探头根据自身的应用场景,可以在被检体体表或通过插入被检体来拍摄被检体的观察部位并生成超声图像数据。超声系统主机对超声探头传入的超声图像数据的信号执行规定的相关操作,并且可以统一控制超声诊断系统整体的动作。显示装置处理并显示与超声系统主机的超声图像数据对应的超声图像及相关状态信息。存储装置存储与超声系统主机的超声图像数据对应的超声图像。
例如,如图2所示,超声诊断系统包括超声探头201、超声系统主机202、操作界面203、超声显示屏204。其中,操作者可以通过超声探头201,或在被检体体表,或插入被检体体内,拍摄超声图像,那么,超声诊断系统可以获取超声图像,输出至超声显示屏204,也可以存储至存储装置。其中,该超声图像中包括的被操作目标可以为被检体检测部位的病变部位,操作执行目标可以为对被检体检测部位进行检测的检测器械。
电子设备获取到超声图像,在步骤S102中,可以对超声图像中的操作执行目标进行轮廓检测,进而可以得到操作执行目标的轮廓位置信息。其中,轮廓位置信息用于表示操作执行目标的轮廓位置。
轮廓检测是在包括目标和背景的图像中,忽略背景和目标内部的纹理以及噪声干扰的影响,采用一定的技术和方法来实现目标轮廓提取的过程。因此,电子设备获取到超声图像,可以通过轮廓检测实现对超声图像中的操作执行目标的轮廓的提取。
在一种实施方式中,电子设备可以利用传统算法对超声图像中的操作执行目标进行轮廓检测,得到 操作执行目标的轮廓位置信息。其中,传统算法是可以基于超声图像中灰度值发生急剧变化的位置进行轮廓检测的方法。
在另一种实施方式中,电子设备可以利用深度学习方法对超声图像中的操作执行目标进行轮廓检测,得到操作执行目标的轮廓位置信息。其中,深度学习方法是可以利用卷积神经网络学习图像特征,进而训练得到轮廓分割模型对超声图像中的操作执行目标进行轮廓检测的方法。
在超声诊断系统中,超声系统主机可以对从超声探头接收到的图像数据进行处理,对超声图像中的操作执行目标进行轮廓检测,得到操作执行目标的轮廓位置信息。
如图3所示,在超声诊断系统中,超声探头包括信号收发单元、处理单元、操作单元,超声系统主机包括图像输入单元、超声图像处理单元、超声智能处理单元、视频编码单元、控制单元和操作单元。显示装置和存储装置为外置设备。
超声系统主机中图像输入单元可以接收超声探头发送过来的信号,将接受到的信号进行模拟收发、波束合成、信号转换等处理并传输给超声图像处理单元。超声图像处理单元对图像输入单元的超声图像进行ISP(Image Signal Processor,图像信号处理器)操作,可以包括亮度变换、锐化、对比度增强等。超声图像处理单元可以将处理后的超声图像传输至超声智能处理单元、视频编码单元或显示装置。
超声智能处理单元对超声图像处理单元处理后的超声图像进行智能分析,可以包括基于深度学习的目标识别、检测、分割。超声智能处理单元可以将处理后的超声图像传输给超声图像处理单元或视频编码单元。
超声图像处理单元对超声智能处理单元处理后的超声图像的处理方式可以包括轮廓增强、亮度变换、叠框和缩放。视频编码单元将超声图像处理单元处理后的超声图像或超声智能处理单元处理后的超声图像进行编码压缩,并传输给存储装置。
控制单元控制超声系统的各个模块,可以包括界面操作方式、图像处理方式、超声测量方式和视频编码方式。操作单元可以包括开关、按钮和触摸面板等,接收外部指示信号,将接收的指示信号输出到控制单元。
例如,在医疗手术中,在对被检体进行检测时,操作者可以采用一种检测器械内窥镜,超声诊断系统获取的超声图像中包括内窥镜目标。其中,内窥镜可以插入被检体的体内来拍摄被检体的检测部位,将拍摄的体内图像输出到外部的显示装置和存储装置。
如图4所示,内窥镜系统的结构可以包括内窥镜401、光源402、系统主机403、显示装置404和存储装置405。内窥镜401通过插入被检体来拍摄被检体的检测部位并生成图像数据。光源402可以提供从内窥镜装置前端射出的照明光。系统主机403对内窥镜生成的图像数据执行规定的图像相关操作,并且统一控制内窥镜系统整体的动作。显示装置显示与内窥镜系统主机的图像数据对应的图像。存储装置存储与内窥镜系统主机的图像数据对应的图像。但是,在内窥镜图像中很难准确清楚的识别内窥镜目标。
为了使操作者在使用内窥镜时,可以在显示的包括内窥镜目标的超声图像中准确识别内窥镜目标,可以由超声诊断系统对超声图像中的内窥镜目标进行轮廓检测,进而可以得到内窥镜目标的轮廓位置信息。
接下来,在电子设备得到轮廓位置信息时,在步骤S103中,可以基于轮廓位置信息在超声图像中显示操作执行目标的轮廓。
超声图像是由像素构成的,操作执行目标的轮廓位置信息可以由操作执行目标的轮廓像素信息组成,那么,电子设备可以基于像素信息在超声图像中显示操作执行目标的轮廓。
例如,在宫腔镜手术中,若宫腔粘连严重,内窥镜图像视野被白色瘢痕组织占据,无法探测宫底,在手术器械分离粘连时,易误伤到子宫壁,造成子宫损伤。那么,超声诊断系统可以对超声图像中的手术器械目标进行轮廓检测,得到手术器械目标的轮廓像素信息,基于该手术器械目标的轮廓像素信息,在超声图像中显示手术器械目标的轮廓,进而可以使操作者观看手术器械目标和宫底目标的相对位置关系,操作手术器械分离粘连,减少手术误伤风险。
在本实施例的方案中,由于对超声图像中的操作执行目标的轮廓进行检测,可以得到操作执行目标的轮廓位置信息,基于该轮廓位置信息,可以在超声图像中显示操作执行目标,进而可以准确识别超声 图像中的各目标,使操作者可以准确操作执行目标对被操作目标进行操作。
作为本申请实施例的一种实施方式,上述对所述超声图像中的操作执行目标进行轮廓检测,得到所述操作执行目标的轮廓位置信息的步骤,可以包括:
将所述超声图像输入预先训练的轮廓分割模型,基于所述超声图像的图像特征对所述超声图像进行轮廓分割,输出所述操作执行目标的轮廓位置信息;或,
对所述超声图像进行二值化处理,得到二值化超声图像;对所述二值化超声图像进行形态学处理,得到所述操作执行目标的轮廓位置信息。
电子设备在获取到超声图像后,可以对超声图像中的操作执行目标进行轮廓检测,得到操作执行目标的轮廓位置信息。
在一种实施方式中,电子设备可以基于深度学习方法,将超声图像输入预先训练的轮廓分割模型,基于超声图像的图像特征对超声图像进行轮廓分割,进而可以输出操作执行目标的轮廓位置信息。其中,轮廓分割模型可以包括前向推理框架,在此不做具体限定。在对超声图像进行分割后,可以对分割后的结果进行形态学处理、去除干扰处理等,这都是合理的。
基于深度学习方法,可以分为两个阶段实现对超声图像的轮廓分割:训练阶段和测试阶段。其中,训练阶段用于得到轮廓分割模型,测试阶段利用轮廓分割模型对超声图像进行轮廓分割。
具体的,如图5所示,在轮廓分割模型的训练过程中,可以基于训练图像和标签、损失函数、网络结构进行网络训练,经过网络训练后,可以得到分割模型,即轮廓分割模型,进而进行轮廓分割模型的测试过程。在测试过程中,可以利用训练过程得到的轮廓分割模型对测试图像进行网络推理,得到轮廓分割结果,可以对轮廓分割结果进行形态学处理和去除干扰处理,进而得到超声图像中的操作执行目标的轮廓位置信息。
其中,网络结构为基于深度学习的分割网络结构,可以由编码网络和解码网络组成,编码网络可以由卷积和下采样组成,解码网络可以由卷积和上采样组成。例如,可以为Unet网络结构,在此不做具体限定。如图6所示,训练图像为内窥镜图像,即包括内窥镜目标的超声图像,那么可以将内窥镜图像输入至网络结构,经由编码网络处理和解码网络处理后,得到内窥镜目标的分割结果。
例如,超声图像中包括有手术器械目标,为了准确识别手术器械目标,可以将超声图像输入预先训练的轮廓分割模型,那么,电子设备可以基于超声图像的图像特征对该超声图像进行轮廓分割,得到轮廓分割结果,进而可以输出手术器械目标的轮廓位置信息。
在另一种实施方式中,电子设备可以利用超声图像中灰度值变化对操作执行目标进行轮廓检测,如图7所示,可以对超声图像进行二值化处理,得到二值化超声图像,进而对二值化超声图像进行形态学处理,得到操作执行目标的轮廓位置信息。
In the case where the ultrasound image is a color ultrasound image, it first needs to be converted into a grayscale image. Specifically, for each pixel in the ultrasound image, the pixel value can be transformed according to the formula Gray = R*0.299 + G*0.587 + B*0.114, where R, G, and B denote the red, green, and blue channel values, and Gray is the transformed gray level of the pixel.
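The per-pixel formula above can be applied to a whole RGB frame at once; a minimal NumPy sketch (truncation of the weighted sum to uint8 is an implementation choice of the sketch):

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to grayscale with
    Gray = R*0.299 + G*0.587 + B*0.114."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).astype(np.uint8)
```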
在对超声图像进行二值化处理后,可以根据预设阈值把灰度图像转换成二值图像,将大于预设阈值的像素点灰度设为灰度极大值,将小于预设阈值的像素点灰度设为灰度极小值,进而可以得到二值化超声图像。当然,二值化处理可以采用局部均值二值化方法,这都是合理的。
电子设备得到二值化超声图像,可以对该二值化超声图像进行形态学处理,使得到的操作执行目标的轮廓更加清晰,进而得到操作执行目标的轮廓位置信息。其中,形态学处理可以包括腐蚀处理、膨胀处理、顶帽变换处理、底帽变换处理,在此不做具体限定。
例如,超声图像中包括有内窥镜目标,超声诊断系统可以对超声图像进行二值化处理,得到二值化超声图像,基于9x9的滤波核对该二值化超声图像进行腐蚀处理,获取到经过腐蚀处理的二值化超声图像,基于9x9的滤波核对经过腐蚀处理的二值化超声图像进行膨胀处理,可以得到操作执行目标的轮廓位置信息。
可见,在本实施例中,电子设备可以将超声图像输入预先训练的轮廓分割模型,基于超声图像的图像特征对超声图像进行轮廓分割,输出操作执行目标的轮廓位置信息,或,对超声图像进行二值化处理,得到二值化超声图像,对二值化超声图像进行形态学处理,得到操作执行目标的轮廓位置信息。由于可 以采用以上两种方式对超声图像进行轮廓检测,以得到操作执行目标的轮廓位置信息,可以基于该轮廓位置信息在超声图像中显示操作执行目标的轮廓,进而使操作者可以识别超声图像中的操作执行目标,可以准确使用操作执行目标对应的实体对被操作目标对应的实体进行操作。
作为本申请实施例的一种实施方式,如图8所示,上述轮廓分割模型的训练方式,可以包括:
S801,获取样本超声图像以及所述样本超声图像对应的标定标签;
其中,所述样本超声图像包括被操作目标和操作执行目标,所述标定标签用于标识对应的样本超声图像中的操作执行目标的轮廓位置信息;
为了训练轮廓分割模型,需要采集样本超声图像,同时样本超声图像标注有操作执行目标对应的标定标签,那么,电子设备可以获取样本超声图像以及样本超声图像对应的标定标签。
在一种实施方式中,样本超声图像中的标定标签可以由人工进行标注,标定标签可以标识出操作执行目标的轮廓位置信息,具有标定标签的超声图像可以是一张二值化图像,其中,轮廓内部值标为1,轮廓外部值标为0。
例如,如图9(a)所示,样本超声图像中的操作执行目标是手术器械目标901,那么,可以由经验丰富的医生对超声图像中的手术器械目标901的轮廓位置信息进行标注,并输出二值化图像,如图9(b)所示,二值化图像中手术器械目标903的轮廓内部值为1,轮廓外部值为0,进而电子设备可以获取样本超声图像以及样本超声图像对应的标定标签。
S802,基于所述样本超声图像以及其对应的标定标签,对预设轮廓分割模型进行训练,得到所述轮廓分割模型。
获取到样本超声图像以及样本超声图像对应的标定标签,电子设备可以采用各样本超声图像以及其对应的标定标签,对预设轮廓分割模型进行训练,得到轮廓分割模型。
例如,承接上述步骤S801中的例子,各样本超声图像中具有手术器械目标,对各样本超声图像进行标注,可以得到其对应的标定标签,那么,可以基于各样本超声图像以及其对应的标定标签,对手术器械目标的预设轮廓分割模型进行训练,进而得到手术器械目标的轮廓分割模型。
可见,在本实施例中,获取样本超声图像以及样本超声图像对应的标定标签,其中,样本超声图像包括被操作目标和操作执行目标,标定标签用于标识对应的样本超声图像中的操作执行目标的轮廓位置信息,基于样本超声图像以及其对应的标定标签,对预设轮廓分割模型进行训练,得到轮廓分割模型。由于基于样本超声图像以及其对应的标定标签,可以训练预设轮廓分割模型,得到轮廓分割模型,可以对超声图像中操作执行目标进行轮廓检测,使操作者可以根据操作执行目标的轮廓分辨操作执行目标,进而使操作者可以准确操作执行目标对应的实体对被操作目标对应的实体进行操作。
作为本申请实施例的一种实施方式,上述基于所述样本超声图像以及其对应的标签,对预设轮廓分割模型进行训练,得到所述轮廓分割模型的步骤,如图10所示,可以包括:
S1001,将所述样本超声图像输入预设轮廓分割模型,得到所述预设轮廓分割模型基于所述样本超声图像的图像特征,确定的所述样本超声图像中的操作执行目标的轮廓位置信息,作为预测标签;
由于超声图像具有图像特征,将样本超声图像输入预设轮廓分割模型,预设轮廓分割模型可以基于样本超声图像的图像特征,确定样本超声图像中的操作执行目标的轮廓位置信息,那么,电子设备可以得到操作执行目标的轮廓位置信息,将该轮廓位置信息作为预测标签。其中,图像特征可以为图像的颜色特征、纹理特征、形状特征等,在此不做具体限定。
例如,样本超声图像中的操作执行目标为内窥镜目标,将样本超声图像输入预设轮廓分割模型,预设轮廓分割模型可以基于样本超声图像的图像特征,确定样本超声图像中的内窥镜目标的轮廓位置信息,那么,电子设备可以得到内窥镜目标的轮廓位置信息,进而将该轮廓位置信息作为预测标签。
S1002,基于所述预测标签与对应的标定标签之间的差异以及预设损失函数,调整所述预设轮廓分割模型的模型参数,直到所述预设损失函数收敛,得到所述轮廓分割模型。
为了提高轮廓分割模型的精度,电子设备可以基于预测标签与对应的标定标签之间的差异以及预设损失函数,调整预设轮廓分割模型的模型参数,直到预设损失函数收敛,可以得到轮廓分割模型。
In one implementation, the preset loss function may be a cross-entropy loss function. Cross-entropy measures the degree of difference between two probability distributions of the same random variable; the smaller the cross-entropy, the better the model's prediction. The preset loss function may be the following formula: L = -∑(c=1..M) yc*log(pc), where M denotes the number of classes, c denotes a class, pc denotes the distribution predicted for the sample, and yc denotes the true distribution of the sample. For example, with M = 2 and classes 0 and 1, pc is the probability that the sample belongs to class c, and yc is 0 or 1: 1 if the predicted class is the same as the sample's label, and 0 otherwise.
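For a single sample, this loss can be written directly in NumPy (a minimal sketch; the epsilon guard against log(0) is an added numerical-stability detail, not part of the formula):

```python
import numpy as np

def cross_entropy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Cross-entropy L = -sum_c y_c * log(p_c) for one sample, where
    y_true is a one-hot label and y_pred a predicted distribution."""
    eps = 1e-12  # keeps log() finite when a predicted probability is 0
    return float(-np.sum(y_true * np.log(y_pred + eps)))
```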
例如,样本超声图像中的操作执行目标为手术器械目标,预设损失函数采用交叉熵损失函数,电子设备获取样本超声图像以及其对应的标定标签,将样本超声图像输入预设轮廓分割模型,可以得到预测标签,那么,电子设备可以基于预测标签与对应的标定标签之间的差异以及交叉熵损失函数,调整预设轮廓分割模型的模型参数,直到交叉熵损失函数收敛时,可以得到轮廓分割模型。
可见,在本实施例中,将样本超声图像输入预设轮廓分割模型,得到预设轮廓分割模型基于样本超声图像的图像特征,确定的样本超声图像中的操作执行目标的轮廓位置信息,作为预测标签,基于预测标签与对应的标定标签之间的差异以及预设损失函数,调整预设轮廓分割模型的模型参数,直到预设损失函数收敛,得到轮廓分割模型。由于轮廓分割模型是通过调整轮廓分割模型的模型参数,在预设损失函数收敛的条件下得到的,因此,基于该轮廓分割模型可以对超声图像中的操作执行目标进行准确的轮廓检测,可以使操作者根据操作执行目标的轮廓分辨操作执行目标,进而使操作者可以准确使用操作执行目标对应的实体对被操作目标对应的实体进行操作。
作为本申请实施例的一种实施方式,如图11所示,上述基于所述轮廓位置信息在所述超声图像中显示所述操作执行目标的轮廓的步骤,可以包括:
S1101,基于所述轮廓位置信息,确定所述超声图像中所述操作执行目标的轮廓区域;
由于操作执行目标的轮廓位置信息可以是操作执行目标的轮廓像素信息,像素可以由一个像素点组成,也可以是多个像素点组成,那么,电子设备可以基于轮廓位置信息,确定超声图像中操作执行目标的轮廓区域。
在一种实施方式中,电子设备可以对超声图像进行预处理,在预处理之后,基于轮廓位置信息,确定超声图像中操作执行目标的轮廓区域。其中,预处理可以包括滤波降噪、形态学处理、区域生长连通处理等,在此不做具体限定。
例如,如图9(a)所示,超声图像为经过预处理的超声图像,超声图像中的操作执行目标为手术器械目标901,那么,电子设备可以基于手术器械目标901的轮廓位置信息,确定超声图像中的手术器械目标901的轮廓区域902。
S1102,对所述轮廓区域进行图像增强处理,得到增强处理后的所述操作执行目标的轮廓区域;
S1103,在所述增强处理后的所述操作执行目标的轮廓区域中显示所述操作执行目标的轮廓。
电子设备确定超声图像中操作执行目标的轮廓区域后,可以对操作执行目标的轮廓区域进行图像增强处理。其中,图像增强处理可以包括灰度增强处理、彩色增强处理,在此不做具体限定。对操作执行目标的轮廓区域进行图像增强处理图像可以包括轮廓区域内部的区域,也可以为不包括轮廓区域内部的区域,在此不做具体限定。
电子设备在对操作执行目标的轮廓区域进行图像增强处理后，可以在增强处理后的操作执行目标的轮廓区域中显示操作执行目标的轮廓。
例如,如图9(a)所示,超声图像中的操作执行目标为手术器械目标901,电子设备可以对手术器械目标901的轮廓区域902进行彩色增强处理,那么,可以在彩色增强处理后的手术器械目标901的轮廓区域902中,显示操作执行目标的轮廓。
可见,在本实施例中,基于轮廓位置信息,电子设备可以确定超声图像中操作执行目标的轮廓区域;对轮廓区域进行图像增强处理,得到操作执行目标的轮廓区域,在增强处理后的操作执行目标的轮廓区域中显示操作执行目标的轮廓。由于在对超声图像中的操作执行目标进行轮廓检测后,可以对轮廓区域进行进一步图像增强处理,使操作执行目标的轮廓更加明显,因此,可以使操作者根据操作执行目标的轮廓分辨操作执行目标,进而使操作者可以准确操作执行目标对应的实体对被操作目标对应的实体进行操作。
作为本申请实施例的一种实施方式,上述对所述轮廓区域进行图像增强处理,得到增强处理后的所述操作执行目标的轮廓区域的步骤,可以包括:
基于所述轮廓区域的灰度值,对所述轮廓区域进行灰度增强处理,得到增强后的所述操作执行目标的轮廓区域。
由于超声图像是由像素组成的,在一种情况中,超声图像可以为灰度图像,即每个像素只有一个采样颜色的图像,这样超声图像中的操作执行目标的轮廓区域可以更加明显。
在一种实施方式中,电子设备可以基于轮廓区域的灰度值,对轮廓区域进行灰度增强处理,进而得到增强后的操作执行目标的轮廓区域。其中,灰度增强处理是指超声图像在灰度空间基于灰度值进行的增强处理,可以包括灰度非线性拉伸、灰度对比度增强、灰度增益增强等,在此不做具体限定。
例如,如图9(a)所示,超声图像为灰度图像,操作执行目标为手术器械目标901,确定手术器械目标901的轮廓区域902,那么,电子设备可以基于轮廓区域902的灰度值,对轮廓区域902进行灰度增强处理,例如,灰度对比度增强处理,进而得到增强后的操作执行目标的轮廓区域,以使在超声图像中显示的手术器械目标901的轮廓更加明显。
可见,在本实施例中,基于轮廓区域的灰度值,对轮廓区域进行灰度增强处理,得到增强后的操作执行目标的轮廓区域。由于在超声图像为灰度图像的情况下,可以基于轮廓区域的灰度值,对轮廓区域进行灰度增强处理,使操作执行目标的轮廓更加明显,因此,可以使操作者根据操作执行目标的轮廓分辨操作执行目标,进而使操作者可以准确操作执行目标对应的实体对被操作目标对应的实体进行操作。
作为本申请实施例的一种实施方式,如图12所示,上述对所述轮廓区域进行图像增强处理,得到增强处理后的所述操作执行目标的轮廓区域的步骤,可以包括:
S1201,将所述超声图像向预设类型的彩色空间进行映射,得到彩色超声图像;或,基于所述轮廓区域的灰度值,对所述轮廓区域进行灰度增强处理,并将灰度增强处理后的超声图像向预设类型的彩色空间进行映射,得到彩色超声图像;
为了使超声图像色彩鲜明,在一种实施方式中,电子设备可以将超声图像向预设类型的彩色空间进行映射,得到彩色超声图像。
其中,预设类型的彩色空间的彩色类型支持全色域选择。彩色空间可以包括RGB(Red Green Blue,红绿蓝)空间、HSV(Hue Saturation Value,色调饱和度明度)空间、YUV(Luminance Chrominance Chroma,明亮度色度色度)空间、LAB(Lab color space,颜色对立空间)空间等,在此不做具体限定。
在另一种实施方式中,超声图像为灰度图像,电子设备可以基于轮廓区域的灰度值,对轮廓区域进行灰度增强处理,并将灰度增强处理后的超声图像向预设类型的彩色空间进行映射,得到彩色超声图像。
例如,如图9(a)所示,超声图像为灰度图像,为了使超声图像中的手术器械目标901色彩鲜明,电子设备可以基于轮廓区域902的灰度值,对轮廓区域进行灰度增强处理,得到灰度增强处理后的超声图像,将该超声图像向YUV空间进行映射,进而得到该超声图像的彩色超声图像。
S1202,对所述彩色超声图像中与所述轮廓区域对应的彩色图像区域进行彩色增强处理,得到增强处理后的所述操作执行目标的轮廓区域。
电子设备在得到彩色超声图像后,为了使操作执行目标的轮廓区域明显,可以对彩色超声图像中与轮廓区域对应的彩色图像区域进行彩色增强处理,得到增强处理后的操作执行目标的轮廓区域。其中,彩色增强处理是对超声图像在彩色空间基于彩色通道值进行的增强处理,可以包括饱和度增强、对比度增强等,在此不做具体限定。
例如,如图9(a)所示,将超声图像向RGB空间进行映射,得到彩色超声图像,其中,操作执行目标为手术器械目标901,确定手术器械目标901的轮廓区域902,那么,电子设备可以对该彩色超声图像中与轮廓区域902对应的彩色图像区域进行对比度增强处理,得到增强后的彩色超声图像,进而使手术器械目标901的轮廓区域更明显。
图13为本申请实施例所提供的图像增强处理的一种具体流程图。如图13所示,为了提高操作者识别操作执行目标的准确度,电子设备可以对超声图像中的操作执行目标的轮廓区域进行增强处理,具体步骤可以包括:
S1301,增强模式,根据实际场景的不同,电子设备可以采用不同的增强模式对超声图像中的操作执行目标(如手术器械目标)的轮廓区域进行增强处理。增强模式可以包括灰度增强模式和彩色增强模 式,可以确定增强模式为灰度增强模式或彩色增强模式,如果确定为灰度增强模式,可以执行步骤S1302,如果为彩色增强模式,可以执行步骤S1307;
S1302,灰度增强模式,即在采用灰度增强模式时,电子设备可以对超声图像中的操作执行目标的轮廓区域进行灰度增强处理;
S1303,输入待增强区域位置信息,即电子设备可以基于操作执行目标的轮廓位置信息,确定待增强的操作执行目标的轮廓区域;
S1304,图像预处理,电子设备可以对该超声图像进行图像预处理,如平滑处理、去噪处理等;
S1305,基于灰度值增强处理,即电子设备可以基于灰度值对超声图像中待增强的操作执行目标的轮廓区域进行灰度增强处理,得到灰度增强处理后的超声图像,进而可以直接输出增强后的结果,即执行步骤S1306,也可以对增强后的结果进行进一步的处理,即执行步骤S1309;
S1306,输出增强后结果,即电子设备可以输出灰度增强处理后的超声图像;
S1307,彩色增强模式,即对超声图像中的操作执行目标的轮廓区域进行彩色增强处理;
S1308,获取待映射彩色类型,即电子设备可以获取用户预设类别,如预设的彩色空间为RGB、YUV、HSV等;
S1309,灰度空间向彩色空间映射,即在确定预设类型的彩色空间后,电子设备可以将超声图像或灰度增强处理后的超声图像向预设类型的彩色空间进行映射,得到彩色超声图像;
S1310,基于彩色通道彩色增强处理;
电子设备可以对彩色超声图像中与轮廓区域对应的彩色图像区域进行彩色增强处理,得到增强处理后的操作执行目标的轮廓区域。其中,彩色增强处理可以包括饱和度增强、对比度增强等。
S1311,输出增强后结果,即输出彩色增强处理后的超声图像。
可见,在本实施例中,将超声图像向预设类型的彩色空间进行映射,得到彩色超声图像,或,基于轮廓区域的灰度值,对轮廓区域进行灰度增强处理,并将灰度增强处理后的超声图像向预设类型的彩色空间进行映射,得到彩色超声图像,对彩色超声图像中与轮廓图像区域对应的彩色图像区域进行彩色增强处理,得到增强处理后的操作执行目标的轮廓区域。由于将超声图像或灰度增强处理后的超声图像向预设类型的彩色空间进行映射,可以得到色彩鲜明的彩色超声图像,对操作执行目标的轮廓区域进行彩色增强处理,可以使操作执行目标的轮廓区域更加明显,因此,可以使操作者根据操作执行目标的轮廓分辨操作执行目标,进而使操作者可以准确操作执行目标对应的实体对被操作目标对应的实体进行操作。
作为本申请实施例的一种实施方式,上述在所述增强处理后的所述操作执行目标的轮廓区域中显示所述操作执行目标的轮廓步骤,可以包括:
基于所述轮廓位置信息,在增强处理后的所述操作执行目标的轮廓区域中突出显示所述操作执行目标的轮廓。
在电子设备对操作执行目标的轮廓区域进行增强处理的情况下,为了进一步突出操作执行目标的轮廓,可以基于轮廓位置信息,在增强处理后的操作执行目标的轮廓区域中突出显示操作执行目标的轮廓。其中,突出显示可以包括实线显示、虚线显示、高亮显示、轮廓带显示等,在此不做具体限定。
例如,如图9(a)所示,超声图像中的操作执行目标为手术器械目标901,该手术器械目标901的轮廓区域902进行灰度增强处理,基于手术器械目标901的轮廓位置信息,在增强处理后的轮廓区域902中以高亮显示手术器械目标901的轮廓。
在医疗手术中,操作者查看超声图像中的手术器械目标时,电子设备可以基于超声图像中的手术器械目标的轮廓位置信息,在超声图像中显示手术器械目标的轮廓,显示方式可以包括“原生显示”、“叠框显示”,可以为单一显示方式,也可以为多种显示方式的结合,如图14所示:
原生显示可以包括对原始图像显示(超声图像显示)、灰度增强显示(灰度增强处理后的图像显示)、彩色增强显示(彩色增强处理的超声图像显示),在此不做具体限定。
叠框显示在原生显示的基础上,选择具体操作执行目标的轮廓显示方式,包括非叠框显示、轮廓实线显示(对轮廓边缘叠加实线进行增强显示)、轮廓虚线显示(对轮廓边缘叠加虚线进行增强显示)、轮廓带显示(由于轮廓有时会较粗,对轮廓边缘叠加带状线条进行增强显示)等,在此不做具体限定。
电子设备可以采用“原生显示”与“叠框显示”结合的显示方式,以更清晰的显示超声图像的手术 器械目标。
可见,在本实施例中,基于轮廓位置信息,在增强处理后的操作执行目标的轮廓区域中突出显示操作执行目标的轮廓,可以使操作执行目标的轮廓更加明显,因此,可以使操作者根据操作执行目标的轮廓分辨操作执行目标,进而使操作者可以使用操作执行目标对应的实体对被操作目标对应的实体进行操作。
作为本申请实施例的一种实施方式,上述基于所述轮廓位置信息在所述超声图像中显示所述操作执行目标的轮廓的步骤,可以包括:
基于所述轮廓位置信息,在所述超声图像中突出显示所述操作执行目标的轮廓。
在电子设备未对操作执行目标的轮廓区域进行增强处理的情况下,为了进一步突出操作执行目标的轮廓,电子设备可以基于轮廓位置信息,在超声图像中突出显示操作执行目标的轮廓。
例如,如图9(a)所示,超声图像中的操作执行目标为手术器械目标901,该手术器械目标901的轮廓区域902未进行灰度增强处理,那么,电子设备可以基于手术器械目标901的轮廓位置信息,在超声图像中高亮显示手术器械目标901的轮廓。
可见,在本实施例中,基于轮廓位置信息,在超声图像中突出显示操作执行目标的轮廓,可以使操作执行目标的轮廓更加明显,因此,可以使操作者根据操作执行目标的轮廓分辨操作执行目标,进而使操作者可以使用操作执行目标对应的实体对被操作目标对应的实体进行操作。
作为本申请实施例的一种实施方式,如图15所示,上述获取超声图像的步骤,可以包括:
S1501,获取超声设备采集的超声视频流;
S1502,从所述超声视频流中解析得到视频帧,作为超声图像。
由于在医疗手术中,操作者需要实时确定手术器械目标的位置,那么,超声图像可以实时更新以供操作者进行查看。其中,超声图像可以是超声设备采集的视频流中的任一视频帧。
在一种实施方式中,在利用超声设备辅助医疗手术的过程中,超声设备实时采集的视频流中包括有手术器械和被检体检测部位的视频帧,那么,电子设备可以获取超声设备采集的超声视频流,从超声视频流中解析得到视频帧,将解析得到的视频帧作为超声图像,进而使操作者可以在超声图像中查看手术器械目标的位置。
例如,电子设备从获取超声视频流中解析得到视频帧A,将视频帧A作为超声图像,如图9(a)所示。该超声图像中包括手术器械目标901,那么,操作者可以确定手术器械目标901的位置。
可见,在本实施例中,电子设备可以获取超声设备采集的超声视频流,从超声视频流中解析得到视频帧,作为超声图像,进而可以对超声图像中的操作执行目标进行轮廓检测,使操作者根据操作执行目标的轮廓分辨操作执行目标,确定手术器械目标的位置,使操作者可以使用操作执行目标对应的实体对被操作目标对应的实体进行操作。
图16为本申请实施例所提供的超声图像处理系统的一种结构示意图,如图16所示,超声图像处理系统可以包括图像采集部1601、图像处理部1602、图像显示部1603。图17为本申请实施例所提供的超声图像处理方法的一种具体流程图。下面结合图16和图17对本申请实施例所提供的超声图像处理方法进行举例介绍。如图17所示,本申请实施例所提供的超声图像处理方法可以包括以下步骤:
S1701,超声视频流输入;
S1702,从视频流中截取待处理帧的超声图像;
图像采集部1601可以获取超声实时视频流,从视频流中截取待处理帧的超声图像。
S1703,智能处理单元/图像处理单元,识别手术器械/内窥镜头边界轮廓信息;
图像处理部1602可以对输入的超声图像进行轮廓检测,可以通过调用智能处理单元检测手术器械边界轮廓信息,确定手术器械的轮廓区域,也可以通过图像处理单元利用传统算法检测手术器械边界轮廓信息,确定手术器械的轮廓区域。
S1704,智能处理单元/图像处理单元,基于识别信息进行目标图像增强处理;
图像处理部1602可以通过调用智能处理单元,基于智能处理单元的检测结果对轮廓区域进行图像增强处理;也可以通过图像处理单元基于图像处理单元的检测结果对轮廓区域进行图像增强处理。
S1705,图像输出。
图像显示部1603可以将增强后的手术器械边界轮廓在超声图像中进行显示,提供给操作者使用。
可见,在本申请实施例所提供的方案中,电子设备可以获取超声图像,其中,超声图像包括被操作目标和操作执行目标,操作执行目标对应的实体对被操作目标对应的实体进行预设操作,对超声图像中的操作执行目标进行轮廓检测,得到操作执行目标的轮廓位置信息,基于轮廓位置信息在超声图像中显示操作执行目标的轮廓。由于在超声图像中包括被操作目标和操作执行目标的情况下,可以对超声图像中的操作执行目标的轮廓进行检测,基于得到的操作执行目标的轮廓位置信息,在超声图像中显示操作执行目标的轮廓,因此,在操作者识别超声图像中的操作执行目标时,可以根据操作执行目标的轮廓分辨操作执行目标,可以提高操作者识别操作执行目标的准确度,进而使操作者可以准确使用操作执行目标对应的实体对被操作目标对应的实体进行操作。
此外,在术中超声导航过程中,针对超声图像中的手术器械目标或内窥镜镜头目标的轮廓进行检测,并将轮廓边缘进行多模式的图像增强处理,以提升目标区域的区分度、可视性,并在超声显示屏上进行显示,可以帮助医生减少分辨目标的难度,提升超声术中导航效率。
相应于上述一种超声图像处理方法，本申请实施例还提供了一种超声图像处理装置，下面对本申请实施例所提供的一种超声图像处理装置进行介绍。
如图18所示,一种超声图像处理装置,所述装置包括:
超声图像获取模块1810,用于获取超声图像,其中,所述超声图像包括被操作目标和操作执行目标,所述操作执行目标对应的实体对所述被操作目标对应的实体进行预设操作;
轮廓位置信息获取模块1820,用于对所述超声图像中的操作执行目标进行轮廓检测,得到所述操作执行目标的轮廓位置信息;
轮廓显示模块1830,用于基于所述轮廓位置信息在所述超声图像中显示所述操作执行目标的轮廓。
可见,在本申请实施例所提供的方案中,电子设备可以获取超声图像,其中,超声图像包括被操作目标和操作执行目标,操作执行目标对应的实体对被操作目标对应的实体进行预设操作,对超声图像中的操作执行目标进行轮廓检测,得到操作执行目标的轮廓位置信息,基于轮廓位置信息在超声图像中显示操作执行目标的轮廓。由于在超声图像中包括被操作目标和操作执行目标的情况下,可以对超声图像中的操作执行目标的轮廓进行检测,基于得到的操作执行目标的轮廓位置信息,在超声图像中显示操作执行目标的轮廓,因此,在操作者识别超声图像中的操作执行目标时,可以根据操作执行目标的轮廓分辨操作执行目标,可以提高操作者识别操作执行目标的准确度,进而使操作者可以准确使用操作执行目标对应的实体对被操作目标对应的实体进行操作。
作为本申请实施例的一种实施方式,上述轮廓位置信息获取模块1820,可以包括:
轮廓位置信息获取子模块,用于将所述超声图像输入预先训练的轮廓分割模型,基于所述超声图像的图像特征对所述超声图像进行轮廓分割,输出所述操作执行目标的轮廓位置信息;或,
用于对所述超声图像进行二值化处理,得到二值化超声图像;对所述二值化超声图像进行形态学处理,得到所述操作执行目标的轮廓位置信息。
作为本申请实施例的一种实施方式,上述轮廓显示模块1830,可以包括:
轮廓区域确定子模块,用于基于所述轮廓位置信息,确定所述超声图像中所述操作执行目标的轮廓区域;
轮廓区域显示子模块,用于对所述轮廓区域进行图像增强处理,得到增强处理后的所述操作执行目标的轮廓区域;
第一显示子模块,用于在所述增强处理后的所述操作执行目标的轮廓区域中显示所述操作执行目标的轮廓。
作为本申请实施例的一种实施方式,上述轮廓区域显示子模块,可以包括:
第一轮廓区域获取单元,用于基于所述轮廓区域的灰度值,对所述轮廓区域进行灰度增强处理,得到增强后的所述操作执行目标的轮廓区域。
作为本申请实施例的一种实施方式,所述轮廓区域显示子模块,可以包括:
彩色超声图像获取单元,用于将所述超声图像向预设类型的彩色空间进行映射,得到彩色超声图像; 或,基于所述轮廓区域的灰度值,对所述轮廓区域进行灰度增强处理,并将灰度增强处理后的超声图像向预设类型的彩色空间进行映射,得到彩色超声图像;
第二轮廓区域获取单元,用于对所述彩色超声图像中与所述轮廓区域对应的彩色图像区域进行彩色增强处理,得到增强处理后的所述操作执行目标的轮廓区域。
作为本申请实施例的一种实施方式,上述轮廓显示子模块,可以包括:
轮廓显示单元,用于基于所述轮廓位置信息,在增强处理后的所述操作执行目标的轮廓区域中突出显示所述操作执行目标的轮廓。
作为本申请实施例的一种实施方式,所述轮廓显示模块1830,可以包括:
第二显示子模块,用于基于所述轮廓位置信息,在所述超声图像中突出显示所述操作执行目标的轮廓。
作为本申请实施例的一种实施方式,所述超声图像获取模块1810,可以包括:
超声视频流获取子模块,用于获取超声设备采集的超声视频流;
超声图像获取子模块,用于从所述超声视频流中解析得到视频帧,作为超声图像。
本申请实施例还提供了一种电子设备,如图19所示,包括:
存储器1901,用于存放计算机程序;
处理器1902,用于执行存储器1901上所存放的程序时,实现上述任一实施例所述的一种超声图像处理方法。
并且上述电子设备还可以包括通信总线和/或通信接口,处理器1902、通信接口、存储器1901通过通信总线完成相互间的通信。
可见,在本申请实施例所提供的方案中,电子设备可以获取超声图像,其中,超声图像包括被操作目标和操作执行目标,操作执行目标对应的实体对被操作目标对应的实体进行预设操作,对超声图像中的操作执行目标进行轮廓检测,得到操作执行目标的轮廓位置信息,基于轮廓位置信息在超声图像中显示操作执行目标的轮廓。由于在超声图像中包括被操作目标和操作执行目标的情况下,可以对超声图像中的操作执行目标的轮廓进行检测,基于得到的操作执行目标的轮廓位置信息,在超声图像中显示操作执行目标的轮廓,因此,在操作者识别超声图像中的操作执行目标时,可以根据操作执行目标的轮廓分辨操作执行目标,可以提高操作者识别操作执行目标的准确度,进而使操作者可以准确使用操作执行目标对应的实体对被操作目标对应的实体进行操作。
上述电子设备提到的通信总线可以是外设部件互连标准(Peripheral Component Interconnect,PCI)总线或扩展工业标准结构(Extended Industry Standard Architecture,EISA)总线等。该通信总线可以分为地址总线、数据总线、控制总线等。为便于表示,图中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
通信接口用于上述电子设备与其他设备之间的通信。
存储器可以包括随机存取存储器(Random Access Memory,RAM),也可以包括非易失性存储器(Non-Volatile Memory,NVM),例如至少一个磁盘存储器。可选的,存储器还可以是至少一个位于远离前述处理器的存储装置。
上述的处理器可以是通用处理器,包括中央处理器(Central Processing Unit,CPU)、网络处理器(Network Processor,NP)等;还可以是数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
在本申请提供的又一实施例中,还提供了一种计算机可读存储介质,该计算机可读存储介质内存储有计算机程序,所述计算机程序被处理器执行时实现上述任一实施例所述的超声图像处理方法的步骤。
在本申请提供的又一实施例中,还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述实施例中任一实施例所述的超声图像处理方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或 功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中的各个实施例均采用相关的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置、系统、电子设备、计算机可读存储介质以及计算机程序产品而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
以上所述仅为本申请的较佳实施例,并非用于限定本申请的保护范围。凡在本申请的精神和原则之内所作的任何修改、等同替换、改进等,均包含在本申请的保护范围内。

Claims (12)

  1. An ultrasound image processing method, characterized in that the method comprises:
    acquiring an ultrasound image, wherein the ultrasound image includes an operated target and an operation execution target, and an entity corresponding to the operation execution target performs a preset operation on an entity corresponding to the operated target;
    performing contour detection on the operation execution target in the ultrasound image to obtain contour position information of the operation execution target;
    displaying a contour of the operation execution target in the ultrasound image based on the contour position information.
  2. The method according to claim 1, characterized in that the step of performing contour detection on the operation execution target in the ultrasound image to obtain the contour position information of the operation execution target comprises:
    inputting the ultrasound image into a pre-trained contour segmentation model, performing contour segmentation on the ultrasound image based on image features of the ultrasound image, and outputting the contour position information of the operation execution target; or,
    performing binarization on the ultrasound image to obtain a binarized ultrasound image, and performing morphological processing on the binarized ultrasound image to obtain the contour position information of the operation execution target.
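The second branch of claim 2 (binarization followed by morphological processing) is commonly realized as a morphological gradient: dilate and erode the binary image and keep the difference, which leaves a thin band along the target's boundary. A minimal NumPy sketch, assuming a fixed threshold and a 3x3 cross structuring element (both choices are illustrative assumptions, not taken from the application):

```python
import numpy as np

def binarize(image: np.ndarray, thresh: int = 128) -> np.ndarray:
    """Threshold a grey image into a boolean foreground mask."""
    return image >= thresh

def _shifted(mask: np.ndarray):
    p = np.pad(mask, 1, constant_values=False)
    # Centre plus the four 4-neighbours: a 3x3 cross structuring element.
    return (p[1:-1, 1:-1], p[:-2, 1:-1], p[2:, 1:-1],
            p[1:-1, :-2], p[1:-1, 2:])

def dilate(mask: np.ndarray) -> np.ndarray:
    c, u, d, l, r = _shifted(mask)
    return c | u | d | l | r

def erode(mask: np.ndarray) -> np.ndarray:
    c, u, d, l, r = _shifted(mask)
    return c & u & d & l & r

def morphological_gradient(mask: np.ndarray) -> np.ndarray:
    """Dilation minus erosion: the contour band of the binary target."""
    return dilate(mask) & ~erode(mask)
```

A production system would more likely use a library routine such as OpenCV's morphology operations, but the gradient above shows the principle the claim relies on.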
  3. The method according to claim 1 or 2, characterized in that the step of displaying the contour of the operation execution target in the ultrasound image based on the contour position information comprises:
    determining a contour region of the operation execution target in the ultrasound image based on the contour position information;
    performing image enhancement on the contour region to obtain an enhanced contour region of the operation execution target;
    displaying the contour of the operation execution target in the enhanced contour region of the operation execution target.
  4. The method according to claim 3, characterized in that the step of performing image enhancement on the contour region to obtain the enhanced contour region of the operation execution target comprises:
    performing grayscale enhancement on the contour region based on grayscale values of the contour region to obtain the enhanced contour region of the operation execution target.
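Claim 4 does not fix a particular grayscale enhancement; one common realization is a linear contrast stretch computed from the grey values inside the contour region only, leaving the rest of the image untouched. The sketch below is an illustrative assumption in that spirit:

```python
import numpy as np

def enhance_region_gray(image: np.ndarray, region_mask: np.ndarray) -> np.ndarray:
    """Linearly stretch the grey levels inside region_mask to the full
    0-255 range; pixels outside the region are left unchanged."""
    out = image.astype(np.float32).copy()
    vals = out[region_mask]
    lo, hi = vals.min(), vals.max()
    if hi > lo:  # avoid division by zero for a flat region
        out[region_mask] = (vals - lo) / (hi - lo) * 255.0
    return out.astype(np.uint8)
```

Because the stretch parameters come from the region's own grey-value statistics, a low-contrast tool that occupies a narrow grey band is expanded to the full display range while the surrounding tissue keeps its original appearance.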
  5. The method according to claim 3, characterized in that the step of performing image enhancement on the contour region to obtain the enhanced contour region of the operation execution target comprises:
    mapping the ultrasound image into a color space of a preset type to obtain a color ultrasound image; or, performing grayscale enhancement on the contour region based on grayscale values of the contour region, and mapping the grayscale-enhanced ultrasound image into a color space of a preset type to obtain a color ultrasound image;
    performing color enhancement on a color image region of the color ultrasound image corresponding to the contour region to obtain the enhanced contour region of the operation execution target.
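The two steps of claim 5 — mapping the grey image into a colour space of a preset type, then colour-enhancing the region corresponding to the contour — can be illustrated with a toy blue-to-red ramp as the "preset colour space" and a simple gain as the colour enhancement. Both choices are assumptions for illustration; a real system might instead use an HSV mapping or a standard pseudo-colour map:

```python
import numpy as np

def gray_to_pseudocolor(image: np.ndarray) -> np.ndarray:
    """Map a grey image into RGB with a blue-to-red ramp
    (an illustrative stand-in for the 'preset colour space')."""
    g = image.astype(np.float32) / 255.0
    rgb = np.stack([g, np.zeros_like(g), 1.0 - g], axis=-1)
    return (rgb * 255).astype(np.uint8)

def enhance_region_color(rgb: np.ndarray, region_mask: np.ndarray,
                         gain: float = 1.5) -> np.ndarray:
    """Boost colour intensity inside the contour region, clipped to 255."""
    out = rgb.astype(np.float32)
    out[region_mask] = np.clip(out[region_mask] * gain, 0, 255)
    return out.astype(np.uint8)
```

The design point the claim captures is the ordering: pseudo-colouring the whole frame first keeps the display consistent, while the gain applied only inside the contour region makes the operation execution target stand out in colour.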
  6. The method according to claim 3, characterized in that the step of displaying the contour of the operation execution target in the enhanced contour region of the operation execution target comprises:
    highlighting the contour of the operation execution target in the enhanced contour region of the operation execution target based on the contour position information.
  7. The method according to claim 1 or 2, characterized in that the step of displaying the contour of the operation execution target in the ultrasound image based on the contour position information comprises:
    highlighting the contour of the operation execution target in the ultrasound image based on the contour position information.
  8. The method according to claim 1, characterized in that the step of acquiring an ultrasound image comprises:
    acquiring an ultrasound video stream collected by an ultrasound device;
    parsing video frames from the ultrasound video stream as ultrasound images.
  9. An ultrasound image processing apparatus, characterized in that the apparatus comprises:
    an ultrasound image acquisition module, configured to acquire an ultrasound image, wherein the ultrasound image includes an operated target and an operation execution target, and an entity corresponding to the operation execution target performs a preset operation on an entity corresponding to the operated target;
    a contour position information acquisition module, configured to perform contour detection on the operation execution target in the ultrasound image to obtain contour position information of the operation execution target;
    a contour display module, configured to display a contour of the operation execution target in the ultrasound image based on the contour position information.
  10. An electronic device, characterized by comprising:
    a memory, configured to store a computer program;
    a processor, configured to implement the method according to any one of claims 1-8 when executing the program stored in the memory.
  11. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method according to any one of claims 1-8.
  12. A computer program product containing instructions, characterized in that, when executed by a computer, the computer program product implements the method according to any one of claims 1-8.
PCT/CN2023/131822 2022-11-18 2023-11-15 Ultrasound image processing method and apparatus, electronic device, and storage medium WO2024104388A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211447127.9A CN117237268A (zh) 2022-11-18 2022-11-18 Ultrasound image processing method and apparatus, electronic device, and storage medium
CN202211447127.9 2022-11-18

Publications (1)

Publication Number Publication Date
WO2024104388A1 (zh)

Family

ID=89083206


Country Status (2)

Country Link
CN (1) CN117237268A (zh)
WO (1) WO2024104388A1 (zh)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102525662A (zh) * 2012-02-28 2012-07-04 中国科学院深圳先进技术研究院 Three-dimensional visualization surgical navigation method and system for tissues and organs
CN103808804A (zh) * 2014-03-06 2014-05-21 北京理工大学 Fast pseudo-color mapping imaging method for ultrasonic microscopy
CN106952347A (zh) * 2017-03-28 2017-07-14 华中科技大学 Ultrasound surgery assisted navigation system based on binocular vision
WO2019020048A1 (zh) * 2017-07-28 2019-01-31 浙江大学 Spine image generation system based on ultrasonic rubbing technique, and navigation and positioning system for spine surgery
CN109949254A (zh) * 2019-03-19 2019-06-28 青岛海信医疗设备股份有限公司 Puncture needle ultrasound image enhancement method and device
CN111317567A (zh) * 2018-12-13 2020-06-23 柯惠有限合伙公司 Thoracic imaging, distance measuring, and notification system and method
WO2020243493A1 (en) * 2019-05-31 2020-12-03 Intuitive Surgical Operations, Inc. Systems and methods for detecting tissue contact by an ultrasound probe
CN114581944A (zh) * 2022-02-18 2022-06-03 杭州睿影科技有限公司 Millimeter wave image processing method and apparatus, and electronic device
CN115272362A (zh) * 2022-06-22 2022-11-01 杭州迪英加科技有限公司 Method and device for segmenting effective regions of digital pathology whole-field images
CN115300100A (zh) * 2022-08-11 2022-11-08 威朋(苏州)医疗器械有限公司 Surgical guidance and monitoring method, apparatus, and computer device

Also Published As

Publication number Publication date
CN117237268A (zh) 2023-12-15
