CN113380385A - Image display method, device, equipment and storage medium - Google Patents

Image display method, device, equipment and storage medium

Info

Publication number
CN113380385A
CN113380385A
Authority
CN
China
Prior art keywords
marked
area
region
target area
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110594505.5A
Other languages
Chinese (zh)
Inventor
张黎玮
罗婷
段琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202110594505.5A priority Critical patent/CN113380385A/en
Publication of CN113380385A publication Critical patent/CN113380385A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The embodiments of the present application provide an image display method, an image display device, image display equipment, and a storage medium, wherein the method includes: acquiring a target image; determining at least one region to be marked in the target image; marking each region to be marked with a marking graph placed within that region; in response to trigger information, determining, among the at least one marked area, a target area that matches the trigger information; and, in response to a selection instruction, outputting description information of the target area, wherein the selection instruction indicates that the target area is selected. In this way, when the target image is a medical image, each lesion area is marked with a graph confined to that lesion area's own extent, so the marking graphs of the marked areas do not occlude one another, and the user can view the details of a selected lesion area more clearly.

Description

Image display method, device, equipment and storage medium
Technical Field
The embodiments of the present application relate to the field of image processing, and relate to, but are not limited to, an image display method, an image display device, image display equipment, and a storage medium.
Background
In the related art, Digital Radiography (DR) and Computed Tomography (CT) differ in form: CT produces tomographic images, while DR produces a plain film. When a DR image is read by artificial intelligence, a large number of false-positive lesions in the image causes the lesion frames to occlude one another, so that the user cannot clearly view the lesion positions.
Disclosure of Invention
The embodiment of the application provides an image display technical scheme.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an image display method, which comprises the following steps:
acquiring a target image;
determining at least one region to be marked in the target image;
marking the at least one region to be marked by adopting a marking pattern in the at least one region to be marked to obtain at least one marked region;
in response to trigger information, determining a target area matched with the trigger information in the at least one marked area; and
outputting description information of the target area in response to a selection instruction, wherein the selection instruction indicates that the target area is selected.
In some embodiments, said determining, in response to trigger information, a target area that matches said trigger information among said at least one marked area comprises: in a case where the hover position of the cursor is detected to be located on any marking graph, determining that the trigger information is detected on that marking graph; and determining the area to be marked corresponding to the hover position of the cursor as the target area.
In some embodiments, said determining, in response to trigger information, a target area that matches said trigger information among said at least one marked area further comprises: adjusting the display state of the marking graph of the target area to a first display frame, wherein the coverage of the first display frame is larger than that of the marking graph. In this manner, the user can quickly and clearly browse the various marked areas.
In some embodiments, the adjusting the display state of the marked graphic of the target area to the first display frame includes: determining target size information surrounding the target area based on the size information of the target area; and adjusting the marking graph of the target area into a first display frame with the target size information. Thus, the position of the target area can be clearly marked by adopting the first display frame.
In some embodiments, the adjusting the display state of the marked graphic of the target area to the first display frame further includes: acquiring attribute information of the target area; and displaying the attribute information at a preset position of the first display frame. In this way, the user can quickly grasp the approximate situation of the target area.
In some embodiments, said outputting, in response to the selection instruction, the description information of the target area includes: outputting a detail list in an unexpanded state in response to the selection instruction, wherein the detail list includes the description information; and adjusting the detail list from the unexpanded state to an expanded state in response to a display instruction so as to display the description information in the detail list, wherein the display instruction indicates that the description information is to be displayed. In this way, the description information in the detail list can be presented, and the user can learn the description information of the target area in further detail.
In some embodiments, said marking, within the at least one region to be marked, the at least one region to be marked with a marking pattern to obtain at least one marked region includes: determining the range of each region to be marked based on the size information of each region to be marked; and marking each region to be marked by adopting a marking pattern with an area smaller than that of each region to be marked within the determined range of each region to be marked to obtain at least one marked region. Therefore, the marking graphs of different areas to be marked cannot be mutually shielded, and then a user can clearly check each marked area.
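As a concrete illustration of the sizing rule above, the following sketch shrinks a default dot so that it always fits inside its region; the function name and the default diameter of 8 pixels are illustrative assumptions, not part of the application:

```python
def choose_marker_diameter(region_w, region_h, default_d=8):
    """Shrink a default dot so its footprint never exceeds the extent of
    the region to be marked; markers then stay inside their own regions
    and cannot occlude one another."""
    return min(default_d, region_w, region_h)

# A large region keeps the default dot; a narrow one gets a smaller dot.
print(choose_marker_diameter(40, 30))  # 8
print(choose_marker_diameter(5, 30))   # 5
```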
In some embodiments, before the adjusting the detail list from the unexpanded state to the expanded state to display the description information in the detail list in response to a display instruction, the method further comprises: determining, in the at least one marked area, the other marked areas outside the target area; and, in response to the selection instruction, adjusting the first display frame to a second display frame and adjusting the marking graphs of the other marked areas from a display state to a hidden state, wherein the display states of the first display frame and the second display frame are different. In this way, when the target area is selected, the marking graphs of the other marked areas are hidden, and the target area can be highlighted.
In some embodiments, after the adjusting the first display frame to the second display frame and the adjusting the marking graphs of the other marked areas from the display state to the hidden state in response to the selection instruction, the method further comprises: in response to an exit instruction, adjusting the first display frame of the target area back to the marking graph of the target area, wherein the exit instruction indicates exiting the selection of the target area; and adjusting the marking graphs of the other marked areas from the hidden state back to the display state. In this way, when the selection of the target area is exited, the other marked areas are adjusted from the hidden state to the display state, which makes it convenient for the user to select another marked area.
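The select/exit behaviour above amounts to toggling per-region marker visibility. A minimal sketch, with a hypothetical function name that is not part of the application:

```python
def marker_visibility(n_regions, target_idx, target_selected):
    """Return a per-region visibility flag: while the target is selected,
    every other region's marker is hidden; on exit, all are shown again."""
    return [i == target_idx or not target_selected for i in range(n_regions)]

# Selecting region 1 of 3 hides the other two markers...
print(marker_visibility(3, 1, True))   # [False, True, False]
# ...and exiting the selection restores them all.
print(marker_visibility(3, 1, False))  # [True, True, True]
```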
In some embodiments, after the outputting the description information of the target area in response to the selection instruction, the method further comprises: determining a zoom size in response to a zoom instruction, wherein the zoom instruction indicates that the selected target area is to be zoomed; and zooming the target area based on the zoom size. In this way, the user can zoom in on the selected target area so as to view its detail information more clearly and conveniently.
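The zoom step can be sketched as scaling the selected region's bounding box about its centre; the (x, y, w, h) box representation and the function name are illustrative assumptions:

```python
def zoom_region(x, y, w, h, factor):
    """Scale the selected target region about its centre by `factor`,
    returning the new (x, y, w, h) box."""
    cx, cy = x + w / 2.0, y + h / 2.0   # centre stays fixed
    nw, nh = w * factor, h * factor     # scaled width and height
    return (cx - nw / 2.0, cy - nh / 2.0, nw, nh)

# Doubling a 40x30 region keeps its centre at (30, 25).
print(zoom_region(10, 10, 40, 30, 2))  # (-10.0, -5.0, 80, 60)
```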
In some embodiments, in a case where the target image is a medical image, the region to be marked is a lesion area, and the marking the at least one region to be marked with a marking graph in the at least one region to be marked to obtain at least one marked area includes: determining at least one lesion area in the medical image; and marking each lesion area with a dot graph within that lesion area to obtain at least one marked area. In this way, each lesion area is marked with a relatively conspicuous dot graph of small coverage, so the marking graphs of the marked lesion areas do not occlude one another, and the user can conveniently view each marked lesion area.
An embodiment of the present application provides an image display device, the device including:
the first acquisition module is used for acquiring a target image;
the first determination module is used for determining at least one region to be marked in the target image;
the first marking module is used for marking the at least one area to be marked by adopting a marking pattern in the at least one area to be marked to obtain at least one marked area;
a first adjusting module, configured to determine, in response to trigger information, a target area that matches the trigger information among the at least one marked area; and
the first output module is used for outputting the description information of the target area in response to a selection instruction, wherein the selection instruction indicates that the target area is selected.
In some embodiments, the first adjustment module comprises:
the first determining submodule is used for determining that the trigger information is detected on any marking graph in a case where the hover position of the cursor is detected to be located on that marking graph; and
the second determining submodule is used for determining the area to be marked corresponding to the hover position of the cursor as the target area.
In some embodiments, the first adjusting module further comprises:
the first adjusting submodule is used for adjusting the display state of the marking graph of the target area into a first display frame; wherein, the coverage of the first display frame is larger than that of the mark graph.
In some embodiments, the first adjustment submodule includes:
a first determination unit configured to determine target size information surrounding the target area based on size information of the target area; and
the first adjusting unit is used for adjusting the marking graph of the target area into a first display frame with the target size information.
In some embodiments, the first adjusting submodule further includes:
a first acquisition unit configured to acquire attribute information of the target area; and
the first display unit is used for displaying the attribute information at a preset position of the first display frame.
In some embodiments, the first output module comprises:
the first output submodule is used for responding to the selected instruction and outputting a detail list in an unexpanded state; wherein the list of details includes the description information; and
a first display sub-module, configured to adjust the detail list from the unexpanded state to an expanded state in response to a display instruction to display the description information in the detail list, where the display instruction indicates to display the description information.
In some embodiments, the first marking module comprises:
the third determining submodule is used for determining the range of each region to be marked based on the size information of each region to be marked; and
the first marking submodule is used for marking each region to be marked, within the range of that region, with a marking graph whose area is smaller than that of the region, to obtain at least one marked region.
In some embodiments, the apparatus further comprises:
a second determining module, configured to determine, in the at least one marked region, other marked regions outside the target region; and
a second adjusting module, configured to adjust the first display frame to a second display frame in response to the selected instruction, and adjust the marked graphics of the other marked areas from a display state to a hidden state; wherein the display states of the first display frame and the second display frame are different.
In some embodiments, the apparatus further comprises:
a third adjusting module, configured to adjust the first display frame of the target area to a mark pattern of the target area in response to an exit instruction, where the exit instruction indicates to exit from selecting the target area; and
the fourth adjusting module is used for adjusting the marking graphs of the other marked areas from the hidden state to the display state.
In some embodiments, the apparatus further comprises:
a third determining module, configured to determine a scaling size in response to a scaling instruction, where the scaling instruction instructs to scale the selected target region; and
a first scaling module, configured to scale the target region based on the scaling size.
In some embodiments, in a case where the target image is a medical image, the region to be marked is a lesion region, the first marking module includes:
a fourth determination submodule for determining at least one lesion area in the medical image; and
the second marking submodule is used for marking each lesion area with a dot graph within that lesion area to obtain at least one marked area.
Correspondingly, an embodiment of the present application provides a computer storage medium, where computer-executable instructions are stored on the computer storage medium, and after the computer-executable instructions are executed, the image display method according to the first aspect described above can be implemented.
An embodiment of the present application provides a computer device. The computer device includes a memory and a processor; computer-executable instructions are stored on the memory, and the image display method described above can be implemented when the processor runs the computer-executable instructions on the memory.
The embodiments of the present application provide an image display method, device, equipment, and storage medium. For any target image, at least one region to be marked is determined in the target image; each region to be marked is marked with a marking graph within the region, and the marked region is obtained and output. Because each marking graph lies within its own region to be marked, the marking graphs of different marked regions do not overlap one another, which makes the marked regions easy for the user to identify. Then, in response to trigger information, a target area matching the trigger information is determined among the marked areas; finally, in response to a selection instruction, the description information of the target area is output. In this way, when the trigger information is detected, the target area matching it is determined, so that the user can be promptly prompted with the approximate position of the target area; and when the selection instruction is received, the description information of the target area is presented to the user, so that the user can clearly and conveniently view the relevant description of the selected target area.
Drawings
Fig. 1 is a schematic flow chart illustrating an implementation of an image display method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another implementation of an image display method according to an embodiment of the present application;
fig. 3 is a schematic view of an application scenario of an image display method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image display device according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present application clearer, specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings. The following examples are intended to illustrate the present application but are not intended to limit its scope.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first/second/third" are used only to distinguish similar objects and do not denote a particular order; where permitted, the specific order or sequence may be interchanged so that the embodiments of the application described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) DR, also known as digital radiography, receives X-rays neither with plain film nor with an imaging plate that must be scanned by a laser to read out the information, but with a flat panel detector of one of various types. Because developing and fixing are no longer required, and the imaging plate no longer needs to be sent to a reading system for processing, the inspection speed is high.
2) Artificial Intelligence (AI) is a new technical science that studies and develops theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence. Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems, among others.
An exemplary application of the image display device provided in the embodiments of the present application is described below, and the image display device provided in the embodiments of the present application may be implemented as various types of user terminals such as a notebook computer having an image capture function, a tablet computer, a desktop computer, a camera, a mobile device (e.g., a personal digital assistant, a dedicated messaging device, and a portable game device), and may also be implemented as a server. In the following, an exemplary application will be explained when the device is implemented as a terminal or a server.
The method can be applied to a computer device, and the functions realized by the method can be realized by a processor in the computer device calling program code, where the program code can be stored in a computer storage medium. The computer device thus comprises at least the processor and the storage medium.
Fig. 1 is a schematic flowchart of an implementation of an image display method provided in an embodiment of the present application. As shown in Fig. 1, the method is described in conjunction with the following steps:
step S101, a target image is acquired.
In some embodiments, the target image may be an image acquired by any type of image acquisition device. It may be a medical image acquired by a medical imaging device, for example an image of the lung or heart scanned at a scanning workstation, a CT image, a Magnetic Resonance Imaging (MRI) image, a Positron Emission Tomography (PET) image, or a DR image; or it may be an image captured by a camera or a mobile phone in another scene. In step S101, the target image may be acquired by the device itself or received from another device.
Step S102, at least one region to be marked in the target image is determined.
In some embodiments, a region to be marked in the target image is a region whose picture content meets a preset requirement; the preset requirement can be determined based on the scene of the target image and the user's input needs. In some possible implementations, the region that satisfies the user's needs is selected in the target image as required. For example, if the target image is a medical image, the region to be marked may be a lesion area exhibiting a certain pathological structure. In other embodiments, the region to be marked may also be a region of the medical image that does not contain a lesion; for example, if the user needs a detailed description of normal tissue, the region to be marked may be a region of normal tissue in the medical image. Alternatively, if the target image is a captured landscape image, the region to be marked may be a region where an object such as a flower, grass, or tree appears in the image.
Step S103, marking the at least one region to be marked by adopting a marking graph in the at least one region to be marked to obtain at least one marked region.
In some embodiments, the size information of a region to be marked includes the length and width of the region, and so on. A marking graph whose area is smaller than that of the region is set within the region according to this size information; the shape of the marking graph may be the same as or different from the shape of the region, and the marking graph covers an area that does not exceed the region to be marked. In some possible implementations, a plurality of non-overlapping marking graphs mark a plurality of regions to be marked in one-to-one correspondence, yielding a plurality of marked regions. In this way, in the target image, the regions to be marked are marked with non-overlapping marking graphs, so that a marked graph comprising a plurality of marked regions is obtained and presented. In a specific example, if the size information indicates that the region to be marked is an ellipse, the marking graph may be any graph whose coverage is smaller than the ellipse, such as a circle, a square, a rectangle, or an irregular graph.
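As a sketch of this one-to-one marking, the snippet below places one dot marker at the centre of each (x, y, w, h) region; because each marker lies inside its own region, markers of disjoint regions never occlude one another. The function name and box representation are illustrative assumptions:

```python
def place_markers(regions):
    """Place one dot marker at the centre of each region (x, y, w, h).
    Each marker lies inside its own region, so markers of disjoint
    regions cannot overlap."""
    return [(x + w // 2, y + h // 2) for (x, y, w, h) in regions]

regions = [(10, 10, 40, 30), (100, 50, 20, 20)]
print(place_markers(regions))  # [(30, 25), (110, 60)]
```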
And step S104, responding to the trigger information, and determining a target area matched with the trigger information in the at least one marked area.
In some embodiments, the trigger information may be input by another device, or may be the detection of the hover position of the mouse cursor. For example, when trigger information for changing the display state of a marked area, input by another device, is received, the target area corresponding to the trigger information is determined; or, when the hover position of the mouse cursor is detected to be located on a certain marking graph, the marked area corresponding to that marking graph is taken as the target area. Meanwhile, the display state of the marking graph of the target area is changed, so that the user can observe the target area in time and can quickly learn the approximate position of each marked area by inputting trigger information.
And step S105, responding to the selected instruction, and outputting the description information of the target area.
In some embodiments, the selection instruction indicates that the target area is selected; the user may input the selection instruction by clicking any point within the first display frame. For example, the description information of the target area is output in response to a selection instruction input in the first display frame. The selection instruction may also be an instruction, sent by another device, to select the target area. When the selection instruction is received, the description information of the target area is presented to the user, for example in the form of a table or text. The description information describes the details of the content contained in the target area, and includes: the position information of the target area, the shooting angle of the image, the picture content of the target area, and the like. Taking the target area being a lesion area as an example, the description information of the lesion area includes: the specific body part where the lesion is located, the severity of the lesion, the size of the lesion, the angle at which the medical image of the patient was taken, the possible direction of lesion progression, and the like.
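The unexpanded/expanded detail-list behaviour described in the earlier embodiments can be modelled as a small state holder; the class and method names below are hypothetical, not part of the application:

```python
class DetailList:
    """Detail list for a selected target area: output in an unexpanded
    state on selection, expanded on a display instruction."""

    def __init__(self, description):
        self.description = description
        self.expanded = False

    def on_select(self):
        # The selection instruction outputs the list without expanding it.
        self.expanded = False

    def on_display(self):
        # The display instruction expands the list to reveal the text.
        self.expanded = True
        return self.description

details = DetailList("lesion: upper left lobe, 12 mm, round opacity")
details.on_select()
print(details.expanded)      # False
print(details.on_display())  # lesion: upper left lobe, 12 mm, round opacity
```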
In the embodiment of the application, firstly, in the area to be marked, the area to be marked is marked by adopting a marking graph to obtain and present at least one marked area; therefore, the marking patterns are matched with the areas to be marked, so that the marking patterns of different marking areas cannot be mutually overlapped, and a user can conveniently identify the marked areas. Under the condition that the trigger information is detected, the target area is determined, and a user is prompted in time; and under the condition of receiving the selected instruction, the description information of the target area is presented for the user, so that the user can more clearly view the related description of the target area.
In some embodiments, taking the target image being a medical image and the region to be marked being a lesion area as an example, the lesion area can be marked clearly and without occlusion by placing a dot at the center of the lesion area, which can be implemented by the following steps:
in a first step, in the medical image, at least one lesion area is determined.
In some embodiments, the medical image may be a DR image, such as a DR image taken of a patient's lung; by adopting an artificial intelligence mode, a focus area is determined in the medical image. For example, a neural network may be used to perform lesion recognition on the input medical image, thereby determining a lesion region.
Second, each lesion area is marked with a dot graph within that lesion area to obtain at least one marked area.
In some embodiments, each lesion area is marked with a dot graph; the mark may be placed at any point within the lesion area, for example at the center point of the lesion area.
Third, a marked graphic including the marked lesion area is output.
In some embodiments, on the display interface of the image display device, the displayed picture content includes a marked graph in which each marked lesion area is marked with a dot image; in this way, the user can clearly see the location of each lesion area on the marked graph. Because each lesion area is marked with a relatively conspicuous, small-area dot graph, the marking graphs of the marked lesion areas do not occlude one another, and the user can conveniently view each marked lesion area.
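Rendering the dot marks can be sketched as stamping filled dots onto a raster; the pure-Python canvas below stands in for the real display surface and is an illustrative assumption:

```python
def draw_dot(canvas, cx, cy, radius=2):
    """Stamp a filled dot (value 1) onto a 2-D list-of-lists canvas,
    clipped to the canvas bounds."""
    h, w = len(canvas), len(canvas[0])
    for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                canvas[y][x] = 1
    return canvas

canvas = [[0] * 9 for _ in range(9)]
draw_dot(canvas, 4, 4, radius=1)
print(sum(map(sum, canvas)))  # 5: the centre pixel plus its 4 neighbours
```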
In some embodiments, whether trigger information is generated is detected from the position at which the mouse cursor hovers; that is, "in response to the trigger information, determining the target region matching the trigger information in the at least one marked region" in step S104 may be implemented as follows:
Step S141: when the hover position of the cursor is detected to lie on any marking graphic, it is determined that trigger information is detected on that graphic.
In some embodiments, the movement track of the mouse is monitored; if the cursor's hover position lies on a marking graphic, trigger information is deemed to be formed for the region marked by that graphic. In a specific example, taking a dot as the marking graphic, when the cursor is detected hovering over any dot, trigger information is formed for the region marked by that dot.
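The hover detection above can be sketched as a simple hit test; the disc-shaped hit area and its radius are illustrative assumptions, not details from this text:

```python
def marker_under_cursor(cursor, markers, radius=6):
    """Return the index of the dot marker the cursor hovers over, or None.

    `markers` is a list of (x, y) dot centers; the hit area is a disc of
    `radius` pixels around each dot (both are illustrative choices).
    """
    cx, cy = cursor
    for i, (mx, my) in enumerate(markers):
        # Squared-distance test: trigger information is formed for the
        # region marked by the first dot under the cursor.
        if (cx - mx) ** 2 + (cy - my) ** 2 <= radius ** 2:
            return i
    return None
```

For example, with dots at (10, 10) and (50, 50), a cursor at (12, 11) hits the first dot, while a cursor at (30, 30) hits neither.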
Step S142: the region to be marked that corresponds to the cursor's hover position is determined as the target region.
In some embodiments, the region marked by the graphic under the cursor's hover position is taken as the target region whose marker display state is to be changed. In a specific example, taking the target image as a medical image and the region to be marked as a lesion region, if the cursor hovers on the marking graphic of a lung lesion region, that lung lesion region is taken as the target region and the display state of its marking graphic is changed.
Step S143: the display state of the target region's marking graphic is adjusted to a first display frame.
In some embodiments, the display state of the target region's marking graphic includes its display mode, shape, size, and so on. For example, the marking graphic of the target region is adjusted to a first display frame whose coverage is larger than that of the graphic; the first display frame surrounds the target region and may be of any shape, such as a rectangle or a square.
The coverage of the first display frame is larger than that of the marking graphic; for example, the frame may be the smallest rectangle that encloses the target region. The trigger information triggers the change in the marked region's display state: the small marking graphic on the target region is replaced by a first display frame surrounding it. Taking the target image as a medical image and the region to be marked as a lesion region, as shown in fig. 3, when trigger information is detected the marking graphic (dot 303) is converted into a first display frame (focus frame 302) surrounding the target region, so that the user can clearly see the target region's position.
In steps S141 to S143 above, by detecting whether the cursor's hover position lies on a marking graphic, the target region whose marker display state needs to change is determined, and its marking graphic is adjusted to the larger first display frame, allowing the user to browse each marked region quickly and clearly.
In some embodiments, the first display frame, whose coverage is larger than the marking graphic's, is determined by analyzing the size information of the target region; that is, step S143 may be implemented as follows:
First, target size information surrounding the target region is determined based on the size information of the target region.
In some embodiments, the size information of the target region is obtained first, and the target size information surrounding it is then determined from that size information. The size information of the target region includes its length, width, and pixel count, which together characterize its size. It may be obtained by edge detection on the image region, or determined from the ratio of the target region's area to the area of the target image. From the target region's size, target size information of a given shape surrounding the region is determined; for example, the size of the smallest rectangle surrounding the target region.
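A minimal sketch of deriving the first display frame's size — here, the smallest axis-aligned rectangle surrounding a region given as pixel coordinates (the pixel-list representation is an assumption of the sketch):

```python
def min_enclosing_rect(region_pixels):
    """Smallest axis-aligned rectangle (x, y, width, height) that
    surrounds the region — usable as the first display frame's size."""
    xs = [x for x, _ in region_pixels]
    ys = [y for _, y in region_pixels]
    x0, y0 = min(xs), min(ys)
    # +1 because the rectangle must cover the last pixel column/row.
    return (x0, y0, max(xs) - x0 + 1, max(ys) - y0 + 1)
```

For a region with pixels at (2, 3), (5, 4) and (4, 9), the frame is (2, 3, 4, 7).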
Second, the marking graphic of the target region is adjusted to a first display frame having the target size information.
In some embodiments, when trigger information is detected, the marking graphic of the target region is adjusted to a first display frame surrounding it, so that the user can quickly browse each marked region and accurately locate it.
In these two steps, the size information of the target region is analyzed to determine the target size information; thus, when trigger information is detected, the small marking graphic of the target region is adjusted to a first display frame surrounding the region, which clearly marks the target region's position.
In another embodiment, the attribute information of the target region is output while steps S141 to S143 are performed: when trigger information is detected, the attribute information of the target region is presented to the user at the same time as its marking graphic is adjusted to the first display frame. That is, step S143 further includes the following steps:
First, the attribute information of the target region is obtained.
In some embodiments, the attribute information of the target region includes its name, rough location, and the like. If the target region is a lesion region, the attribute information includes the lesion name, the body part it belongs to, the lesion size, and so on.
Second, the attribute information is displayed at a preset position of the first display frame.
In some embodiments, the attribute information is presented at a position of the first display frame at the same time as the display state of the target region's marking graphic is changed; for example, it is output at an edge of the first display frame or inside it. Taking the target image as a medical image and the target region as a lesion region, the marking graphic of the lesion region is adjusted to a first display frame surrounding it, while summary information such as the lesion name and the body part it lies in is output at the upper edge of the frame, so the user can quickly grasp the general situation of the lesion. In this way, outputting the attribute information with the first display frame while the marker's display state changes lets the user quickly grasp the target region's approximate situation.
In some embodiments, after selecting the target region, the user may further click on the target region's detail list to view its detailed description; that is, step S105 may be implemented by the steps shown in fig. 2, which is another implementation flow diagram of the image display method provided in this embodiment of the application. The following description refers to the steps shown in figs. 1 and 2:
step S201, in response to the selected instruction, outputting a detail list in an unexpanded state.
In some embodiments, the selection instruction is input in the first display frame, for example, a user performs a click operation in the first display frame to select the target area, and at the same time, a detailed list of the area to be marked can be obtained and displayed. The list of details may be in an unexpanded state, including: description information associated with the target area. In a specific example, the unexpanded detail list in the unexpanded state can be presented at the edge of the first display frame, so that the user can view the detail information in the list by clicking on the detail list in the unexpanded state when the user wants to view the detail list.
Step S202, responding to a display instruction, adjusting the detail list from the undeployed state to a deployed state, so as to display the description information in the detail list.
In some embodiments, the display instruction indicates to display the descriptive information. The display instruction may be obtained by a user through clicking on a detail list, and in response to the display instruction input on the detail list, the detail list is adjusted from the unexpanded state to an expanded state, and the description information in the detail list is presented. The instruction may also be input by other devices. And in the case of obtaining the display instruction, adjusting the display state of the detail list, and expanding the detail list, so as to present the content in the detail list for the user. In this way, when the selected instruction is obtained, the unexpanded detail list is presented, and the detail list is expanded by further detecting the input display instruction, so that the description information in the detail list can be presented, and the user can know the description information of the target area in more detail.
In some embodiments, the shape and size of the marking graphic are set according to the size information of the region to be marked, so that the graphics of different regions do not occlude one another; that is, step S102 may be implemented as follows:
Step S121: the range of each region to be marked is determined based on its size information.
In some embodiments, the area of each region to be marked is determined from its size information, such as its length, width, or perimeter, giving the coverage of each region. Taking the target image as a medical image and the region to be marked as a lesion region, the area of the lesion region, and hence its coverage, can be obtained by analyzing its size information.
Step S122: within the determined range of each region to be marked, the region is marked with a marking graphic whose area is smaller than the region's, obtaining at least one marked region.
In some embodiments, a marking graphic smaller than each region's area is set according to that area; in a specific example, the graphic may be a dot. Each region to be marked is marked with this smaller graphic placed inside it; as a result, the graphics of different regions do not occlude one another, and the user can clearly view each marked region.
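One way to keep the marking graphic's area smaller than the region's is to size the dot from the region's area; the area fraction and minimum radius below are assumed values for illustration, not figures from this text:

```python
import math

def dot_radius_for_region(region_area, fraction=0.05, min_radius=2):
    """Pick a dot radius whose disc covers only a small fraction of the
    region's area, so the marker never obscures the region it annotates.

    `fraction` (5% of the region area) and `min_radius` (a visibility
    floor in pixels) are illustrative assumptions.
    """
    r = math.sqrt(fraction * region_area / math.pi)
    return max(min_radius, round(r))
```

For example, a region of 1000 px² gets a dot of radius 4 (disc area about 50 px², well under the region's area), while a tiny 10 px² region still gets the 2 px visibility floor.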
In some embodiments, when the target region is selected, the marking graphics of the other marked regions are hidden so that the target region stands out; that is, while S201 above is performed, the method further includes the following steps:
Step S211: among the at least one marked region, the marked regions other than the target region are determined.
In some embodiments, upon receiving the selection instruction, the marked regions other than the target region corresponding to the instruction are screened out of the marked regions.
Step S212: in response to the selection instruction, the first display frame is adjusted to a second display frame, and the marking graphics of the other marked regions are adjusted from a displayed state to a hidden state.
In some embodiments, when a selection instruction indicating the selected target region is received, the marking graphics of the other marked regions are hidden automatically, for example by rendering them transparent, and the display state of the target region's first display frame is changed so that the user can tell whether the target region is selected; for example, the color of the first display frame is changed to indicate that the region inside it is selected.
In other embodiments, the display state of the first display frame shown when trigger information is detected, and that of the second display frame shown in response to the selection instruction, may be set in advance; for example, the two frames may differ in color or line style. In some possible implementations, the user may input a selection instruction for any marked region by clicking on it, after which the region is shown as selected inside the second display frame. By changing the current display state of the marked region's graphic in response to the input selection instruction, the user can see the selected region more clearly.
After step S212, the selection of the target region can be exited by clicking on any other area of the target image, which may be implemented by the following steps S213 and S214:
Step S213: in response to an exit instruction, the first display frame of the target region is adjusted back to the target region's marking graphic.
In some embodiments, the exit instruction indicates exiting the selection of the target region; it may be input by clicking on another image area, or by another device. The target region here is the region selected by the selection instruction, i.e., the region surrounded by the second display frame. In some possible implementations, to exit the selection, the image areas of the target image other than the target region corresponding to the selection instruction are determined; then, in response to an exit instruction input in those other areas, the first display frame of the target region is adjusted back to its marking graphic.
Step S214: the marking graphics of the other marked regions are adjusted from the hidden state back to the displayed state.
In some embodiments, in response to the exit instruction, the hidden marking graphics of the other marked regions leave the hidden state, i.e., are adjusted from hidden to displayed, and from transparent back to opaque. Thus, when the user wants to exit the target region, clicking in any image area outside the selected region exits the selection, while the other marked regions are restored from hidden to displayed, making it convenient to select another marked region.
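The hover/select/exit behavior of steps S141 to S143 and S211 to S214 can be sketched as a small state model; the state names and class layout below are illustrative, not taken from this text:

```python
class MarkedRegion:
    """A marked region with a display state for its marking graphic:
    'dot' (default), 'hover_frame' (first display frame on cursor hover),
    or 'selected_frame' (second display frame after selection)."""
    def __init__(self, name):
        self.name = name
        self.state = "dot"
        self.visible = True

def hover(regions, i):
    """Trigger information detected: show the first display frame."""
    regions[i].state = "hover_frame"

def select(regions, i):
    """Selection instruction: second display frame, hide other markers."""
    regions[i].state = "selected_frame"
    for j, r in enumerate(regions):
        r.visible = (j == i)  # other regions' dots become hidden

def exit_selection(regions):
    """Exit instruction: restore all markers to the default dot state."""
    for r in regions:
        r.state = "dot"
        r.visible = True
```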
In some embodiments, the content of the selected region can be inspected further by zooming it; that is, after the target region is selected, zooming can be implemented as follows:
In the first step, a zoom size is determined in response to a zoom instruction.
In some embodiments, the zoom instruction indicates that the selected target region is to be zoomed. It may be input inside the selected target region, for example by a pinch gesture on the display interface, or by another device. The zoom size is the factor by which the target region is zoomed.
In the second step, the target region is scaled based on the zoom size.
In some embodiments, the selected target region is enlarged or reduced according to the zoom size. Taking the target image as a medical image and the target region as a lesion region, the user can zoom and view the selected lesion region, and thereby view its details more clearly and conveniently.
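A minimal sketch of the zoom step, scaling a rectangular display region about its center by the zoom size (the (x, y, w, h) rectangle representation is an assumption of the sketch):

```python
def zoom_rect(rect, factor):
    """Scale a display frame (x, y, width, height) about its center.

    `factor` > 1 enlarges the region, `factor` < 1 reduces it.
    """
    x, y, w, h = rect
    cx, cy = x + w / 2, y + h / 2       # center stays fixed
    nw, nh = w * factor, h * factor     # new width and height
    return (cx - nw / 2, cy - nh / 2, nw, nh)
```

Doubling a 10x10 frame at the origin yields a 20x20 frame centered on the same point: `zoom_rect((0, 0, 10, 10), 2)` gives `(-5.0, -5.0, 20, 20)`.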
Next, an exemplary application of this embodiment in a practical scenario is described, taking the display of lesions on a DR image as an example.
The method lets the user read the image without obstruction while clearly seeing the position, extent, and information of each lesion:
firstly, as for the acquired medical image, the focus is displayed as a dot by default, and the dot is positioned in the central area of the focus, so that the approximate position of the focus can be clearly known.
In the second step, the mouse is hovered to the dots (corresponding to the marked pattern in the above embodiment), and the focus area (i.e., focus frame) and focus name are enlarged and displayed.
Thus, the range of the focus can be clearly known, and the focus name can be known without looking up a list. If there are other focus dots in the focus frame (corresponding to the first display frame in the above embodiment), the other focus dots in the focus frame are transparently displayed in case that the focus frame is selected. Therefore, under the condition of reading the current focus, the dots of the rest focuses are not shielded.
As shown in fig. 3, the report 300 is an X-ray taken of the patient's chest, and the focus box 301 indicates the focus selection, while the focus box 302 and the dot 303 are kept coincident as a mouse-over focus point. Mouse suspension and mouse selection are interaction modes defined according to different scene operations of a doctor; the mouse suspends the focus (presents the focus frame 302), so that a doctor can quickly and conveniently check all the focuses, can slide to other focus dots to quickly check the focuses, and accordingly the film reading cost is saved; the focus is selected by a mouse (the focus frame 301 is presented), so that a doctor can conveniently check the details of the focus, zoom and check the focus, other dots in the focus frame are hidden, and the focus can be quitted from being selected by clicking other areas except the focus frame 301 in the medical image. Wherein, focus central point and mouse suspension enlarge, mouse are chosen and enlarged to become three indispensable states.
In the embodiment of the application, a doctor opens a DR image to read a film, judges the focus according to the prompt of the focus center point, can quickly check the focus by mouse suspension, and transfers to the next focus to quickly check after finishing checking the focus; and the suspicious focus is selected and amplified by a mouse for judgment; in this way, the focus position is displayed through the focus central point in a point-surface interaction mode, focus details are developed in a surface mode, and a focus area and a focus name are displayed; therefore, the focus position and the focus name can be clear for doctors and the operation is simple and convenient while the focus area is not shielded.
Fig. 4 is a schematic structural composition diagram of an image display device according to an embodiment of the present application, and as shown in fig. 4, the image display device 400 includes:
a first obtaining module 401, configured to obtain a target image;
a first determining module 402, configured to determine at least one region to be marked in the target image;
a first marking module 403, configured to mark, in the at least one to-be-marked area, the at least one to-be-marked area with a marking pattern, so as to obtain at least one marked area;
a first adjusting module 404, configured to determine, in response to trigger information, a target area that matches the trigger information in the at least one marked area; and
a first output module 405, configured to output description information of the target area in response to a selection instruction, where the selection instruction indicates that the target area is selected.
In some embodiments, the first adjusting module 404 includes:
the first determining submodule is used for determining that the trigger information is detected on any marking graph under the condition that the hover position of the cursor is detected to be located on that marking graph;
and the second determining submodule is used for determining the area to be marked corresponding to the hover position of the cursor as the target area.
In some embodiments, the first adjusting module 404 further includes:
the first adjusting submodule is used for adjusting the display state of the marking graph of the target area into a first display frame; wherein, the coverage of the first display frame is larger than that of the mark graph.
In some embodiments, the first adjustment submodule includes:
a first determination unit configured to determine target size information surrounding the target area based on size information of the target area; and
and the first adjusting unit is used for adjusting the mark graph of the target area into a first display frame with the target size information.
In some embodiments, the first adjusting submodule further includes:
a first acquisition unit configured to acquire attribute information of the target area; and
and the first display unit is used for displaying the attribute information at a preset position of the first display frame.
In some embodiments, the first output module 405 includes:
the first output submodule is used for responding to the selected instruction and outputting a detail list in an unexpanded state; wherein the list of details includes the description information; and
a first display sub-module, configured to adjust the detail list from the unexpanded state to an expanded state in response to a display instruction to display the description information in the detail list, where the display instruction indicates to display the description information.
In some embodiments, the first marking module 403 includes:
the third determining submodule is used for determining the range of each region to be marked based on the size information of each region to be marked; and
and the first marking submodule is used for marking each to-be-marked region by adopting a marking graph with the area smaller than that of each to-be-marked region within the range of each to-be-marked region to obtain at least one marked region.
In some embodiments, the apparatus further comprises:
a second determining module, configured to determine, in the at least one marked region, other marked regions outside the target region; and
a second adjusting module, configured to adjust the first display frame to a second display frame in response to the selected instruction, and adjust the marked graphics of the other marked areas from a display state to a hidden state; wherein the display states of the first display frame and the second display frame are different.
In some embodiments, the apparatus further comprises:
a third adjusting module, configured to adjust the first display frame of the target area to a mark pattern of the target area in response to an exit instruction, where the exit instruction indicates to exit from selecting the target area; and
and the fourth adjusting module is used for adjusting the marking graphs of the other marked areas from the hidden state to the display state.
In some embodiments, the apparatus further comprises:
a third determining module, configured to determine a scaling size in response to a scaling instruction, where the scaling instruction instructs to scale the selected target region; and
a first scaling module, configured to scale the target region based on the scaling size.
In some embodiments, in the case that the target image is a medical image, the region to be marked is a lesion region, and the first marking module 403 includes:
a fourth determination submodule for determining at least one lesion area in the medical image; and
and the second marking submodule is used for marking each focus area by adopting a dot pattern in each focus area to obtain at least one marked area.
It should be noted that the above description of the embodiment of the apparatus, similar to the above description of the embodiment of the method, has similar beneficial effects as the embodiment of the method. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, if the image display method is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented, or the portions thereof contributing to the prior art may be embodied, in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a terminal, a server, etc.) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, the embodiment of the present application further provides a computer program product, where the computer program product includes computer-executable instructions, and after the computer-executable instructions are executed, the steps in the image display method provided by the embodiment of the present application can be implemented.
An embodiment of the present application further provides a computer storage medium, where computer-executable instructions are stored on the computer storage medium, and when executed by a processor, the computer-executable instructions implement the image display method provided in the foregoing embodiment.
An embodiment of the present application provides a computer device, fig. 5 is a schematic structural diagram of a composition of the computer device provided in the embodiment of the present application, and as shown in fig. 5, the computer device 500 includes: a processor 501, at least one communication bus, a communication interface 502, at least one external communication interface, and a memory 503. Wherein the communication interface 502 is configured to enable connected communication between these components. The communication interface 502 may include a display screen, and the external communication interface may include a standard wired interface and a wireless interface. The processor 501 is configured to execute an image display program in a memory to implement the image display method provided in the foregoing embodiments.
The above descriptions of the embodiments of the image display apparatus, the computer device, and the storage medium are similar to the descriptions of the method embodiments above and have similar technical effects and advantages; for brevity, they are not repeated here. For technical details not disclosed in the embodiments of the image display apparatus, the computer device, and the storage medium of the present application, refer to the description of the method embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit. Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code. The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. An image display method, characterized in that the method comprises:
acquiring a target image;
determining at least one region to be marked in the target image;
marking the at least one region to be marked with a marking graphic within the at least one region to be marked to obtain at least one marked region;
in response to trigger information, determining a target area matching the trigger information among the at least one marked area; and
outputting description information of the target area in response to a selection instruction, wherein the selection instruction indicates that the target area is selected.
2. The method of claim 1, wherein the determining, in response to trigger information, a target area that matches the trigger information among the at least one marked area comprises:
determining that the trigger information is detected on a marking graphic when a hover position of a cursor is detected to be located on that marking graphic; and
determining the region to be marked corresponding to the hover position of the cursor as the target area.
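The hover-based matching of claim 2 can be illustrated with a minimal sketch. This is not the patented implementation; the bounding-box representation, the `MarkedRegion` class, the fixed square marking graphic, and all names are hypothetical choices made for illustration:

```python
from dataclasses import dataclass

@dataclass
class MarkedRegion:
    # Bounding box of a region to be marked, in image coordinates.
    x: int
    y: int
    w: int
    h: int

    def mark_rect(self, size: int = 8):
        """Small marking graphic centered inside the region."""
        cx, cy = self.x + self.w // 2, self.y + self.h // 2
        return (cx - size // 2, cy - size // 2, size, size)

def find_target(regions, cursor):
    """Return the region whose marking graphic the cursor hovers over, else None."""
    px, py = cursor
    for region in regions:
        mx, my, mw, mh = region.mark_rect()
        if mx <= px < mx + mw and my <= py < my + mh:
            return region  # trigger information detected on this marking graphic
    return None
```

Note that the hit test is performed against the small marking graphic, not the full region, matching the claim's requirement that the cursor hover over the marking graphic itself.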
3. The method of claim 1 or 2, wherein the determining, in response to trigger information, a target area that matches the trigger information among the at least one marked area further comprises:
adjusting the display state of the marking graphic of the target area to a first display frame, wherein the coverage of the first display frame is larger than that of the marking graphic.
4. The method according to claim 3, wherein the adjusting the display state of the marking graphic of the target area to the first display frame comprises:
determining target size information enclosing the target area based on size information of the target area; and
adjusting the marking graphic of the target area to a first display frame having the target size information.
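One simple way to realize claim 4 is to derive the first display frame directly from the target area's bounding box with a small padding, so that the frame encloses the area and its coverage exceeds that of the marking graphic. This is a hypothetical sketch; the padding scheme and names are assumptions:

```python
def first_display_frame(region, padding=4):
    """Enclosing display frame: a box slightly larger than the target area,
    so its coverage exceeds that of the small marking graphic."""
    x, y, w, h = region
    return (x - padding, y - padding, w + 2 * padding, h + 2 * padding)
```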
5. The method according to claim 3 or 4, wherein the adjusting the display state of the marking graphic of the target area to the first display frame further comprises:
acquiring attribute information of the target area; and
displaying the attribute information at a preset position of the first display frame.
6. The method according to any one of claims 1 to 3, wherein the outputting the description information of the target area in response to the selection instruction comprises:
outputting a detail list in an unexpanded state in response to the selection instruction, wherein the detail list includes the description information; and
adjusting the detail list from the unexpanded state to an expanded state in response to a display instruction, so as to display the description information in the detail list, wherein the display instruction indicates displaying the description information.
7. The method according to any one of claims 1 to 6, wherein the marking the at least one region to be marked with a marking graphic within the at least one region to be marked to obtain at least one marked region comprises:
determining a range of each region to be marked based on size information of the region to be marked; and
marking, within the determined range of each region to be marked, the region with a marking graphic whose area is smaller than that of the region, to obtain at least one marked region.
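Claim 7 requires the marking graphic's area to be smaller than the region it marks. A hypothetical sizing rule, assuming a square marking graphic sized as a fraction of the region's area (the fraction, minimum size, and names are all illustrative assumptions):

```python
import math

def marker_size(region_w, region_h, fraction=0.1, min_px=4):
    """Side length of a square marking graphic whose area is a small
    fraction of the region's area, never exceeding the region itself."""
    side = int(math.sqrt(region_w * region_h * fraction))
    side = max(min_px, side)
    # Clamp so the graphic always fits inside the region's range.
    return min(side, region_w, region_h)
```

For any region larger than the minimum marker, the graphic's area stays below the region's area, satisfying the claim's constraint.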
8. The method of claim 6, wherein before the adjusting the detail list from the unexpanded state to the expanded state in response to the display instruction to display the description information in the detail list, the method further comprises:
determining other marked areas, outside the target area, in the at least one marked area; and
in response to the selection instruction, adjusting the first display frame to a second display frame, and adjusting the marking graphics of the other marked areas from a display state to a hidden state, wherein the display states of the first display frame and the second display frame are different.
9. The method of claim 8, wherein after the adjusting the first display frame to the second display frame and adjusting the marking graphics of the other marked areas from the display state to the hidden state in response to the selection instruction, the method further comprises:
in response to an exit instruction, adjusting the first display frame of the target area back to the marking graphic of the target area, wherein the exit instruction indicates exiting the selection of the target area; and
adjusting the marking graphics of the other marked areas from the hidden state to the display state.
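The select/exit transitions of claims 8 and 9 amount to a small display state machine: selecting a target swaps its frame and hides the other marking graphics, and exiting restores everything. A toy sketch under those assumptions (class and state names are hypothetical):

```python
class DisplayController:
    """Toy state machine for the select/exit behavior of claims 8 and 9."""

    def __init__(self, region_ids):
        self.visible = {rid: True for rid in region_ids}
        self.frame = {rid: "mark" for rid in region_ids}

    def select(self, target):
        # Selection instruction: show the second display frame on the target,
        # hide the marking graphics of all other marked areas.
        self.frame[target] = "second_frame"
        for rid in self.visible:
            if rid != target:
                self.visible[rid] = False

    def exit_selection(self, target):
        # Exit instruction: restore the target's marking graphic and
        # re-display the other marked areas.
        self.frame[target] = "mark"
        for rid in self.visible:
            self.visible[rid] = True
```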
10. The method according to any one of claims 1 to 9, wherein after the outputting the description information of the target area in response to the selection instruction, the method further comprises:
determining a zoom size in response to a zoom instruction, wherein the zoom instruction indicates zooming the selected target area; and
scaling the target area based on the zoom size.
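The zoom of claim 10 can be sketched as scaling the target area's bounding box about its center; the center-anchored choice and the function name are illustrative assumptions, not part of the claim:

```python
def zoom_region(region, scale):
    """Scale a (x, y, w, h) target area about its center by the zoom size."""
    x, y, w, h = region
    cx, cy = x + w / 2, y + h / 2
    new_w, new_h = w * scale, h * scale
    return (cx - new_w / 2, cy - new_h / 2, new_w, new_h)
```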
11. The method according to any one of claims 1 to 10, wherein, in a case where the target image is a medical image, the region to be marked is a lesion region, and the marking the at least one region to be marked with a marking graphic within the at least one region to be marked to obtain at least one marked region comprises:
determining at least one lesion area in the medical image; and
marking each lesion area with a dot graphic within the lesion area to obtain at least one marked area.
12. An image display apparatus, characterized in that the apparatus comprises:
a first acquisition module, configured to acquire a target image;
a first determination module, configured to determine at least one region to be marked in the target image;
a first marking module, configured to mark the at least one region to be marked with a marking graphic within the at least one region to be marked to obtain at least one marked region;
a first adjusting module, configured to determine, in response to trigger information, a target area matching the trigger information among the at least one marked area; and
a first output module, configured to output description information of the target area in response to a selection instruction, wherein the selection instruction indicates that the target area is selected.
13. A computer storage medium having computer-executable instructions stored thereon which, when executed, implement the image display method of any one of claims 1 to 11.
14. A computer device comprising a memory having computer-executable instructions stored thereon and a processor configured to implement the image display method of any one of claims 1 to 11 when executing the computer-executable instructions on the memory.
CN202110594505.5A 2021-05-28 2021-05-28 Image display method, device, equipment and storage medium Withdrawn CN113380385A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110594505.5A CN113380385A (en) 2021-05-28 2021-05-28 Image display method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110594505.5A CN113380385A (en) 2021-05-28 2021-05-28 Image display method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113380385A true CN113380385A (en) 2021-09-10

Family

ID=77574798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110594505.5A Withdrawn CN113380385A (en) 2021-05-28 2021-05-28 Image display method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113380385A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393323A (en) * 2022-08-26 2022-11-25 数坤(上海)医疗科技有限公司 Target area obtaining method, device, equipment and storage medium
CN115393323B (en) * 2022-08-26 2023-05-30 数坤(上海)医疗科技有限公司 Target area obtaining method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109754389B (en) Image processing method, device and equipment
EP2710957B1 (en) Method and system for intelligent qualitative and quantitative analysis of digital radiography softcopy reading
US10692272B2 (en) System and method for removing voxel image data from being rendered according to a cutting region
CN105167793A (en) Image display apparatus, display control apparatus and display control method
US11776692B2 (en) Training data collection apparatus, training data collection method, program, training system, trained model, and endoscopic image processing apparatus
US10261681B2 (en) Method for displaying a medical image and a plurality of similar medical images obtained from a case search system
WO2008061967A1 (en) Auto-zoom mark-up display system and method
JP2006502751A (en) I / O interface for computer diagnostic (CAD) systems
JP2001008928A (en) Method and device for display of image
CN106250665A (en) Information processor, information processing method and information processing system
JP2009273886A5 (en)
JP6004875B2 (en) MEDICAL IMAGE DISPLAY DEVICE, MEDICAL IMAGE DISPLAY METHOD, AND PROGRAM
US20220343589A1 (en) System and method for image processing
JPH11306264A (en) Computer aided diagnostic device
CN113380385A (en) Image display method, device, equipment and storage medium
JP2000287957A (en) Mammogram image diagnostic supporting apparatus
US10324582B2 (en) Medical image display apparatus, method for controlling the same
KR20200116842A (en) Neural network training method for utilizing differences between a plurality of images, and method thereof
CN114140408A (en) Image processing method, device, equipment and storage medium
JP2020061081A (en) Image processor and method for processing image
JP2020131019A (en) Image processing device, image processing method, and program
Le et al. A Web-Based Augmented Reality Approach to Instantly View and Display 4D Medical Images
JP2001340327A (en) Image display method and device
WO2024117042A1 (en) Image processing device, radiography system, image processing method, and program
US20200359979A1 (en) Artificial intelligence based automatic marker placement in radiographic images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210910