CN113793422B - Display control method for three-dimensional model, electronic device and readable storage medium - Google Patents


Info

Publication number
CN113793422B
CN113793422B (application CN202110934092.0A)
Authority
CN
China
Prior art keywords
view angle
dimensional model
target
information
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110934092.0A
Other languages
Chinese (zh)
Other versions
CN113793422A (en)
Inventor
李太和
龚金思
黄有志
魏仁杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Annet Innovation System Co ltd
Original Assignee
Shenzhen Annet Innovation System Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Annet Innovation System Co ltd filed Critical Shenzhen Annet Innovation System Co ltd
Priority to CN202110934092.0A priority Critical patent/CN113793422B/en
Publication of CN113793422A publication Critical patent/CN113793422A/en
Application granted granted Critical
Publication of CN113793422B publication Critical patent/CN113793422B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/20ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/41Medical
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/004Annotating, labelling

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Public Health (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Primary Health Care (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Architecture (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a display control method for a three-dimensional model, which comprises the following steps: identifying a target area in the three-dimensional model; acquiring attribute information of the target area; determining target view angles of the three-dimensional model according to the attribute information; acquiring an initial preset weight corresponding to each target view angle; and displaying the view-angle images corresponding to the target view angles according to the initial preset weights. The invention also discloses an electronic device and a readable storage medium. Target view angles are determined from the attribute information of the target area of the three-dimensional model, and the corresponding view-angle images are displayed to the user in an order based on the initial preset weights of the target view angles. The user can thus quickly observe and evaluate the three-dimensional model through these images, without manual operation and without repeatedly adjusting the model to obtain view-angle images that meet expectations.

Description

Display control method for three-dimensional model, electronic device and readable storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to a display control method for a three-dimensional model, an electronic device, and a readable storage medium.
Background
In existing medical practice, the examined part of a patient can be reconstructed into a three-dimensional visualization from CT, MR, or US data, allowing medical staff to evaluate the patient's condition intuitively. In actual operation, however, the staff must repeatedly adjust the examination view angle, which makes the process cumbersome, and the connection between a lesion and important organs or blood vessels in the examined part may be overlooked, leading to misdiagnosis or missed diagnosis.
Disclosure of Invention
The main purpose of the present invention is to provide a display control method for a three-dimensional model, an electronic device, and a readable storage medium, aiming to solve the problem that observing a three-dimensional model takes too long and involves cumbersome steps because the view angle must be adjusted repeatedly.
In order to achieve the above object, the present invention provides a display control method of a three-dimensional model, the display control method of the three-dimensional model comprising:
identifying a target area in a three-dimensional model, wherein the three-dimensional model is a three-dimensional model determined by three-dimensional modeling of an image through a preset neural network algorithm;
acquiring attribute information of the target area;
determining a target view angle of the three-dimensional model according to the attribute information;
acquiring initial preset weights corresponding to the target visual angles;
And displaying the view angle image corresponding to the target view angle according to the initial preset weight.
Optionally, the attribute information includes at least one of: long-diameter and short-diameter (major and minor axis) information of the target area, volume information of the target area, projection area information of the target area, position information of the target area, association information between the target area and adjacent areas, property information of the target area, and knowledge-graph relationship information corresponding to the target area.
Optionally, the step of displaying the view angle image corresponding to the target view angle according to the initial preset weight includes:
comparing initial preset weights corresponding to the target view angles;
and displaying the view-angle images corresponding to the target view angles in a preset display mode, in descending order of initial preset weight, and outputting prompt information, wherein the preset display mode includes displaying the view-angle images superimposed on the three-dimensional model.
Optionally, the step of outputting the prompt information includes:
determining prompt information according to the visual angle image, wherein the prompt information comprises operation information aiming at the visual angle image;
outputting the prompt information in a preset mode, wherein the preset mode includes but is not limited to voice prompts and visual markers.
Optionally, the step of displaying the view angle image corresponding to the target view angle according to the preset weight further includes:
determining viewing angle parameters according to the three-dimensional model and a target viewing angle, wherein the viewing angle parameters comprise at least one of aperture, focal length and depth of field;
and displaying the view angle image corresponding to the target view angle according to the view angle parameter and the initial preset weight.
Optionally, the step of displaying the view angle image corresponding to the target view angle according to the initial preset weight further includes:
recording and storing the current browsing time length corresponding to each view angle image and acquiring the historical browsing time length corresponding to each view angle image;
adjusting the weight corresponding to the target visual angle according to the current browsing time length and/or the historical browsing time length;
and determining the next preset weight corresponding to each target view angle according to the adjusted weight, so that when the view-angle images corresponding to the target view angles are next displayed to a user, they are shown in descending order of the next preset weight.
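The weight-update step above can be sketched in Python. This is an illustrative reading, not the patent's actual formula: the smoothing factor `alpha` and the normalization of browsing time are assumptions introduced here.

```python
def adjust_weights(initial_weights, browse_seconds, alpha=0.5):
    """Blend each view's initial preset weight with observed browsing
    time: views the user lingered on gain weight for the next session.
    `alpha` is an assumed smoothing factor (not specified in the text)."""
    total = sum(browse_seconds.values()) or 1
    return {
        view: (1 - alpha) * weight
              + alpha * 100 * browse_seconds.get(view, 0) / total
        for view, weight in initial_weights.items()
    }

def next_display_order(initial_weights, browse_seconds):
    """Next-session order: adjusted weights, highest first."""
    adjusted = adjust_weights(initial_weights, browse_seconds)
    return sorted(adjusted, key=adjusted.get, reverse=True)
```

With initial weights A=100, B=50 but the user spending far longer on B, the next session promotes B: `next_display_order({"A": 100, "B": 50}, {"A": 5, "B": 45})` yields `["B", "A"]`.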
Optionally, the step of displaying the view angle image corresponding to the target view angle according to the initial preset weight further includes:
Recording and storing current adjustment operations corresponding to the view angle images and acquiring historical adjustment operations corresponding to the view angle images;
adjusting the visual angle parameters of each visual angle image according to the current adjustment operation and/or the historical adjustment operation;
and determining a next view angle parameter according to the adjusted view angle parameter, so that when the view angle image corresponding to the target view angle is displayed next time, the view angle image corresponding to the target view angle is displayed according to the next view angle parameter.
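A minimal sketch of carrying adjusted view parameters into the next display session. The parameter names (`aperture`, `focal_length`) come from the text; the merge policy, where the most recent adjustment wins and history is the fallback, is an assumption for illustration.

```python
def next_view_params(current_adjustment, historical_params):
    """Per-parameter merge: the user's most recent adjustment takes
    precedence, and parameters left untouched fall back to the
    historical (previous-session) values."""
    merged = dict(historical_params)
    merged.update(current_adjustment)
    return merged
```

For example, if the user changed only the focal length this session, the historical aperture is retained: `next_view_params({"focal_length": 50}, {"focal_length": 35, "aperture": 2.8})` gives `{"focal_length": 50, "aperture": 2.8}`.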
Optionally, the step of identifying the target region in the three-dimensional model includes:
and identifying the target area in the three-dimensional model according to a preset neural network algorithm.
In addition, in order to achieve the above object, the present invention also provides an electronic device including a memory, a processor, and a display control program of a three-dimensional model stored on the memory and executable on the processor, the display control program of the three-dimensional model implementing the steps of the display control method of the three-dimensional model as described above when executed by the processor.
In addition, in order to achieve the above object, the present invention also provides a readable storage medium having stored thereon a display control program of a three-dimensional model, which when executed by a processor, implements the steps of the display control method of a three-dimensional model as described above.
According to the display control method, electronic device, and readable storage medium described above, the target area of the three-dimensional model and its attribute information are identified, at least one target view angle for observing the model is determined from that attribute information, and the view-angle images corresponding to the target view angles are displayed to the user according to their initial preset weights. The user can thus observe the three-dimensional model quickly and accurately and evaluate the scanned object it represents, reducing or eliminating the manual adjustment otherwise needed to obtain view-angle images that meet expectations.
Drawings
FIG. 1 is a schematic diagram of an electronic device in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart of a first embodiment of a display control method for a three-dimensional model according to the present invention;
FIG. 3 is a schematic diagram of a refinement flow of step S50 of a second embodiment of a display control method for a three-dimensional model according to the present invention;
FIG. 4 is a schematic diagram of a refinement flow of step S50 of a third embodiment of a display control method for a three-dimensional model according to the present invention;
FIG. 5 is a flowchart of a display control method for a three-dimensional model according to a fourth embodiment of the present invention;
Fig. 6 is a flowchart of a display control method for a three-dimensional model according to a fifth embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The main solutions of the embodiments of the present invention are: identifying a target region in the three-dimensional model; acquiring attribute information of the target area; determining a target view angle of the three-dimensional model according to the attribute information; and adjusting the display angle of the three-dimensional model according to the target visual angle.
Referring to fig. 1, fig. 1 is a schematic diagram of an electronic device structure of a hardware running environment according to an embodiment of the present invention.
An embodiment of the present invention provides an electronic device, which may be a terminal, as shown in fig. 1, where the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a touch layer covered on the Display screen, a key, a track ball or a touch pad disposed on a casing of the computer device, an external keyboard, a touch pad or a mouse, etc., and the optional user interface 1003 may further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
Optionally, the terminal may also include a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and so on. Among other sensors, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display screen according to the brightness of ambient light, and a proximity sensor that may turn off the display screen and/or the backlight when the mobile terminal moves to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and the direction when the mobile terminal is stationary, and the mobile terminal can be used for recognizing the gesture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described herein.
It will be appreciated by those skilled in the art that the terminal structure shown in fig. 1 is not limiting of the terminal and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a display control program of the three-dimensional model may be included in a memory 1005 as one type of computer storage medium.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call a display control program of the three-dimensional model stored in the memory 1005 and perform the following operations:
identifying a target area in a three-dimensional model, wherein the three-dimensional model is a three-dimensional model determined by three-dimensional modeling of an image through a preset neural network algorithm;
acquiring attribute information of the target area;
determining a target view angle of the three-dimensional model according to the attribute information;
acquiring initial preset weights corresponding to the target visual angles;
and displaying the view angle image corresponding to the target view angle according to the initial preset weight.
Further, the processor 1001 may call a display control program of the three-dimensional model stored in the memory 1005, and further perform the following operations:
the attribute information includes at least one of: long-diameter and short-diameter (major and minor axis) information of the target area, volume information of the target area, projection area information of the target area, position information of the target area, association information between the target area and adjacent areas, property information of the target area, and knowledge-graph relationship information corresponding to the target area.
Further, the processor 1001 may call a display control program of the three-dimensional model stored in the memory 1005, and further perform the following operations:
comparing initial preset weights corresponding to the target view angles;
and displaying the view-angle images corresponding to the target view angles in a preset display mode, in descending order of initial preset weight, and outputting prompt information, wherein the preset display mode includes displaying the view-angle images superimposed on the three-dimensional model.
Further, the processor 1001 may call a display control program of the three-dimensional model stored in the memory 1005, and further perform the following operations:
determining prompt information according to the visual angle image, wherein the prompt information comprises operation information aiming at the visual angle image;
outputting the prompt information in a preset mode, wherein the preset mode includes but is not limited to voice prompts and visual markers.
Further, the processor 1001 may call a display control program of the three-dimensional model stored in the memory 1005, and further perform the following operations:
determining viewing angle parameters according to the three-dimensional model and a target viewing angle, wherein the viewing angle parameters comprise at least one of aperture, focal length and depth of field;
And displaying the view angle image corresponding to the target view angle according to the view angle parameter and the initial preset weight.
Further, the processor 1001 may call a display control program of the three-dimensional model stored in the memory 1005, and further perform the following operations:
recording and storing the current browsing time length corresponding to each view angle image and acquiring the historical browsing time length corresponding to each view angle image;
adjusting the weight corresponding to the target visual angle according to the current browsing time length and/or the historical browsing time length;
and determining the next preset weight corresponding to the target visual angle according to the adjusted weight, so that when a next user displays the visual angle image corresponding to the target visual angle, the visual angle images corresponding to the target visual angle are displayed in a sequence from large to small according to the next preset weight.
Further, the processor 1001 may call a display control program of the three-dimensional model stored in the memory 1005, and further perform the following operations:
recording and storing current adjustment operations corresponding to the view angle images and acquiring historical adjustment operations corresponding to the view angle images;
adjusting the visual angle parameters of each visual angle image according to the current adjustment operation and/or the historical adjustment operation;
And determining a next view angle parameter according to the adjusted view angle parameter, so that when the view angle image corresponding to the target view angle is displayed next time, the view angle image corresponding to the target view angle is displayed according to the next view angle parameter.
Further, the processor 1001 may call a display control program of the three-dimensional model stored in the memory 1005, and further perform the following operations:
and identifying the target area in the three-dimensional model according to a preset neural network algorithm.
To acquire a three-dimensional model of a scan object, the object is first scanned with an imaging device. The imaging device obtains scan data and generates an image sequence from it, where the medical image sequence consists of images of each cross section of the scan object along the scan direction. A three-dimensional model of the object's internal structure is then generated from the image sequence. The imaging device may be an X-ray imaging instrument, CT (plain or spiral CT), positron emission tomography (PET), magnetic resonance imaging (MR), an infrared scanning device, an endoscope, ultrasound (US), a combination of several scanning devices, and so on. Once the three-dimensional model is acquired, the user would otherwise need to repeatedly adjust its display angle to evaluate the scan object, which makes inspection cumbersome, makes it easy to miss important areas of the scan object, and leads to evaluation errors.
It may be understood that the scan object may include any kind of physical object, including but not limited to a patient and an industrial device, where in the embodiment of the present invention, the patient is used for example analysis, and when the three-dimensional model is a three-dimensional model of the patient, after an image corresponding to the patient is acquired based on the imaging device, the three-dimensional model is acquired based on the image.
Referring to fig. 2, a first embodiment of the present invention provides a display control method of a three-dimensional model, the display control method of the three-dimensional model including:
step S10, identifying a target area in a three-dimensional model, wherein the three-dimensional model is a three-dimensional model determined by three-dimensional modeling of an image through a preset neural network algorithm;
step S20, obtaining attribute information of the target area;
step S30, determining a target view angle of the three-dimensional model according to the attribute information;
step S40, obtaining initial preset weights corresponding to the target view angles;
and S50, displaying the view angle image corresponding to the target view angle according to the initial preset weight.
In this embodiment, before the target area of the three-dimensional model is identified, an image of the scan object may be formed by an imaging device such as an X-ray imaging instrument, CT (plain or spiral CT), positron emission tomography (PET), magnetic resonance imaging (MR), an infrared scanning device, an endoscope, ultrasound (US), or a combination of several scanning devices, and the three-dimensional model of the scan object is then obtained from the image through a preset neural network algorithm. The scan object may be an organ, tissue, or cell cluster of a patient that needs to be observed, or a component of industrial equipment that needs to be inspected; it will be understood that the imaging device can acquire three-dimensional models of different physical objects using preset scan parameters.
Optionally, the three-dimensional model is acquired based on a volume rendering mode.
Optionally, the three-dimensional model includes a target area and a non-target area of the scanned object, where the target area is an object that is emphasized by a user.
Optionally, identifying the target area of the three-dimensional model may proceed as follows: the user observes the three-dimensional model and determines the target area; user input is then received, the target area is determined in the model according to that input, and the target area is displayed in a preset manner, such as outlining its contour, highlighting it, or cropping it into a frame for display. For example, when a doctor observes a patient's three-dimensional model, the target area may be a lesion region: after the doctor identifies the lesion region in the model and inputs it, the lesion region is displayed in the preset manner according to the doctor's input.
Optionally, identifying the target area of the three-dimensional model may be acquired based on a neural network algorithm, and the step S10 includes:
And identifying the target area in the three-dimensional model according to a preset neural network algorithm.
Specifically, the three-dimensional model is input into a preset neural network algorithm trained on an image training set, and the target area is obtained by extracting and matching features or variables. The preset neural network may be trained on a large training set of images, which may be two-dimensional or three-dimensional images acquired by any imaging device; the network learns the features or variables of these images on the basis of machine learning.
Optionally, the attribute information includes at least one of: long-diameter and short-diameter (major and minor axis) information of the target area, volume information of the target area, projection area information of the target area, position information of the target area, association information between the target area and adjacent areas, property information of the target area, and knowledge-graph relationship information corresponding to the target area.
Optionally, a target view angle of the three-dimensional model is determined according to the attribute information. The target view angle is a preferred view angle from which a user can quickly observe the scanned object; for example, when a doctor views a patient's three-dimensional model, the target view angle may be the line of sight corresponding to the maximum projection area of the lesion region, or the line of sight showing the vascular tissue associated with the lesion region. It will be understood that target areas with different attributes correspond to different target view angles.
Optionally, the target view angle may be obtained by determining the maximum projection area based on the projection area information of the target area, determining the normal direction of that maximum projection area, and taking this normal direction as the target view angle of the target area. For example, when a doctor consults a patient's three-dimensional model, presenting the view with the maximum projection area of the lesion region lets the doctor see the lesion clearly and quickly.
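The maximum-projection heuristic can be illustrated on a voxelized target region. As a simplifying assumption not made in the patent, this sketch restricts candidate view directions to the three coordinate axes:

```python
def projection_areas(voxels):
    """Projected area (in cells) of a voxel region along each axis.
    Dropping axis k projects the region onto the plane whose normal
    is axis k; the count of distinct projected cells is the area."""
    areas = []
    for axis in range(3):
        shadow = {tuple(c for i, c in enumerate(v) if i != axis)
                  for v in voxels}
        areas.append(len(shadow))
    return areas

def target_view_axis(voxels):
    """Axis along which the region's projection is largest; viewing
    along this normal shows the region at its widest extent."""
    areas = projection_areas(voxels)
    return max(range(3), key=areas.__getitem__)
```

A flat 4x3 slab lying in the z=0 plane projects largest along z, so the suggested view direction is axis 2, i.e. looking down the slab's normal.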
Optionally, the three-dimensional model further includes an adjacent region next to the target region. The target view angle may also be obtained by determining the shortest distance between the target region and the adjacent region based on the position information of the target region, and then deriving the target view angle from the direction corresponding to this shortest distance. For example, when a doctor consults a patient's three-dimensional model, it is necessary to judge whether important organs or blood vessels in the non-target area lie close to the lesion region, so as to avoid affecting them during subsequent treatment of the lesion.
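The shortest-distance criterion can be sketched as a closest-pair search between the target region and an adjacent structure, with the connecting segment giving the candidate view direction. Brute force, for illustration only:

```python
import math

def shortest_link(target_points, neighbor_points):
    """Closest pair of points between two regions, as (distance, a, b).
    O(|A|*|B|) brute force; a real system would use a spatial index."""
    best = None
    for a in target_points:
        for b in neighbor_points:
            d = math.dist(a, b)
            if best is None or d < best[0]:
                best = (d, a, b)
    return best
```

Viewing along the segment from `a` to `b` then shows how close the lesion sits to the neighboring vessel or organ.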
It will be understood that the way the target view angle is obtained is not limited to the two modes above. In actual operation, the terminal stores a mapping relationship between the attribute information of the target area and target view angles; after the user inputs the three-dimensional model, several different target view angles are automatically matched according to the attribute information of the target area and the mapping relationship, and the view-angle images corresponding to those target view angles are displayed to the user.
Optionally, after the target view angles of the three-dimensional model are acquired, initial preset weights of the target view angles are determined. Optionally, the initial preset weights corresponding to the target viewing angles may be different or the same.
Optionally, the mapping relationship between the attribute information of the target area and the target view angles also records the initial preset weight of each target view angle. The initial preset weight expresses how useful a target view angle is: the higher the weight, the more important the view angle; the lower the weight, the less important it is. In actual operation, when a target view angle is generated for the target area of the three-dimensional model, the corresponding initial preset weight is matched to it according to the mapping relationship. For example, when a doctor examines a patient's three-dimensional model in order to evaluate a lesion and determine a treatment plan, the normal direction of the maximum projection area of the lesion region is a preferred observation angle, so the corresponding target view angle is given a higher initial preset weight; when the lesion is not close to any blood vessel or important organ, a target view angle determined from the association between the lesion region and adjacent regions is of little use to the user, so its initial preset weight is set lower.
In a specific operation, the attribute information of the target area in the three-dimensional model is acquired, suitable target view angles are automatically matched for the three-dimensional model, initial preset weights are automatically matched to each target view angle according to the attribute information of the current target area, and the target view angles are then displayed to the user in sequence according to the initial preset weights.
Optionally, after the target view angles of the three-dimensional model are obtained, the computer renders a view angle image for each target view angle and then displays these images to the user in sequence according to the initial preset weights. For example, when a doctor observes a three-dimensional model of a patient: if target view angle A is the line of sight corresponding to the maximum projection area of the lesion area, the view angle image of that maximum projection area is generated; if target view angle B is the line of sight corresponding to the association information between the lesion area and its adjacent areas, the view angle image of that association information is generated. If the computer calculates an initial preset weight of 100 for target view angle A and 50 for target view angle B, the view angle image for A is arranged before the image for B, so that the doctor can quickly evaluate the lesion area from the view angle images.
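As an illustrative sketch only (not the disclosed implementation), the ordering step described above can be expressed as follows; the function name, view names, and weight values are all hypothetical:

```python
# Hypothetical sketch: order view angle images by their initial preset
# weights so that higher-weight views are displayed first. Names and
# weight values are illustrative, not taken from the patent.

def order_views_by_weight(view_weights):
    """Return view angle names sorted in descending order of initial preset weight."""
    return [view for view, _ in sorted(
        view_weights.items(), key=lambda kv: kv[1], reverse=True)]

# Mirroring the example above: view A (maximum projection area of the
# lesion, weight 100) is arranged before view B (adjacency info, weight 50).
display_order = order_views_by_weight({"view_B": 50, "view_A": 100})
# display_order == ["view_A", "view_B"]
```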
Alternatively, the view angle image may be acquired based on a surface rendering manner.
In the embodiment of the application, target view angles are acquired from the attribute information of the target area in the three-dimensional model, and initial preset weights are assigned to them, so that the corresponding view angle images are displayed to the user in order of those weights. This makes the three-dimensional model easier to inspect, removes the need for the user to manually adjust the display angle multiple times, and improves the efficiency with which the user evaluates the scanned object corresponding to the three-dimensional model.
Optionally, the three-dimensional model includes a plurality of different viewing angles; to let the user view the model thoroughly and avoid missing important parts that could lead to a misjudgment of the scanned object, referring to fig. 3, step S50 in the second embodiment of the present application includes:
step S51, comparing initial preset weights corresponding to the target view angles;
and step S52, displaying the view angle images corresponding to the target view angles in a preset display mode in descending order of the initial preset weights, and outputting prompt information, wherein the preset display mode comprises displaying the view angle images superimposed on the three-dimensional model.
In this embodiment, after the initial preset weights of the target view angles are obtained, the corresponding view angle images may be displayed on the display interface of the terminal in descending order of weight, and the user can judge from their order in the interface which target view angles are useful and which may be unimportant.
Optionally, the preset display mode includes displaying the view angle image superimposed on the three-dimensional model. Specifically, with the three-dimensional model as a reference, the view angle image is superimposed onto it.
Optionally, in a further embodiment, when the three-dimensional model is acquired it is displayed on its own in the display interface for the user to observe.
Optionally, in another embodiment, after the target area of the three-dimensional model is acquired, the cross-sectional view corresponding to the target area is displayed superimposed on the three-dimensional model in the display interface, so that the user can quickly diagnose the target area from the cross-sectional view.
It is understood that the preset display modes include, but are not limited to, the above three modes.
Optionally, the embodiment of the application further includes outputting prompt information to the user while displaying the view angle image to the user, so that the user can know the view angle image based on the prompt information while browsing the view angle image.
Optionally, the step S52 includes:
determining prompt information according to the visual angle image, wherein the prompt information comprises operation information aiming at the visual angle image;
outputting the prompt information in a preset mode, wherein the preset mode comprises but is not limited to voice and marks.
When a doctor actually browses the view angle images, a surgical treatment route for the three-dimensional model must be planned from them. While planning the route, the doctor must identify important organs and/or blood vessels to be avoided during the actual operation, as well as positions that are inconvenient to operate on. This observation and deliberation takes a long time. The embodiment of the invention therefore outputs prompt information, from which the doctor can determine the surgical treatment route faster.
Optionally, the prompt information includes, but is not limited to, the important organs and/or blood vessels to be avoided and the operative positions. For example, when a doctor observes a three-dimensional model of a patient whose target area is a lung, blood vessels are distributed around the lung and must be avoided during subsequent surgical treatment to prevent massive hemorrhage. The doctor also needs to judge which positions are inconvenient to operate on when treating the target area. Prompt information is automatically output to the user based on these vessels and positions so that the user can adjust the surgical treatment route accordingly.
Optionally, after determining the prompt information, the terminal may mark it, where the specific marking manner may be a circle or a dot, which is not limited herein. The terminal may also play the prompt information as voice, which the user hears to obtain the prompt. Alternatively, the terminal may both mark the prompt information and play the voice.
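As an illustrative sketch only, the output dispatch described above might look as follows; the mode names, prompt structure, and mark format are hypothetical assumptions, not part of the disclosure:

```python
# Hypothetical sketch: output prompt information in a preset mode.
# The three modes (mark, voice, or both) follow the description above;
# the data structure and action strings are illustrative.

def output_prompt(prompt, mode):
    """Render a prompt as display marks, voice playback text, or both."""
    actions = []
    if mode in ("mark", "both"):
        # e.g. draw a circle or dot at each position to be avoided
        actions += [f"mark:{p}" for p in prompt["positions"]]
    if mode in ("voice", "both"):
        actions.append(f"voice:{prompt['text']}")
    return actions

actions = output_prompt(
    {"text": "avoid adjacent vessels", "positions": ["vessel_1"]}, "both")
# actions == ["mark:vessel_1", "voice:avoid adjacent vessels"]
```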
It is understood that the preset modes include, but are not limited to, the above three modes.
According to the method, the target view angles are displayed in descending order of initial preset weight: view angles with low weights are arranged toward the back of the display interface and those with high weights toward the front, so that the user can quickly browse the view angle images starting from the front. Prompt information is also output so that the user can better evaluate the three-dimensional model according to it.
Optionally, referring to fig. 4, based on the second embodiment, step S50 in the third embodiment of the present application further includes:
and step S53, determining viewing angle parameters according to the three-dimensional model and a target viewing angle, wherein the viewing angle parameters comprise at least one of aperture, focal length and depth of field.
And step S54, displaying the view angle image corresponding to the target view angle according to the view angle parameter and the initial preset weight.
In this embodiment of the application, the aperture controls the amount of incoming light. The terminal obtains the image information of the target area and determines the image brightness at the display position corresponding to the target area. When that brightness does not meet a preset brightness threshold, the image details at that position are hard to see, so the aperture is enlarged or reduced until the brightness meets the threshold; the user can then clearly see the image details of the target area in the three-dimensional model under the adjusted viewing angle parameters.
The focal length controls the apparent distance between the three-dimensional model and the user. After acquiring the image information of the target area, the terminal judges whether the distance between the target area and the user meets a preset distance threshold; when it does not, the focal length is increased or reduced so that the user can clearly see the image details of the target area in the three-dimensional model at the adjusted focal length.
The depth of field controls the range before and after the focal plane of the three-dimensional model within which the image remains sharp; after acquiring the image information of the target area, the terminal automatically adjusts the depth-of-field parameter according to that information.
It is understood that the viewing angle parameters include, but are not limited to, aperture, focal length, and depth of field.
In actual operation, after the user uploads the three-dimensional model to the terminal, the terminal obtains the target area and its attribute information from the model, automatically matches target view angles for the user according to the attribute information, automatically obtains the viewing angle parameters from the model and the target view angles, and then displays the view angle images corresponding to the target view angles according to the viewing angle parameters and the initial preset weights, so that the user observes the three-dimensional model at a suitable distance, brightness, and view angle.
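As an illustrative sketch only, the threshold checks described above can be expressed as follows; the threshold ranges, step sizes, and function names are hypothetical assumptions rather than the disclosed implementation:

```python
# Hypothetical sketch of the parameter adjustments described above:
# widen the aperture when the image is too dark (or narrow it when too
# bright), and adjust focal length against a preset distance range.
# All thresholds and step sizes are illustrative.

def adjust_aperture(aperture, brightness, target, step=0.1):
    """Open the aperture when below the preset brightness range, close it when above."""
    lo, hi = target
    if brightness < lo:
        return aperture + step
    if brightness > hi:
        return aperture - step
    return aperture

def adjust_focal_length(focal, distance, preset_range, step=5.0):
    """Zoom in when the target area is too far, zoom out when too close."""
    lo, hi = preset_range
    if distance > hi:   # too far away: increase focal length
        return focal + step
    if distance < lo:   # too close: reduce focal length
        return focal - step
    return focal

# Too dark -> aperture opens; distance within range -> focal length unchanged.
new_aperture = adjust_aperture(2.0, brightness=0.2, target=(0.4, 0.8))
new_focal = adjust_focal_length(50.0, distance=30.0, preset_range=(10.0, 40.0))
```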
Optionally, based on the first embodiment, referring to fig. 5, after the step of displaying the view angle image corresponding to the target view angle according to the initial preset weight, the method further includes:
step S60, recording and storing the current browsing duration corresponding to each view angle image and obtaining the historical browsing duration corresponding to each view angle image;
Step S70, adjusting the weight corresponding to the target visual angle according to the current browsing duration and/or the historical browsing duration;
and S80, determining the next preset weight corresponding to each target view angle according to the adjusted weight, so that when the view angle images corresponding to the target view angles are displayed next time, they are displayed in descending order of the next preset weights.
In this embodiment of the application, the terminal is configured with a timing device; when the user browses the view angle images according to his own needs, the duration for which each view angle image is browsed is recorded and saved. The browsing durations of the view angle images may be different or the same.
In general, when actually browsing, the user views the view angle images that match his own needs; a long browsing duration indicates a more important view angle image, and a short one indicates a less important image.
Optionally, after the current browsing duration of the view angle image corresponding to each target view angle is obtained, the weight of each target view angle is adjusted according to it: the target view angle whose image has the longest current browsing duration gets the highest weight, and the one with the shortest duration gets the lowest. For example, if the current browsing duration of view angle image A is 10 min, that of image B is 9 min, and that of image C is 3 min, then after the durations are determined, the weight of the target view angle corresponding to A is adjusted to 100, that of B to 90, and that of C to 80.
Optionally, when the current browsing duration of each view angle image is obtained, the historical browsing duration of each image is obtained at the same time, the historical browsing duration being the duration accumulated by the image before the current browsing. On this basis, the weight of a target view angle can also be adjusted by having the terminal record and save the user's current browsing duration for each view angle image, superimpose it on the historical browsing duration, and adjust the weight according to the superimposed total. For example, if the historical browsing duration of view angle image A is 1200 s and that of image B is 2000 s, and the current browsing durations are 60 s for A and 100 s for B, then the superimposed duration is 1260 s for A and 2100 s for B. Since B's superimposed duration is longer than A's, view angle image B is judged to be more important: the weight of the target view angle corresponding to B is increased and that of A is reduced, so that the next preset weight of B is greater than that of A.
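As an illustrative sketch only, the superposition step above can be expressed as follows; the function names are hypothetical and the durations mirror the example figures:

```python
# Hypothetical sketch: superimpose each view angle image's current
# browsing duration onto its historical duration, then rank views by
# the combined total to derive the next display order.

def superimposed_durations(history, current):
    """Add each view's current browsing duration (seconds) onto its history."""
    return {v: history.get(v, 0) + current.get(v, 0)
            for v in set(history) | set(current)}

totals = superimposed_durations(
    history={"A": 1200, "B": 2000}, current={"A": 60, "B": 100})
# totals == {"A": 1260, "B": 2100}; B's longer total means the target
# view angle for B receives the higher next preset weight.
next_order = sorted(totals, key=totals.get, reverse=True)
# next_order == ["B", "A"]
```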
It can be appreciated that in the embodiments of the application the weights corresponding to the target view angles are readjusted based on the browsing durations; the adjusted weights may be the same as or different from the initial preset weights.
Optionally, the next preset weight corresponding to each target view angle is determined according to the adjusted weight; the next time the user observes the three-dimensional model, the next preset weights replace the initial preset weights, and the terminal displays the view angle images corresponding to the target view angles in descending order of the next preset weights.
Optionally, in yet another implementation, while the current browsing duration of each view angle image is recorded and saved, a microphone acquires the voice information the user inputs while browsing, the voice information is converted into text, and the corresponding view angle image is marked according to the text. It will be appreciated that a marked view angle image is more important than an unmarked one, and the more text information, the more important the image. On this basis, the current browsing duration of each view angle image is recorded and saved, the image corresponding to the voice information is marked, and the weight of the corresponding target view angle is then adjusted according to the mark and the current browsing duration.
Optionally, in another embodiment, when the user browses the view angle images, the current browsing duration of each image is recorded and saved, the view angle image corresponding to the user's voice input is marked, and the historical browsing duration of each image is obtained at the same time, so that the weight of the corresponding target view angle is adjusted according to the current browsing duration, the mark, and the historical browsing duration.
It will be appreciated that the adjustment of the weights corresponding to the target viewing angle includes, but is not limited to, the several ways described above.
In this embodiment of the application, the weights corresponding to the target view angles are readjusted according to the durations for which the user actually browses the view angle images, so that the next time the user observes the three-dimensional model, the terminal displays the view angle images to the user according to the adjusted weights.
Optionally, based on the first embodiment, referring to fig. 6, after the step of displaying the view angle image corresponding to the target view angle according to the initial preset weight, the method further includes:
step S90, recording and storing current adjustment operations corresponding to the view images and obtaining historical adjustment operations corresponding to the view images;
Step S100, adjusting the visual angle parameters of each visual angle image according to the current adjustment operation and/or the historical adjustment operation;
step S110, determining a next view angle parameter according to the adjusted view angle parameter, so that when the view angle image corresponding to the target view angle is displayed next time, the view angle image corresponding to the target view angle is displayed according to the next view angle parameter.
In the actual browsing process, the user adjusts the view angle images according to his own needs; the adjustments include, but are not limited to, enlarging/reducing a view angle image and increasing or reducing its image brightness.
Optionally, after the user's current adjustment operation is obtained, it is saved and the viewing angle parameters are adjusted according to it. For example, when the user enlarges a view angle image, the focal length among the viewing angle parameters can be reduced; when the user increases the image brightness, the aperture is enlarged.
Optionally, in another embodiment, when recording and saving the current adjustment operation corresponding to each view angle image, a history adjustment operation corresponding to each view angle image is further acquired, where the history adjustment operation includes a history adjustment operation of the view angle image before the current adjustment operation, and further, viewing angle parameters of each view angle image are adjusted according to the current adjustment operation and the history adjustment operation.
Optionally, the next viewing angle parameters are determined according to the adjusted parameters; the next time the user observes the three-dimensional model, the next viewing angle parameters replace the previous ones, and the terminal displays the view angle images corresponding to the target view angles according to them.
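As an illustrative sketch only, folding recorded adjustment operations into the stored parameters might look as follows; the operation names, parameter keys, and step sizes are hypothetical assumptions:

```python
# Hypothetical sketch: replay a user's saved adjustment operations
# (current and/or historical) onto the viewing angle parameters so that
# the next display starts from the adjusted state. Following the text
# above, enlarging the image reduces focal length and raising brightness
# enlarges the aperture; step sizes are illustrative.

def apply_adjustments(params, operations):
    """Fold recorded adjustment operations into the viewing angle parameters."""
    p = dict(params)
    for op in operations:
        if op == "zoom_in":        # user enlarged the image
            p["focal_length"] -= 5.0
        elif op == "zoom_out":     # user reduced the image
            p["focal_length"] += 5.0
        elif op == "brighten":     # user increased image brightness
            p["aperture"] += 0.1
        elif op == "darken":       # user decreased image brightness
            p["aperture"] -= 0.1
    return p

next_params = apply_adjustments(
    {"focal_length": 50.0, "aperture": 2.0}, ["zoom_in", "brighten"])
# next_params == {"focal_length": 45.0, "aperture": 2.1}
```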
In the embodiment of the application, the user's adjustment operations are recorded and saved, the viewing angle parameters are adjusted accordingly, and the next viewing angle parameters are determined from the adjusted ones, so that the next time the view angle images are displayed, the images corresponding to the target view angles are displayed automatically under the next viewing angle parameters. The user does not need to adjust the parameters manually again, which improves the efficiency of browsing the view angle images.
In addition, an embodiment of the present invention also proposes a readable storage medium having stored thereon a display control program of a three-dimensional model which, when executed by a processor, implements the steps of the display control method of the three-dimensional model in any of the above embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (7)

1. A display control method of a three-dimensional model, characterized in that the steps of the display control method of the three-dimensional model include:
identifying a target area in a three-dimensional model, wherein the three-dimensional model is a three-dimensional model determined by three-dimensional modeling of an image through a preset neural network algorithm;
acquiring attribute information of the target area, wherein the attribute information comprises at least one of short-long diameter information of the target area, volume information of the target area, projection area information of the target area, position information of the target area, association information of the target area and an adjacent area, property information of the target area and knowledge-graph relation information corresponding to the target area;
determining a target view angle of the three-dimensional model according to the mapping relation between the attribute information and the target view angle;
acquiring initial preset weights corresponding to the target visual angles;
comparing initial preset weights corresponding to the target view angles;
displaying view angle images corresponding to all target view angles in a preset display mode based on the sequence of the initial preset weights from large to small, and outputting prompt information, wherein the preset display mode comprises overlapping and displaying the view angle images and the three-dimensional model;
Recording and storing the current browsing time length corresponding to each view angle image and acquiring the historical browsing time length corresponding to each view angle image;
adjusting the weight corresponding to the target visual angle according to the current browsing time length and/or the historical browsing time length;
and determining the next preset weight corresponding to the target view angle according to the adjusted weight, so that when the view angle images corresponding to the target view angles are displayed next time, they are displayed in descending order of the next preset weights.
2. The display control method of a three-dimensional model according to claim 1, wherein the step of outputting the hint information includes:
determining prompt information according to the visual angle image, wherein the prompt information comprises operation information aiming at the visual angle image;
outputting the prompt information in a preset mode, wherein the preset mode comprises voice and marks.
3. The display control method of a three-dimensional model according to claim 1, wherein the step of displaying the view angle image corresponding to the target view angle according to the initial preset weight further comprises:
determining viewing angle parameters according to the three-dimensional model and a target viewing angle, wherein the viewing angle parameters comprise at least one of aperture, focal length and depth of field;
And displaying the view angle image corresponding to the target view angle according to the view angle parameter and the initial preset weight.
4. The method for controlling display of a three-dimensional model according to claim 3, wherein the step of displaying the view angle image corresponding to the target view angle according to the view angle parameter and the initial preset weight further comprises:
recording and storing current adjustment operations corresponding to the view angle images and acquiring historical adjustment operations corresponding to the view angle images;
adjusting the visual angle parameters of each visual angle image according to the current adjustment operation and/or the historical adjustment operation;
and determining a next view angle parameter according to the adjusted view angle parameter, so that when the view angle image corresponding to the target view angle is displayed next time, the view angle image corresponding to the target view angle is displayed according to the next view angle parameter.
5. The display control method of a three-dimensional model according to claim 1, wherein the step of identifying a target area in the three-dimensional model comprises:
and identifying the target area in the three-dimensional model according to a preset neural network machine learning algorithm.
6. An electronic device comprising a memory, a processor and a display control program of a three-dimensional model stored on the memory and executable on the processor, the display control program of the three-dimensional model, when executed by the processor, implementing the steps of the display control method of the three-dimensional model according to any one of claims 1 to 5.
7. A readable storage medium, wherein a display control program of a three-dimensional model is stored on the readable storage medium, which when executed by a processor, implements the steps of the display control method of a three-dimensional model according to any one of claims 1 to 5.
CN202110934092.0A 2021-08-13 2021-08-13 Display control method for three-dimensional model, electronic device and readable storage medium Active CN113793422B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110934092.0A CN113793422B (en) 2021-08-13 2021-08-13 Display control method for three-dimensional model, electronic device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110934092.0A CN113793422B (en) 2021-08-13 2021-08-13 Display control method for three-dimensional model, electronic device and readable storage medium

Publications (2)

Publication Number Publication Date
CN113793422A CN113793422A (en) 2021-12-14
CN113793422B true CN113793422B (en) 2024-02-23

Family

ID=79181699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110934092.0A Active CN113793422B (en) 2021-08-13 2021-08-13 Display control method for three-dimensional model, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN113793422B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103124362A (en) * 2012-10-09 2013-05-29 友达光电股份有限公司 Multi-view three-dimensional display
CN108765272A (en) * 2018-05-31 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and readable storage medium storing program for executing
CN109242978A (en) * 2018-08-21 2019-01-18 百度在线网络技术(北京)有限公司 The visual angle regulating method and device of threedimensional model
CN109745062A (en) * 2019-01-30 2019-05-14 腾讯科技(深圳)有限公司 Generation method, device, equipment and the storage medium of CT image
CN109978640A (en) * 2017-12-27 2019-07-05 广东欧珀移动通信有限公司 Dress ornament tries method, apparatus, storage medium and mobile terminal on
CN110472085A (en) * 2019-07-19 2019-11-19 平安科技(深圳)有限公司 3-D image searching method, system, computer equipment and storage medium
CN111524232A (en) * 2020-04-23 2020-08-11 网易(杭州)网络有限公司 Three-dimensional modeling method and device and server
CN111599022A (en) * 2020-04-30 2020-08-28 北京字节跳动网络技术有限公司 House display method and device and electronic equipment
CN111815761A (en) * 2020-07-14 2020-10-23 杭州翔毅科技有限公司 Three-dimensional display method, device, equipment and storage medium
CN111862106A (en) * 2019-04-30 2020-10-30 曜科智能科技(上海)有限公司 Image processing method based on light field semantics, computer device and storage medium
CN113220251A (en) * 2021-05-18 2021-08-06 北京达佳互联信息技术有限公司 Object display method, device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10127721B2 (en) * 2013-07-25 2018-11-13 Hover Inc. Method and system for displaying and navigating an optimal multi-dimensional building model
US20150035823A1 (en) * 2013-07-31 2015-02-05 Splunk Inc. Systems and Methods for Using a Three-Dimensional, First Person Display to Convey Data to a User

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Three-Dimensional Reconstruction Algorithms Based on Sequence Images; Peng Keju; China Doctoral Dissertations Full-text Database, Information Science and Technology (No. 03); I138-32 *

Also Published As

Publication number Publication date
CN113793422A (en) 2021-12-14

Similar Documents

Publication Publication Date Title
CN109567865B (en) Intelligent ultrasonic diagnosis equipment for non-medical staff
CN112075914B (en) Capsule endoscopy system
US20200155114A1 (en) Ultrasound diagnosis apparatus for determining abnormality of fetal heart, and operating method thereof
CN109044398A (en) Ultrasonic system imaging method, device and computer readable storage medium
CN112807025A (en) Ultrasonic scanning guiding method, device, system, computer equipment and storage medium
JP7442600B2 (en) System for determining guidance signals and providing guidance for handheld ultrasound transducers
US20150241685A1 (en) Microscope system and microscopy method using digital markers
JP2014178458A (en) Mobile display device for medical images
CN113133813A (en) Dynamic information display system and method based on puncture process
WO2006060373A2 (en) Ultrasonic image and visualization aid
CN113793422B (en) Display control method for three-dimensional model, electronic device and readable storage medium
JP7253152B2 (en) Information processing device, information processing method, and program
US11806192B2 (en) Guiding system and guiding method for ultrasound scanning operation
CN113662594B (en) Breast puncture positioning/biopsy method and apparatus, computer device, and storage medium
KR20100007819A (en) Ultrasound system and method for performing a recovery function
JP5576041B2 (en) Ultrasonic diagnostic equipment
CN114830638A (en) System and method for telestration with spatial memory
WO2021140644A1 (en) Endoscopy assistance device, endoscopy assistance method, and computer-readable recording medium
JP2006280411A (en) Eye fundus image analysis method and eye fundus image analyzer
US11779308B2 (en) Ultrasound diagnosis apparatus and method of managing ultrasound image obtained thereby
EP4197443A1 (en) X-ray image processing system, x-ray imaging system and method for processing an x-ray image
EP3851051B1 (en) Ultrasound diagnosis apparatus and operating method thereof
US20240046600A1 (en) Image processing apparatus, image processing system, image processing method, and image processing program
CN113112560B (en) Physiological point region marking method and device
US20220405963A1 (en) Medical image processing device, ultrasonic diagnostic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant