CN111161429A - Model image display method, model image display device, computer equipment and storage medium - Google Patents

Model image display method, model image display device, computer equipment and storage medium

Info

Publication number
CN111161429A
Authority
CN
China
Prior art keywords
displayed, matrix, rotation, model, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911316469.5A
Other languages
Chinese (zh)
Other versions
CN111161429B (en)
Inventor
时季寒
杨飞
宋争
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Zhongke Medical Technology Industrial Technology Research Institute Co Ltd
Original Assignee
Wuhan Zhongke Medical Technology Industrial Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Zhongke Medical Technology Industrial Technology Research Institute Co Ltd
Priority to CN201911316469.5A
Publication of CN111161429A
Application granted
Publication of CN111161429B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2016 Rotation, translation, scaling

Landscapes

  • Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a model image display method, a model image display apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring a rotation matrix corresponding to a target rotation angle, the target rotation angle being a rotation angle determined from a rotation instruction input by a user, with a target planning path serving as the central rotation axis of the model to be displayed; acquiring an initial world transformation matrix of the model to be displayed in an initial state, the initial world transformation matrix representing the pose of the model to be displayed before rotation; fusing the rotation matrix and the initial world transformation matrix to obtain a matrix to be displayed corresponding to the pose of the model to be displayed after it rotates according to the rotation instruction; and displaying the model to be displayed according to the matrix to be displayed. With this method, the model to be displayed can be rotated and displayed based on a rotation matrix for any angle input by the user, so that the display angles of the model and of surrounding tissue such as blood vessels are richer and operation is more convenient.

Description

Model image display method, model image display device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for displaying a model image, a computer device, and a storage medium.
Background
To facilitate intuitive observation and monitoring of human tissue, imaging equipment is commonly used to capture images of the tissue for examination. Among such images, multiplanar reconstruction (MPR) images are a widely used format.
Since an MPR image comprises multiple slices obtained from the tissue, the conventional way to observe human tissue is for the user to manually switch between different needle tracks, thereby viewing different slices of the tissue.
However, this traditional method can only observe along a fixed needle track; the viewing angle is single and use is inconvenient.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a model image display method, apparatus, computer device, and storage medium that facilitate observation.
In a first aspect, an embodiment of the present application provides a model image display method, where the method includes:
acquiring a rotation matrix corresponding to a target rotation angle; the target rotation angle is a rotation angle determined from a rotation instruction input by a user, with a target planning path serving as the central rotation axis of the model to be displayed;
acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
fusing the rotation matrix and the initial world transformation matrix to obtain a matrix to be displayed corresponding to the pose of the model to be displayed after it rotates according to the rotation instruction;
and displaying the image of the model to be displayed according to the matrix to be displayed.
In one embodiment, the initial world transformation matrix comprises an initial coronal matrix, an initial sagittal matrix and an initial transverse matrix, and the matrix to be displayed comprises a coronal matrix to be displayed, a sagittal matrix to be displayed and a transverse matrix to be displayed;
the fusing the rotation matrix and the initial world transformation matrix to obtain a to-be-displayed matrix corresponding to the pose of the to-be-displayed model after the rotation based on the rotation instruction, includes:
multiplying the rotation matrix and the initial coronal matrix to obtain a coronal matrix to be displayed;
multiplying the rotation matrix and the initial sagittal position matrix to obtain a sagittal position matrix to be displayed;
and multiplying the rotation matrix and the initial transverse position matrix to obtain a transverse position matrix to be displayed.
In one embodiment, the displaying the image of the model to be displayed according to the matrix to be displayed includes:
and respectively displaying the images of the model to be displayed in the corresponding display windows according to the coronal matrix to be displayed, the sagittal matrix to be displayed and the transverse matrix to be displayed.
In one embodiment, the obtaining of the rotation matrix corresponding to the target rotation angle includes:
generating the target planning path according to a path generation instruction input by a user;
converting a rotation instruction input by a user into the target rotation angle;
and generating the rotation matrix based on the target planning path according to the target rotation angle.
In one embodiment, the generating the target planned path according to the path generation instruction input by the user includes:
acquiring cranial entry point coordinates and target point coordinates input by a user;
and determining the target planning path according to the line connecting the cranial entry point coordinates and the target point coordinates.
In one embodiment, there are a plurality of cranial entry point coordinates and a plurality of target point coordinates, and the cranial entry point coordinates correspond one-to-one with the target point coordinates;
the determining of the target planning path according to the line connecting the cranial entry point coordinates and the target point coordinates comprises the following steps:
using the line connecting each cranial entry point coordinate and its corresponding target point coordinate as an initial planning path, so as to obtain a plurality of initial planning paths;
receiving a path selection instruction input by the user;
and determining the target planning path from the plurality of initial planning paths according to the path selection instruction.
In one embodiment, the method further comprises: updating the normal vector of the view corresponding to each display window according to the rotation matrix.
In a second aspect, an embodiment of the present application provides a model image display apparatus, including:
the generating module is used for acquiring a rotation matrix corresponding to a target rotation angle; the target rotation angle is a rotation angle determined from a rotation instruction input by a user, with the target planning path serving as the central rotation axis of the model to be displayed;
the acquisition module is used for acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
the processing module is used for fusing the rotation matrix and the initial world transformation matrix to obtain a to-be-displayed matrix corresponding to the pose of the to-be-displayed model after the to-be-displayed model rotates according to the rotation instruction;
and the display module is used for displaying the image of the model to be displayed according to the matrix to be displayed.
In a third aspect, an embodiment of the present application provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring a rotation matrix corresponding to a target rotation angle; the target rotation angle is a rotation angle determined from a rotation instruction input by a user, with a target planning path serving as the central rotation axis of the model to be displayed;
acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
fusing the rotation matrix and the initial world transformation matrix to obtain a to-be-displayed matrix corresponding to the pose of the to-be-displayed model after the to-be-displayed model rotates according to the rotation instruction;
and displaying the image of the model to be displayed according to the matrix to be displayed.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps:
acquiring a rotation matrix corresponding to a target rotation angle; the target rotation angle is a rotation angle determined from a rotation instruction input by a user, with a target planning path serving as the central rotation axis of the model to be displayed;
acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
fusing the rotation matrix and the initial world transformation matrix to obtain a to-be-displayed matrix corresponding to the pose of the to-be-displayed model after the to-be-displayed model rotates according to the rotation instruction;
and displaying the image of the model to be displayed according to the matrix to be displayed.
According to the above model image display method, apparatus, computer device, and storage medium, the target rotation angle is a rotation angle determined from a rotation instruction input by the user, with the target planning path serving as the central rotation axis of the model to be displayed. The computer device obtains the rotation matrix corresponding to the target rotation angle, obtains the initial world transformation matrix representing the pose of the model to be displayed before rotation, and fuses the two to obtain the matrix to be displayed corresponding to the pose of the model after it rotates according to the rotation instruction; the matrix to be displayed can thus represent the pose of the model after rotation by the target rotation angle required by the user. Because the matrix to be displayed is obtained by fusing the rotation matrix with the initial world transformation matrix, the method avoids the limited observation angles and cumbersome operation caused in the traditional technology by repeatedly switching needle tracks when observing the tissue around the model from multiple angles: the model to be displayed can be rotated and displayed based on a rotation matrix for any angle input by the user, so that the display angles of the model are richer and operation is more convenient.
Drawings
FIG. 1 is a diagram illustrating an internal structure of a computer device according to an embodiment;
FIG. 2 is a flowchart illustrating a method for displaying a model image according to an embodiment;
FIG. 3 is a schematic flowchart of a model image display method according to another embodiment;
FIG. 3a is a schematic diagram of single-path multi-slice MPR rotation interaction according to an embodiment;
FIG. 3b is a schematic diagram of multi-path multi-slice MPR rotation interaction according to an embodiment;
FIG. 4 is a schematic structural diagram of a model image display apparatus according to yet another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The model display method provided by the embodiment of the application can be applied to the computer equipment shown in fig. 1. The computer equipment can be MPR equipment, and comprises a processor, a memory, a network interface, a database, a display screen, an input device and a scanning module, wherein the processor, the memory, the network interface, the database, the display screen and the input device are connected through a system bus, and the scanning module is used for scanning an object to be displayed. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing the model to be displayed in the following embodiments, and specific descriptions about the model to be displayed refer to the specific descriptions in the following embodiments. The network interface of the computer device may be used to communicate with other devices outside over a network connection. Optionally, the computer device may be a server, a desktop, a personal digital assistant, other terminal devices such as a tablet computer, a mobile phone, and the like, or a cloud or a remote server, and the specific form of the computer device is not limited in the embodiment of the present application. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like. 
Of course, the input device and the display screen may not belong to a part of the computer device, and may be external devices of the computer device.
Those skilled in the art will appreciate that the architecture shown in fig. 1 is merely a block diagram of a portion of the structure associated with the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
It should be noted that the execution subject of the method embodiments described below may be a model display device, and the device may be implemented as part of or all of the computer device described above by software, hardware, or a combination of software and hardware. The following method embodiments are described by taking the execution subject as the computer device as an example.
Fig. 2 is a schematic flowchart of a model image display method according to an embodiment. The embodiment relates to a specific process of rotating and displaying a model to be displayed by a computer device based on a rotation instruction input by a user, and as shown in fig. 2, the method comprises the following steps:
s10, obtaining a rotation matrix corresponding to the target rotation angle; the target rotation angle is a rotation angle determined by taking the target planning path as a rotation middle axis of the model to be displayed and based on a rotation instruction input by a user.
Specifically, to obtain the rotation matrix corresponding to the target rotation angle, the computer device may read a rotation matrix stored in the memory, receive a rotation matrix sent by another device, or obtain a rotation instruction input by a user, calculate the target rotation angle from the rotation instruction, and then calculate the rotation matrix from the target rotation angle. It should be noted that the rotation matrix represents the amount of rotation requested by the user's input. For example, if the amount of rotation represented by the rotation instruction is large, the target rotation angle is large, and the resulting rotation matrix represents a large rotation; if the amount of rotation represented by the rotation instruction is small, the target rotation angle is small, and the resulting rotation matrix represents a small rotation.
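As an illustrative sketch (not part of the patent text), the rotation matrix for a given target rotation angle about a unit axis can be computed with Rodrigues' formula; the axis and angle below are arbitrary example values:

```python
import numpy as np

def rotation_matrix(axis, angle_rad):
    """Rodrigues' formula: 3x3 matrix for a rotation of angle_rad
    radians about the (normalised) vector axis."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    x, y, z = axis
    # Skew-symmetric cross-product matrix of the axis.
    k = np.array([[0.0, -z, y],
                  [z, 0.0, -x],
                  [-y, x, 0.0]])
    return np.eye(3) + np.sin(angle_rad) * k + (1.0 - np.cos(angle_rad)) * (k @ k)

# A larger target angle produces a matrix representing a larger
# rotation, matching the behaviour described above.
r90 = rotation_matrix([0, 0, 1], np.pi / 2)  # 90 degrees about z
```

A 90-degree rotation about z maps the x axis onto the y axis, which gives a quick numerical sanity check of the construction.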
S20, acquiring an initial world transformation matrix of the model to be displayed in an initial state; wherein the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation.
Specifically, the computer device may read an initial world transformation matrix of the model to be displayed stored in the memory in the initial state, or may receive the initial world transformation matrix of the model to be displayed. It should be noted that the initial world transformation matrix is used to represent the pose, i.e., the position and the orientation, of the model to be displayed in the initial state, i.e., before the model to be displayed is rotated.
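For concreteness (the patent does not fix a representation), the pose, i.e. position and orientation, can be encoded as a 4x4 homogeneous world transformation matrix; the position values below are arbitrary examples:

```python
import numpy as np

def world_matrix(orientation, position):
    """4x4 homogeneous world transform: the upper-left 3x3 block holds
    the model's orientation, the last column holds its position."""
    m = np.eye(4)
    m[:3, :3] = orientation
    m[:3, 3] = position
    return m

# Initial state: identity orientation (not yet rotated) at an
# example position (10, 20, 30).
w0 = world_matrix(np.eye(3), [10.0, 20.0, 30.0])
```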
And S30, fusing the rotation matrix and the initial world transformation matrix to obtain a to-be-displayed matrix corresponding to the pose of the to-be-displayed model after the to-be-displayed model rotates according to the rotation instruction.
Specifically, the computer device fuses the rotation matrix and the initial world transformation matrix, for example by a matrix operation such as multiplication, so as to superimpose the rotation matrix on the initial world transformation matrix and obtain a matrix to be displayed that represents the pose of the model to be displayed after it rotates by the amount corresponding to the rotation instruction.
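A minimal sketch of the fusion step, assuming 4x4 homogeneous matrices and left-multiplication of the rotation onto the initial pose (the patent itself only requires that the two matrices be combined, e.g. by multiplication):

```python
import numpy as np

def fuse(rotation, initial_world):
    """Superimpose the rotation on the initial world transform,
    yielding the matrix to be displayed (the pose after rotation)."""
    return rotation @ initial_world

# Example: a 90-degree rotation about z applied to an identity
# initial pose; the fused matrix is then just the rotation itself.
rz = np.array([[0.0, -1.0, 0.0, 0.0],
               [1.0,  0.0, 0.0, 0.0],
               [0.0,  0.0, 1.0, 0.0],
               [0.0,  0.0, 0.0, 1.0]])
to_display = fuse(rz, np.eye(4))
```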
And S40, displaying the image of the model to be displayed according to the matrix to be displayed.
Specifically, since the matrix to be displayed represents the pose of the model to be displayed after it rotates according to the rotation instruction, the computer device displays the model according to the pose represented by the matrix to be displayed, thereby realizing image display of the rotated pose.
In this embodiment, because the target rotation angle is a rotation angle determined from a rotation instruction input by the user, with the target planning path serving as the central rotation axis of the model to be displayed, the computer device obtains the rotation matrix corresponding to the target rotation angle, obtains the initial world transformation matrix representing the pose of the model to be displayed before rotation, and fuses the two to obtain the matrix to be displayed corresponding to the pose of the model after it rotates according to the rotation instruction; the matrix to be displayed can represent the pose of the model after rotation by the target rotation angle required by the user. The method provided by this embodiment therefore avoids the limited observation angles and cumbersome operation caused in the traditional technology by repeatedly switching needle tracks when observing surrounding tissue from multiple angles: the model to be displayed can be rotated and displayed based on a rotation matrix for any angle input by the user, so that the display angles of the model and of surrounding tissue such as blood vessels are richer and operation is more convenient.
Optionally, on the basis of the foregoing embodiment, the initial world transformation matrix includes an initial coronal matrix, an initial sagittal matrix, and an initial transverse matrix, and the matrix to be displayed includes a coronal matrix to be displayed, a sagittal matrix to be displayed, and a transverse matrix to be displayed. One possible implementation of step S30 may include: multiplying the rotation matrix and the initial coronal matrix to obtain the coronal matrix to be displayed; multiplying the rotation matrix and the initial sagittal matrix to obtain the sagittal matrix to be displayed; and multiplying the rotation matrix and the initial transverse matrix to obtain the transverse matrix to be displayed. Matrices to be displayed are thus obtained for the coronal, sagittal, and transverse views respectively, so that the model to be displayed can be described from three different views, the rotated model is expressed more comprehensively, and observation is more convenient. Optionally, the computer device may also generate the matrix to be displayed for the view corresponding to a selection instruction input by the user, which is likewise not limited in this embodiment of the present application.
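The per-view multiplications can be sketched as follows; the dictionary layout and view names are illustrative choices, not part of the patent:

```python
import numpy as np

def fuse_views(rotation, initial_views):
    """Multiply the rotation matrix with each initial view matrix
    (coronal, sagittal, transverse) to get the matrices to be displayed."""
    return {view: rotation @ m for view, m in initial_views.items()}

initial = {"coronal": np.eye(4), "sagittal": np.eye(4), "transverse": np.eye(4)}
# With identity initial matrices, each displayed matrix equals the rotation.
displayed = fuse_views(np.eye(4), initial)
```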
Optionally, the coronal matrix to be displayed, the sagittal matrix to be displayed, and the transverse matrix to be displayed respectively represent the poses of the model to be displayed in three different views. In actual use, one possible implementation of step S40 may include displaying images of the model to be displayed in the corresponding display windows according to the coronal matrix to be displayed, the sagittal matrix to be displayed, and the transverse matrix to be displayed. Specifically, each view can correspond to one display window, and the different views of the model to be displayed can be shown in different windows according to the rotated pose, which facilitates displaying the poses of the different views of the model, observing the tissue distribution in each view, and use by the user.
Optionally, on the basis of the foregoing embodiments, a possible implementation manner of the foregoing step S10 may be as shown in fig. 3, and includes:
and S11, generating the target planning path according to the path generation instruction input by the user.
Specifically, a path generation instruction input by a user is obtained; the path generation instruction comprises a plurality of coordinate points input by the user, and a central axis is generated from at least two of the coordinate points to serve as the target planning path. For example, the computer device may use the line connecting two coordinate points as the target planning path.
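A sketch of deriving the rotation axis from two input coordinate points (the coordinates below are made-up examples):

```python
import numpy as np

def planned_path_axis(entry_point, target_point):
    """Unit direction vector of the line joining the cranial entry
    point to the target point; this line serves as the rotation axis."""
    v = np.asarray(target_point, dtype=float) - np.asarray(entry_point, dtype=float)
    n = np.linalg.norm(v)
    if n == 0:
        raise ValueError("entry point and target point coincide")
    return v / n

axis = planned_path_axis([10.0, 20.0, 30.0], [10.0, 20.0, 50.0])
```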
Optionally, one possible implementation of this step may include: acquiring the cranial entry point coordinates and target point coordinates input by the user, and determining the target planning path according to the line connecting them. This implementation yields a target planning path matched to the entry point coordinates input by the user, so that the model to be displayed can rotate about the central axis the user expects, making observation more comprehensive and use more convenient. Optionally, the computer device may further update the normal vector of the view corresponding to each display window according to the rotation matrix, so as to ensure that each display window shows its view face-on, which facilitates observation and subsequent calculation. Optionally, fig. 3a is a schematic diagram of single-path multi-slice MPR rotation interaction, fig. 3b is a schematic diagram of multi-path multi-slice MPR rotation interaction, and the straight lines in the figures are different planning paths.
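The normal-vector update mentioned above can be sketched as applying the same rotation to each window's view-plane normal; the window names and normals below are illustrative assumptions:

```python
import numpy as np

def update_normals(rotation3, normals):
    """Rotate each display window's view-plane normal so the slice is
    still shown face-on after the model rotates."""
    return {window: rotation3 @ np.asarray(n, dtype=float)
            for window, n in normals.items()}

# 90-degree rotation about z: the sagittal normal (x) maps onto y.
rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
updated = update_normals(rz, {"sagittal": [1, 0, 0], "coronal": [0, 1, 0]})
```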
Optionally, there are a plurality of cranial entry point coordinates and a plurality of target point coordinates, and the cranial entry point coordinates correspond one-to-one with the target point coordinates. One possible implementation of the step of determining the target planning path according to the line connecting the cranial entry point coordinates and the target point coordinates may further include: using the line connecting each cranial entry point coordinate and its corresponding target point coordinate as an initial planning path, so as to obtain a plurality of initial planning paths; receiving a path selection instruction input by the user; and determining the target planning path from the plurality of initial planning paths according to the path selection instruction. Specifically, the computer device may obtain a plurality of cranial entry point coordinates and a plurality of target point coordinates input by the user, the two corresponding one-to-one, and take the line connecting each cranial entry point coordinate and its corresponding target point coordinate as an initial planning path. The computer device then receives a path selection instruction input by the user, for example a mouse click on one of the initial planning paths, and takes the selected path as the target planning path. Optionally, at this time, the mouse icon may change to a rotation icon to indicate that a rotatable display state has been entered.
In this implementation, the line connecting each cranial entry point coordinate and its corresponding target point coordinate is used as an initial planning path to obtain a plurality of initial planning paths; a path selection instruction input by the user is received, and the target planning path is determined from the plurality of initial planning paths, so that the model can rotate about the central axis the user expects, making observation more comprehensive and use more convenient.
And S12, converting the rotation instruction input by the user into the target rotation angle.
Specifically, the computer device may receive a rotation instruction input by the user through an external device, for example an operation of sliding the mouse across the screen, and convert the rotation instruction into the corresponding target rotation angle; for example, the target rotation angle through which the model needs to rotate is generated according to the distance the mouse slides across the screen.
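A possible mapping from drag distance to target angle (the sensitivity constant is an assumption; the patent does not specify one):

```python
import math

DEGREES_PER_PIXEL = 0.5  # assumed sensitivity, not specified in the patent

def drag_to_angle(pixels):
    """Convert a mouse-drag distance on screen (in pixels) into a
    target rotation angle in radians; longer drags give larger angles."""
    return math.radians(pixels * DEGREES_PER_PIXEL)

angle = drag_to_angle(180)  # a 180-pixel drag maps to 90 degrees here
```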
And S13, generating the rotation matrix based on the target planning path according to the target rotation angle.
Specifically, the computer device generates, according to the target rotation angle, a rotation matrix representing the change in the model's pose, taking the target planning path as the central axis. Optionally, in actual use, the user may click the model with the left mouse button and drag it a certain distance, thereby displaying the model to be displayed at the changed angle. Optionally, during the rotation of the model, multiple windows can synchronously display the images of all views at that angle, facilitating multi-angle observation.
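One conventional way to build such a matrix (an illustrative sketch, not necessarily the patent's exact construction) is to translate a point on the path to the origin, rotate about the path direction, and translate back:

```python
import numpy as np

def rotation_about_path(axis_point, axis_dir, angle_rad):
    """4x4 matrix rotating by angle_rad about the line through
    axis_point with direction axis_dir (the planning path)."""
    d = np.asarray(axis_dir, dtype=float)
    d = d / np.linalg.norm(d)
    x, y, z = d
    k = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    # Rodrigues' formula for the 3x3 rotation about the direction d.
    r3 = np.eye(3) + np.sin(angle_rad) * k + (1.0 - np.cos(angle_rad)) * (k @ k)
    t_fwd = np.eye(4)
    t_fwd[:3, 3] = axis_point
    t_back = np.eye(4)
    t_back[:3, 3] = -np.asarray(axis_point, dtype=float)
    r4 = np.eye(4)
    r4[:3, :3] = r3
    # Translate the path onto the origin, rotate, translate back.
    return t_fwd @ r4 @ t_back

m = rotation_about_path([1.0, 2.0, 3.0], [0.0, 0.0, 1.0], np.pi / 3)
```

Points on the path itself are fixed by this transform, which is the defining property of rotating about the planning path.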
In this embodiment, the computer device may generate the target planning path according to a path generation instruction input by the user, convert a rotation instruction input by the user into a target rotation angle, and finally generate, according to the target rotation angle and based on the target planning path, a rotation matrix representing the change in pose by which the model is to rotate. The rotation matrix matching the user's expectation is thus determined automatically from the user-defined target planning path and rotation instruction, so the model can be rotated and displayed about any axis and through any angle defined by the user, making the display angles of the model more varied and operation more convenient.
It should be understood that although the steps in the flowcharts of fig. 2-3 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2-3 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, there is provided a model image display apparatus including:
a generating module 100, configured to obtain a rotation matrix corresponding to a target rotation angle; the target rotation angle is a rotation angle determined based on a rotation instruction input by a user, with the target planning path as the central rotation axis of the model to be displayed;
an obtaining module 200, configured to obtain an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
the processing module 300 is configured to fuse the rotation matrix and the initial world transformation matrix to obtain a to-be-displayed matrix corresponding to the pose of the to-be-displayed model after the to-be-displayed model rotates according to the rotation instruction;
and a display module 400, configured to display the image of the model to be displayed according to the matrix to be displayed.
In one embodiment, the initial world transformation matrix comprises an initial coronal matrix, an initial sagittal matrix, and an initial transverse matrix, and the to-be-displayed matrices comprise a to-be-displayed coronal matrix, a to-be-displayed sagittal matrix, and a to-be-displayed transverse matrix; a processing module 300, specifically configured to multiply the rotation matrix and the initial coronal matrix to obtain a coronal matrix to be displayed; multiplying the rotation matrix and the initial sagittal position matrix to obtain a sagittal position matrix to be displayed; and multiplying the rotation matrix and the initial transverse position matrix to obtain a transverse position matrix to be displayed.
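The three multiplications above can be sketched as a single fusion step applied per view. The code below is an assumed illustration, not the patent's implementation; whether the rotation is applied on the left or the right depends on the row-vector versus column-vector convention, and left-multiplication (column vectors) is assumed here.

```python
import numpy as np

def fuse(rotation, initial_views):
    """Left-multiply the rotation matrix into each initial view matrix
    (e.g. keys "coronal", "sagittal", "transverse") to obtain the
    matrices to be displayed. Homogeneous 4x4 matrices are assumed."""
    return {name: rotation @ matrix for name, matrix in initial_views.items()}
```

Because the same rotation is fused into all three view matrices, the coronal, sagittal, and transverse windows remain mutually consistent after the model rotates.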
In one embodiment, the display module 400 is specifically configured to: and respectively displaying the images of the model to be displayed in the corresponding display windows according to the coronal matrix to be displayed, the sagittal matrix to be displayed and the transverse matrix to be displayed.
In an embodiment, the generating module 100 is specifically configured to generate the target planning path according to a path generating instruction input by a user; converting a rotation instruction input by a user into the target rotation angle; and generating the rotation matrix based on the target planning path according to the target rotation angle.
In one embodiment, the generating module 100 is specifically configured to obtain the cranial entry point coordinates and the target point coordinates input by the user, and determine the target planning path according to a connecting line between the cranial entry point coordinates and the target point coordinates.
In one embodiment, there are a plurality of cranial entry point coordinates and a plurality of target point coordinates, and the cranial entry point coordinates correspond to the target point coordinates one to one; the generating module 100 is specifically configured to take the connecting line between each cranial entry point coordinate and the corresponding target point coordinate as an initial planning path to obtain a plurality of initial planning paths, receive a path selection instruction input by the user, and determine the target planning path from the plurality of initial planning paths according to the path selection instruction.
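The pairing-and-selection logic above can be sketched as follows. This is a hypothetical illustration: the function names are assumptions, and the user's path selection instruction is modeled simply as an index into the candidate list.

```python
# Hypothetical sketch: pair each cranial entry point one-to-one with a
# target point to form the initial planning paths, then pick the target
# planning path according to the user's selection (modeled as an index).
def build_initial_paths(entry_points, target_points):
    if len(entry_points) != len(target_points):
        raise ValueError("entry and target points must correspond one to one")
    return list(zip(entry_points, target_points))

def select_target_path(paths, selection_index):
    """Return the (entry_point, target_point) pair chosen by the user."""
    return paths[selection_index]
```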
In an embodiment, the processing module 300 is further configured to update the normal vector of the view corresponding to each display window according to the rotation matrix.
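The normal-vector update can be sketched as applying the same rotation to the normal of each window's viewing plane. This is an assumed representation (the patent does not give one); the names below are hypothetical.

```python
import numpy as np

def update_normals(rotation, window_normals):
    """Apply a 3x3 rotation matrix to the normal vector of each display
    window (e.g. the coronal, sagittal, and transverse planes) so the
    windows stay consistent with the rotated model."""
    return {name: rotation @ np.asarray(n, dtype=float)
            for name, n in window_normals.items()}
```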
For the specific definition of the model image display apparatus, reference may be made to the above definition of the model image display method, which is not repeated here. The modules in the model image display apparatus may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory in the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a rotation matrix corresponding to the target rotation angle; the target rotation angle is a rotation angle determined based on a rotation instruction input by a user, with a target planning path as the central rotation axis of the model to be displayed;
acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
fusing the rotation matrix and the initial world transformation matrix to obtain a to-be-displayed matrix corresponding to the pose of the to-be-displayed model after the to-be-displayed model rotates according to the rotation instruction;
and displaying the image of the model to be displayed according to the matrix to be displayed.
In one embodiment, the initial world transformation matrix comprises an initial coronal matrix, an initial sagittal matrix, and an initial transverse matrix, and the to-be-displayed matrices comprise a to-be-displayed coronal matrix, a to-be-displayed sagittal matrix, and a to-be-displayed transverse matrix; the processor, when executing the computer program, further performs the steps of:
multiplying the rotation matrix and the initial coronal matrix to obtain a coronal matrix to be displayed;
multiplying the rotation matrix and the initial sagittal position matrix to obtain a sagittal position matrix to be displayed;
and multiplying the rotation matrix and the initial transverse position matrix to obtain a transverse position matrix to be displayed.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and respectively displaying the images of the model to be displayed in the corresponding display windows according to the coronal matrix to be displayed, the sagittal matrix to be displayed and the transverse matrix to be displayed.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
generating the target planning path according to a path generation instruction input by a user;
converting a rotation instruction input by a user into the target rotation angle;
and generating the rotation matrix based on the target planning path according to the target rotation angle.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring the cranial entry point coordinates and the target point coordinates input by the user;
and determining the target planning path according to a connecting line between the cranial entry point coordinates and the target point coordinates.
In one embodiment, there are a plurality of cranial entry point coordinates and a plurality of target point coordinates, and the cranial entry point coordinates correspond to the target point coordinates one to one; the processor, when executing the computer program, further performs the steps of:
using a connecting line between each cranial entry point coordinate and the corresponding target point coordinate as an initial planning path to obtain a plurality of initial planning paths;
receiving a path selection instruction input by the user;
and determining the target planning path from the plurality of initial planning paths according to the path selection instruction.
In one embodiment, the processor, when executing the computer program, further performs the steps of: updating the normal vector of the view corresponding to each display window according to the rotation matrix.
It should be clear that, in the embodiments of the present application, the process of executing the computer program by the processor is consistent with the process of executing the steps in the above method, and specific reference may be made to the description above.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a rotation matrix corresponding to the target rotation angle; the target rotation angle is a rotation angle determined based on a rotation instruction input by a user, with a target planning path as the central rotation axis of the model to be displayed;
acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
fusing the rotation matrix and the initial world transformation matrix to obtain a to-be-displayed matrix corresponding to the pose of the to-be-displayed model after the to-be-displayed model rotates according to the rotation instruction;
and displaying the image of the model to be displayed according to the matrix to be displayed.
In one embodiment, the initial world transformation matrix comprises an initial coronal matrix, an initial sagittal matrix, and an initial transverse matrix, and the to-be-displayed matrices comprise a to-be-displayed coronal matrix, a to-be-displayed sagittal matrix, and a to-be-displayed transverse matrix; the computer program when executed by the processor further realizes the steps of:
multiplying the rotation matrix and the initial coronal matrix to obtain a coronal matrix to be displayed;
multiplying the rotation matrix and the initial sagittal position matrix to obtain a sagittal position matrix to be displayed;
and multiplying the rotation matrix and the initial transverse position matrix to obtain a transverse position matrix to be displayed.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and respectively displaying the images of the model to be displayed in the corresponding display windows according to the coronal matrix to be displayed, the sagittal matrix to be displayed and the transverse matrix to be displayed.
In one embodiment, the computer program when executed by the processor further performs the steps of:
generating the target planning path according to a path generation instruction input by a user;
converting a rotation instruction input by a user into the target rotation angle;
and generating the rotation matrix based on the target planning path according to the target rotation angle.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring the cranial entry point coordinates and the target point coordinates input by the user; and determining the target planning path according to a connecting line between the cranial entry point coordinates and the target point coordinates.
In one embodiment, there are a plurality of cranial entry point coordinates and a plurality of target point coordinates, and the cranial entry point coordinates correspond to the target point coordinates one to one; the computer program when executed by the processor further realizes the steps of:
using a connecting line between each cranial entry point coordinate and the corresponding target point coordinate as an initial planning path to obtain a plurality of initial planning paths;
receiving a path selection instruction input by the user;
and determining the target planning path from the plurality of initial planning paths according to the path selection instruction.
In one embodiment, the computer program when executed by the processor further performs the steps of: updating the normal vector of the view corresponding to each display window according to the rotation matrix.
It should be clear that, in the embodiments of the present application, the process of executing the computer program by the processor is consistent with the process of executing the steps in the above method, and specific reference may be made to the description above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for displaying a model image, the method comprising:
acquiring a rotation matrix corresponding to the target rotation angle; the target rotation angle is a rotation angle determined based on a rotation instruction input by a user, with a target planning path as the central rotation axis of the model to be displayed;
acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
fusing the rotation matrix and the initial world transformation matrix to obtain a to-be-displayed matrix corresponding to the pose of the to-be-displayed model after the to-be-displayed model rotates according to the rotation instruction;
and displaying the image of the model to be displayed according to the matrix to be displayed.
2. The method of claim 1, wherein the initial world transformation matrix comprises an initial coronal matrix, an initial sagittal matrix, and an initial transverse matrix, and wherein the matrices to be displayed comprise a coronal matrix to be displayed, a sagittal matrix to be displayed, and a transverse matrix to be displayed;
the fusing the rotation matrix and the initial world transformation matrix to obtain a to-be-displayed matrix corresponding to the pose of the to-be-displayed model after the rotation based on the rotation instruction, includes:
multiplying the rotation matrix and the initial coronal matrix to obtain a coronal matrix to be displayed;
multiplying the rotation matrix and the initial sagittal matrix to obtain the sagittal matrix to be displayed;
and multiplying the rotation matrix and the initial transverse matrix to obtain the transverse matrix to be displayed.
3. The method according to claim 2, wherein the displaying the image of the model to be displayed according to the matrix to be displayed comprises:
and respectively displaying the images of the model to be displayed in the corresponding display windows according to the coronal matrix to be displayed, the sagittal matrix to be displayed and the transverse matrix to be displayed.
4. The method according to any one of claims 1 to 3, wherein the obtaining of the rotation matrix corresponding to the target rotation angle includes:
generating the target planning path according to a path generation instruction input by a user;
converting a rotation instruction input by a user into the target rotation angle;
and generating the rotation matrix based on the target planning path according to the target rotation angle.
5. The method of claim 4, wherein the generating the target planning path according to the path generation instruction input by the user comprises:
acquiring cranial entry point coordinates and target point coordinates input by a user;
and determining the target planning path according to a connecting line between the cranial entry point coordinates and the target point coordinates.
6. The method according to claim 5, wherein there are a plurality of the cranial entry point coordinates and a plurality of the target point coordinates, and the cranial entry point coordinates correspond to the target point coordinates one to one;
and the determining the target planning path according to the connecting line between the cranial entry point coordinates and the target point coordinates comprises:
taking a connecting line between each cranial entry point coordinate and the corresponding target point coordinate as an initial planning path to obtain a plurality of initial planning paths;
receiving a path selection instruction input by the user;
and determining the target planning path from the plurality of initial planning paths according to the path selection instruction.
7. The method of claim 5, further comprising: updating the normal vector of the view corresponding to each display window according to the rotation matrix.
8. A model image display apparatus, characterized in that the apparatus comprises:
the generating module is used for acquiring a rotation matrix corresponding to the target rotation angle; the target rotation angle is a rotation angle determined based on a rotation instruction input by a user, with a target planning path as the central rotation axis of the model to be displayed;
the acquisition module is used for acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
the processing module is used for fusing the rotation matrix and the initial world transformation matrix to obtain a to-be-displayed matrix corresponding to the pose of the to-be-displayed model after the to-be-displayed model rotates according to the rotation instruction;
and the display module is used for displaying the image of the model to be displayed according to the matrix to be displayed.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201911316469.5A 2019-12-19 2019-12-19 Model image display method, device, computer equipment and storage medium Active CN111161429B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911316469.5A CN111161429B (en) 2019-12-19 2019-12-19 Model image display method, device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111161429A true CN111161429A (en) 2020-05-15
CN111161429B CN111161429B (en) 2023-09-26

Family

ID=70557384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911316469.5A Active CN111161429B (en) 2019-12-19 2019-12-19 Model image display method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111161429B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113741835A (en) * 2020-05-29 2021-12-03 北京嗨动视觉科技有限公司 Method and device for realizing a marquee (scrolling-light) effect

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093502A (en) * 2012-12-21 2013-05-08 广州中大电讯科技有限公司 Three-dimensional model information obtaining method based on rotary three views
CN103810746A (en) * 2014-01-03 2014-05-21 Tcl集团股份有限公司 Method and device of rendering 3D (three dimensional) model in any orientation
CN104504761A (en) * 2014-12-15 2015-04-08 天脉聚源(北京)科技有限公司 Method and device for controlling rotation of 3D (three-dimensional) model
CN107329671A (en) * 2017-07-05 2017-11-07 北京京东尚科信息技术有限公司 Model display methods and device
US20180255282A1 (en) * 2017-03-03 2018-09-06 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for generating image information
CN108921000A (en) * 2018-04-16 2018-11-30 深圳市深网视界科技有限公司 Head angle mark, prediction model training, prediction technique, equipment and medium
CN110087555A (en) * 2017-05-12 2019-08-02 深圳迈瑞生物医疗电子股份有限公司 Display transformation method, the system of a kind of ultrasonic device and its three-dimensional ultrasound pattern
CN110189407A (en) * 2019-05-31 2019-08-30 东北大学 A kind of human body three-dimensional reconstruction model system based on HOLOLENS


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113741835A (en) * 2020-05-29 2021-12-03 北京嗨动视觉科技有限公司 Method and device for realizing a marquee (scrolling-light) effect
CN113741835B (en) * 2020-05-29 2024-05-24 北京嗨动视觉科技有限公司 Method and device for realizing a marquee (scrolling-light) effect

Also Published As

Publication number Publication date
CN111161429B (en) 2023-09-26

Similar Documents

Publication Publication Date Title
JP4106058B2 (en) Information processing apparatus, frame rearrangement method, and program for rearranging frames in window
CN111737922A (en) Data processing method, device, equipment and medium based on recurrent neural network
CN111161429B (en) Model image display method, device, computer equipment and storage medium
CN111563460A (en) Cleaning path acquisition method and device for cleaning equipment and storage medium
CN113409333B (en) Three-dimensional image cutting method and electronic equipment
WO2024093763A1 (en) Panoramic image processing method and apparatus, computer device, medium and program product
WO2018228062A1 (en) Application interface display method under split-screen display state and mobile terminal
CN111522476A (en) Method, device, computer device and storage medium for monitoring window switching
CN112820385A (en) Medical image browsing method, client and system
CN111653330A (en) Medical image display and diagnosis information generation method, system, terminal and medium
CN110807112A (en) Data display method and device based on map
CN114283927A (en) Input function acquisition method, device, system, computer equipment and storage medium
CN108830928A (en) Mapping method, device, terminal device and the readable storage medium storing program for executing of threedimensional model
CN113885863A (en) Method, device, electronic equipment and storage medium for realizing SOA (service oriented architecture) service
CN112330707A (en) Image processing method, image processing device, computer equipment and storage medium
CN114664411A (en) Feedback method and computer equipment for medical image quality evaluation
CN114494666A (en) Data processing method and device, electronic equipment and readable medium
JP2024506879A (en) Methods and apparatus, computer equipment and storage media for presenting virtual representations
CN111461991B (en) Image drawing method, image drawing device, computer equipment and storage medium
CN115239911A (en) Sectioning method and device for three-dimensional model and storage medium
JP2009237605A (en) Cad system
CN112632301B (en) Pathological image display method, pathological image display device, electronic equipment and storage medium
CN109002163A (en) Three-dimension interaction gesture sample method, apparatus, computer equipment and storage medium
CN111651131B (en) Image display method and device and computer equipment
US20220283692A1 (en) Interactions with slices of medical data in augmented reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant