CN111161429B - Model image display method, device, computer equipment and storage medium


Info

Publication number: CN111161429B
Authority: CN (China)
Prior art keywords: matrix, displayed, rotation, model, initial
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201911316469.5A
Other languages: Chinese (zh)
Other versions: CN111161429A (en)
Inventors: 时季寒, 杨飞, 宋争
Current assignee: Wuhan Zhongke Medical Technology Industrial Technology Research Institute Co Ltd
Original assignee: Wuhan Zhongke Medical Technology Industrial Technology Research Institute Co Ltd
Application filed by Wuhan Zhongke Medical Technology Industrial Technology Research Institute Co Ltd
Priority to CN201911316469.5A
Publication of CN111161429A
Application granted; publication of CN111161429B
Legal status: Active; anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20: Indexing scheme for editing of 3D models
    • G06T2219/2016: Rotation, translation, scaling

Landscapes

  • Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a model image display method, device, computer equipment and storage medium. The method comprises the following steps: acquiring a rotation matrix corresponding to a target rotation angle, where the target rotation angle is determined based on a rotation instruction input by a user, with a target planning path serving as the rotation center axis of a model to be displayed; acquiring an initial world transformation matrix of the model to be displayed in an initial state, the initial world transformation matrix representing the pose of the model to be displayed before rotation; fusing the rotation matrix and the initial world transformation matrix to obtain a matrix to be displayed, corresponding to the pose of the model to be displayed after it rotates according to the rotation instruction; and displaying the model to be displayed according to the matrix to be displayed. The method can rotate and display the model to be displayed based on a rotation matrix for any angle input by the user, so that the display angles of the model and of surrounding tissue such as blood vessels are richer, and operation is more convenient.

Description

Model image display method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for displaying a model image, a computer device, and a storage medium.
Background
In order to facilitate visual observation and monitoring of human tissue, imaging equipment is commonly used to capture images of the tissue for observation. Multiplanar Reformation (MPR) images are widely used as a common image format.
Because an MPR image is an image of multiple sections of the tissue, the traditional way to observe human tissue is for the user to manually switch between different needle tracks, thereby observing different sections of the tissue.
However, this conventional method can only observe along a fixed needle track; the viewing angle is single, which makes it inconvenient to use.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a model image display method, apparatus, computer device, and storage medium that can facilitate observation.
In a first aspect, an embodiment of the present application provides a method for displaying a model image, where the method includes:
acquiring a rotation matrix corresponding to the target rotation angle; the target rotation angle is determined based on a rotation instruction input by a user by taking a target planning path as a rotation center axis of a model to be displayed;
acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
fusing the rotation matrix and the initial world transformation matrix to obtain a matrix to be displayed corresponding to the pose of the model to be displayed after the model to be displayed rotates according to the rotation instruction;
and displaying the image of the model to be displayed according to the matrix to be displayed.
In one embodiment, the initial world transformation matrix comprises an initial coronal position matrix, an initial sagittal position matrix and an initial transverse position matrix, and the matrix to be displayed comprises a coronal position matrix to be displayed, a sagittal position matrix to be displayed and a transverse position matrix to be displayed;
the fusing the rotation matrix and the initial world transformation matrix to obtain a matrix to be displayed corresponding to the pose of the model to be displayed after rotation based on the rotation instruction comprises the following steps:
multiplying the rotation matrix and the initial coronal position matrix to obtain the coronal position matrix to be displayed;
multiplying the rotation matrix and the initial sagittal position matrix to obtain the sagittal position matrix to be displayed;
multiplying the rotation matrix and the initial transverse position matrix to obtain the transverse position matrix to be displayed.
In one embodiment, the displaying the image of the model to be displayed according to the matrix to be displayed includes:
and respectively displaying images of the model to be displayed in the corresponding display windows according to the coronal position matrix to be displayed, the sagittal position matrix to be displayed and the transverse position matrix to be displayed.
In one embodiment, the obtaining the rotation matrix corresponding to the target rotation angle includes:
generating the target planning path according to a path generation instruction input by a user;
converting a rotation instruction input by a user into the target rotation angle;
and generating the rotation matrix based on the target planning path according to the target rotation angle.
In one embodiment, the generating the target planned path according to the path generation instruction input by the user includes:
acquiring cranial entry point coordinates and target point coordinates input by a user;
and determining the target planning path according to the connecting line between the cranial entry point coordinates and the target point coordinates.
In one embodiment, the number of cranial entry point coordinates and the number of target point coordinates are both multiple, and the cranial entry point coordinates correspond one-to-one to the target point coordinates;
the determining the target planning path according to the connecting line between the cranial entry point coordinates and the target point coordinates comprises the following steps:
taking the connecting line between each cranial entry point coordinate and the corresponding target point coordinate as an initial planning path to obtain a plurality of initial planning paths;
receiving a path clicking instruction input by a user;
and determining the target planning path from a plurality of initial planning paths according to the path clicking instruction.
In one embodiment, the method further comprises: updating the normal vector of the anatomical position corresponding to each display window according to the rotation matrix.
In a second aspect, an embodiment of the present application provides a model image display apparatus, including:
the generation module is used for acquiring a rotation matrix corresponding to the target rotation angle; the target rotation angle is a rotation angle determined based on a rotation instruction input by a user by taking a target planning path as a rotation center axis of a model to be displayed;
the acquisition module is used for acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
the processing module is used for fusing the rotation matrix and the initial world transformation matrix to obtain a matrix to be displayed corresponding to the pose of the model to be displayed after the model to be displayed rotates according to the rotation instruction;
and the display module is used for displaying the image of the model to be displayed according to the matrix to be displayed.
In a third aspect, an embodiment of the present application provides a computer device, including a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a rotation matrix corresponding to the target rotation angle; the target rotation angle is determined based on a rotation instruction input by a user by taking a target planning path as a rotation center axis of a model to be displayed;
acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
fusing the rotation matrix and the initial world transformation matrix to obtain a matrix to be displayed corresponding to the pose of the model to be displayed after the model to be displayed rotates according to the rotation instruction;
and displaying the image of the model to be displayed according to the matrix to be displayed.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring a rotation matrix corresponding to the target rotation angle; the target rotation angle is determined based on a rotation instruction input by a user by taking a target planning path as a rotation center axis of a model to be displayed;
acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
fusing the rotation matrix and the initial world transformation matrix to obtain a matrix to be displayed corresponding to the pose of the model to be displayed after the model to be displayed rotates according to the rotation instruction;
and displaying the image of the model to be displayed according to the matrix to be displayed.
According to the model image display method, device, computer equipment and storage medium, the target rotation angle is determined from a rotation instruction input by the user, with the target planning path as the rotation center axis of the model to be displayed. The computer equipment acquires the rotation matrix corresponding to the target rotation angle, acquires the initial world transformation matrix representing the pose of the model to be displayed before rotation in the initial state, and fuses the two matrices to obtain the matrix to be displayed, which represents the pose of the model after it rotates by the target rotation angle required by the user. By fusing the rotation matrix and the initial world transformation matrix to obtain the matrix to be displayed, the method avoids the limited observation angle and cumbersome operation caused in the traditional technology by continually switching needle tracks when observing the tissue around the model from multiple angles.
Drawings
FIG. 1 is an internal block diagram of a computer device in one embodiment;
FIG. 2 is a flow chart of a method for displaying a model image according to an embodiment;
FIG. 3 is a flowchart of a method for displaying a model image according to another embodiment;
FIG. 3a is a schematic diagram of single-path multi-bit MPR rotation interaction provided by one embodiment;
FIG. 3b is a schematic diagram of multi-path multi-bit MPR rotation interactions according to one embodiment;
fig. 4 is a schematic structural diagram of a model image display device according to another embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The model display method provided by the embodiment of the application can be applied to the computer equipment shown in fig. 1. The computer equipment can be MPR equipment, and comprises a processor, a memory, a network interface, a database, a display screen and an input device which are connected through a system bus, and a scanning module for scanning an object to be displayed. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing the model to be displayed in the following embodiments, and the specific description of the model to be displayed is referred to in the following embodiments. The network interface of the computer device may be used to communicate with other devices external through a network connection. Optionally, the computer device may be a server, a desktop, a personal digital assistant, other terminal devices, such as a tablet computer, a mobile phone, etc., or a cloud or remote server, and the embodiment of the present application is not limited to a specific form of the computer device. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like. 
Of course, the input device and the display screen may not be part of the computer device, and may be an external device of the computer device.
It will be appreciated by those skilled in the art that the architecture shown in fig. 1 is merely a block diagram of part of the architecture relevant to the present application and does not limit the computer device to which the present application may be applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes the technical scheme of the present application and how the technical scheme of the present application solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
The execution subject of the method embodiments described below may be a model display device, which may be implemented as part or all of the computer apparatus by software, hardware, or a combination of software and hardware. The following method embodiments are described taking an execution subject as a computer device as an example.
Fig. 2 is a flow chart of a model image display method according to an embodiment. The embodiment relates to a specific process that a computer device rotates and displays a model to be displayed based on a rotation instruction input by a user, as shown in fig. 2, and the method includes:
s10, acquiring a rotation matrix corresponding to a target rotation angle; the target rotation angle is determined based on a rotation instruction input by a user by taking a target planning path as a rotation center axis of a model to be displayed.
Specifically, the computer device may obtain the rotation matrix corresponding to the target rotation angle in several ways: it may read a stored rotation matrix from memory, receive a rotation matrix sent by another device, or obtain a rotation instruction input by the user, calculate the target rotation angle from that instruction, and then calculate the rotation matrix from the target rotation angle. The rotation matrix represents the amount of rotation requested by the user's input. For example, if the rotation amount represented by the rotation instruction is large, the target rotation angle is large, and the rotation amount represented by the resulting rotation matrix is correspondingly large; if the rotation amount represented by the instruction is small, the target rotation angle, and hence the rotation represented by the matrix, is small.
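The patent does not specify how the rotation matrix is computed from the target rotation angle. A minimal sketch of one standard approach, Rodrigues' rotation formula for a rotation about an arbitrary axis (here the planning-path direction), could look like the following; all names are illustrative, not from the patent.

```python
import math

def rotation_matrix(axis, angle_deg):
    """3x3 rotation matrix for angle_deg degrees about `axis`
    (Rodrigues' rotation formula)."""
    x, y, z = axis
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n            # normalise the rotation axis
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    t = 1.0 - c
    return [[t * x * x + c,     t * x * y - s * z, t * x * z + s * y],
            [t * x * y + s * z, t * y * y + c,     t * y * z - s * x],
            [t * x * z - s * y, t * y * z + s * x, t * z * z + c]]

def apply(m, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

# A 90-degree rotation about the z axis maps the x axis onto the y axis.
R = rotation_matrix((0.0, 0.0, 1.0), 90.0)
v = apply(R, [1.0, 0.0, 0.0])
```

A larger drag would simply produce a larger `angle_deg` and hence a matrix representing a larger rotation, matching the behaviour described above.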
S20, acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation.
Specifically, the computer device may read from memory the initial world transformation matrix of the model to be displayed in its initial state, or may receive the initial world transformation matrix of the model to be displayed from another device. It should be noted that the initial world transformation matrix represents the pose, i.e. the position and orientation, of the model to be displayed in the initial state, before any rotation has occurred.
And S30, fusing the rotation matrix and the initial world transformation matrix to obtain a matrix to be displayed corresponding to the pose of the model to be displayed after the model to be displayed rotates according to the rotation instruction.
Specifically, the computer device fuses the rotation matrix and the initial world transformation matrix by performing a matrix operation, such as matrix multiplication, on the two matrices, so as to apply the rotation to the initial world transformation matrix and obtain the matrix to be displayed, which represents the pose of the model to be displayed after rotating by the amount corresponding to the rotation instruction.
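Step S30's fusion by multiplication can be sketched as follows. The 4x4 row-major matrices, the example translation, and the left-multiplication convention (rotation acting on the model's world-space pose) are assumptions for illustration, not details from the patent.

```python
def matmul4(a, b):
    """Multiply two 4x4 matrices stored as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Hypothetical initial world transformation: the model translated to (10, 0, 0).
initial_world = [[1.0, 0.0, 0.0, 10.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0],
                 [0.0, 0.0, 0.0, 1.0]]

# 90-degree rotation about the world z axis, written as a 4x4 matrix.
rotation = [[0.0, -1.0, 0.0, 0.0],
            [1.0,  0.0, 0.0, 0.0],
            [0.0,  0.0, 1.0, 0.0],
            [0.0,  0.0, 0.0, 1.0]]

# "Fusing" = left-multiplying the initial pose by the rotation, so the
# rotation is superimposed on the initial world transformation.
to_display = matmul4(rotation, initial_world)
```

With this convention the model's translated origin (10, 0, 0) is carried to (0, 10, 0), i.e. the whole pose is rotated about the world z axis.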
And S40, displaying the image of the model to be displayed according to the matrix to be displayed.
Specifically, since the matrix to be displayed can represent the pose of the model after the model to be displayed rotates according to the rotation instruction, the computer equipment displays the model to be displayed according to the pose represented by the matrix to be displayed, so that the image display of the pose of the model to be displayed after the model to be displayed rotates according to the rotation instruction can be realized.
In this embodiment, the target rotation angle is determined from a rotation instruction input by the user, with the target planning path as the rotation center axis of the model to be displayed. The computer device obtains the rotation matrix corresponding to the target rotation angle, obtains the initial world transformation matrix representing the pose of the model to be displayed before rotation in the initial state, and fuses the two matrices to obtain the matrix to be displayed, which represents the pose of the model after it rotates by the target rotation angle required by the user. By fusing the rotation matrix and the initial world transformation matrix to obtain the matrix to be displayed, the method avoids the limited observation angle and cumbersome operation caused in the traditional technology by continually switching needle tracks when observing surrounding tissue from multiple angles.
Optionally, on the basis of the foregoing embodiment, the initial world transformation matrix includes an initial coronal position matrix, an initial sagittal position matrix, and an initial transverse position matrix, and the matrix to be displayed includes a coronal position matrix to be displayed, a sagittal position matrix to be displayed, and a transverse position matrix to be displayed. One possible implementation of step S30 may include: multiplying the rotation matrix by the initial coronal position matrix to obtain the coronal position matrix to be displayed; multiplying the rotation matrix by the initial sagittal position matrix to obtain the sagittal position matrix to be displayed; and multiplying the rotation matrix by the initial transverse position matrix to obtain the transverse position matrix to be displayed. The coronal, sagittal and transverse matrices to be displayed describe the model from three different anatomical positions, so the rotated model is represented more comprehensively and is more convenient to observe and use. Optionally, the computer device may instead generate only the matrix to be displayed for the position selected by a selection instruction input by the user, which is not limited in the embodiments of the present application.
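As a sketch of this embodiment, one and the same rotation matrix is multiplied with each of the three initial position matrices. Identity matrices stand in here for the real initial coronal, sagittal and transverse poses, which depend on the scan and are not given in the patent.

```python
def matmul4(a, b):
    """Multiply two 4x4 row-major matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Placeholder initial position matrices (identity); real values would come
# from the scanner's coronal/sagittal/transverse view setup.
identity = [[float(i == j) for j in range(4)] for i in range(4)]
initial = {"coronal": identity, "sagittal": identity, "transverse": identity}

# 90-degree rotation about z, as produced in step S10.
rotation = [[0.0, -1.0, 0.0, 0.0],
            [1.0,  0.0, 0.0, 0.0],
            [0.0,  0.0, 1.0, 0.0],
            [0.0,  0.0, 0.0, 1.0]]

# The same rotation matrix is multiplied with each initial position matrix,
# giving one matrix to be displayed per anatomical view.
to_display = {view: matmul4(rotation, m) for view, m in initial.items()}
```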
Optionally, the coronal position matrix to be displayed, the sagittal position matrix to be displayed, and the transverse position matrix to be displayed respectively represent the pose of the model to be displayed in three different anatomical positions. In practical use, one possible implementation of step S40 may include displaying images of the model to be displayed in the corresponding display windows according to the coronal position matrix to be displayed, the sagittal position matrix to be displayed, and the transverse position matrix to be displayed. Specifically, each position can correspond to one display window, and the rotated pose of the model in each position can be displayed as an image in its own window, making it convenient to display the model in different poses and to observe the tissue distribution in each position, which facilitates use.
Alternatively, on the basis of the above embodiments, one possible implementation manner of the above step S10 may be as shown in fig. 3, including:
s11, generating the target planning path according to a path generation instruction input by a user.
Specifically, the computer device obtains a path generation instruction input by the user; the instruction comprises a plurality of coordinate points, and a center axis is generated from at least two of these coordinate points to serve as the target planning path. For example, the computer device may take the line connecting two coordinate points as the target planning path.
Alternatively, one possible implementation of this step may include: obtaining the cranial entry point coordinates and the target point coordinates input by the user, and determining the target planning path according to the line connecting the cranial entry point coordinates and the target point coordinates. A target planning path matching the user's input is thus obtained, and the model to be displayed can rotate around the central axis the user expects, making observation more comprehensive and use more convenient. Optionally, the computer device may further update the normal vector of the position corresponding to each display window according to the rotation matrix, so as to ensure that each display window displays its position facing front, which facilitates observation and subsequent calculation. Optionally, fig. 3a is a schematic diagram of multi-position MPR rotation interaction with a single path, and fig. 3b is a schematic diagram of multi-position MPR rotation interaction with multiple paths; the straight lines in the figures are different planning paths.
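Deriving the rotation axis from the user-entered cranial entry point and target point can be sketched as normalising the vector between the two points; the coordinate values below are illustrative, not from the patent.

```python
import math

def path_axis(entry_point, target_point):
    """Unit direction vector of the line from the cranial entry point to the
    target point; this line serves as the model's rotation center axis."""
    d = [t - e for e, t in zip(entry_point, target_point)]
    n = math.sqrt(sum(c * c for c in d))
    return [c / n for c in d]

# Illustrative coordinates: entry at the origin, target at (0, 3, 4).
axis = path_axis((0.0, 0.0, 0.0), (0.0, 3.0, 4.0))
```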
Optionally, the number of cranial entry point coordinates and the number of target point coordinates are both multiple, with the cranial entry point coordinates corresponding one-to-one to the target point coordinates. One possible implementation of determining the target planning path from the line between the cranial entry point coordinates and the target point coordinates may further include: taking the line between each cranial entry point coordinate and its corresponding target point coordinate as an initial planning path, yielding a plurality of initial planning paths; receiving a path clicking instruction input by the user; and determining the target planning path from the plurality of initial planning paths according to the path clicking instruction. Specifically, the computer device may obtain a plurality of cranial entry point coordinates and a plurality of target point coordinates input by the user, with each cranial entry point coordinate corresponding to one target point coordinate, and the line between each pair may be used as an initial planning path. The computer device then receives a path clicking instruction input by the user, for example a selection instruction clicking one of the initial planning paths with the mouse, and takes the clicked path as the target planning path. Optionally, the mouse cursor may then change to a rotation icon, indicating that the model can now be rotated for display.
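Picking the target planning path by mouse click could be implemented as choosing the path nearest the clicked screen point. The 2D point-to-line distance test and the example paths below are assumed interaction details, not from the patent.

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from screen point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(bx - ax, by - ay)
    return num / den

def pick_path(click, paths):
    """Return the index of the planning path closest to the clicked point."""
    return min(range(len(paths)),
               key=lambda i: point_line_distance(click, *paths[i]))

# Two hypothetical paths drawn on screen; the click lands near the second.
paths = [((0.0, 0.0), (10.0, 0.0)),   # path along the line y = 0
         ((0.0, 5.0), (10.0, 5.0))]   # path along the line y = 5
chosen = pick_path((4.0, 4.8), paths)
```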
In this implementation, the line between each cranial entry point coordinate and its corresponding target point coordinate is used as an initial planning path, yielding a plurality of initial planning paths; a path clicking instruction input by the user is then received, and the target planning path is determined from the plurality of initial planning paths. The model can thus rotate about the central axis the user expects, making observation more comprehensive and use more convenient.
S12, converting the rotation instruction input by the user into the target rotation angle.
Specifically, the computer device may receive a rotation instruction input by the user through an input device, for example a mouse-drag operation on the screen, and convert the rotation instruction into a corresponding target rotation angle; for example, the distance the mouse is dragged across the screen may be mapped to the angle by which the model is to be rotated.
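Converting the mouse-drag distance to a target rotation angle might be as simple as a linear mapping; the sensitivity of 0.5 degrees per pixel below is an assumed value, not specified by the patent.

```python
def drag_to_angle(drag_pixels, degrees_per_pixel=0.5):
    """Map a mouse-drag distance (in pixels) to a target rotation angle.
    The 0.5 deg/pixel sensitivity is an assumed, tunable value."""
    return drag_pixels * degrees_per_pixel

# Dragging 180 pixels requests a 90-degree rotation of the model.
angle = drag_to_angle(180)
```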
S13, generating the rotation matrix based on the target planning path according to the target rotation angle.
Specifically, according to the target rotation angle, the computer equipment generates a rotation matrix representing the change of pose of the model, with the target planning path as the central axis. Optionally, in practical use, the user can click the model with the left mouse button and drag it a certain distance to display the model to be displayed at the changed angle. Optionally, during the rotation, multiple windows may simultaneously display the images of the various positions at that angle, which facilitates multi-angle viewing.
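Because the planning path is a line through a point rather than through the origin, generating the rotation matrix about it can be sketched as translate-rotate-translate: move the axis onto the origin, apply an axis-angle rotation, and move it back. Names and conventions are illustrative, not from the patent.

```python
import math

def matmul4(a, b):
    """Multiply two 4x4 row-major matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def rotation_about_path(point, axis, angle_deg):
    """Rotation of angle_deg about the line through `point` with direction
    `axis`: translate the axis to the origin, rotate, translate back."""
    x, y, z = axis
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n            # unit rotation axis
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    t = 1.0 - c
    r = [[t * x * x + c,     t * x * y - s * z, t * x * z + s * y, 0.0],
         [t * x * y + s * z, t * y * y + c,     t * y * z - s * x, 0.0],
         [t * x * z - s * y, t * y * z + s * x, t * z * z + c,     0.0],
         [0.0,               0.0,               0.0,               1.0]]
    px, py, pz = point
    return matmul4(translation(px, py, pz),
                   matmul4(r, translation(-px, -py, -pz)))

# Rotating the model origin 180 degrees about the vertical line through
# (1, 0, 0) carries it to (2, 0, 0).
M = rotation_about_path((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 180.0)
p = [0.0, 0.0, 0.0, 1.0]                     # origin in homogeneous coords
origin = [sum(M[i][k] * p[k] for k in range(4)) for i in range(4)]
```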
In this embodiment, the computer device generates the target planning path from a path generation instruction input by the user, converts a rotation instruction input by the user into a target rotation angle, and finally generates, from the target rotation angle and about the target planning path, a rotation matrix representing the change of pose to be applied to the model. The rotation matrix matching the user's intent is thus determined automatically from the user-defined planning path and rotation instruction, so the model can be rotated and displayed about any axis and by any angle defined by the user; the display angles of the model are more varied and operation is more convenient.
It should be understood that, although the steps in the flowcharts of figs. 2-3 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in figs. 2-3 may include multiple sub-steps or stages which are not necessarily performed at the same time but may be performed at different times; nor need these sub-steps or stages be performed sequentially, as they may be performed in turn or alternately with at least some sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, there is provided a model image display apparatus including:
the generating module 100 is configured to obtain a rotation matrix corresponding to the target rotation angle; the target rotation angle is a rotation angle determined based on a rotation instruction input by a user by taking a target planning path as a rotation center axis of a model to be displayed;
the acquisition module 200 is configured to acquire an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
the processing module 300 is configured to fuse the rotation matrix and the initial world transformation matrix to obtain a matrix to be displayed corresponding to a pose of the model to be displayed after the model to be displayed rotates according to the rotation instruction;
and the display module 400 is used for displaying the image of the model to be displayed according to the matrix to be displayed.
In one embodiment, the initial world transformation matrix comprises an initial coronal position matrix, an initial sagittal position matrix, and an initial transverse position matrix, and the matrix to be displayed comprises a coronal position matrix to be displayed, a sagittal position matrix to be displayed, and a transverse position matrix to be displayed; the processing module 300 is specifically configured to multiply the rotation matrix by the initial coronal position matrix to obtain the coronal position matrix to be displayed; multiply the rotation matrix by the initial sagittal position matrix to obtain the sagittal position matrix to be displayed; and multiply the rotation matrix by the initial transverse position matrix to obtain the transverse position matrix to be displayed.
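A minimal sketch of this fusion step, assuming 4×4 homogeneous world matrices and left-multiplication by the rotation (the multiplication order and data layout are assumptions, not prescribed by the text):

```python
import numpy as np

def fuse(rotation, initial_world):
    """Left-multiply the rotation into each per-view initial world matrix."""
    return {view: rotation @ m for view, m in initial_world.items()}

initial = {"coronal": np.eye(4), "sagittal": np.eye(4), "transverse": np.eye(4)}
to_display = fuse(np.eye(4), initial)  # identity rotation leaves every pose unchanged
```

Because the same rotation is applied to all three views, the coronal, sagittal, and transverse images stay mutually consistent after every rotation instruction.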
In one embodiment, the display module 400 is specifically configured to display images of the model to be displayed in the corresponding display windows according to the coronal position matrix to be displayed, the sagittal position matrix to be displayed, and the transverse position matrix to be displayed, respectively.
In one embodiment, the generating module 100 is specifically configured to generate the target planning path according to a path generation instruction input by a user; convert a rotation instruction input by the user into the target rotation angle; and generate the rotation matrix based on the target planning path according to the target rotation angle.
In one embodiment, the generating module 100 is specifically configured to obtain cranial entry point coordinates and target point coordinates input by the user, and determine the target planning path according to the line connecting the cranial entry point coordinates and the target point coordinates.
In one embodiment, there are a plurality of cranial entry point coordinates and a plurality of target point coordinates, in one-to-one correspondence; the generating module 100 is specifically configured to take the line connecting each cranial entry point coordinate and its corresponding target point coordinate as an initial planning path, thereby obtaining a plurality of initial planning paths; receive a path click instruction input by the user; and determine the target planning path from the plurality of initial planning paths according to the path click instruction.
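The pairing-and-selection logic above can be sketched as follows (a hypothetical illustration; the point format and selection by index stand in for the user's click on a displayed path):

```python
import numpy as np

def initial_planning_paths(entry_points, target_points):
    """Pair each cranial entry point with its target point as a path segment."""
    return [(np.asarray(e, float), np.asarray(t, float))
            for e, t in zip(entry_points, target_points)]

def path_axis(path):
    """Unit direction of a path, usable as the central rotation axis."""
    entry, target = path
    direction = target - entry
    return direction / np.linalg.norm(direction)

paths = initial_planning_paths([[0, 0, 0], [1, 0, 0]], [[0, 0, 2], [1, 0, 3]])
target_path = paths[0]          # index selection stands in for the path click
axis = path_axis(target_path)   # unit vector along +z
```

Normalizing the entry-to-target direction yields the unit axis about which the rotation matrix of the earlier step is constructed.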
In one embodiment, the processing module 300 is further configured to update, according to the rotation matrix, the normal vector of the view orientation corresponding to each display window.
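Updating each window's normal vector with the same rotation can be sketched as below (assuming a 3×3 rotation and per-window unit normals; both are illustrative assumptions):

```python
import numpy as np

def update_normals(rotation, normals):
    """Rotate the view normal of every display window by the rotation matrix."""
    return {window: rotation @ n for window, n in normals.items()}

R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])  # 90 degrees about z
updated = update_normals(R, {"coronal":  np.array([1.0, 0.0, 0.0]),
                             "sagittal": np.array([0.0, 1.0, 0.0])})
```

Keeping the normals in step with the world matrices ensures each window's slice plane stays perpendicular to its view direction after rotation.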
For specific limitations of the model image display apparatus, reference may be made to the limitations of the model image display method above, which are not repeated here. The respective modules in the above apparatus may be implemented in whole or in part by software, hardware, or combinations thereof. The above modules may be embedded, in hardware form, in a processor of the computer device or be independent of it, or may be stored, in software form, in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:
acquiring a rotation matrix corresponding to the target rotation angle; the target rotation angle is determined based on a rotation instruction input by a user by taking a target planning path as a rotation center axis of a model to be displayed;
acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
fusing the rotation matrix and the initial world transformation matrix to obtain a matrix to be displayed corresponding to the pose of the model to be displayed after it rotates according to the rotation instruction;
and displaying the image of the model to be displayed according to the matrix to be displayed.
In one embodiment, the initial world transformation matrix comprises an initial coronal position matrix, an initial sagittal position matrix, and an initial transverse position matrix, and the matrix to be displayed comprises a coronal position matrix to be displayed, a sagittal position matrix to be displayed, and a transverse position matrix to be displayed; the processor, when executing the computer program, further implements the steps of:
multiplying the rotation matrix by the initial coronal position matrix to obtain the coronal position matrix to be displayed;
multiplying the rotation matrix by the initial sagittal position matrix to obtain the sagittal position matrix to be displayed;
multiplying the rotation matrix by the initial transverse position matrix to obtain the transverse position matrix to be displayed.
In one embodiment, the processor when executing the computer program further performs the steps of:
displaying images of the model to be displayed in the corresponding display windows according to the coronal position matrix to be displayed, the sagittal position matrix to be displayed, and the transverse position matrix to be displayed, respectively.
In one embodiment, the processor when executing the computer program further performs the steps of:
generating the target planning path according to a path generation instruction input by a user;
converting a rotation instruction input by a user into the target rotation angle;
and generating the rotation matrix based on the target planning path according to the target rotation angle.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring cranial entry point coordinates and target point coordinates input by a user;
determining the target planning path according to the line connecting the cranial entry point coordinates and the target point coordinates.
In one embodiment, there are a plurality of cranial entry point coordinates and a plurality of target point coordinates, in one-to-one correspondence; the processor, when executing the computer program, further implements the steps of:
taking the line connecting each cranial entry point coordinate and its corresponding target point coordinate as an initial planning path, thereby obtaining a plurality of initial planning paths;
receiving a path click instruction input by a user;
determining the target planning path from the plurality of initial planning paths according to the path click instruction.
In one embodiment, the processor, when executing the computer program, further implements the step of: updating, according to the rotation matrix, the normal vector of the view orientation corresponding to each display window.
It should be clear that the process of executing the computer program by the processor in the embodiment of the present application is consistent with the execution of the steps in the above method, and specific reference may be made to the foregoing description.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a rotation matrix corresponding to the target rotation angle; the target rotation angle is determined based on a rotation instruction input by a user by taking a target planning path as a rotation center axis of a model to be displayed;
acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation;
fusing the rotation matrix and the initial world transformation matrix to obtain a matrix to be displayed corresponding to the pose of the model to be displayed after it rotates according to the rotation instruction;
and displaying the image of the model to be displayed according to the matrix to be displayed.
In one embodiment, the initial world transformation matrix comprises an initial coronal position matrix, an initial sagittal position matrix, and an initial transverse position matrix, and the matrix to be displayed comprises a coronal position matrix to be displayed, a sagittal position matrix to be displayed, and a transverse position matrix to be displayed; the computer program, when executed by the processor, further implements the steps of:
multiplying the rotation matrix by the initial coronal position matrix to obtain the coronal position matrix to be displayed;
multiplying the rotation matrix by the initial sagittal position matrix to obtain the sagittal position matrix to be displayed;
multiplying the rotation matrix by the initial transverse position matrix to obtain the transverse position matrix to be displayed.
In one embodiment, the computer program when executed by the processor further performs the steps of:
displaying images of the model to be displayed in the corresponding display windows according to the coronal position matrix to be displayed, the sagittal position matrix to be displayed, and the transverse position matrix to be displayed, respectively.
In one embodiment, the computer program when executed by the processor further performs the steps of:
generating the target planning path according to a path generation instruction input by a user;
converting a rotation instruction input by a user into the target rotation angle;
and generating the rotation matrix based on the target planning path according to the target rotation angle.
In one embodiment, the computer program, when executed by the processor, further implements the steps of: acquiring cranial entry point coordinates and target point coordinates input by a user; and determining the target planning path according to the line connecting the cranial entry point coordinates and the target point coordinates.
In one embodiment, there are a plurality of cranial entry point coordinates and a plurality of target point coordinates, in one-to-one correspondence; the computer program, when executed by the processor, further implements the steps of:
taking the line connecting each cranial entry point coordinate and its corresponding target point coordinate as an initial planning path, thereby obtaining a plurality of initial planning paths;
receiving a path click instruction input by a user;
determining the target planning path from the plurality of initial planning paths according to the path click instruction.
In one embodiment, the computer program, when executed by the processor, further implements the step of: updating, according to the rotation matrix, the normal vector of the view orientation corresponding to each display window.
It should be clear that the process of executing the computer program by the processor in the embodiment of the present application is consistent with the execution of the steps in the above method, and specific reference may be made to the foregoing description.
Those skilled in the art will appreciate that all or part of the above methods may be implemented by a computer program stored on a non-volatile computer-readable storage medium; when executed, the program may include the flows of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory may include Random Access Memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, any combination of these technical features should be considered within the scope of this specification as long as it contains no contradiction.
The above examples illustrate only several embodiments of the present application, which are described in detail but are not to be construed as limiting the scope of the application. It should be noted that several variations and modifications may be made by those skilled in the art without departing from the concept of the application, all of which fall within the scope of protection of the application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (10)

1. A model image display method, characterized in that the method comprises:
acquiring a rotation matrix corresponding to the target rotation angle; the target rotation angle is determined based on a rotation instruction input by a user by taking a target planning path as a rotation center axis of a model to be displayed;
acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation, and comprises an initial coronal position matrix, an initial sagittal position matrix, and an initial transverse position matrix;
fusing the rotation matrix and the initial world transformation matrix to obtain a matrix to be displayed corresponding to the pose of the model to be displayed after it rotates according to the rotation instruction; the matrix to be displayed comprises a coronal position matrix to be displayed, a sagittal position matrix to be displayed, and a transverse position matrix to be displayed;
displaying the image of the model to be displayed according to the matrix to be displayed;
the method for obtaining the to-be-displayed matrix corresponding to the pose of the to-be-displayed model after rotating according to the rotating instruction comprises the following steps:
multiplying the rotation matrix and the initial coronal bit matrix to obtain a coronal bit matrix to be displayed;
multiplying the rotation matrix and the initial sagittal matrix to obtain a sagittal matrix to be displayed;
multiplying the rotation matrix and the initial horizontal position matrix to obtain a horizontal position matrix to be displayed.
2. The method of claim 1, wherein displaying the image of the model to be displayed in accordance with the matrix to be displayed comprises:
displaying images of the model to be displayed in the corresponding display windows according to the coronal position matrix to be displayed, the sagittal position matrix to be displayed, and the transverse position matrix to be displayed, respectively.
3. The method according to claim 1 or 2, wherein the obtaining a rotation matrix corresponding to the target rotation angle includes:
generating the target planning path according to a path generation instruction input by a user;
converting a rotation instruction input by a user into the target rotation angle;
and generating the rotation matrix based on the target planning path according to the target rotation angle.
4. A method according to claim 3, wherein said generating said target planned path according to a path generation instruction entered by a user comprises:
acquiring cranial entry point coordinates and target point coordinates input by a user;
determining the target planning path according to the line connecting the cranial entry point coordinates and the target point coordinates.
5. The method of claim 4, wherein there are a plurality of cranial entry point coordinates and a plurality of target point coordinates, in one-to-one correspondence;
and wherein determining the target planning path according to the line connecting the cranial entry point coordinates and the target point coordinates comprises:
taking the line connecting each cranial entry point coordinate and its corresponding target point coordinate as an initial planning path, thereby obtaining a plurality of initial planning paths;
receiving a path click instruction input by a user;
determining the target planning path from the plurality of initial planning paths according to the path click instruction.
6. The method according to claim 2, further comprising: updating, according to the rotation matrix, the normal vector of the view orientation corresponding to each display window.
7. A model image display apparatus, characterized in that the apparatus comprises:
the generation module is used for acquiring a rotation matrix corresponding to the target rotation angle; the target rotation angle is determined based on a rotation instruction input by a user by taking a target planning path as a rotation center axis of a model to be displayed;
the acquisition module is used for acquiring an initial world transformation matrix of the model to be displayed in an initial state; the initial world transformation matrix is used for representing the pose of the model to be displayed before rotation, and comprises an initial coronal position matrix, an initial sagittal position matrix, and an initial transverse position matrix;
the processing module is used for fusing the rotation matrix and the initial world transformation matrix to obtain a matrix to be displayed corresponding to the pose of the model to be displayed after it rotates according to the rotation instruction; the matrix to be displayed comprises a coronal position matrix to be displayed, a sagittal position matrix to be displayed, and a transverse position matrix to be displayed;
the display module is used for displaying the image of the model to be displayed according to the matrix to be displayed;
wherein the processing module is specifically configured to multiply the rotation matrix by the initial coronal position matrix to obtain the coronal position matrix to be displayed; multiply the rotation matrix by the initial sagittal position matrix to obtain the sagittal position matrix to be displayed; and multiply the rotation matrix by the initial transverse position matrix to obtain the transverse position matrix to be displayed.
8. The device according to claim 7, wherein the display module is specifically configured to display images of the model to be displayed in the corresponding display windows according to the coronal position matrix to be displayed, the sagittal position matrix to be displayed, and the transverse position matrix to be displayed, respectively.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN201911316469.5A 2019-12-19 2019-12-19 Model image display method, device, computer equipment and storage medium Active CN111161429B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911316469.5A CN111161429B (en) 2019-12-19 2019-12-19 Model image display method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911316469.5A CN111161429B (en) 2019-12-19 2019-12-19 Model image display method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111161429A CN111161429A (en) 2020-05-15
CN111161429B true CN111161429B (en) 2023-09-26

Family

ID=70557384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911316469.5A Active CN111161429B (en) 2019-12-19 2019-12-19 Model image display method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111161429B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113741835A (en) * 2020-05-29 2021-12-03 北京嗨动视觉科技有限公司 Method and device for realizing a marquee display effect

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093502A (en) * 2012-12-21 2013-05-08 广州中大电讯科技有限公司 Three-dimensional model information obtaining method based on rotary three views
CN103810746A (en) * 2014-01-03 2014-05-21 Tcl集团股份有限公司 Method and device of rendering 3D (three dimensional) model in any orientation
CN104504761A (en) * 2014-12-15 2015-04-08 天脉聚源(北京)科技有限公司 Method and device for controlling rotation of 3D (three-dimensional) model
CN107329671A (en) * 2017-07-05 2017-11-07 北京京东尚科信息技术有限公司 Model display methods and device
CN108921000A (en) * 2018-04-16 2018-11-30 深圳市深网视界科技有限公司 Head angle mark, prediction model training, prediction technique, equipment and medium
CN110087555A (en) * 2017-05-12 2019-08-02 深圳迈瑞生物医疗电子股份有限公司 Display transformation method, the system of a kind of ultrasonic device and its three-dimensional ultrasound pattern
CN110189407A (en) * 2019-05-31 2019-08-30 东北大学 A kind of human body three-dimensional reconstruction model system based on HOLOLENS

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910210B (en) * 2017-03-03 2018-09-11 百度在线网络技术(北京)有限公司 Method and apparatus for generating image information

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093502A (en) * 2012-12-21 2013-05-08 广州中大电讯科技有限公司 Three-dimensional model information obtaining method based on rotary three views
CN103810746A (en) * 2014-01-03 2014-05-21 Tcl集团股份有限公司 Method and device of rendering 3D (three dimensional) model in any orientation
CN104504761A (en) * 2014-12-15 2015-04-08 天脉聚源(北京)科技有限公司 Method and device for controlling rotation of 3D (three-dimensional) model
CN110087555A (en) * 2017-05-12 2019-08-02 深圳迈瑞生物医疗电子股份有限公司 Display transformation method, the system of a kind of ultrasonic device and its three-dimensional ultrasound pattern
CN107329671A (en) * 2017-07-05 2017-11-07 北京京东尚科信息技术有限公司 Model display methods and device
CN108921000A (en) * 2018-04-16 2018-11-30 深圳市深网视界科技有限公司 Head angle mark, prediction model training, prediction technique, equipment and medium
CN110189407A (en) * 2019-05-31 2019-08-30 东北大学 A kind of human body three-dimensional reconstruction model system based on HOLOLENS

Also Published As

Publication number Publication date
CN111161429A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN110058685B (en) Virtual object display method and device, electronic equipment and computer-readable storage medium
US20160353055A1 (en) Simulated transparent display with augmented reality for remote collaboration
CN111031293B (en) Panoramic monitoring display method, device and system and computer readable storage medium
CN109901123A (en) Transducer calibration method, device, computer equipment and storage medium
WO2015139653A1 (en) Image zooming method and apparatus
CN111161429B (en) Model image display method, device, computer equipment and storage medium
CN111273772A (en) Augmented reality interaction method and device based on slam mapping method
JP4858625B2 (en) Information display device and program
CN116524088B (en) Jewelry virtual try-on method, jewelry virtual try-on device, computer equipment and storage medium
CN109743635B (en) Comment reply method, device, equipment and storage medium
CN115097976B (en) Method, apparatus, device and storage medium for image processing
CN111325348A (en) Quantum line display method and device, storage medium and electronic device
CN114329675A (en) Model generation method, model generation device, electronic device, and readable storage medium
CN109814784B (en) Picture rotation method and device, computer equipment and storage medium
JP2019200558A (en) Image processing device and inspection system
CN114344894A (en) Scene element processing method, device, equipment and medium
CN108184150A (en) Vector control method, device and the signal processing system of long-range mark signal
CN112330707A (en) Image processing method, image processing device, computer equipment and storage medium
CN111580728A (en) Method and device for dynamically plotting multiple military standard types based on state mode
CN111325659A (en) Three-dimensional portrait acquisition method, video analysis method, device, equipment and medium
CN111461991B (en) Image drawing method, image drawing device, computer equipment and storage medium
US11429247B1 (en) Interactions with slices of medical data in augmented reality
CN112632301B (en) Pathological image display method, pathological image display device, electronic equipment and storage medium
CN117876626B (en) Three-dimensional visualization method, system, equipment and storage medium for electric power data
CN115601512B (en) Interactive three-dimensional reconstruction method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant