CN116958392A - Graphic image processing method, graphic image processing device and medical graphic image application system - Google Patents

Graphic image processing method, graphic image processing device and medical graphic image application system

Info

Publication number
CN116958392A
Authority
CN
China
Prior art keywords
rendering
target
image
image processing
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310936080.0A
Other languages
Chinese (zh)
Inventor
王莹珑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Original Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan United Imaging Zhirong Medical Technology Co Ltd filed Critical Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority to CN202310936080.0A
Publication of CN116958392A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/10: Geometric effects
    • G06T 15/20: Perspective computation
    • G06T 15/205: Image-based rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/40: ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Abstract

The present application relates to a graphic image processing method, an apparatus, a computer device, a storage medium, a computer program product and a medical graphic image application system. The method comprises the following steps: receiving an image processing command for a target video signal, the target video signal being a video signal output by the inspection device for a target part of a target object; executing the image processing operation matched with the image processing command to generate target rendering change parameters for a target part model corresponding to the target part; rendering the target part model according to the target rendering change parameters to obtain a graphic rendering image; and displaying a target image obtained by superimposing the graphic rendering image on the video frame image in the target video signal. With this method, the intelligence of graphic applications can be improved.

Description

Graphic image processing method, graphic image processing device and medical graphic image application system
Technical Field
The present application relates to the field of graphics image processing technology, and in particular, to a graphics image processing method, apparatus, computer device, storage medium, computer program product, and medical graphics image application system.
Background
With the development of computer visualization, higher demands are being placed on three-dimensional rendering. Three-dimensional scene rendering is the process by which a computer acquires basic information about the objects to be rendered from a three-dimensional scene and, through complex computation, outputs a highly realistic image.
In graphic rendering applications for medical surgical robot equipment terminals, software development on embedded devices in the related art mainly handles device parameters, human-factor parameters and similar services, and basically adopts a single, conventionally designed architecture; it cannot meet the graphic application scene requirements of different terminals within the same surgical equipment system.
Therefore, graphic applications in the related art suffer from poor intelligence.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a graphics image processing method, apparatus, computer device, computer-readable storage medium, computer program product, and medical graphics image application system that can improve the intelligence of graphics applications.
In a first aspect, the present application provides a graphic image processing method. The method comprises the following steps:
Receiving an image processing command for a target video signal; the target video signal is a video signal output by the inspection device for a target part of a target object;
executing the image processing operation matched with the image processing command, and generating a target rendering change parameter of a target part model corresponding to the target part;
rendering the target part model according to the target rendering change parameters to obtain a graphic rendering image;
and displaying a target image obtained by superposing the graphic rendering image and the video frame image in the target video signal.
In one embodiment, the performing the image processing operation matched with the image processing command generates a target rendering change parameter for a target portion model corresponding to the target portion, including:
analyzing the image processing command to obtain an analyzed image processing command;
and executing corresponding image processing operation according to the parsed image processing command, and generating the target rendering change parameter.
In one embodiment, the executing the corresponding image processing operation according to the parsed image processing command, generating the target rendering change parameter includes:
According to the analyzed image processing command, corresponding image processing operation is executed in a world coordinate system, and rendering change parameters aiming at the target part model under the world coordinate system are obtained;
according to the mapping relation between the world coordinates and an engine coordinate system where a rendering engine is located, converting rendering change parameters under the world coordinate system into rendering change parameters under the engine coordinate system;
and taking the rendering change parameter under the engine coordinate system as the target rendering change parameter.
In one embodiment, the rendering the target portion model according to the target rendering change parameter to obtain a graphics rendering image includes:
rendering the target part model according to the target rendering change parameters to obtain a rendered three-dimensional target part model;
and compressing the rendered three-dimensional target part model into a two-dimensional image with a preset image format to obtain the graphic rendering image.
In one embodiment, the rendering the target portion model according to the target rendering change parameter, to obtain a rendered three-dimensional target portion model, includes:
Generating a rendering task according to the target rendering change parameter; the rendering task is used for transmitting the target rendering change parameters to a rendering engine; the rendering engine is used for rendering the target part model according to the target rendering change parameters.
In one embodiment, the passing the target rendering change parameter to a rendering engine includes:
passing, by the rendering task, the target rendering change parameter to a rendering engine state machine based on a rendering processing pipeline;
and setting parameters of the rendering engine according to the target rendering change parameters through the rendering engine state machine so as to transmit the target rendering change parameters to the rendering engine.
In one embodiment, the rendering the target portion model according to the target rendering change parameter, to obtain a rendered three-dimensional target portion model, includes:
executing a voxel loading state synchronization operation through the rendering engine state machine; the voxel loading state synchronization operation is used for loading volume data into engine data;
executing plane information state synchronization operation through the rendering engine state machine; the plane information state synchronization operation is used for setting rendering state attributes through each rendering role;
Executing multi-role rendering operation through the rendering engine state machine; and the multi-role rendering operation is used for performing scene rendering on each rendering role according to the set rendering state attribute to obtain the rendered three-dimensional target part model.
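The three state-machine synchronization operations above can be sketched as a minimal state machine. This is an illustrative sketch only: the class name, the engine interface (a plain dictionary), the phase names and the per-role attributes are all assumptions made for illustration, not the implementation described here.

```python
class RenderingEngineStateMachine:
    """Illustrative sketch of the three synchronization phases:
    voxel loading -> plane information -> multi-role rendering."""

    def __init__(self, engine):
        self.engine = engine      # stand-in for the rendering engine's state
        self.phase = "idle"

    def sync_voxel_loading(self, volume_data):
        # Voxel loading state synchronization: load volume data into engine data.
        self.engine["volume"] = volume_data
        self.phase = "voxel_loaded"

    def sync_plane_info(self, roles):
        # Plane information state synchronization: set rendering state
        # attributes (e.g. transparency, color) for each rendering role.
        self.engine["roles"] = {name: dict(attrs) for name, attrs in roles.items()}
        self.phase = "plane_synced"

    def render_roles(self):
        # Multi-role rendering: render each role with its configured
        # attributes; here the draw order is recorded as a stand-in for
        # actual scene rendering.
        assert self.phase == "plane_synced"
        self.phase = "rendered"
        return [f"draw:{name}" for name in self.engine["roles"]]
```

The sequencing assertion in `render_roles` mirrors the ordering implied by the text: roles can only be rendered after their state attributes have been synchronized.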
In one embodiment, the method further comprises:
receiving, by the rendering processing pipeline, rendering processing tasks generated by the image processing command;
and adding, by the rendering processing pipeline, each rendering processing task to the corresponding rendering processing sub-pipeline in the rendering processing pipeline for execution.
In one embodiment, the rendering processing sub-pipeline includes at least one of an instant rendering pipeline, a runtime pipeline, and a compression pipeline; the rendering processing task comprises at least one of a rendering task, a non-image information task, an initialization task, a compression task, a complex processing task and an image sending task;
the instant rendering pipeline is used for executing at least one of the complex processing task, the non-image information task and the image sending task;
the runtime pipeline is used for executing at least one of the initialization task and the rendering task;
the compression pipeline is used to perform the compression task.
In one embodiment, the method further comprises:
transmitting image processing guide parameters corresponding to the graphic rendering image to a front end through the non-image information task in the instant rendering pipeline; the front end is used for synchronizing the graphic rendering image and the corresponding image processing guide parameter so as to synchronously display the graphic rendering image and the corresponding image processing guide parameter.
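The division of rendering processing tasks among the instant rendering, runtime and compression sub-pipelines described in these embodiments can be illustrated with a minimal task dispatcher. The routing table follows the text; the queue-and-worker-thread design and all class, task and pipeline names are assumptions made for illustration.

```python
from queue import Queue
from threading import Thread

class RenderingPipeline:
    """Sketch: route rendering processing tasks to sub-pipelines."""

    # Routing described in the text: which sub-pipeline runs which task kind.
    ROUTES = {
        "complex":    "instant",      # complex processing task
        "non_image":  "instant",      # non-image information task (guide parameters)
        "send_image": "instant",      # image sending task
        "init":       "runtime",      # initialization task
        "render":     "runtime",      # rendering task
        "compress":   "compression",  # compression task
    }

    def __init__(self):
        self.sub_pipelines = {name: Queue()
                              for name in ("instant", "runtime", "compression")}
        for q in self.sub_pipelines.values():
            Thread(target=self._drain, args=(q,), daemon=True).start()

    def submit(self, kind, task):
        # Add the task to the corresponding sub-pipeline for execution.
        sub = self.ROUTES[kind]
        self.sub_pipelines[sub].put(task)
        return sub  # which sub-pipeline received the task

    def _drain(self, q):
        while True:
            task = q.get()
            task()            # execute the task callable
            q.task_done()
```

One worker thread per sub-pipeline keeps tasks of the same kind ordered while letting the three sub-pipelines run concurrently, which matches the motivation for splitting instant, runtime and compression work.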
In a second aspect, the present application also provides a graphics image processing apparatus. The device comprises:
a receiving module for receiving an image processing command for a target video signal; the target video signal is a video signal output by the inspection device for a target part of a target object;
the generation module is used for executing the image processing operation matched with the image processing command and generating a target rendering change parameter of a target part model corresponding to the target part;
the rendering module is used for rendering the target part model according to the target rendering change parameters to obtain a graphic rendering image;
and the display module is used for displaying the target image obtained by overlapping the graphic rendering image and the video frame image in the target video signal.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
receiving an image processing command for a target video signal; the target video signal is a video signal output by the inspection device for a target part of a target object;
executing the image processing operation matched with the image processing command, and generating a target rendering change parameter of a target part model corresponding to the target part;
rendering the target part model according to the target rendering change parameters to obtain a graphic rendering image;
and displaying a target image obtained by superposing the graphic rendering image and the video frame image in the target video signal.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
receiving an image processing command for a target video signal; the target video signal is a video signal output by the inspection device for a target part of a target object;
Executing the image processing operation matched with the image processing command, and generating a target rendering change parameter of a target part model corresponding to the target part;
rendering the target part model according to the target rendering change parameters to obtain a graphic rendering image;
and displaying a target image obtained by superposing the graphic rendering image and the video frame image in the target video signal.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
receiving an image processing command for a target video signal; the target video signal is a video signal output by the inspection device for a target part of a target object;
executing the image processing operation matched with the image processing command, and generating a target rendering change parameter of a target part model corresponding to the target part;
rendering the target part model according to the target rendering change parameters to obtain a graphic rendering image;
and displaying a target image obtained by superposing the graphic rendering image and the video frame image in the target video signal.
In a sixth aspect, the present application further provides a medical graphic image application system. The medical graphic image application system includes: a front end and a rear end;
the front end is used for responding to a graphic rendering operation on the target video signal and sending an image processing command to the back end; the target video signal is a video signal output by the inspection device for a target part of a target object;
the back end is used for responding to the image processing command, executing the image processing operation matched with the image processing command and generating a target rendering change parameter for a target part model corresponding to the target part;
the back end is further used for rendering the target part model according to the target rendering change parameters to obtain a graphic rendering image, and sending the graphic rendering image to the front end;
the front end is further configured to display a target image obtained by overlapping the graphics rendering image and a video frame image in the target video signal.
With the above-described graphic image processing method, apparatus, computer device, storage medium, computer program product and medical graphic image application system, an image processing command for a target video signal is received, the target video signal being a video signal output by the inspection device for a target part of a target object; the image processing operation matched with the image processing command is executed to generate target rendering change parameters for the target part model corresponding to the target part; the target part model is rendered according to the target rendering change parameters to obtain a graphic rendering image; and a target image obtained by superimposing the graphic rendering image on the video frame image in the target video signal is displayed.
Therefore, based on this method of displaying the graphic rendering image corresponding to the target part superimposed on the corresponding video frame image, a graphic application software framework can be built on surgical equipment terminals equipped with high-end edge computing technology. A suitable software framework is constructed on the various operating systems of such terminals, so that graphic rendering can be supported on the surgical equipment terminal itself, and the software service on the terminal can act as a server, providing image processing and graphic rendering capabilities to other devices or terminals in the same system. For example, the system may receive an image processing command, sent by a front-end device in the same system, for a target video signal containing video frame images; execute the image processing operation matched with the received command; render the target part model according to the target rendering change parameters to obtain a graphic rendering image; and display, through the front-end device, the target image obtained by superimposing the graphic rendering image on the video frame image in the target video signal. This solves the related-art problem that graphic application software on high-end, edge-capable devices is designed as a single stand-alone architecture, meets the requirements of both basic and advanced applications of medical image processing and graphic rendering, realizes graphic image applications on embedded terminal devices, and effectively improves the intelligence of graphic applications.
Drawings
FIG. 1 is a flow chart of a graphical image processing method in one embodiment;
FIG. 2 is a schematic diagram of the components of a rendering engine state machine in one embodiment;
FIG. 3 is a schematic diagram of the composition of a rendering processing pipeline in one embodiment;
FIG. 4 is a schematic illustration of an application of a medical graphical image application system in a laparoscopic scene in one embodiment;
FIG. 5 is a schematic diagram of a software solution based on a medical graphical image application system in one embodiment;
FIG. 6 is a schematic diagram of video frame data output to a front end in one embodiment;
FIG. 7 is a schematic illustration of a registration process for a video frame image and a graphics rendered image in one embodiment;
FIG. 8 is a schematic diagram of a composition of a front end and a back end in one embodiment;
FIG. 9 is a schematic diagram of an initialization flow of a medical graphical image application system in one embodiment;
FIG. 10 is a schematic diagram of the creation of a data structure in one embodiment;
FIG. 11 is a schematic diagram of the creation of an operation command pool in one embodiment;
FIG. 12 is a schematic illustration of an initialization flow of a simplified medical graphical image application system in one embodiment;
FIG. 13 is a schematic workflow diagram of a medical graphics image application system in one embodiment;
FIG. 14 is a block diagram of a graphics image processing apparatus in one embodiment;
fig. 15 is an internal structural view of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
In one embodiment, as shown in FIG. 1, a graphic image processing method is provided. The method may be applied to a terminal, or to a system comprising a terminal and a server and implemented through interaction between the terminal and the server; for example, the system may be a medical graphic image application system. The server may be an independent server or a server cluster formed by a plurality of servers. This embodiment is described with the method applied to a medical graphic image application system, and the method comprises the following steps:
step S110, an image processing command for a target video signal is received.
The target video signal is a video signal output by the inspection device for a target part of a target object, and includes video frame images corresponding to the target part.
The inspection device is used to inspect the target part in the corresponding inspection scene. The inspection scene may include, but is not limited to, a laparoscopic scene, a gastrointestinal tract examination scene, and the like.
In a specific implementation, in a target inspection scene, the corresponding inspection device inspects a target part of a target object and outputs a video signal as the target video signal; a user can perform an image operation on the target video signal, so that the medical graphic image application system receives an image processing command for the target video signal.
Step S120, performing an image processing operation matched with the image processing command, and generating a target rendering change parameter for the target portion model corresponding to the target portion.
The target site model may include, but is not limited to, a pre-stored three-dimensional virtual model that matches the target site, a pre-operative planning model, and the like.
In a specific implementation, the medical graphic image application system may generate the target rendering change parameters for the target portion model corresponding to the target portion by analyzing the image processing command, performing an image processing operation matched with the image processing command.
For the current video frame image, the medical graphic image application system can generate target rendering change parameters for the target part model.
And step S130, rendering the target part model according to the target rendering change parameters to obtain a graphic rendering image.
The target rendering change parameters are used to change parameters of the rendering engine, so that for each video frame image in the target video signal the target part model is rendered according to the corresponding target rendering change parameters; a graphic rendering image matched with each video frame image can thereby be obtained, allowing graphic rendering images matched with the target video signal to be output in real time.
In a specific implementation, the medical graphic image application system can render the target part model according to the current target rendering change parameters to obtain a graphic rendering image matched with the current video frame image.
Step S140, displaying a target image obtained by superimposing the graphics-rendered image and the video frame image in the target video signal.
In a specific implementation, the medical graphic image application system can display the target image obtained by superimposing the current graphic rendering image on the current video frame image, for the user's reference and to guide the user in performing related operations.
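The superimposition in step S140 is not specified in detail in the text; a common way to overlay a graphic rendering image on a video frame is per-pixel alpha blending. The sketch below is a hypothetical minimal example in which lists of RGB tuples stand in for real image buffers.

```python
def superimpose(video_frame, rendered, alpha_mask):
    """Blend a graphic rendering image onto a video frame pixel by pixel.

    video_frame, rendered: sequences of (R, G, B) tuples of equal length.
    alpha_mask: per-pixel opacity of the rendered overlay, 0.0 to 1.0.
    """
    out = []
    for (vr, vg, vb), (rr, rg, rb), a in zip(video_frame, rendered, alpha_mask):
        # Standard alpha blend: overlay weighted by a, frame by (1 - a).
        out.append((
            round(rr * a + vr * (1 - a)),
            round(rg * a + vg * (1 - a)),
            round(rb * a + vb * (1 - a)),
        ))
    return out
```

A production system would do this on the GPU over full image buffers; the per-pixel loop is only meant to make the compositing arithmetic explicit.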
In the above graphic image processing method, an image processing command for a target video signal is received, the target video signal being a video signal output by the inspection device for a target part of a target object; the image processing operation matched with the image processing command is executed to generate target rendering change parameters for the target part model corresponding to the target part; the target part model is rendered according to the target rendering change parameters to obtain a graphic rendering image; and a target image obtained by superimposing the graphic rendering image on the video frame image in the target video signal is displayed.
Therefore, based on this method of displaying the graphic rendering image corresponding to the target part superimposed on the corresponding video frame image, a graphic application software framework can be built on surgical equipment terminals equipped with high-end edge computing technology. A suitable software framework is constructed on the various operating systems of such terminals, so that graphic rendering can be supported on the surgical equipment terminal itself, and the software service on the terminal can act as a server, providing image processing and graphic rendering capabilities to other devices or terminals in the same system. For example, the system may receive an image processing command, sent by a front-end device in the same system, for a target video signal containing video frame images; execute the image processing operation matched with the received command; render the target part model according to the target rendering change parameters to obtain a graphic rendering image; and display, through the front-end device, the target image obtained by superimposing the graphic rendering image on the video frame image in the target video signal. This solves the related-art problem that graphic application software on high-end, edge-capable devices is designed as a single stand-alone architecture, meets the requirements of both basic and advanced applications of medical image processing and graphic rendering, realizes graphic image applications on embedded terminal devices, and effectively improves the intelligence of graphic applications.
In one embodiment, performing an image processing operation that matches an image processing command, generating a target rendering change parameter for a target site model corresponding to a target site, includes: analyzing the image processing command to obtain an analyzed image processing command; and executing corresponding image processing operation according to the analyzed image processing command, and generating a target rendering change parameter.
The image processing command may include registration points for the target portion, and the registration points may be feature points for registration.
In a specific implementation, the medical graphic image application system may analyze the image processing command to obtain an analyzed image processing command including the registration point for the target part in the process of executing the image processing operation matched with the image processing command and generating the target rendering change parameter of the target part model corresponding to the target part. In this way, the medical graphic image application system can execute corresponding image processing operations according to the parsed image processing command to generate the target rendering change parameters for the target portion model corresponding to the target portion.
According to the technical solution of this embodiment, the image processing command is parsed to obtain a parsed image processing command, and the corresponding image processing operation is executed according to the parsed command to generate the target rendering change parameters. By parsing the image processing command for the target video signal, the matched image processing operation can be executed accurately to generate the target rendering change parameters, so that the graphic rendering image obtained from the target rendering change parameters matches the video frame image in the target video signal. This improves the spatial alignment of the graphic rendering image and the video frame image, improves the display effect of the superimposed target image, and effectively improves the intelligence of the graphic application.
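As a hypothetical illustration of the parsing step, the image processing command might arrive as a JSON payload carrying an operation name and registration points. The wire format below is an assumption, since the text only states that the command is parsed and may contain registration points for the target part.

```python
import json

def parse_image_processing_command(raw):
    """Parse a raw command string into an operation and registration points.

    Assumed format: {"operation": "...", "registration_points": [[x, y, z], ...]}.
    """
    cmd = json.loads(raw)
    op = cmd["operation"]                                   # e.g. "register", "rotate"
    points = [tuple(p) for p in cmd.get("registration_points", [])]
    return op, points
```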
In one embodiment, according to the parsed image processing command, performing a corresponding image processing operation to generate a target rendering change parameter, including: according to the analyzed image processing command, corresponding image processing operation is executed in a world coordinate system, and rendering change parameters aiming at the target part model under the world coordinate system are obtained; according to the mapping relation between the world coordinates and the engine coordinate system where the rendering engine is located, converting the rendering change parameters under the world coordinate system into rendering change parameters under the engine coordinate system; and taking the rendering change parameters under the engine coordinate system as target rendering change parameters.
The world coordinate system is a coordinate system constructed by the medical graphic image application system.
The engine coordinate system in which the rendering engine operates is the coordinate system of the graphics engine; in practical applications, the OpenGL coordinate system may be used.
The rendering change parameters may include, but are not limited to, panning and zooming parameters and virtual camera parameters of the rendering engine. For example, the virtual camera parameters may include, but are not limited to, at least one of an illumination parameter, a color parameter, a hide/show parameter, a transparency parameter, and the like.
In a specific implementation, in the process of executing the corresponding image processing operation according to the parsed image processing command and generating the target rendering change parameters, the medical graphic image application system may determine a registration relationship between the target part model and the video frame image in the world coordinate system according to the parsed command containing the registration points for the target part. It then executes the corresponding image processing operation on the target part model in the world coordinate system and generates variables for the target part model, which serve as the rendering change parameters for the target part model in the world coordinate system.
Then, the medical graphic image application system can convert the rendering change parameters of the target part model under the world coordinate system into the rendering change parameters under the engine coordinate system according to the mapping relation between the world coordinate and the engine coordinate system where the rendering engine is located, and uses the rendering change parameters under the engine coordinate system as target rendering change parameters.
According to the technical scheme of the embodiment, corresponding image processing operation is executed in a world coordinate system according to the analyzed image processing command, so that rendering change parameters of a target part model in the world coordinate system are obtained; according to the mapping relation between the world coordinates and the engine coordinate system where the rendering engine is located, converting the rendering change parameters under the world coordinate system into rendering change parameters under the engine coordinate system; and taking the rendering change parameters under the engine coordinate system as target rendering change parameters. Therefore, the rendering change parameters of the target part model in the world coordinate system are converted into the rendering change parameters of the engine coordinate system, so that the target rendering change parameters are obtained, and the target part model can be accurately rendered in the engine coordinate system where the rendering engine is located.
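As a minimal sketch of the world-to-engine conversion described above (assuming the mapping relation is a homogeneous 4x4 matrix, which the patent does not specify; the matrix values and parameter names below are invented for illustration):

```python
def apply_mapping(matrix, point):
    """Apply a row-major 4x4 mapping matrix to a 3D point in homogeneous form."""
    x, y, z = point
    vec = (x, y, z, 1.0)
    out = [sum(matrix[r][c] * vec[c] for c in range(4)) for r in range(4)]
    return tuple(v / out[3] for v in out[:3])

def world_to_engine(world_params, mapping):
    """Convert world-space rendering change parameters to engine space."""
    return {name: apply_mapping(mapping, p) for name, p in world_params.items()}

# Example mapping: an OpenGL-style engine frame that flips Z and offsets X.
WORLD_TO_ENGINE = [
    [1.0, 0.0,  0.0, 10.0],
    [0.0, 1.0,  0.0,  0.0],
    [0.0, 0.0, -1.0,  0.0],
    [0.0, 0.0,  0.0,  1.0],
]
target_params = world_to_engine({"camera_position": (5.0, 2.0, 3.0)}, WORLD_TO_ENGINE)
```

The rendering change parameters obtained in the world coordinate system are mapped point-wise, and the result plays the role of the target rendering change parameters handed to the rendering engine.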
In one embodiment, rendering the target site model according to the target rendering change parameter to obtain a graphics rendering image includes: rendering the target part model according to the target rendering change parameters to obtain a rendered three-dimensional target part model; and compressing the rendered three-dimensional target part model into a two-dimensional image with a preset image format to obtain a graphic rendering image.
The preset image format may be JPG (Joint Photographic Experts Group), PNG (Portable Network Graphics), or the like.
In the specific implementation, in the process that the medical graphic image application system renders the target part model according to the target rendering change parameters to obtain the graphic rendering image, the medical graphic image application system can render the target part model according to the target rendering change parameters to obtain the rendered three-dimensional target part model. Because the rendered three-dimensional target part model is a set of three-dimensional array data and cannot be directly displayed, in order to facilitate the superposition display with the video frame image, the medical graphic image application system can compress the rendered three-dimensional target part model into a two-dimensional image with a preset image format, and the two-dimensional image is used as a graphic rendering image.
According to the technical scheme of this embodiment, the target part model is rendered according to the target rendering change parameters to obtain the rendered three-dimensional target part model, which is then compressed into a two-dimensional image in the preset image format to obtain the graphics rendering image. Because the rendered three-dimensional target part model is a set of three-dimensional array data, it cannot be displayed directly; compressing it into a two-dimensional graphics rendering image allows it to be displayed directly, effectively improving the intelligence of the graphics application in image display.
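A simplified stand-in for the compression step (the real system reads back the engine's rendered frame and encodes it with a JPG/PNG codec; the front-most-voxel projection below is only an illustrative assumption):

```python
def flatten_to_2d(volume):
    """Collapse a 3D array (depth x height x width of gray values) into a
    2D image by taking the front-most non-zero voxel along the depth axis.
    The resulting rows would then be handed to a PNG/JPG encoder."""
    depth, height, width = len(volume), len(volume[0]), len(volume[0][0])
    image = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            for z in range(depth):
                if volume[z][y][x]:
                    image[y][x] = volume[z][y][x]
                    break
    return image

# 2x2x2 toy volume: the front slice hides one back voxel, reveals another.
vol = [
    [[9, 0], [0, 0]],   # z = 0 (front)
    [[1, 0], [0, 7]],   # z = 1 (back)
]
img = flatten_to_2d(vol)
```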
In one embodiment, rendering the target site model according to the target rendering change parameter to obtain a rendered three-dimensional target site model, including: generating a rendering task according to the target rendering change parameter; the rendering task is used for transmitting the target rendering change parameters to the rendering engine; the rendering engine is used for rendering the target part model according to the target rendering change parameters.
In the specific implementation, in the process that the medical graphic image application system renders the target part model according to the target rendering change parameters to obtain the rendered three-dimensional target part model, the medical graphic image application system can generate a rendering task of graphics according to the target rendering change parameters, the target rendering change parameters are transmitted to the rendering engine through the rendering task, and the rendering engine renders the target part model according to the target rendering change parameters to obtain the rendered three-dimensional target part model.
In this way, the rendering task is created by the target rendering change parameter, so that the target portion model rendering operation can be efficiently and accurately performed based on the rendering task.
Wherein, in the process of transmitting the target rendering change parameter to the rendering engine through the rendering task, the rendering task can transmit the target rendering change parameter to the rendering engine state machine based on the rendering processing pipeline; and setting parameters of the rendering engine according to the target rendering change parameters through a rendering engine state machine so as to transmit the target rendering change parameters to the rendering engine, so that the rendering engine can render the target part model through the target rendering change parameters.
According to the technical scheme of the embodiment, through a rendering task, a target rendering change parameter is transmitted into a rendering engine state machine based on a rendering processing pipeline; and setting parameters of the rendering engine according to the target rendering change parameters by a rendering engine state machine so as to transmit the target rendering change parameters to the rendering engine. In this way, the rendering processing pipeline receives the rendering task to transmit the target rendering change parameter to the rendering engine state machine, and then the rendering engine is further parameter set based on the rendering engine state machine, so that the target rendering change parameter can be accurately transmitted to the rendering engine.
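The hand-off chain of this embodiment (rendering task, then rendering processing pipeline, then engine state machine, then engine parameter setting) can be sketched as follows; all class names are invented for illustration:

```python
class RenderingEngine:
    """Holds the parameters the engine will render with."""
    def __init__(self):
        self.params = {}

    def set_param(self, key, value):
        self.params[key] = value

class EngineStateMachine:
    """Sets engine parameters from the target rendering change parameters."""
    def __init__(self, engine):
        self.engine = engine

    def apply(self, changes):
        for key, value in changes.items():
            self.engine.set_param(key, value)

class RenderingPipeline:
    """Receives rendering tasks and forwards their parameters downstream."""
    def __init__(self, state_machine):
        self.state_machine = state_machine
        self.queue = []

    def submit(self, task):
        self.queue.append(task)

    def run(self):
        while self.queue:
            self.state_machine.apply(self.queue.pop(0)["changes"])

engine = RenderingEngine()
pipeline = RenderingPipeline(EngineStateMachine(engine))
pipeline.submit({"type": "rendering", "changes": {"transparency": 0.5}})
pipeline.run()
```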
In one embodiment, rendering the target site model according to the target rendering change parameter to obtain a rendered three-dimensional target site model, including: executing voxel loading state synchronization operation through a rendering engine state machine; voxel loading state synchronization operates on engine data for loading volume data; performing plane information state synchronization operations by a rendering engine state machine; the plane information state synchronization operation is used for setting rendering state attributes through each rendering role; executing multi-role rendering operation through a rendering engine state machine; the multi-role rendering operation is used for performing scene rendering on each rendering role according to the set rendering state attribute to obtain a rendered three-dimensional target part model.
In a specific implementation, for ease of understanding by those skilled in the art, fig. 2 provides a schematic diagram of a rendering engine state machine. As shown in fig. 2, the rendering behavior of the graphics engine may be divided into three steps; that is, the process in which the rendering engine renders the target part model according to the target rendering change parameters to obtain the rendered three-dimensional target part model may be divided into the following steps:
1. voxel loading state synchronization, performing voxel loading state synchronization operations by a rendering engine state machine, wherein the voxel loading state synchronization operations are used to load engine data of the volume data. Specifically, engine data loading of the volume data can be performed by the voxel roles, and the Mask (Mask) switching display is controlled by the Mask roles.
2. Plane information state synchronization, which is performed by the rendering engine state machine; the plane information state synchronization operation is used for setting rendering state attributes through each rendering role. Each rendering role may include a voxel rendering role, a multi-planar reconstruction rendering role, a curved reconstruction rendering role, and a mesh rendering role. The voxel rendering role sets the rendering attributes of the volume, the multi-planar reconstruction rendering role adjusts and controls its reconstruction-related rendering attributes, the curved reconstruction rendering role controls its reconstruction-related rendering attributes, and the mesh rendering role controls the rendering attributes of the mesh.
3. Performing multi-role rendering, namely performing multi-role rendering operation through a rendering engine state machine; the multi-role rendering operation is used for performing scene rendering on each rendering role according to the set rendering state attribute corresponding to each rendering role, and a rendered three-dimensional target part model is obtained. Specifically, by setting the rendering state attribute of each rendering role, scene rendering is performed for each rendering role related to the rendering, and a rendering result corresponds to a rendering scene corresponding to each rendering role, so that the drawing of rendering data is completed.
The rendering scene comprises a voxel rendering scene corresponding to the voxel rendering role, a multi-plane reconstruction scene corresponding to the multi-plane reconstruction rendering role, a curved surface reconstruction scene corresponding to the curved surface reconstruction rendering role and a grid scene corresponding to the grid rendering role.
According to the technical scheme of the embodiment, voxel loading state synchronization operation is executed through a rendering engine state machine; voxel loading state synchronization operates on engine data for loading volume data; performing plane information state synchronization operations by a rendering engine state machine; the plane information state synchronization operation is used for setting rendering state attributes through each rendering role; executing multi-role rendering operation through a rendering engine state machine; the multi-role rendering operation is used for performing scene rendering on each rendering role according to the set rendering state attribute to obtain a rendered three-dimensional target part model. In this way, after engine data of the volume data are loaded, setting rendering state attributes of each rendering role, and performing scene rendering on each rendering role according to the set rendering state attributes corresponding to each rendering role, so that rendering scenes corresponding to each rendering role correspond to rendering results, and a rendered three-dimensional target part model can be obtained more accurately according to the rendering results corresponding to each rendering scene.
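The three-phase flow above can be sketched as a small state machine; the state enum and role names are illustrative, not the patent's identifiers:

```python
from enum import Enum, auto

class RenderState(Enum):
    VOXEL_LOAD_SYNC = auto()
    PLANE_INFO_SYNC = auto()
    MULTI_ROLE_RENDER = auto()

class RenderEngineStateMachine:
    def __init__(self, roles):
        self.roles = roles            # e.g. voxel, MPR, CPR, mesh roles
        self.visited = []
        self.attributes = {}

    def run(self):
        # 1. Voxel loading state synchronization: load engine volume data.
        self.visited.append(RenderState.VOXEL_LOAD_SYNC)
        # 2. Plane information state synchronization: each rendering role
        #    sets its rendering state attributes.
        self.visited.append(RenderState.PLANE_INFO_SYNC)
        for role in self.roles:
            self.attributes[role] = {"visible": True}
        # 3. Multi-role rendering: one scene is rendered per role.
        self.visited.append(RenderState.MULTI_ROLE_RENDER)
        return {role: f"{role}_scene" for role in self.roles}

sm = RenderEngineStateMachine(["voxel", "mpr", "cpr", "mesh"])
scenes = sm.run()
```

Each rendering role ends up with its own scene in the result, matching the scene-per-role correspondence described above.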
In one embodiment, the method further comprises: receiving, by a rendering pipeline, a rendering task generated by an image processing command; each rendering task is added to a corresponding rendering sub-pipeline in the rendering pipeline for execution by the rendering pipeline.
Wherein the rendering processing sub-pipeline may include at least one of an instant rendering pipeline, a runtime pipeline, and a compression pipeline.
The rendering task may include at least one of a rendering task, a non-image information task, an initialization task, a compression task, a complex processing task, and an image transmission task.
Wherein the instant rendering pipeline may be used to perform at least one of complex processing tasks, non-image information tasks, and image sending tasks.
Wherein the runtime pipeline may be configured to perform at least one of an initialization task, a rendering task.
Wherein the compression pipeline may be used to perform compression tasks.
The non-image information task is used for acquiring non-image information and sending the non-image information to the front end for display. The non-image information may be an image processing guide parameter corresponding to the graphics rendering image.
The initialization task is used for executing initialization operation of rendering the target part model.
The compression task is used for compressing the rendered three-dimensional target part model into a two-dimensional image with a preset image format to obtain a graphic rendering image.
The image sending task is used for sending the graphic rendering image to the front end.
The front end can also be used for receiving the graphic rendering image, taking the graphic rendering image as a foreground image, taking the video frame image as a background image, and displaying a target image obtained by overlapping the graphic rendering image and the video frame image.
The front end may be disposed in another medical graphic image application system, and the medical graphic image application system is used as a back end, where the back end may be local, remote, or cloud, and is not specifically limited herein. It should be understood that the front end may also be directly deployed in the medical graphic image application system, so that the execution subject of the graphic image processing method is the back end of the medical graphic image application system.
The complex processing task may also be named as a large image processing task or a background task, and is used for performing time-consuming and complex operations, such as operations of removing a bed board, dividing a received video frame image, and the like, so as to more accurately generate a target rendering change parameter based on registration points in the processed video frame image.
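The foreground/background superposition performed by the front end (graphics rendering image over video frame image) can be illustrated with a per-pixel mask composite; the mask-based blending rule is an assumption, since the patent does not specify how the overlay is computed:

```python
def overlay(foreground, background, mask):
    """Composite the foreground (graphics rendering image) over the
    background (video frame image): where mask is truthy the rendered
    model is shown, elsewhere the video shows through."""
    height, width = len(background), len(background[0])
    return [[foreground[y][x] if mask[y][x] else background[y][x]
             for x in range(width)] for y in range(height)]

fg = [[1, 1], [1, 1]]          # rendered model pixels
bg = [[5, 5], [5, 5]]          # video frame pixels
m  = [[1, 0], [0, 1]]          # model covers the diagonal only
target_image = overlay(fg, bg, m)
```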
In a specific implementation, for ease of understanding by those skilled in the art, FIG. 3 provides a schematic diagram of the composition of the rendering processing pipeline. As shown in fig. 3, the rendering processing pipeline may include three types of rendering processing sub-pipelines: an instant rendering pipeline, a runtime pipeline, and a compression pipeline. The rendering processing pipeline receives the rendering tasks generated by the image processing commands, and each rendering task is added, according to its type, into the corresponding rendering processing sub-pipeline for execution. The rendering tasks may be divided into: rendering tasks, non-image information tasks, initialization tasks, compression tasks, complex processing tasks, and image sending tasks. In this way, each type of rendering task is managed by its corresponding rendering processing sub-pipeline, so that the rendering tasks can be managed efficiently and the operations corresponding to the different rendering tasks can be performed efficiently and accurately.
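The routing of tasks to sub-pipelines described above can be expressed as a dispatch table; the task-type strings are invented labels for the task and sub-pipeline categories named in this embodiment:

```python
# Each task type is routed to its rendering processing sub-pipeline.
SUB_PIPELINE_FOR_TASK = {
    "complex_processing": "instant",
    "non_image_info":     "instant",
    "image_send":         "instant",
    "initialization":     "runtime",
    "rendering":          "runtime",
    "compression":        "compression",
}

def dispatch(tasks):
    """Add each rendering task to its corresponding sub-pipeline queue."""
    pipelines = {"instant": [], "runtime": [], "compression": []}
    for task in tasks:
        pipelines[SUB_PIPELINE_FOR_TASK[task]].append(task)
    return pipelines

routed = dispatch(["rendering", "compression", "image_send"])
```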
In one embodiment, the method further comprises: and sending the image processing guide parameters corresponding to the image rendering image to the front end through the non-image information task in the instant rendering pipeline.
The front end is used for synchronizing the graphic rendering image and the corresponding image processing guide parameters so as to synchronously display the graphic rendering image and the corresponding image processing guide parameters.
The image processing guiding parameters may include, but are not limited to, viewing angle parameters, layer thickness parameters, scale parameters of image zooming in and out, minimum unit parameters, and the like.
In a specific implementation, the non-image information task may be added to the rendering pipeline for execution, and the image processing guiding parameter corresponding to the image rendering image may be sent to the front end through the non-image information task, so that the front end may synchronize the image rendering image and the corresponding image processing guiding parameter, and display the image rendering image and the corresponding image processing guiding parameter in synchronization through refreshing.
According to the technical scheme, the image processing guide parameters corresponding to the image rendering image are sent to the front end through the non-image information task in the instant rendering pipeline, the image rendering image and the corresponding image processing guide parameters are synchronized through the front end, so that the image rendering image and the corresponding image processing guide parameters are synchronously displayed, the situation that the currently displayed image rendering image and the currently displayed image processing guide parameters are not matched is avoided, and the intelligence of the image application in the aspect of synchronous display of the guide parameters is effectively improved.
In one embodiment, a medical graphic image application system based on a mobile chip and an embedded system environment is provided. The application of the medical graphic image application system in a laparoscopy scene is shown in fig. 4: the medical graphic image application system is deployed on a microcomputer, running on a mobile chip or embedded system, to provide the capability of image processing and graphics operations (i.e., image processing operations).
The inspection device in the laparoscopy scene may be an abdominal 4K device, the target video signal may be an endoscope video signal, and the video frame image in the target video signal may be an endoscope video frame image.
The microcomputer receives the endoscope video signal, takes an endoscope video frame image in the endoscope video signal as the background, and simultaneously outputs preoperative image processing and graphics rendering results, such as graphics rendering images based on preoperative planning and image segmentation, and displays the graphics rendering images superimposed on the endoscope video frame image through the display. As shown in fig. 4, the display displays a target image obtained by superimposing the endoscope video frame image with the graphics rendering image of the preoperative planning model.
The software solution for the above features can be represented by fig. 5: the medical graphic image application system is designed with two main subsystems, a Front End (Image Application FE) and a Back End (Image Application BE). A video stream workflow completes the access of the endoscope 4K signal (namely the endoscope video signal output by the abdominal 4K device), obtains the video frame data (frame data), and outputs the video frame data to the front end; the video stream workflow also outputs one video frame image from the video frame data (one frame for registration) to the back end; the back end and the front end communicate and interact through the same set of communication protocols to transmit images and related information (<Protocol> Image and relation message).
The flow of outputting the video frame data to the front end is shown in fig. 6: the endoscope 4K signal containing the video frame data is input to a PCI (Peripheral Component Interconnect) computer interface, the video stream workflow obtains the video frame data in YUV format through the PCI computer interface, and the video frame data is transmitted to a video rendering scheduler (Video Render Schedule) for the front end to obtain the video frame data in YUV format.
The video stream workflow also outputs one video frame image from the video frame data for registration to the back end; that is, the registration flow of the video frame image and the graphics rendering image, shown in fig. 7, comprises the following steps: 7.1, the front end selects a plurality of registration points, packs the information into a command, and sends the command to the back end; 7.2, the back end requests one video frame image in RGB format through the video stream workflow; 7.3, the back end registers and changes the target part model, and sends the graphics rendering image to the front end; and 7.4, the front end updates the view according to the graphics rendering image.
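Steps 7.1 to 7.4 can be sketched as a round trip; the function names, message format, and stub services below are assumptions for illustration:

```python
import json

def pack_registration_command(points):
    """7.1: the front end packs selected registration points into a command."""
    return json.dumps({"cmd": "register", "points": points})

def handle_registration(command, grab_rgb_frame, register_model):
    """7.2-7.3: the back end requests one RGB video frame and registers the
    target part model against it, returning a graphics rendering image that
    the front end uses to update its view (7.4)."""
    msg = json.loads(command)
    frame = grab_rgb_frame()
    return register_model(msg["points"], frame)

# Stub back-end services, for illustration only.
cmd = pack_registration_command([[1, 2, 3], [4, 5, 6]])
rendered = handle_registration(
    cmd,
    grab_rgb_frame=lambda: "rgb_frame",
    register_model=lambda pts, frame: f"rendered_with_{len(pts)}_points",
)
```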
The medical graphic image application system can be used as an image processing system for medical workers in the clinical operation process, so that users can browse images, view preoperative planning and relevant image processing functions; the front end can be used as a client and the rear end can be used as a server; the front end is used for drawing and displaying a graphic UI (User Interface), interacting logic of a workflow and interacting with instructions and data communication of the back end; the back end is used for receiving the instruction request of the front end, processing the graphics application aspect (comprising image processing and rendering work of image data) according to the request type and related instruction information, and transmitting the processing result to the front end for refreshing and displaying of the client through interaction with the front end.
Fig. 8 provides a schematic diagram of the composition of the front and back ends for ease of understanding by those skilled in the art. As shown in fig. 8, the front end is composed of a user interface framework (UI framework), a front-end workflow (Work Flow), and a Client/Server-Side Stub. The user interface framework implements the graphical interface of the client based on Qt, and comprises a Status Bar, Keyboard Shortcuts, a Button/Context Menu, a Control Panel, Cells (a UI control, where each displayed datum corresponds to one Cell), Mouse Events, and the like.
The front-end workflow is responsible for the logic of the front-end workflow and mainly comprises two operations: 1) organizing the signals and control information on the graphical interface, organizing the front-end data of the front-end graphics images, and filtering messages; and 2) parsing the rendered images and non-image information received from the back end into front-end data and synchronously updating them on the graphical interface for display.
The client/server stub uses RPC (Remote Procedure Call) for communication between the front end (client) and the back end. The client stub is responsible for sending messages, and the server stub is responsible for listening for received messages.
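An in-process sketch of the client/server stub pair (no real RPC transport is shown, and the message shape is an assumption):

```python
class ServerStub:
    """Listens for messages and dispatches them to back-end handlers."""
    def __init__(self, handlers):
        self.handlers = handlers

    def dispatch(self, message):
        return self.handlers[message["method"]](message["payload"])

class ClientStub:
    """Packs a front-end call into a message and sends it to the server stub."""
    def __init__(self, server_stub):
        self.server = server_stub

    def call(self, method, payload):
        return self.server.dispatch({"method": method, "payload": payload})

server = ServerStub({"rotate": lambda p: f"rotated_{p['angle']}"})
client = ClientStub(server)
reply = client.call("rotate", {"angle": 90})
```

In the real system the client stub would serialize the message over the RPC channel; here the two stubs are wired directly so the round trip can be followed.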
The back end consists of a Model layer (Model layer), a Controller layer (Controller layer) and a Strategy mode layer (Strategy layer).
The model layer defines the primary data structures, operational objects, and communication protocols in the image graphics application. The model layer includes a Model, an operation command pool (Operations Pool), a Protocol, an abstract object (StaObject), and a three-dimensional operation (3DArithmetic). Wherein:
1) The Model is used as a collection of data objects, and the operation of the data objects can be performed according to the Model;
2) The operation command pool creates command objects according to the operation type of the current application and manages the command objects;
3) The protocol is used for defining a communication protocol of information interaction of front and back ends, and comprises a key information data format and a field of the defined communication;
4) The abstract object (StaObject) is an abstract base from which multiple data objects are derived; each derived object has its own specific attributes and provides methods for modifying them;
5) Three-dimensional operations define the primary data structure objects of image processing, including basic attributes and methods such as matrices, vectors, and the like.
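The abstract-object idea in items 4) and 5) can be sketched as follows; everything beyond the name StaObject (the attribute names, the derived class) is illustrative:

```python
class StaObject:
    """Abstract data object: holds attributes and provides the methods
    for modifying them, as described for the model layer."""
    def __init__(self, **attrs):
        self._attrs = dict(attrs)

    def get(self, name):
        return self._attrs[name]

    def set(self, name, value):
        self._attrs[name] = value

class VolumeData(StaObject):
    """A derived data object with its own specific attribute."""
    def __init__(self, spacing):
        super().__init__(spacing=spacing)

vol = VolumeData(spacing=(1.0, 1.0, 2.5))
vol.set("spacing", (1.0, 1.0, 1.0))   # modify via the provided method
```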
Wherein the controller layer comprises a flow control unit (AppEntryController) and a resource control unit (ResourceController); the flow control unit is used for controlling the whole flow of the back-end image processing rendering; the resource control unit is used for managing key objects required by the back-end service processing.
The strategy mode layer defines the main command objects and the graphics image processing methods that execute the whole back-end command, including DICOM (Digital Imaging and Communications in Medicine)-related parsing (DICOM conversion), the graphics image processing method (Image Processing Function) and command flow object (Operation), task scheduling management (Task Schedule), and the rendering engine and rendering engine state machine (Engine and Engine States Machine). Among other things, the rendering engine may employ Vt, VTK, OSG, or other rendering engines.
The above is the main modules of the front end and the back end, and the main initialization flow after the front end and the back end are started is as follows in fig. 9:
1. the front end triggers initialization, and a user interface frame initiates an initialization request;
2. combining data of the user interface into an initialization command structure format by the front-end workflow;
3. the front-end workflow uses an RPC Client SDK (Software Development Kit) to send a command communication protocol to the front-end Client Stub (front-end Client-Side Stub);
4. the client Stub of the front end invokes a remote interface, and transmits a command communication protocol to a Server Stub (background Server-Side Stub) of the back end;
5. The flow control unit is used for controlling the execution of the initialization flow and determining resources to be loaded through the configuration file;
6. the flow control unit passes the DICOM data to be loaded to the DICOM-related parsing;
7. the DICOM-related parsing completes the data parsing and converts the DICOM data into 3D image data (adapting the engine and framework);
8. the flow control unit creates a unique resource management control object (i.e., resource control unit) for the graphical image application and populates the image data to the StaObject;
9. the resource control unit creates a Model Tree and hangs the StaObject objects required by the application as child nodes on the Model Tree to obtain the Model;
10. the Model (Model) generates and initializes a number of types of StaObjects to complete the creation of the data structure;
11. the resource control unit creates application operation to complete the creation of the operation command pool;
12. the resource control unit completes the creation of task scheduling management;
The creation of the data structure is shown in fig. 10: a modality node is created and managed by an object container, and a series of data structure objects required by the graphics application software are defined based on the attribute characteristics of the abstract data structure object and placed in the container, including medical image sequence data information (sequence data information), plane type data and its extension type objects (plane data objects), volume data, tissue segmentation data, interest data, layout data, and image mask data.
Wherein, based on the plane data object, a voxel rendering object, a multi-plane reconstruction object and a curved surface reconstruction object can be obtained.
The creation of the operation command pool, as shown in fig. 11, consists of creating an image operation command pool to manage a series of abstract command objects. An abstract command object defines a complete image processing command flow: 1. first parse the command message body sent by the front end to obtain the key parameters; 2. perform the image-processing-related calculation and change the corresponding data structure under the modality node; 3. extract the change amount of the data structure related to the command operation object, and generate a rendering task and a non-image information task to enter the rendering processing pipeline for execution. The execution of all image processing commands extended under the abstract command attribute follows this flow. Thus, when the resource control unit creates the data structure, the modality node undertakes the task of creating the data structure of the whole system, and the image operation command pool completes the creation of the command objects required by the whole system.
The created series of abstract command objects may include, among other things, a rotate command, a zoom command, a region of interest determination command, a transparency adjustment command, a pan command, an initialization command, a cross-hair command, a clip command, and the like.
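The prototype-and-clone behavior of the operation command pool can be sketched as follows; the class shapes are invented, only the pool-of-command-prototypes concept comes from the text:

```python
import copy

class Command:
    """An abstract command object with a name (e.g. zoom, rotate, pan)."""
    def __init__(self, name):
        self.name = name
        self.params = {}

    def clone(self):
        return copy.deepcopy(self)

class OperationCommandPool:
    """Manages command prototypes keyed by command id; execution clones a
    prototype so each operation works on its own independent object."""
    def __init__(self):
        self._prototypes = {}

    def register(self, cmd_id, command):
        self._prototypes[cmd_id] = command

    def clone(self, cmd_id):
        return self._prototypes[cmd_id].clone()

pool = OperationCommandPool()
pool.register("zoom", Command("zoom"))
op = pool.clone("zoom")
op.params["scale"] = 2.0   # the clone's state never touches the prototype
```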
13. Task scheduling management starts worker threads for a plurality of tasks, including: a graphics rendering task, a non-image information task, an image sending task, a compression task, and a background task (complex processing task);
14. the task scheduling management creates an initialization task, executes an initialization command, transmits rendering change parameters to a rendering engine state machine, and then places parameters by setting the rendering engine;
15. the rendering engine renders the graph and outputs a set of three-dimensional image data;
16. the task scheduling management generates a compression task, and the compression task compresses the three-dimensional image data into a two-dimensional image in a JPG or PNG format;
17. generating an image transmission task;
18. and transmitting the two-dimensional image in the JPG or PNG format to the front end by an image transmitting task based on a defined communication protocol.
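Step 13's task scheduling management can be sketched with a single worker thread draining a task queue; the real system starts several workers, one per task family, and all names below are illustrative:

```python
import queue
import threading

class TaskScheduler:
    """Minimal task scheduling management: a worker thread executes
    submitted tasks (initialization, compression, image sending, ...)."""
    def __init__(self):
        self.tasks = queue.Queue()
        self.results = []
        self._worker = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._worker.start()

    def submit(self, name, fn):
        self.tasks.put((name, fn))

    def wait(self):
        self.tasks.join()   # block until every submitted task is done

    def _run(self):
        while True:
            name, fn = self.tasks.get()
            self.results.append((name, fn()))
            self.tasks.task_done()

sched = TaskScheduler()
sched.start()
sched.submit("initialization", lambda: "engine_ready")
sched.submit("compression", lambda: "png_bytes")
sched.wait()
```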
The above flow is the application initialization, which can be briefly represented by fig. 12. After receiving the command request from the front end, the back end starts the application entry control initialization, comprising the following steps: 1. initializing resource control, creating the data structures required by the application through the modality node, initializing the image operation command pool, and creating the required command objects; 2. initializing the rendering pipeline, creating the rendering processing pipeline, and creating the rendering engine state machine; 3. executing the initialization, performing the corresponding image processing and calculation, generating the rendering change parameters of the graphics, transmitting the parameters to the rendering engine state machine through the rendering processing pipeline, and then setting the parameters into the rendering engine for rendering the graphics.
Through the initialization operation, the medical graphic image application system can render and output the first image, so that subsequent image processing commands can be processed normally. After the initialization task is completed, the application framework can carry out application work on graphics images: it provides an efficient data structure, an operation command pool, an application command processing mechanism, and a rendering processing pipeline, meeting the requirements of both basic and advanced applications of medical image processing and graphics rendering, and addressing graphics image applications on embedded terminal devices.
The workflow of the front-end and back-end is shown in fig. 13 as follows:
1. when a user performs a graphic-image operation at the front end, the user interface (UI) acquires the data and the request from the interface;
the user can select a certain part as the target part on the front-end interface, so that the back end selects the part model corresponding to the target part in the part model library as the target part model. Alternatively, the user can perform a model input operation on the front-end interface and directly input the target part model corresponding to the target part through the front-end interface. The user may also select a plurality of feature points of the target part as registration points on the front-end interface.
2. the front-end workflow combines the data transmitted by the user interface into the command structure format of image processing, obtaining an image processing instruction;
3. the front-end workflow uses the RPC client SDK to send a command communication protocol to the client stub of the front end based on the image processing instruction;
4. the client stub of the front end invokes the remote interface and transmits the command communication protocol to the server stub of the back end;
5. after the back end receives the image processing command, the resource control unit processes the image processing command sent by the front end;
6. the resource control unit parses the command and, according to the type of the command protocol, notifies the operation command pool to clone the corresponding command;
7. the operation command pool prepares the operation object to be cloned according to the command id;
8. the specified operation is cloned in the operation command pool;
9. the cloned command object performs the image processing work, modifying metadata and calculating changes;
10. the cloned command object collects the changes on the core data object to obtain the target rendering change parameters;
11. the task scheduling management creates a non-image information task to acquire non-image information;
11.1. after finishing organizing the non-image information, the non-image information task sends it directly to the front end;
12. the task scheduling management creates a rendering task for model rendering;
13. the rendering task inputs the target rendering change parameters to the rendering engine state machine, and the rendering engine state machine finishes the parameter setting of the rendering engine;
14. the graphics are rendered using the rendering engine API (Application Programming Interface);
15. the rendering engine outputs a set of three-dimensional image data (the rendered three-dimensional target part model);
16. a compression task is generated, and the compression task compresses the rendered three-dimensional target part model into a two-dimensional image in JPG or PNG format, obtaining the graphic rendering image;
17. an image sending task is generated;
18. the image sending task transmits the graphic rendering image in JPG or PNG format to the front end based on the defined communication protocol;
19. the client stub of the front end synchronizes the graphic rendering image and the corresponding non-image information;
20. finally, the client display is refreshed.
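The clone-from-command-pool mechanism in steps 6-10 resembles the prototype pattern: the pool holds one template object per command id, and each incoming request works on a fresh copy so the template stays reusable. A minimal sketch, where the command class and its fields are illustrative assumptions:

```python
import copy

class RotateCommand:
    """Template command object stored in the operation command pool."""
    def __init__(self):
        self.changes = {}

    def execute(self, angle):
        # Perform the image processing work on the cloned object and
        # collect the resulting changes as target rendering parameters.
        self.changes["rotation_deg"] = angle
        return self.changes


# Operation command pool: command id -> prototype object.
command_pool = {"rotate": RotateCommand()}

def handle_command(command_id, **kwargs):
    # Clone the specified operation so the pooled template is untouched.
    cmd = copy.deepcopy(command_pool[command_id])
    return cmd.execute(**kwargs)

params = handle_command("rotate", angle=30)
print(params)                          # the cloned object's collected changes
print(command_pool["rotate"].changes)  # template unchanged: {}
```

Cloning instead of mutating the pooled object is what lets concurrent requests reuse the same pool safely.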
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the steps are not strictly limited to this order and may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include a plurality of sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with at least some of the other steps, sub-steps, or stages.
Based on the same inventive concept, an embodiment of the application further provides a graphic image processing device for implementing the graphic image processing method described above. The implementation scheme of the device is similar to that described in the method above, so for the specific limitations of the one or more embodiments of the graphic image processing device provided below, reference may be made to the limitations of the graphic image processing method above, which are not repeated here.
In one embodiment, as shown in fig. 14, there is provided a graphic image processing apparatus including: a receiving module 1410, a generating module 1420, a rendering module 1430, and a display module 1440, wherein:
a receiving module 1410 for receiving an image processing command for a target video signal; the target video signal is a video signal output by the inspection device for a target portion of a target object.
A generating module 1420, configured to perform an image processing operation matched with the image processing command, and generate a target rendering change parameter for a target portion model corresponding to the target portion.
And a rendering module 1430, configured to render the target portion model according to the target rendering change parameter, so as to obtain a graphics rendering image.
The display module 1440 is configured to display a target image obtained by overlapping the graphics rendering image and a video frame image in the target video signal.
In one embodiment, the generating module 1420 is specifically configured to parse the image processing command to obtain a parsed image processing command; and executing corresponding image processing operation according to the parsed image processing command, and generating the target rendering change parameter.
In one embodiment, the generating module 1420 is specifically configured to perform a corresponding image processing operation in a world coordinate system according to the parsed image processing command, so as to obtain a rendering change parameter for the target portion model in the world coordinate system; according to the mapping relation between the world coordinates and an engine coordinate system where a rendering engine is located, converting rendering change parameters under the world coordinate system into rendering change parameters under the engine coordinate system; and taking the rendering change parameter under the engine coordinate system as the target rendering change parameter.
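The coordinate conversion above can be sketched as a fixed mapping applied to each rendering change parameter. The particular axis flip and scale below are illustrative assumptions; the real mapping depends on the engine's conventions.

```python
# Hedged sketch: converting a rendering change parameter (here a 3-D
# translation) from the world coordinate system into the engine coordinate
# system. The flip/scale mapping is an assumption for illustration.

def world_to_engine(point, scale=1.0, flip_y=True):
    x, y, z = point
    y = -y if flip_y else y  # e.g. world Y-up vs. engine Y-down
    return (x * scale, y * scale, z * scale)

world_translation = (10.0, 5.0, -2.0)
engine_translation = world_to_engine(world_translation)
print(engine_translation)  # → (10.0, -5.0, -2.0)
```

Computing image operations in world coordinates and converting only at the end keeps the image-processing logic independent of whichever rendering engine is plugged in.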
In one embodiment, the rendering module 1430 is specifically configured to render the target portion model according to the target rendering change parameter, so as to obtain a rendered three-dimensional target portion model; and compressing the rendered three-dimensional target part model into a two-dimensional image with a preset image format to obtain the graphic rendering image.
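Collapsing the rendered three-dimensional output into a two-dimensional image can be illustrated with a maximum-intensity projection along the depth axis; this stands in for reading back the renderer's framebuffer, and the final JPG/PNG encoding step (via an image-encoding library) is omitted here. The function below is an illustrative sketch, not the patent's implementation.

```python
# Hedged sketch: flattening three-dimensional image data into a
# two-dimensional image via a maximum-intensity projection.

def max_intensity_projection(volume):
    """volume: list of 2-D slices (depth x rows x cols) -> one 2-D image."""
    rows, cols = len(volume[0]), len(volume[0][0])
    return [
        [max(slice_[r][c] for slice_ in volume) for c in range(cols)]
        for r in range(rows)
    ]

volume = [
    [[0, 10], [20, 30]],
    [[5, 40], [15, 25]],
]
image_2d = max_intensity_projection(volume)
print(image_2d)  # → [[5, 40], [20, 30]]
```

Sending a compressed two-dimensional image rather than raw volume data is what keeps the front-end transmission lightweight.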
In one embodiment, the rendering module 1430 is specifically configured to generate a rendering task according to the target rendering change parameter; the rendering task is used for transmitting the target rendering change parameters to a rendering engine; the rendering engine is used for rendering the target part model according to the target rendering change parameters.
In one embodiment, the rendering module 1430 is specifically configured to pass, by the rendering task, the target rendering change parameter into a rendering engine state machine based on a rendering processing pipeline; and setting parameters of the rendering engine according to the target rendering change parameters through the rendering engine state machine so as to transmit the target rendering change parameters to the rendering engine.
In one embodiment, the rendering module 1430 is specifically configured to execute a voxel loading state synchronization operation through the rendering engine state machine, where the voxel loading state synchronization operation is used for loading volume data into engine data; execute a plane information state synchronization operation through the rendering engine state machine, where the plane information state synchronization operation is used for setting rendering state attributes for each rendering role; and execute a multi-role rendering operation through the rendering engine state machine, where the multi-role rendering operation is used for scene-rendering each rendering role according to the set rendering state attributes to obtain the rendered three-dimensional target part model.
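The three phases the state machine steps through (voxel loading synchronization, plane information synchronization, multi-role rendering) can be sketched as follows. The state names, role structure, and methods are assumptions for illustration only.

```python
# Hedged sketch of the three-phase rendering engine state machine:
# voxel loading sync -> plane info sync -> multi-role rendering.

class RenderEngineStateMachine:
    def __init__(self, roles):
        self.roles = roles
        self.state = "idle"
        self.log = []

    def sync_voxel_loading(self, volume_data):
        # Load volume data into engine data.
        self.engine_data = volume_data
        self.state = "voxels_loaded"
        self.log.append(self.state)

    def sync_plane_info(self, attributes):
        # Each rendering role sets its rendering state attributes.
        for role in self.roles:
            role["attrs"] = dict(attributes)
        self.state = "planes_synced"
        self.log.append(self.state)

    def render_multi_role(self):
        # Scene-render every role with its configured attributes.
        self.state = "rendered"
        self.log.append(self.state)
        return [f"{role['name']}:rendered" for role in self.roles]


sm = RenderEngineStateMachine(roles=[{"name": "volume"}, {"name": "plane"}])
sm.sync_voxel_loading(volume_data=[1, 2, 3])
sm.sync_plane_info(attributes={"opacity": 0.8})
print(sm.render_multi_role())  # → ['volume:rendered', 'plane:rendered']
```

Forcing the phases through an explicit state machine guarantees that no role renders before the volume data and its attributes are in place.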
In one embodiment, the apparatus further comprises: a task receiving module for receiving, by the rendering pipeline, a rendering task generated by the image processing command; and adding each rendering processing task to a corresponding rendering processing sub-pipeline in the rendering processing pipeline for execution through the rendering processing pipeline.
In one embodiment, the rendering processing sub-pipeline includes at least one of an instant rendering pipeline, a runtime pipeline, and a compression pipeline; the rendering processing task comprises at least one of a rendering task, a non-image information task, an initialization task, a compression task, a complex processing task and an image sending task; the instant rendering pipeline is used for executing at least one of the complex processing task, the non-image information task and the image sending task; the runtime pipeline is used for executing at least one of the initialization task and the rendering task; the compression pipeline is to perform the compression task.
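The task-to-sub-pipeline assignment above amounts to a routing table: the instant rendering pipeline takes complex-processing, non-image-information, and image-sending tasks; the runtime pipeline takes initialization and rendering tasks; the compression pipeline takes compression tasks. A minimal dispatch sketch (task and pipeline identifiers are illustrative):

```python
# Hedged sketch of routing rendering-processing tasks to the three
# sub-pipelines named in the text.

ROUTING = {
    "complex": "instant",
    "non_image_info": "instant",
    "image_send": "instant",
    "initialization": "runtime",
    "rendering": "runtime",
    "compression": "compression",
}

def dispatch(tasks):
    # Append each task to the queue of its designated sub-pipeline.
    pipelines = {"instant": [], "runtime": [], "compression": []}
    for task in tasks:
        pipelines[ROUTING[task]].append(task)
    return pipelines

result = dispatch(["rendering", "compression", "non_image_info"])
print(result["runtime"])      # → ['rendering']
print(result["compression"])  # → ['compression']
```

Separating the queues this way lets long-running rendering work proceed without blocking lightweight non-image or send tasks.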
In one embodiment, the task receiving module is further configured to send, to a front end, image processing guide parameters corresponding to the graphic rendering image through the non-image information task in the instant rendering pipeline; the front end is used for synchronizing the graphic rendering image and the corresponding image processing guide parameters so as to display them synchronously.
The respective modules in the above-described graphic image processing apparatus may be implemented in whole or in part by software, hardware, and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 15. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing target site model data. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a graphical image processing method.
It will be appreciated by those skilled in the art that the structure shown in fig. 15 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements are applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
receiving an image processing command for a target video signal; the target video signal is a video signal output by the inspection equipment aiming at a target part of a target object;
executing the image processing operation matched with the image processing command, and generating a target rendering change parameter of a target part model corresponding to the target part;
rendering the target part model according to the target rendering change parameters to obtain a graphic rendering image;
and displaying a target image obtained by superposing the graphic rendering image and the video frame image in the target video signal.
In one embodiment, the processor when executing the computer program further performs the steps of:
analyzing the image processing command to obtain an analyzed image processing command;
and executing corresponding image processing operation according to the parsed image processing command, and generating the target rendering change parameter.
In one embodiment, the processor when executing the computer program further performs the steps of:
according to the analyzed image processing command, corresponding image processing operation is executed in a world coordinate system, and rendering change parameters aiming at the target part model under the world coordinate system are obtained;
according to the mapping relation between the world coordinates and an engine coordinate system where a rendering engine is located, converting rendering change parameters under the world coordinate system into rendering change parameters under the engine coordinate system;
and taking the rendering change parameter under the engine coordinate system as the target rendering change parameter.
In one embodiment, the processor when executing the computer program further performs the steps of:
rendering the target part model according to the target rendering change parameters to obtain a rendered three-dimensional target part model;
And compressing the rendered three-dimensional target part model into a two-dimensional image with a preset image format to obtain the graphic rendering image.
In one embodiment, the processor when executing the computer program further performs the steps of:
generating a rendering task according to the target rendering change parameter; the rendering task is used for transmitting the target rendering change parameters to a rendering engine; the rendering engine is used for rendering the target part model according to the target rendering change parameters.
In one embodiment, the processor when executing the computer program further performs the steps of:
passing, by the rendering task, the target rendering change parameter to a rendering engine state machine based on a rendering processing pipeline;
and setting parameters of the rendering engine according to the target rendering change parameters through the rendering engine state machine so as to transmit the target rendering change parameters to the rendering engine.
In one embodiment, the processor when executing the computer program further performs the steps of:
executing a voxel loading state synchronization operation through the rendering engine state machine; the voxel loading state synchronization operation is used for loading volume data into engine data;
Executing plane information state synchronization operation through the rendering engine state machine; the plane information state synchronization operation is used for setting rendering state attributes through each rendering role;
executing multi-role rendering operation through the rendering engine state machine; and the multi-role rendering operation is used for performing scene rendering on each rendering role according to the set rendering state attribute to obtain the rendered three-dimensional target part model.
In one embodiment, the processor when executing the computer program further performs the steps of:
receiving, by the rendering pipeline, a rendering task generated by the image processing command;
and adding each rendering processing task to a corresponding rendering processing sub-pipeline in the rendering processing pipeline for execution through the rendering processing pipeline.
In one embodiment, the rendering processing sub-pipeline includes at least one of an instant rendering pipeline, a runtime pipeline, and a compression pipeline; the rendering processing task comprises at least one of a rendering task, a non-image information task, an initialization task, a compression task, a complex processing task and an image sending task;
the instant rendering pipeline is used for executing at least one of the complex processing task, the non-image information task and the image sending task;
The runtime pipeline is used for executing at least one of the initialization task and the rendering task;
the compression pipeline is to perform the compression task.
In one embodiment, the processor when executing the computer program further performs the steps of:
transmitting image processing guide parameters corresponding to the graphic rendering image to a front end through the non-image information task in the instant rendering pipeline; the front end is used for synchronizing the graphic rendering image and the corresponding image processing guide parameter so as to synchronously display the graphic rendering image and the corresponding image processing guide parameter.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can take a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application; although they are described in some detail, they are not thereby to be construed as limiting the scope of the application. It should be noted that several variations and modifications may be made by those skilled in the art without departing from the spirit of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (15)

1. A method of graphic image processing, the method comprising:
receiving an image processing command for a target video signal; the target video signal is a video signal output by the inspection equipment aiming at a target part of a target object;
executing the image processing operation matched with the image processing command, and generating a target rendering change parameter of a target part model corresponding to the target part;
Rendering the target part model according to the target rendering change parameters to obtain a graphic rendering image;
and displaying a target image obtained by superposing the graphic rendering image and the video frame image in the target video signal.
2. The method of claim 1, wherein the performing an image processing operation that matches the image processing command to generate a target rendering change parameter for a target site model corresponding to the target site comprises:
analyzing the image processing command to obtain an analyzed image processing command;
and executing corresponding image processing operation according to the parsed image processing command, and generating the target rendering change parameter.
3. The method of claim 2, wherein the performing a corresponding image processing operation according to the parsed image processing command to generate the target rendering change parameters includes:
according to the analyzed image processing command, corresponding image processing operation is executed in a world coordinate system, and rendering change parameters aiming at the target part model under the world coordinate system are obtained;
according to the mapping relation between the world coordinates and an engine coordinate system where a rendering engine is located, converting rendering change parameters under the world coordinate system into rendering change parameters under the engine coordinate system;
And taking the rendering change parameter under the engine coordinate system as the target rendering change parameter.
4. The method of claim 1, wherein rendering the target site model according to the target rendering change parameter results in a graphics rendering image, comprising:
rendering the target part model according to the target rendering change parameters to obtain a rendered three-dimensional target part model;
and compressing the rendered three-dimensional target part model into a two-dimensional image with a preset image format to obtain the graphic rendering image.
5. The method of claim 4, wherein rendering the target site model according to the target rendering change parameter results in a rendered three-dimensional target site model, comprising:
generating a rendering task according to the target rendering change parameter; the rendering task is used for transmitting the target rendering change parameters to a rendering engine; the rendering engine is used for rendering the target part model according to the target rendering change parameters.
6. The method of claim 5, wherein the passing the target rendering change parameter to a rendering engine comprises:
Passing, by the rendering task, the target rendering change parameter to a rendering engine state machine based on a rendering processing pipeline;
and setting parameters of the rendering engine according to the target rendering change parameters through the rendering engine state machine so as to transmit the target rendering change parameters to the rendering engine.
7. The method of claim 6, wherein rendering the target site model according to the target rendering change parameter results in a rendered three-dimensional target site model, comprising:
executing a voxel loading state synchronization operation through the rendering engine state machine; the voxel loading state synchronization operation is used for loading volume data into engine data;
executing plane information state synchronization operation through the rendering engine state machine; the plane information state synchronization operation is used for setting rendering state attributes through each rendering role;
executing multi-role rendering operation through the rendering engine state machine; and the multi-role rendering operation is used for performing scene rendering on each rendering role according to the set rendering state attribute to obtain the rendered three-dimensional target part model.
8. The method of claim 6, wherein the method further comprises:
receiving, by the rendering pipeline, a rendering task generated by the image processing command;
and adding each rendering processing task to a corresponding rendering processing sub-pipeline in the rendering processing pipeline for execution through the rendering processing pipeline.
9. The method of claim 8, wherein the rendering processing sub-pipeline comprises at least one of an instant rendering pipeline, a runtime pipeline, and a compression pipeline; the rendering processing task comprises at least one of a rendering task, a non-image information task, an initialization task, a compression task, a complex processing task and an image sending task;
the instant rendering pipeline is used for executing at least one of the complex processing task, the non-image information task and the image sending task;
the runtime pipeline is used for executing at least one of the initialization task and the rendering task;
the compression pipeline is to perform the compression task.
10. The method according to claim 9, wherein the method further comprises:
transmitting image processing guide parameters corresponding to the graphic rendering image to a front end through the non-image information task in the instant rendering pipeline; the front end is used for synchronizing the graphic rendering image and the corresponding image processing guide parameter so as to synchronously display the graphic rendering image and the corresponding image processing guide parameter.
11. A graphics image processing apparatus, the apparatus comprising:
a receiving module for receiving an image processing command for a target video signal; the target video signal is a video signal output by the inspection equipment aiming at a target part of a target object;
the generation module is used for executing the image processing operation matched with the image processing command and generating a target rendering change parameter of a target part model corresponding to the target part;
the rendering module is used for rendering the target part model according to the target rendering change parameters to obtain a graphic rendering image;
and the display module is used for displaying the target image obtained by overlapping the graphic rendering image and the video frame image in the target video signal.
12. A medical graphics image application system, the system comprising: a front end and a rear end;
the front end is used for responding to the graphic rendering operation of the target video signal and sending an image processing command to the back end; the target video signal is a video signal output by the inspection equipment aiming at a target part of a target object;
the back end is used for responding to the image processing command, executing the image processing operation matched with the image processing command and generating a target rendering change parameter for a target part model corresponding to the target part;
The back end is further used for rendering the target part model according to the target rendering change parameters to obtain a graphic rendering image, and sending the graphic rendering image to the front end;
the front end is further configured to display a target image obtained by overlapping the graphics rendering image and a video frame image in the target video signal.
13. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 10 when the computer program is executed.
14. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 10.
15. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any one of claims 1 to 10.
CN202310936080.0A 2023-07-27 2023-07-27 Graphic image processing method, graphic image processing device and medical graphic image application system Pending CN116958392A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310936080.0A CN116958392A (en) 2023-07-27 2023-07-27 Graphic image processing method, graphic image processing device and medical graphic image application system

Publications (1)

Publication Number Publication Date
CN116958392A true CN116958392A (en) 2023-10-27

Family

ID=88447317



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination