CN107157588B - Data processing method of image equipment and image equipment - Google Patents

Data processing method of image equipment and image equipment

Info

Publication number
CN107157588B
CN107157588B (application CN201710318186.9A)
Authority
CN
China
Prior art keywords
view
eye
output
matrix
environment information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710318186.9A
Other languages
Chinese (zh)
Other versions
CN107157588A (en)
Inventor
刘雯卿
王帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN201710318186.9A
Publication of CN107157588A
Application granted
Publication of CN107157588B
Legal status: Active (current)


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges

Abstract

The invention relates to a data processing method for an imaging device, which comprises the following steps: acquiring volume data; performing post-processing and volume rendering on the volume data to obtain a view to be output; sending the view to be output directly to an expansion device; and acquiring environment information of the expansion device and outputting the view to be output according to the environment information. With this data processing method, the imaging device can perform volume rendering and post-processing on the image data, send the result directly to the expansion device for synchronous display, and perform further post-processing at any time according to requirements or the display effect, which greatly improves efficiency. The invention also relates to an imaging device.

Description

Data processing method of image equipment and image equipment
Technical Field
The present invention relates to the field of medical devices, and in particular, to a data processing method for an imaging device and an imaging device.
Background
Imaging examination is an important clinical examination method. As the informatization level of hospitals improves, more and more hospitals set up imaging workstations, which are devices for post-processing the results obtained by medical imaging equipment.
A conventional expansion device runs standalone software developed specifically for it and does not establish a connection with the imaging workstation. Because that software has limited functionality, image data that requires post-processing must first be post-processed by a separate image processing device and then copied to the expansion device for display. This data processing workflow is complex and does not help improve efficiency.
Disclosure of Invention
Therefore, it is necessary to provide a data processing method of an imaging device, and an imaging device, to address the problem that a traditional imaging workstation cannot establish a connection with a traditional expansion device, making the data processing workflow complex and reducing efficiency.
A data processing method of an imaging device, wherein the method comprises the following steps:
acquiring volume data;
performing post-processing and volume rendering on the volume data to obtain a view to be output;
sending the view to be output directly to an expansion device; and
acquiring environment information of the expansion device and outputting the view to be output according to the environment information.
With this data processing method, the imaging device can perform volume rendering and post-processing on the image data, send the result directly to the expansion device for synchronous display, and perform further post-processing at any time according to requirements or the display effect, which greatly improves efficiency.
In one embodiment, the environment information includes a plurality of pieces of calibration information, and outputting the view to be output includes: identifying different information of the view to be output according to the environment information, so that the view to be output is reproduced from a plurality of spaces into a single space.
In one embodiment, the environment information of the expansion device includes a left-eye matrix and a right-eye matrix;
the expansion device includes a left-eye frame buffer and a right-eye frame buffer; and
outputting the view to be output includes: buffering the view to be output in the left-eye frame buffer and the right-eye frame buffer of the expansion device, respectively, according to the left-eye matrix and the right-eye matrix in the environment information.
In one embodiment, the view to be output includes a left-eye view and a right-eye view, and buffering the view to be output in the left-eye frame buffer and the right-eye frame buffer of the expansion device according to the left-eye matrix and the right-eye matrix in the environment information includes: acquiring a viewing angle parameter; calculating a current left-eye viewing angle from the left-eye matrix and the viewing angle parameter; obtaining the left-eye view of the view to be output according to the current left-eye viewing angle; calculating a current right-eye viewing angle from the right-eye matrix and the viewing angle parameter; obtaining the right-eye view of the view to be output according to the current right-eye viewing angle; sending the left-eye view to the left-eye frame buffer; and sending the right-eye view to the right-eye frame buffer.
An imaging device, wherein the imaging device comprises a processor, a memory, and computer instructions stored on the memory which, when executed by the processor, implement the steps of any of the methods described above.
An imaging device, wherein the imaging device comprises an imaging workstation and an expansion device. The imaging workstation comprises: a data acquisition module for acquiring volume data; a post-processing module for performing post-processing and volume rendering on the volume data to obtain a view to be output; and a view sending module, connected to the expansion device, for sending the view to be output directly to the expansion device. The expansion device comprises: an output module for acquiring the environment information and outputting the view to be output according to the environment information.
With the imaging device provided by the invention, volume rendering and post-processing are performed on the image data, the result is sent directly to the expansion device for synchronous display, and further post-processing can be performed at any time according to requirements or the display effect, so that efficiency is greatly improved.
In one embodiment, the expansion device further includes: a viewing angle acquisition module for acquiring viewing angle parameters; a left-eye viewing angle calculation module for calculating a current left-eye viewing angle from the left-eye matrix and the viewing angle parameters; a left-eye view acquisition module for obtaining the left-eye view of the view to be output according to the current left-eye viewing angle; a right-eye viewing angle calculation module for calculating a current right-eye viewing angle from the right-eye matrix and the viewing angle parameters; a right-eye view acquisition module for obtaining the right-eye view of the view to be output according to the current right-eye viewing angle; a left-eye view sending module for sending the left-eye view to the left-eye frame buffer; and a right-eye view sending module for sending the right-eye view to the right-eye frame buffer.
In one embodiment, the imaging device further includes: a view calibration module for identifying different information of the view to be output, so that the view to be output is reproduced from a plurality of spaces into a single space.
In the invention, the expansion device establishes a connection with the imaging workstation. The view to be output obtained after post-processing on the imaging workstation is transmitted directly to the expansion device, without being copied via a storage medium or relayed to the expansion device through an intermediate medium. When the view to be output requires multiple rounds of post-processing for different requirements, the imaging workstation performs the post-processing and the expansion device can output the results in real time. Meanwhile, an operator can perform further post-processing at any time based on the output of the expansion device. The operation is simple and helps improve efficiency.
Drawings
Fig. 1 is a diagram illustrating an application scenario of a data processing method of an image device according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a data processing method of an image device according to an embodiment;
fig. 3 is a partial flowchart of a data processing method of an image device according to an embodiment;
fig. 4 is a schematic structural diagram of an imaging apparatus according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is an application scenario diagram of a data processing method of an image device according to an embodiment of the present invention.
Specifically, the medical imaging apparatus 101 is communicatively connected to the imaging device 103 and transmits the acquired volume data of different modalities to the imaging device 103 for processing. The imaging device 103 comprises an imaging workstation 105 and an expansion device 107; the imaging workstation 105 is communicatively connected to the expansion device 107, and the expansion device outputs the view to be output. Further, the expansion device 107 may include at least one of a virtual reality (VR) device and an augmented reality (AR) device.
Referring to fig. 2, fig. 2 is a flowchart illustrating a data processing method of an image device according to an embodiment. The method comprises the following steps:
and S202, acquiring volume data.
The volume data may be volume data of different modalities acquired by the medical imaging apparatus 101, which includes, but is not limited to, a CT (computed tomography) system and a magnetic resonance imaging system. The volume data is acquired by the medical imaging apparatus 101 and then transmitted to the imaging device 103. It will be appreciated that the transmission may take place over a wired or wireless connection, among other means.
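As an illustration only (not part of the patent), the sketch below shows one common way such volume data could be assembled on the workstation side from a directory of received DICOM slices; the pydicom/numpy dependencies, the directory path, and the slice-sorting key are assumptions.

```python
# Minimal sketch (not the patent's implementation): assembling a volume from a
# directory of DICOM slices received from the medical imaging apparatus.
# Assumes the pydicom and numpy packages; the path and file filter are hypothetical.
import os
import numpy as np
import pydicom

def load_volume(dicom_dir):
    """Stack DICOM slices into a (z, y, x) volume array."""
    slices = [pydicom.dcmread(os.path.join(dicom_dir, f))
              for f in os.listdir(dicom_dir) if f.lower().endswith(".dcm")]
    # Order slices along the scan axis using the slice position tag.
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    return np.stack([ds.pixel_array.astype(np.float32) for ds in slices])

volume = load_volume("/data/ct_study")   # hypothetical path
print(volume.shape)
```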
S204, performing post-processing and volume rendering on the volume data to obtain a view to be output.
Post-processing refers to processing medical images that conform to a standard, such as the DICOM 3.0 standard, using medical imaging technology together with computer software and hardware, and using the results as the basis for image diagnosis or scientific research, thereby providing auxiliary information for clinical image diagnosis. Post-processing includes, but is not limited to, the following items. Patient management: managing and displaying the patient examination data stored in the system, specifically including examination record query, examination information correction, examination merging/splitting, examination data protection/unprotection, examination data import and export, and network transmission and archiving of related examination data. Two-dimensional browsing: a DICOM-format image browser embedded in the system that provides various image processing and measurement tools, with functions such as window width/window level adjustment, angle measurement, and arrow annotation. Three-dimensional browsing: mainly used for multi-orientation viewing and three-dimensional browsing of volume data that meets the requirements, chiefly including volume rendering, multi-planar reconstruction, and batch processing. Film printing: used for printing images, mainly including window layout/image grid layout adjustment, film typesetting, and sequence comparison. Post-processing function configuration: according to the user's actual requirements and license configuration, a post-processing application module running as an independent process is embedded to support image post-processing functions such as vessel analysis, emphysema analysis, pulmonary nodule assessment, dental applications, cardiovascular analysis, and calcification analysis.
Specifically, the imaging device 103 performs volume rendering and post-processing on the volume data obtained in step S202 to obtain the view to be output. It should be understood that these steps may be repeated as required, and that their order is not limited: volume rendering may be performed before post-processing, or post-processing may be performed before volume rendering.
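For illustration, the following sketch pairs a toy post-processing step (window/level rescaling) with a maximum intensity projection as a stand-in for the volume-rendering step; the patent does not prescribe a specific algorithm, so the functions, parameter values, and synthetic volume are assumptions.

```python
# Illustrative stand-in for step S204: a window/level rescale as a toy
# post-processing step and a maximum intensity projection (MIP) as a simple
# volume-rendering stand-in. Not the patent's implementation.
import numpy as np

def post_process(volume, window_center=40.0, window_width=400.0):
    """Rescale voxel intensities to [0, 1] around a window center/width."""
    lo = window_center - window_width / 2.0
    return np.clip((volume - lo) / window_width, 0.0, 1.0)

def volume_render_mip(volume, axis=0):
    """Project the 3D volume to a 2D view by taking the maximum along one axis."""
    return volume.max(axis=axis)

volume = np.random.uniform(-200.0, 800.0, size=(64, 128, 128))  # synthetic volume
# As noted above, the two steps may be applied in either order.
view_to_output = volume_render_mip(post_process(volume))
```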
S206, sending the view to be output directly to the expansion device.
The expansion device 107 may include at least one of a VR device and an AR device. It should be understood that different expansion devices 107 have different environment information.
Specifically, the view to be output obtained in step S204 is sent directly to the expansion device 107, without passing through a floppy disk, a USB drive, or other storage media, or through any other intermediate conversion interface, and is then displayed by the expansion device 107 or processed in the next step.
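A minimal sketch of such a direct transfer is shown below, assuming a plain TCP link between the workstation and the expansion device; the host name, port, and length-prefixed PNG framing are hypothetical choices, not details from the patent.

```python
# Sketch of "directly sending" the view to be output over a TCP connection,
# bypassing storage media. Host/port and the length-prefixed PNG framing are
# assumptions for illustration.
import io
import socket
import struct
import numpy as np
from PIL import Image

def send_view(view_to_output, host="expansion-device.local", port=9500):
    """Encode a 2D view in [0, 1] as PNG and stream it to the expansion device."""
    img = Image.fromarray((np.clip(view_to_output, 0.0, 1.0) * 255).astype(np.uint8))
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    payload = buf.getvalue()
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack("!I", len(payload)))  # 4-byte big-endian length prefix
        sock.sendall(payload)                          # encoded view
```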
In the data processing method provided by the above embodiment, the imaging device post-processes the image data and sends the result directly to the expansion device for synchronous display. Further post-processing can therefore be performed at any time according to requirements or the display effect; the data processing workflow is simple, and efficiency is greatly improved.
S208, acquiring the environment information of the expansion device and outputting the view to be output according to the environment information.
The environment information of the expansion device includes, but is not limited to, a left-eye matrix and a right-eye matrix.
Specifically, the expansion device 107 may output the view to be output at any time according to the user's specific needs.
In a data processing method according to one embodiment of the present invention, the environment information includes a plurality of pieces of calibration information, and the method further comprises: identifying different information of the view to be output according to the environment information, so that the view to be output is reproduced from a plurality of spaces into a single space. This approach is known as augmented reality (AR).
Specifically, the calibration information may be pre-stored in the imaging device 103 or entered by the user through a native IO device. The calibration information may be used to calibrate the volume data according to actual requirements; by calibrating the volume data with different calibration information, the finally generated view to be output can be reproduced from multiple spaces into a single space. The calibration information may include at least one of color, transparency, and illumination information. Further, the calibration information may be stored in the imaging device 103 in the form of a database: when the calibration information is acquired, the volume data is first segmented into different human tissues, and different calibration information is selected from the database for each tissue. By identifying the different information of the view to be output through the multiple pieces of calibration information in the environment information, the view to be output is reproduced from multiple spaces into a single space, and different human tissues can be observed from different angles. It should be understood that the calibration information may also be obtained by receiving information entered by the user.
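A minimal sketch of this per-tissue calibration is given below; the tissue labels, color/transparency values, and the pre-computed label volume are assumptions standing in for the segmentation result and the stored database.

```python
# Sketch of applying calibration information per human tissue: a small in-memory
# "database" maps hypothetical tissue labels to color/transparency, and each voxel
# of a pre-segmented label volume receives its tissue's calibration. All labels
# and values are illustrative assumptions.
import numpy as np

CALIBRATION_DB = {                     # label -> (R, G, B, transparency)
    1: (0.90, 0.85, 0.75, 1.00),       # e.g. bone
    2: (0.80, 0.20, 0.20, 0.40),       # e.g. vessel
    3: (0.60, 0.45, 0.35, 0.15),       # e.g. soft tissue
}

def apply_calibration(label_volume):
    """Return an RGBA volume where each voxel carries its tissue's calibration."""
    rgba = np.zeros(label_volume.shape + (4,), dtype=np.float32)
    for label, value in CALIBRATION_DB.items():
        rgba[label_volume == label] = value
    return rgba

labels = np.random.randint(0, 4, size=(32, 64, 64))  # stand-in for a segmented volume
calibrated = apply_calibration(labels)
```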
In one embodiment of the data processing method, the expansion device may use virtual reality (VR) technology. In this embodiment, the environment information of the expansion device includes a left-eye matrix and a right-eye matrix;
the expansion device includes a left-eye frame buffer and a right-eye frame buffer; and
the step of outputting the view to be output includes:
sending the view to be output directly to the left-eye frame buffer and the right-eye frame buffer of the expansion device, respectively, according to the left-eye matrix and the right-eye matrix in the environment information.
Specifically, according to the left-eye matrix and the right-eye matrix in the environment information, the view to be output is sent directly to the left-eye frame buffer and the right-eye frame buffer of the expansion device 107, respectively, for display on the expansion device 107.
Referring to fig. 3, fig. 3 is a partial flowchart of the data processing method in a VR scenario. Sending the view to be output directly to the left-eye frame buffer and the right-eye frame buffer of the expansion device 107, respectively, according to the left-eye matrix and the right-eye matrix in the environment information specifically includes:
s302, acquiring the view angle parameters.
Specifically, the view angle parameters may include view angle parameters input by a user using an original IO device, including but not limited to a rotation parameter, a translation parameter, and a zoom parameter. The user can rotate the volume data by adjusting the rotation parameters, can translate the volume data by adjusting the translation parameters, and can zoom the volume data by adjusting the zooming parameters.
S304, calculating the current left-eye viewing angle from the left-eye matrix and the viewing angle parameters.
Specifically, the current left-eye viewing angle is obtained from the left-eye matrix and the viewing angle parameters. Because the left-eye matrix is continuously adjusted to follow the user's real-time movement, the current left-eye viewing angle must be recalculated continuously from the left-eye matrix.
S306, obtaining the left-eye view of the view to be output according to the current left-eye viewing angle.
Specifically, the left-eye view of the view to be output is obtained according to the current left-eye viewing angle.
S308, calculating the current right-eye viewing angle from the right-eye matrix and the viewing angle parameters.
Specifically, the current right-eye viewing angle is obtained in the same way from the right-eye matrix and the viewing angle parameters.
S310, obtaining the right-eye view of the view to be output according to the current right-eye viewing angle.
Specifically, the right-eye view is obtained according to the current right-eye viewing angle.
S312, writing the left-eye view into the left-eye frame buffer.
Specifically, the generated left-eye view is written into the left-eye frame buffer.
S314, writing the right-eye view into the right-eye frame buffer.
Specifically, the generated right-eye view is written into the right-eye frame buffer. Different views are written to different frame buffers and presented on the expansion device 107 for an immersive or interactive experience.
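The sketch below walks through steps S302-S314 under stated assumptions: a view matrix is composed from the user's rotation/translation/zoom parameters, combined with the device's left- and right-eye matrices, and one rendered view per eye is written to its frame buffer. The eye matrices, the placeholder renderer, and the buffer representation are all hypothetical, not the patent's implementation.

```python
# Sketch of steps S302-S314: compose a view matrix from viewing-angle parameters,
# combine it with the VR device's left/right eye matrices, render one view per eye,
# and write the views to per-eye frame buffers. Eye matrices, the placeholder
# renderer, and the buffers are assumptions for illustration.
import numpy as np

def view_matrix(rotation_deg=0.0, translation=(0.0, 0.0, 0.0), zoom=1.0):
    """Compose a rotation about Z, a translation, and a uniform zoom into a 4x4 matrix."""
    r = np.radians(rotation_deg)
    m = np.eye(4)
    m[:2, :2] = zoom * np.array([[np.cos(r), -np.sin(r)],
                                 [np.sin(r),  np.cos(r)]])
    m[2, 2] = zoom
    m[:3, 3] = translation
    return m

def render_view(volume, eye_view_matrix):
    """Placeholder renderer; a real implementation would ray-cast through the matrix."""
    return volume.max(axis=0)

def render_stereo(volume, left_eye_matrix, right_eye_matrix, view_params):
    base = view_matrix(**view_params)
    left_view = render_view(volume, left_eye_matrix @ base)    # current left-eye angle
    right_view = render_view(volume, right_eye_matrix @ base)  # current right-eye angle
    left_frame_buffer = np.array(left_view)                    # stand-ins for the
    right_frame_buffer = np.array(right_view)                  # device's frame buffers
    return left_frame_buffer, right_frame_buffer

volume = np.random.rand(64, 128, 128).astype(np.float32)       # synthetic volume
left_m, right_m = np.eye(4), np.eye(4)                         # placeholder eye matrices
params = {"rotation_deg": 15.0, "translation": (0.0, 0.0, -5.0), "zoom": 1.2}
left_fb, right_fb = render_stereo(volume, left_m, right_m, params)
```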
In one embodiment, the imaging device includes a processor, a memory, and computer instructions stored on the memory, which when executed by the processor, implement the steps of:
acquiring volume data;
performing post-processing and volume rendering on the volume data to obtain a view to be output;
sending the view to be output directly to the expansion device; and
acquiring the environment information of the expansion device and outputting the view to be output according to the environment information.
In the imaging device provided by the above embodiment, the image data is post-processed in the imaging device and sent directly to the expansion device for synchronous display. Further post-processing can therefore be performed at any time according to requirements or the display effect, and efficiency is greatly improved.
In one embodiment, the computer instructions, when executed by the processor, further perform the steps of:
the method comprises the steps of obtaining a plurality of calibration information, wherein the calibration information is used for identifying different information of a view to be output, so that the view to be output presents different display effects.
Through the identification of the different information of the views to be output, the views to be output are reproduced from a plurality of spaces to a single space.
In one embodiment of the present invention, the step, executed by the processor, of sending the view to be output directly to the expansion device according to the environment information includes:
the environment information of the expansion device includes a left-eye matrix and a right-eye matrix;
the expansion device includes a left-eye frame buffer and a right-eye frame buffer; and
according to the left-eye matrix and the right-eye matrix in the environment information, the view to be output is buffered in the left-eye frame buffer and the right-eye frame buffer of the expansion device, respectively.
In one embodiment, the view to be output includes a left-eye view and a right-eye view, and the step, executed by the processor, of buffering the view to be output in the left-eye frame buffer and the right-eye frame buffer of the expansion device according to the left-eye matrix and the right-eye matrix in the environment information includes:
acquiring a viewing angle parameter;
calculating a current left-eye viewing angle from the left-eye matrix and the viewing angle parameter;
obtaining the left-eye view of the view to be output according to the current left-eye viewing angle;
calculating a current right-eye viewing angle from the right-eye matrix and the viewing angle parameter;
obtaining the right-eye view of the view to be output according to the current right-eye viewing angle;
sending the left-eye view to the left-eye frame buffer; and
sending the right-eye view to the right-eye frame buffer.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an imaging device according to an embodiment. The imaging device includes an imaging workstation and an expansion device, and comprises:
a data acquisition module 501, configured to acquire volume data;
a post-processing module 503, configured to perform post-processing and volume rendering on the volume data to obtain a view to be output;
a view sending module 505, connected to the expansion device and configured to send the view to be output directly to the expansion device; and
an output module 507, configured to acquire the environment information of the expansion device and output the view to be output according to the environment information.
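Purely as an illustration of how the modules in fig. 4 relate to one another, the sketch below arranges them as two classes; the class and method names are hypothetical, and each body is left as a stub that would delegate to steps like those sketched earlier.

```python
# Structural sketch of the Fig. 4 modules; names are hypothetical and bodies are stubs.
class ImagingWorkstation:
    def acquire_volume(self, source):             # data acquisition module 501
        ...

    def post_process_and_render(self, volume):    # post-processing module 503
        ...

    def send_view(self, view, expansion_device):  # view sending module 505
        expansion_device.output(view)             # direct transfer, no storage media

class ExpansionDevice:
    def get_environment_info(self):               # e.g. eye matrices or calibration info
        ...

    def output(self, view):                       # output module 507
        environment = self.get_environment_info()
        # display the view according to the environment information
        ...
```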
In the imaging device provided by the above embodiment, the image data is post-processed in the imaging device and sent directly to the expansion device for synchronous display. Further post-processing can therefore be performed at any time according to requirements or the display effect, and efficiency is greatly improved.
As a specific implementation in the VR scenario, the expansion device may include:
a viewing angle acquisition module, configured to acquire viewing angle parameters;
a left-eye viewing angle calculation module, configured to calculate a current left-eye viewing angle from the left-eye matrix and the viewing angle parameters;
a left-eye view acquisition module, configured to obtain the left-eye view of the view to be output according to the current left-eye viewing angle;
a right-eye viewing angle calculation module, configured to calculate a current right-eye viewing angle from the right-eye matrix and the viewing angle parameters;
a right-eye view acquisition module, configured to obtain the right-eye view of the view to be output according to the current right-eye viewing angle;
a left-eye view sending module, configured to send the left-eye view to the left-eye frame buffer; and
a right-eye view sending module, configured to send the right-eye view to the right-eye frame buffer.
As a specific implementation in the AR scenario, the imaging device further includes:
a view calibration module, configured to identify different information of the view to be output so that the view to be output is reproduced from a plurality of spaces into a single space.
The expansion device may implement either or both of AR and VR.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several embodiments of the present invention, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (7)

1. A data processing method of an imaging device, applied to the imaging device of an imaging display system, wherein the imaging display system comprises a medical imaging device and the imaging device, the medical imaging device is communicatively connected to the imaging device, the imaging device comprises an imaging workstation and an expansion device that are communicatively connected to each other, and the method comprises the following steps:
the imaging workstation acquires volume data sent by the medical imaging device;
the imaging workstation performs post-processing and volume rendering on the volume data to obtain a view to be output;
the imaging workstation sends the view to be output directly to the expansion device, without passing through a floppy disk, a USB drive, or other storage media, or other intermediate conversion interfaces; the expansion device comprises at least one of a VR device and an AR device; when the expansion device is a VR device, the environment information includes a left-eye matrix and a right-eye matrix; and
the expansion device acquires the environment information of the expansion device and outputs the view to be output according to the environment information; the view to be output comprises a left-eye view and a right-eye view;
wherein, when the expansion device is an AR device, the environment information includes a plurality of pieces of calibration information, and the acquiring the environment information of the expansion device and outputting the view to be output according to the environment information includes:
acquiring the calibration information corresponding to each human tissue in the volume data, and identifying the different human tissues of the view to be output according to the calibration information corresponding to each human tissue, so that the view to be output is reproduced from a plurality of spaces into a single space; the calibration information is stored in the imaging device in the form of a database; and the calibration information comprises at least one of color, transparency, and illumination information.
2. The method of claim 1, wherein, when the expansion device is the VR device, the expansion device includes a left-eye frame buffer and a right-eye frame buffer; and
the outputting the view to be output includes: buffering the view to be output in the left-eye frame buffer and the right-eye frame buffer of the expansion device, respectively, according to the left-eye matrix and the right-eye matrix in the environment information.
3. The method of claim 2, wherein
the buffering the view to be output in the left-eye frame buffer and the right-eye frame buffer of the expansion device, respectively, according to the left-eye matrix and the right-eye matrix in the environment information comprises:
acquiring a viewing angle parameter;
calculating a current left-eye viewing angle from the left-eye matrix and the viewing angle parameter;
obtaining the left-eye view of the view to be output according to the current left-eye viewing angle;
calculating a current right-eye viewing angle from the right-eye matrix and the viewing angle parameter;
obtaining the right-eye view of the view to be output according to the current right-eye viewing angle;
sending the left-eye view to the left-eye frame buffer; and
sending the right-eye view to the right-eye frame buffer.
4. An imaging device comprising a processor, a memory, and computer instructions stored on the memory which, when executed by the processor, implement the steps of the method of any one of claims 1-3.
5. An imaging device, characterized in that the imaging device comprises an imaging workstation and an expansion device, wherein
the imaging workstation includes:
a data acquisition module, configured to acquire volume data sent by medical imaging equipment;
a post-processing module, configured to perform post-processing and volume rendering on the volume data to obtain a view to be output; and
a view sending module, connected to the expansion device and configured to send the view to be output directly to the expansion device without passing through a floppy disk, a USB drive, or other storage media, or other intermediate conversion interfaces; the expansion device comprises at least one of a VR device and an AR device; when the expansion device is a VR device, the environment information comprises a left-eye matrix and a right-eye matrix; and
the expansion device includes: an output module, configured to acquire the environment information and output the view to be output according to the environment information; the view to be output comprises a left-eye view and a right-eye view;
wherein, when the expansion device is an AR device, the environment information includes a plurality of pieces of calibration information, and the output module is specifically configured to acquire the calibration information corresponding to each human tissue in the volume data and to identify the different human tissues of the view to be output according to the calibration information corresponding to each human tissue, so that the view to be output is reproduced from a plurality of spaces into a single space; the calibration information is stored in the imaging device in the form of a database; and the calibration information comprises at least one of color, transparency, and illumination information.
6. The imaging device of claim 5, wherein the expansion device further comprises:
a viewing angle acquisition module, configured to acquire viewing angle parameters;
a left-eye viewing angle calculation module, configured to calculate a current left-eye viewing angle from the left-eye matrix and the viewing angle parameters;
a left-eye view acquisition module, configured to obtain the left-eye view of the view to be output according to the current left-eye viewing angle;
a right-eye viewing angle calculation module, configured to calculate a current right-eye viewing angle from the right-eye matrix and the viewing angle parameters;
a right-eye view acquisition module, configured to obtain the right-eye view of the view to be output according to the current right-eye viewing angle;
a left-eye view sending module, configured to send the left-eye view to the left-eye frame buffer; and
a right-eye view sending module, configured to send the right-eye view to the right-eye frame buffer.
7. The imaging device of claim 5, further comprising:
a view calibration module, configured to identify different information of the view to be output so that the view to be output is reproduced from a plurality of spaces into a single space.
CN201710318186.9A 2017-05-08 2017-05-08 Data processing method of image equipment and image equipment Active CN107157588B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710318186.9A CN107157588B (en) 2017-05-08 2017-05-08 Data processing method of image equipment and image equipment


Publications (2)

Publication Number Publication Date
CN107157588A CN107157588A (en) 2017-09-15
CN107157588B true CN107157588B (en) 2021-05-18

Family

ID=59812563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710318186.9A Active CN107157588B (en) 2017-05-08 2017-05-08 Data processing method of image equipment and image equipment

Country Status (1)

Country Link
CN (1) CN107157588B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07154829A (en) * 1993-11-25 1995-06-16 Matsushita Electric Ind Co Ltd Spectacles video display device
JP2002014300A (en) * 2000-06-28 2002-01-18 Seiko Epson Corp Head mount type display device
WO2006043238A1 (en) * 2004-10-22 2006-04-27 Koninklijke Philips Electronics N.V. Real time stereoscopic imaging apparatus and method
JP2010232718A (en) * 2009-03-25 2010-10-14 Olympus Corp Head-mounted image display apparatus
CN102833570A (en) * 2011-06-15 2012-12-19 株式会社东芝 Image processing system, apparatus and method
CN103108208A (en) * 2013-01-23 2013-05-15 哈尔滨医科大学 Method and system of enhancing display of computed tomography (CT) postprocessing image
CN103380625A (en) * 2011-06-16 2013-10-30 松下电器产业株式会社 Head-mounted display and misalignment correction method thereof
CN103513421A (en) * 2012-06-29 2014-01-15 索尼电脑娱乐公司 Image processing device, image processing method, and image processing system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1861035A1 (en) * 2005-03-11 2007-12-05 Bracco Imaging S.P.A. Methods and apparati for surgical navigation and visualization with microscope
FR3032282B1 (en) * 2015-02-03 2018-09-14 Francois Duret DEVICE FOR VISUALIZING THE INTERIOR OF A MOUTH
CN106109015A (en) * 2016-08-18 2016-11-16 秦春晖 A kind of wear-type medical system and operational approach thereof


Also Published As

Publication number Publication date
CN107157588A (en) 2017-09-15

Similar Documents

Publication Publication Date Title
JP5909055B2 (en) Image processing system, apparatus, method and program
US10692272B2 (en) System and method for removing voxel image data from being rendered according to a cutting region
JP5818531B2 (en) Image processing system, apparatus and method
JP6768862B2 (en) Medical image processing method, medical image processing device, medical image processing system and medical image processing program
JP2020506452A (en) HMDS-based medical image forming apparatus
JP5946029B2 (en) Tooth graph cut-based interactive segmentation method in 3D CT volumetric data
JP2013016153A (en) Image processing system and method
JP2013017577A (en) Image processing system, device, method, and medical image diagnostic device
JP6430149B2 (en) Medical image processing device
JP2008259699A (en) Image processing method and apparatus, and program
JP2015513945A6 (en) Tooth graph cut-based interactive segmentation method in 3D CT volumetric data
Bouaoud et al. DIVA, a 3D virtual reality platform, improves undergraduate craniofacial trauma education
Gsaxner et al. Facial model collection for medical augmented reality in oncologic cranio-maxillofacial surgery
Abou El-Seoud et al. An interactive mixed reality ray tracing rendering mobile application of medical data in minimally invasive surgeries
AU2022200601B2 (en) Apparatus and method for visualizing digital breast tomosynthesis and anonymized display data export
CN108877897B (en) Dental diagnosis and treatment scheme generation method and device and diagnosis and treatment system
JP2012217591A (en) Image processing system, device, method and program
JP5974238B2 (en) Image processing system, apparatus, method, and medical image diagnostic apparatus
CN107157588B (en) Data processing method of image equipment and image equipment
US20200219329A1 (en) Multi axis translation
WO2020173054A1 (en) Vrds 4d medical image processing method and product
JP2009247502A (en) Method and apparatus for forming intermediate image and program
US10074198B2 (en) Methods and apparatuses for image processing and display
CN112950774A (en) Three-dimensional modeling device, operation planning system and teaching system
Loja et al. Using 3D anthropometric data for the modelling of customised head immobilisation masks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 2258 Chengbei Road, Jiading District, Shanghai 201807

Applicant after: Shanghai Lianying Medical Technology Co., Ltd

Address before: No. 2258 Chengbei Road, Jiading District, Shanghai 201807

Applicant before: SHANGHAI UNITED IMAGING HEALTHCARE Co.,Ltd.

GR01 Patent grant