CN114388056A - Protein cross section generation method based on AR - Google Patents

Protein cross section generation method based on AR

Info

Publication number
CN114388056A
Authority
CN
China
Prior art keywords: protein, cross, plane, dimensional image, observed
Prior art date: 2022-01-13
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210037365.6A
Other languages
Chinese (zh)
Other versions
CN114388056B (en)
Inventor
成生辉 (Cheng Shenghui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Westlake University
Original Assignee
Westlake University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2022-01-13
Publication date: 2022-04-22
Application filed by Westlake University
Priority to CN202210037365.6A
Publication of CN114388056A
Application granted
Publication of CN114388056B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B: BIOINFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B 15/00: ICT specially adapted for analysing two-dimensional or three-dimensional molecular structures, e.g. structural or functional relations or structure alignment
    • G16B 15/20: Protein or domain folding
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention discloses an AR-based protein cross section generation method, apparatus, device, and computer-readable storage medium. The AR-based protein cross section generation method comprises the following steps: displaying a three-dimensional image of a protein to be observed in a real scene through an AR headset; acquiring a user instruction through the AR headset; establishing a reference surface in the real scene, and adjusting the reference surface according to the user instruction; and, after receiving a confirmation instruction, generating a cross-sectional view of the protein to be observed using the reference surface as the cutting reference. The method is convenient to operate and offers a high degree of operational freedom.

Description

Protein cross section generation method based on AR
Technical Field
The invention relates to the technical field of protein cross section generation, and in particular to an AR-based protein cross section generation method, apparatus, device, and computer-readable storage medium.
Background
Proteins fold from polypeptide chains into spatially stacked, seemingly disordered structures. To understand the microstructure of these spatial structures, computer-based three-dimensional imaging of proteins is widely used.
However, such imaging focuses only on the arrangement of the basic amino acid units, and gives little insight into the density distribution and porosity of a protein.
To display conditions such as density distribution and porosity inside a protein intuitively, the conventional approach is to perform a Boolean subtraction on the protein model with a virtual plane on a computer to obtain the corresponding sectional view. This sectioning operation requires selecting multiple references and setting multiple parameters, so it is cumbersome and demands considerable expertise from the user.
Disclosure of Invention
The embodiments of the present application aim to simplify the generation of protein cross sections by providing an AR-based protein cross section generation method.
In order to achieve the above object, an embodiment of the present application provides an AR-based protein cross section generation method, including:
displaying a three-dimensional image of a protein to be observed in a real scene through an AR headset;
acquiring a user instruction through the AR headset;
establishing a reference surface in the real scene, and adjusting the reference surface according to the user instruction;
and after receiving a confirmation instruction, generating a cross-sectional view of the protein to be observed using the reference surface as the cutting reference.
In one embodiment, establishing a reference surface in the real scene includes:
determining, from the user instruction, the type of reference surface to be established;
and establishing a reference surface of the matching type in the real scene.
In an embodiment, if the type of reference surface specified by the user instruction is a spherical reference surface, establishing a reference surface of the matching type in the real scene includes:
acquiring the geometric center of the three-dimensional image;
acquiring the distance from the geometric center of the three-dimensional image to its outermost edge;
and establishing the spherical reference surface with the geometric center as the sphere's center and the distance as the sphere's radius.
In an embodiment, if the type of reference surface specified by the user instruction is a planar reference surface, establishing a reference surface of the matching type in the real scene includes:
acquiring the geometric center of the three-dimensional image;
establishing a preset reference axis in the real scene;
and establishing the planar reference surface with the geometric center as the plane's center and the preset reference axis as the plane's normal.
In one embodiment, adjusting the reference surface according to the user instruction includes:
adjusting at least one of the size, position, and angle of the reference surface according to the user instruction.
In one embodiment, the user instruction is a voice instruction.
In one embodiment, generating a cross-sectional view of the protein to be observed using the reference surface as the cutting reference includes:
performing a Boolean subtraction on the three-dimensional image of the protein to be observed, with the intersection of the reference surface and the three-dimensional image as the cross-section position, to generate a cross-sectional view of the protein to be observed;
and displaying the cross-sectional view in the real scene.
In order to achieve the above object, an embodiment of the present application further provides an AR-based protein cross-section generating apparatus, including:
an AR display module, configured to display a three-dimensional image of a protein to be observed in a real scene;
an acquisition module, configured to acquire a user instruction;
a control module, configured to establish a reference surface in the real scene according to the user instruction and to adjust the reference surface according to the user instruction;
wherein the control module is further configured to generate, after receiving a confirmation instruction, a cross-sectional view of the protein to be observed using the reference surface as the cutting reference.
In order to achieve the above object, an embodiment of the present application further provides an AR-based protein cross-section generating device, including a memory, a processor, and an AR-based protein cross-section generation program stored in the memory and executable on the processor; the processor implements any of the AR-based protein cross-section generation methods described above when executing the program.
In order to achieve the above object, an embodiment of the present application further provides a computer-readable storage medium on which an AR-based protein cross-section generation program is stored; when executed by a processor, the program implements any of the AR-based protein cross-section generation methods described above.
According to the AR-based protein cross-section generation method, the three-dimensional image of the protein to be observed is displayed in a real scene through an AR headset, user instructions are acquired through the AR headset to adjust a reference surface established in the real scene, and the cross section of the protein to be observed is finally generated using the reference surface as the cutting reference. Any required protein cross section can thus be generated without entering complex parameters, lowering the technical barrier to obtaining protein cross sections. Moreover, displaying the three-dimensional image through an AR headset removes the limitations of a conventional display: the user can observe the three-dimensional image of the protein freely and adjust the position of the reference surface accordingly, making it easier to obtain a cross-sectional view that meets the user's needs. Compared with the conventional approach of generating protein cross-sectional views by setting complex parameters, the method is convenient to operate and offers a high degree of operational freedom.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of an embodiment of the AR-based protein cross-section generation device of the present invention;
FIG. 2 is a schematic flow chart of an embodiment of the AR-based protein cross-section generation method of the present invention;
FIG. 3 is a schematic flow chart of establishing a spherical reference surface in an embodiment of the AR-based protein cross-section generation method of the present invention;
FIG. 4 is a schematic flow chart of establishing a planar reference surface in an embodiment of the AR-based protein cross-section generation method of the present invention;
FIG. 5 is a block diagram of an embodiment of the AR-based protein cross-section generation apparatus of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
For a better understanding of the above technical solutions, exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It should be noted that, in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words "first," "second," "third," etc. does not denote any order; such words are to be interpreted as names.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a server 1 (also called an AR-based protein cross-section generating device) in a hardware operating environment according to an embodiment of the present invention.
The server in the embodiment of the present invention is a device with a display function, such as an Internet of Things device, an AR/VR device with networking capability, a PC, a smartphone, a tablet computer, or a portable computer.
As shown in fig. 1, the server 1 includes: memory 11, processor 12, and network interface 13.
The memory 11 includes at least one type of readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the server 1, for example a hard disk of the server 1. The memory 11 may also be an external storage device of the server 1 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the server 1.
Further, the memory 11 may include both an internal storage unit of the server 1 and an external storage device. The memory 11 may be used not only to store application software installed on the server 1 and various types of data, such as the code of the AR-based protein cross-section generation program 10, but also to temporarily store data that has been output or is to be output.
The processor 12, which in some embodiments may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, is configured to run program code or process data stored in the memory 11, for example to execute the AR-based protein cross-section generation program 10.
The network interface 13 may optionally comprise a standard wired interface, a wireless interface (e.g. WI-FI interface), typically used for establishing a communication connection between the server 1 and other electronic devices.
The network may be the internet, a cloud network, a wireless fidelity (Wi-Fi) network, a Personal Area Network (PAN), a Local Area Network (LAN), and/or a Metropolitan Area Network (MAN). Various devices in the network environment may be configured to connect to the communication network according to various wired and wireless communication protocols. Examples of such wired and wireless communication protocols may include, but are not limited to, at least one of: Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), ZigBee, EDGE, IEEE 802.11, light fidelity (Li-Fi), IEEE 802.16, IEEE 802.11s, IEEE 802.11g, multi-hop communication, wireless Access Points (APs), device-to-device communication, cellular communication protocols, and/or Bluetooth communication protocols, or a combination thereof.
Optionally, the server may further comprise a user interface, which may include a display and an input unit such as a keyboard; the user interface may also include a standard wired interface and a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is used for displaying information processed in the server 1 and for presenting a visualized user interface.
While fig. 1 only shows the server 1 with the components 11-13 and the AR-based protein cross-section generation program 10, those skilled in the art will appreciate that the structure shown in fig. 1 does not limit the server 1, which may comprise fewer or more components than shown, a combination of certain components, or a different arrangement of components.
In this embodiment, the processor 12 may be configured to call the AR-based protein cross-section generation program stored in the memory 11 and perform the following operations:
displaying a three-dimensional image of a protein to be observed in a real scene through an AR headset;
acquiring a user instruction through the AR headset;
establishing a reference surface in the real scene, and adjusting the reference surface according to the user instruction;
and after receiving a confirmation instruction, generating a cross-sectional view of the protein to be observed using the reference surface as the cutting reference.
In one embodiment, the processor 12 may be configured to call the AR-based protein cross-section generation program stored in the memory 11 and perform the following operations:
determining, from the user instruction, the type of reference surface to be established;
and establishing a reference surface of the matching type in the real scene.
In one embodiment, the processor 12 may be configured to call the AR-based protein cross-section generation program stored in the memory 11 and perform the following operations:
acquiring the geometric center of the three-dimensional image;
acquiring the distance from the geometric center of the three-dimensional image to its outermost edge;
and establishing the spherical reference surface with the geometric center as the sphere's center and the distance as the sphere's radius.
In one embodiment, the processor 12 may be configured to call the AR-based protein cross-section generation program stored in the memory 11 and perform the following operations:
acquiring the geometric center of the three-dimensional image;
establishing a preset reference axis in the real scene;
and establishing the planar reference surface with the geometric center as the plane's center and the preset reference axis as the plane's normal.
In one embodiment, the processor 12 may be configured to call the AR-based protein cross-section generation program stored in the memory 11 and perform the following operations:
adjusting at least one of the size, position, and angle of the reference surface according to the user instruction.
In one embodiment, the processor 12 may be configured to call the AR-based protein cross-section generation program stored in the memory 11 and perform the following operations:
the user instruction is a voice instruction.
In one embodiment, the processor 12 may be configured to call the AR-based protein cross-section generation program stored in the memory 11 and perform the following operations:
performing a Boolean subtraction on the three-dimensional image of the protein to be observed, with the intersection of the reference surface and the three-dimensional image as the cross-section position, to generate a cross-sectional view of the protein to be observed;
and displaying the cross-sectional view in the real scene.
Based on the hardware architecture of the above AR-based protein cross-section generation device, embodiments of the AR-based protein cross-section generation method are provided below. The AR-based protein cross-section generation method of the present invention is intended to simplify the generation of protein cross sections.
Referring to fig. 2, fig. 2 is a diagram illustrating an embodiment of the method for generating an AR-based protein cross-section according to the present invention, wherein the method for generating an AR-based protein cross-section includes the following steps:
and S10, displaying the three-dimensional image of the protein to be observed in the real scene through the AR head-mounted device.
AR stands for Augmented Reality, a technology that fuses virtual information with the real world: computer-generated virtual information such as text, images, three-dimensional models, music, and video is simulated and then overlaid onto the real world. Common AR headsets include AR head-mounted displays and AR glasses. An AR headset generally has display lenses capable of showing AR objects such as three-dimensional models. Because the display lenses are light-transmissive, the user sees the real scene and the AR object simultaneously when the AR object is shown on the lenses, so the AR object appears within the real scene. It should be noted that an AR headset usually also includes an image acquisition module and a computing module: the image acquisition module captures depth information of the real scene, and the computing module models the real scene from that depth information, so that a newly loaded AR object such as a three-dimensional image can be fixed in the real scene with selected real-scene coordinates as anchor points. In other words, once displayed, the coordinates of the three-dimensional image of the protein to be observed are fixed in the real scene.
Specifically, after the three-dimensional image data of the protein to be observed is imported into the AR headset, the user can select the protein to be observed through the headset's menu options; the three-dimensional image of the protein is then shown on the display module, that is, displayed in the real scene.
S20, acquiring the user instruction through the AR headset.
Specifically, an AR headset is typically provided with an image acquisition module, which may be a camera, and a sound acquisition module, which may be a microphone. The image acquisition module can capture the user's gestures and thereby gesture instructions; the sound acquisition module can capture the user's voice and thereby voice instructions. The user instruction referred to here may be a gesture instruction, a voice instruction, or both.
S30, establishing a reference surface in the real scene, and adjusting the reference surface according to the user instruction.
Specifically, after the three-dimensional image of the protein to be observed is loaded into the real scene, a reference surface is established in the real scene. The reference surface is used for generating the cross-sectional view of the protein to be observed, and can be moved, rotated, and scaled in the real scene based on adjustment instructions issued by the user.
After the reference surface is established, it can be adjusted according to the acquired user instructions to change its position relative to the three-dimensional image of the protein to be observed.
S40, after receiving the confirmation instruction, generating a cross-sectional view of the protein to be observed using the reference surface as the cutting reference.
Specifically, while the reference surface is being adjusted in three-dimensional space, once it reaches any cross-section position the user requires, a confirmation instruction can be issued to the computing module of the AR headset. Upon receiving the confirmation instruction, the computing module takes the intersection of the reference surface and the three-dimensional image as the cross section and generates the cross-sectional view of the protein to be observed.
In the AR-based protein cross-section generation method above, the three-dimensional image of the protein to be observed is displayed in the real scene through the AR headset, user instructions are acquired through the AR headset to adjust the reference surface established in the real scene, and the cross section of the protein to be observed is finally generated using the reference surface as the cutting reference. Any required protein cross section can thus be generated without entering complex parameters, lowering the technical barrier to obtaining protein cross sections. Moreover, displaying the three-dimensional image through an AR headset removes the limitations of a conventional display: the user can observe the three-dimensional image of the protein freely and adjust the position of the reference surface accordingly, making it easier to obtain a cross-sectional view that meets the user's needs. Compared with the conventional approach of generating protein cross-sectional views by setting complex parameters, the method is convenient to operate and offers a high degree of operational freedom.
In one embodiment, establishing a reference surface in the real scene includes:
S110, determining, from the user instruction, the type of reference surface to be established.
S120, establishing a reference surface of the matching type in the real scene.
Exemplary types of reference surface include, but are not limited to, planar, curved, and spherical. A planar reference surface extends in only two of the three dimensions X, Y, Z. A curved reference surface is similar to a planar one but extends in all three dimensions X, Y, Z simultaneously; for example, it may be arc-shaped or wave-shaped. A spherical reference surface is a closed three-dimensional surface extending in all three dimensions X, Y, Z.
Specifically, reference surfaces of the corresponding types are established according to different user instructions, so users can select the type of reference surface that suits their needs, which broadens the applicability of the scheme. Of course, the design of the present application is not so limited; in other embodiments, only one type of reference surface may be provided. A minimal sketch of this type selection is given below.
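As an illustration only, the following sketch shows one way the S110-S120 type selection could be organized. The keyword table, the English keywords, and the default type are assumptions made for the sketch; they are not specified by the patent.

```python
from enum import Enum, auto

class ReferenceSurfaceType(Enum):
    PLANAR = auto()     # extends in two of the three dimensions X, Y, Z
    CURVED = auto()     # e.g. arc- or wave-shaped; extends in all three dimensions
    SPHERICAL = auto()  # closed surface circumscribing the protein model

# Hypothetical keyword table; a real system would match the (Chinese) voice keywords.
SURFACE_KEYWORDS = {
    "plane": ReferenceSurfaceType.PLANAR,
    "curve": ReferenceSurfaceType.CURVED,
    "sphere": ReferenceSurfaceType.SPHERICAL,
}

def surface_type_from_instruction(text: str) -> ReferenceSurfaceType:
    """S110: determine the type of reference surface from a user instruction."""
    for keyword, surface_type in SURFACE_KEYWORDS.items():
        if keyword in text.lower():
            return surface_type
    return ReferenceSurfaceType.PLANAR  # assumed default; the patent does not specify one
```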
As shown in fig. 3, in an embodiment, if the type of reference surface specified by the user instruction is a spherical reference surface, establishing a reference surface of the matching type in the real scene includes:
S210, acquiring the geometric center of the three-dimensional image.
The geometric center of the three-dimensional image is the geometric center of the protein to be observed.
Specifically, when calculating the geometric center of the protein to be observed, the three-dimensional image may be approximated by a regular polyhedron, such as a regular tetrahedron or a regular hexahedron, and the geometric center of the protein to be observed is then obtained from that polyhedron.
S220, acquiring the distance from the geometric center of the three-dimensional image to its outermost edge.
Specifically, this distance can be obtained from the three-dimensional coordinates of the geometric center and the three-dimensional coordinates of a point on the outermost edge of the three-dimensional image.
S230, establishing the spherical reference surface with the geometric center as the sphere's center and the distance as the sphere's radius.
Specifically, once the sphere center and radius of the spherical reference surface are determined, the desired spherical reference surface can be established in the real scene. Because its radius equals the distance from the geometric center to the outermost edge of the three-dimensional image, the spherical reference surface can be regarded as a circumscribed sphere of the protein's three-dimensional image.
It can be understood that this arrangement fully contains the protein's three-dimensional image within the spherical reference surface, which makes it easy for the user to observe the relative position between the spherical reference surface and the three-dimensional image and to obtain the required protein cross section. In addition, establishing the spherical reference surface in the real scene lets the user observe the cross-sectional structure of the protein from all directions, increasing the available viewing angles. The construction is sketched below.
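A minimal numerical sketch of S210-S230, assuming the protein's three-dimensional image is available as an (N, 3) numpy array of world-space vertex coordinates; the vertex centroid stands in for the patent's equivalent-polyhedron construction of the geometric center.

```python
import numpy as np

def spherical_reference_surface(vertices: np.ndarray) -> tuple[np.ndarray, float]:
    """Return (center, radius) of the circumscribed spherical reference surface."""
    # S210: geometric center, here approximated by the vertex centroid.
    center = vertices.mean(axis=0)
    # S220: distance from the geometric center to the outermost vertex.
    radius = float(np.linalg.norm(vertices - center, axis=1).max())
    # S230: the sphere (center, radius) circumscribes the whole three-dimensional image.
    return center, radius
```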
As shown in fig. 4, in an embodiment, if the type of reference surface specified by the user instruction is a planar reference surface, establishing a reference surface of the matching type in the real scene includes:
S410, acquiring the geometric center of the three-dimensional image.
The geometric center of the three-dimensional image is the geometric center of the protein to be observed.
Specifically, when calculating the geometric center of the protein to be observed, the three-dimensional image may be approximated by a regular polyhedron, such as a regular tetrahedron or a regular hexahedron, and the geometric center of the protein to be observed is then obtained from that polyhedron.
S420, establishing a preset reference axis in the real scene.
Specifically, the preset reference axis may be any axis in the real scene; for ease of calculation, any coordinate axis of the real scene's reference coordinate system (i.e., the XYZ coordinate system) may be used as the preset reference axis, for example the X axis.
S430, establishing the planar reference surface with the geometric center as the plane's center and the preset reference axis as the plane's normal.
Specifically, once the center and normal of the planar reference surface are determined, the desired planar reference surface can be established accordingly.
It can be understood that establishing a planar reference surface through the geometric center of the protein to be observed makes it easy for the user to cut a planar section of the protein. A companion sketch follows.
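A companion sketch of S410-S430 under the same assumptions; the default reference axis is the scene's X axis, as in the example above.

```python
import numpy as np

def planar_reference_surface(vertices: np.ndarray,
                             reference_axis=(1.0, 0.0, 0.0)):
    """Return (origin, unit_normal) of a planar reference surface through
    the protein's geometric center."""
    # S410: geometric center, again approximated by the vertex centroid.
    origin = vertices.mean(axis=0)
    # S420/S430: the preset reference axis (default: scene X axis) is the plane normal.
    normal = np.asarray(reference_axis, dtype=float)
    normal /= np.linalg.norm(normal)
    return origin, normal
```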
In one embodiment, adjusting the reference surface according to the user instruction includes:
adjusting at least one of the size, position, and angle of the reference surface according to the user instruction.
Specifically, when the established reference surface is spherical, the user can enlarge or shrink it about its sphere center; when the established reference surface is planar, the user can adjust its angle and position, as sketched below.
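A sketch of the two adjustment modes, again assuming numpy; the use of Rodrigues' rotation formula for rotating the plane is a choice made for the sketch, not something the patent prescribes.

```python
import numpy as np

def scale_sphere(radius: float, factor: float) -> float:
    """Enlarge (factor > 1) or shrink (factor < 1) the spherical reference
    surface about its fixed sphere center."""
    return radius * factor

def rotate_plane_normal(normal, axis, angle_deg: float) -> np.ndarray:
    """Rotate the planar reference surface's normal about a scene axis,
    using Rodrigues' rotation formula."""
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    v = np.asarray(normal, dtype=float)
    theta = np.radians(angle_deg)
    return (v * np.cos(theta)
            + np.cross(k, v) * np.sin(theta)
            + k * np.dot(k, v) * (1.0 - np.cos(theta)))
```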
In an embodiment, the method further includes:
acquiring the confirmation instruction from the user instruction.
That is, after adjusting the reference surface to the desired position, the user may issue the voice command or gesture corresponding to the confirmation instruction; once the AR headset captures that voice or gesture, the confirmation instruction is considered received. It can be understood that letting the user complete both the adjustment of the reference surface and the confirmation of the protein cross section through voice or gestures alone makes the operation more uniform and more convenient. Of course, the design of the present application is not limited thereto; in other embodiments, the confirmation instruction may come from a separate switch module, such as a standalone manual switch or a foot switch.
In one embodiment, generating a cross-sectional view of the protein to be observed using the reference surface as the cutting reference includes:
S510, performing a Boolean subtraction on the three-dimensional image of the protein to be observed, with the intersection of the reference surface and the three-dimensional image as the cross-section position, to generate a cross-sectional view of the protein to be observed.
Here, the Boolean subtraction is a Boolean difference operation. Specifically, when generating a protein cross section, the intersection of the current reference surface and the three-dimensional image of the protein to be observed is taken as the cross-section position, and a Boolean subtraction is performed on the three-dimensional image to obtain the cross-sectional view of the protein at the current position.
S520, displaying the cross-sectional view in the real scene.
Specifically, after the cross-sectional view of the protein to be observed is generated, it is shown on the display module so that the user can observe the cut cross section in real time. The user can then decide, based on the current cross-sectional view, whether to generate a new one. A sketch of the planar cut is given below.
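For the planar case, the open-source trimesh library provides a plane cut corresponding to this step; the sketch below assumes the protein model has been loaded as a trimesh.Trimesh triangle mesh and a reasonably recent trimesh release (for the cap keyword). Capping the cut face is what exposes the internal cross section.

```python
import trimesh

def cut_cross_section(mesh: trimesh.Trimesh, plane_origin, plane_normal) -> trimesh.Trimesh:
    """Boolean-subtract the half-space on one side of the reference plane.

    slice_plane keeps the part of the mesh on the positive side of the
    plane normal; cap=True closes the cut with a planar face, which is
    the cross-sectional view of the protein at the current position.
    """
    return mesh.slice_plane(plane_origin=plane_origin,
                            plane_normal=plane_normal,
                            cap=True)
```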
Illustratively, the adjustment instructions are based on the user's voice commands. If the user has selected the spherical reference surface, the reference sphere can be enlarged or shrunk in real time simply by speaking commands containing the keywords 'enlarge' or 'shrink'. If the user has selected the planar reference surface, its position and angle can be adjusted in real time by speaking commands of the form direction + action + amount (e.g., 'move left 0.5', 'translate 0.2', 'turn left 30 degrees', 'rotate left 20 degrees'). When the reference surface reaches a satisfactory position, the user speaks a sentence containing a confirmation keyword such as 'OK' or 'good'; upon receiving this voice confirmation signal, the AR headset performs the Boolean subtraction on the three-dimensional image, generating and displaying the desired cross-sectional view. A minimal sketch of such a keyword parser follows.
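A minimal sketch of that keyword grammar. The English keywords and scale step factors are illustrative stand-ins (the patent's examples are Chinese voice commands), and the regular expression covers only the direction + action + amount pattern quoted above.

```python
import re

def parse_voice_command(text: str):
    """Map a recognized utterance onto an adjustment or confirmation action."""
    text = text.lower()
    if "ok" in text or "good" in text:   # confirmation keywords
        return ("confirm", None)
    if "enlarge" in text:                # spherical surface: scale up (assumed step factor)
        return ("scale", 1.1)
    if "shrink" in text:                 # spherical surface: scale down (assumed step factor)
        return ("scale", 0.9)
    # Planar surface: action + direction + amount, e.g. "move left 0.5".
    match = re.search(r"(move|translate|turn|rotate)\s+(left|right|up|down)?\s*([\d.]+)", text)
    if match:
        action, direction, amount = match.groups()
        return (action, (direction, float(amount)))
    return ("ignore", None)

assert parse_voice_command("move left 0.5") == ("move", ("left", 0.5))
assert parse_voice_command("OK")[0] == "confirm"
```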
In addition, referring to fig. 5, an embodiment of the present invention further provides an AR-based protein cross-section generating apparatus, including:
a display module 110, configured to display a three-dimensional image of a protein to be observed in a real scene;
an obtaining module 120, configured to obtain a user instruction;
a control module 130, configured to establish a reference surface in the real scene according to the user instruction and to adjust the reference surface according to the user instruction;
wherein the control module is further configured to generate, after receiving a confirmation instruction, a cross-sectional view of the protein to be observed using the reference surface as the cutting reference.
For the steps implemented by each functional module of the AR-based protein cross-section generation apparatus, reference may be made to the embodiments of the AR-based protein cross-section generation method of the present invention; they are not repeated here.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, which may be any one of, or any combination of, a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a portable Compact Disc Read-Only Memory (CD-ROM), a USB memory, and the like. The computer-readable storage medium includes the AR-based protein cross-section generation program 10; the specific embodiments of the computer-readable storage medium are substantially the same as those of the AR-based protein cross-section generation method and the server 1 described above, and are not repeated here.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. An AR-based protein cross section generation method, comprising:
displaying a three-dimensional image of a protein to be observed in a real scene through an AR headset;
acquiring a user instruction through the AR headset;
establishing a reference surface in the real scene, and adjusting the reference surface according to the user instruction;
and after receiving a confirmation instruction, generating a cross-sectional view of the protein to be observed using the reference surface as the cutting reference.
2. The AR-based protein cross section generation method of claim 1, wherein establishing a reference surface in the real scene comprises:
determining, from the user instruction, the type of reference surface to be established;
and establishing a reference surface of the matching type in the real scene.
3. The AR-based protein cross section generation method of claim 2, wherein, if the type of reference surface specified by the user instruction is a spherical reference surface, establishing a reference surface of the matching type in the real scene comprises:
acquiring the geometric center of the three-dimensional image;
acquiring the distance from the geometric center of the three-dimensional image to its outermost edge;
and establishing the spherical reference surface with the geometric center as the sphere's center and the distance as the sphere's radius.
4. The AR-based protein cross section generation method of claim 2, wherein, if the type of reference surface specified by the user instruction is a planar reference surface, establishing a reference surface of the matching type in the real scene comprises:
acquiring the geometric center of the three-dimensional image;
establishing a preset reference axis in the real scene;
and establishing the planar reference surface with the geometric center as the plane's center and the preset reference axis as the plane's normal.
5. The AR-based protein cross section generation method of claim 1, wherein adjusting the reference surface according to the user instruction comprises:
adjusting at least one of the size, position, and angle of the reference surface according to the user instruction.
6. The AR-based protein cross section generation method of claim 1, wherein the user instruction is a voice instruction.
7. The AR-based protein cross section generation method of claim 1, wherein generating a cross-sectional view of the protein to be observed using the reference surface as the cutting reference comprises:
performing a Boolean subtraction on the three-dimensional image of the protein to be observed, with the intersection of the reference surface and the three-dimensional image as the cross-section position, to generate a cross-sectional view of the protein to be observed;
and displaying the cross-sectional view in the real scene.
8. An AR-based protein cross-section generation apparatus, comprising:
an AR display module, configured to display a three-dimensional image of a protein to be observed in a real scene;
an acquisition module, configured to acquire a user instruction;
a control module, configured to establish a reference surface in the real scene according to the user instruction and to adjust the reference surface according to the user instruction;
wherein the control module is further configured to generate, after receiving a confirmation instruction, a cross-sectional view of the protein to be observed using the reference surface as the cutting reference.
9. An AR-based protein cross-section generation apparatus comprising a memory, a processor, and an AR-based protein cross-section generation program stored on the memory and executable on the processor, wherein the processor implements the AR-based protein cross-section generation method according to any one of claims 1 to 7 when executing the AR-based protein cross-section generation program.
10. A computer-readable storage medium, on which an AR-based protein cross-section generation program is stored, which when executed by a processor implements the AR-based protein cross-section generation method according to any one of claims 1 to 7.
CN202210037365.6A · Filed 2022-01-13 · Priority 2022-01-13 · AR-based protein section generation method · Active · Granted as CN114388056B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210037365.6A (granted as CN114388056B) | 2022-01-13 | 2022-01-13 | AR-based protein section generation method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202210037365.6A (granted as CN114388056B) | 2022-01-13 | 2022-01-13 | AR-based protein section generation method

Publications (2)

Publication Number | Publication Date
CN114388056A (en) | 2022-04-22
CN114388056B (en) | 2023-06-16

Family

ID=81201946

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210037365.6A (Active; granted as CN114388056B) | AR-based protein section generation method | 2022-01-13 | 2022-01-13

Country Status (1)

Country Link
CN (1) CN114388056B (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005169070A (en) * 2003-06-27 2005-06-30 Toshiba Corp Image processing and displaying device and method for controlling the same
WO2005024756A1 (en) * 2003-09-07 2005-03-17 Yiyu Cai Molecular studio for virtual protein lab
JP2011120825A (en) * 2009-12-14 2011-06-23 Fujifilm Corp Medical image display device, method, and program
US20180164434A1 (en) * 2014-02-21 2018-06-14 FLIR Belgium BVBA 3d scene annotation and enhancement systems and methods
US20160042248A1 (en) * 2014-08-11 2016-02-11 Canon Kabushiki Kaisha Image processing apparatus, image processing method, medical image diagnostic system, and storage medium
US20180020992A1 (en) * 2015-02-16 2018-01-25 Dimensions And Shapes, Llc Systems and methods for medical visualization
US20180018815A1 (en) * 2015-04-24 2018-01-18 Hewlett-Packard Development Company, L.P. Three-dimensional object representation
US20170270705A1 (en) * 2016-03-15 2017-09-21 Siemens Healthcare Gmbh Model-based generation and representation of three-dimensional objects
CN111949113A (en) * 2019-05-15 2020-11-17 阿里巴巴集团控股有限公司 Image interaction method and device applied to virtual reality VR scene
US20200326814A1 (en) * 2019-12-05 2020-10-15 Intel Corporation System and methods for human computer interface in three dimensions

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Christoph Müller et al., "Interactive Molecular Graphics for Augmented Reality Using HoloLens", pp. 1-13. *
Laura E. Reeves et al., "Use of augmented reality (AR) to aid bioscience education and enrich student experience", vol. 29, pp. 1-15. *
Min Zheng et al., "ChemPreview: an augmented reality-based molecular interface", vol. 73, pp. 18-23, XP029962239, DOI: 10.1016/j.jmgm.2017.01.019. *
Li Ming et al. (李明等), "Interactive extraction of local skeletons from 3D mesh models" (三维网格模型局部骨架交互式提取方法研究), Journal of Hebei University of Engineering (Natural Science Edition), vol. 32, no. 1, pp. 86-90. *

Also Published As

Publication number Publication date
CN114388056B (en) 2023-06-16

Similar Documents

Publication Publication Date Title
CA3090747C (en) Automatic rig creation process
CN108594999B (en) Control method and device for panoramic image display system
JP7268071B2 (en) Virtual avatar generation method and generation device
WO2022143179A1 (en) Virtual character model creation method and apparatus, electronic device, and storage medium
CN113262465A (en) Virtual reality interaction method, equipment and system
KR20230113370A (en) face animation compositing
CN112138386A (en) Volume rendering method and device, storage medium and computer equipment
CN112206517A (en) Rendering method, device, storage medium and computer equipment
CN114388056B (en) AR-based protein section generation method
US20220319059A1 (en) User-defined contextual spaces
US20220319125A1 (en) User-aligned spatial volumes
US11562548B2 (en) True size eyewear in real time
WO2022212144A1 (en) User-defined contextual spaces
CN111524240A (en) Scene switching method and device and augmented reality equipment
CN114388059B (en) Protein section generation method based on three-dimensional force feedback controller
CN114388060B (en) Round controller-based protein spherical section generation method
US20230342100A1 (en) Location-based shared augmented reality experience system
CN113722644B (en) Method and device for selecting browsing point positions in virtual space based on external equipment
KR102528581B1 (en) Extended Reality Server With Adaptive Concurrency Control
US20240020920A1 (en) Incremental scanning for custom landmarkers
US20240094861A1 (en) Configuring a 3d model within a virtual conferencing system
US20230300176A1 (en) Web calling system
US20230418062A1 (en) Color calibration tool for see-through augmented reality environment
WO2023142945A1 (en) 3d model generation method and related apparatus
US20230419530A1 (en) Augmented reality image reproduction assistant

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant