CN111862333A - Content processing method and device based on augmented reality, terminal equipment and storage medium - Google Patents

Info

Publication number
CN111862333A
Authority
CN
China
Prior art keywords: virtual, information, content, cutting plane, dimensional object
Legal status: Pending
Application number
CN201910352571.4A
Other languages
Chinese (zh)
Inventor
卢智雄
戴景文
贺杰
Current Assignee
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Application filed by Guangdong Virtual Reality Technology Co Ltd
Priority to CN201910352571.4A
Publication of CN111862333A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted

Abstract

The embodiment of the application discloses a content processing method and device based on augmented reality, a terminal device and a storage medium, and relates to the technical field of display. The method comprises the following steps: acquiring model data of a three-dimensional object; determining six-degree-of-freedom (6DoF) information of an interactive device according to an acquired marker image, wherein the marker image comprises a marker arranged on the interactive device; determining the spatial position of a virtual cutting plane according to the 6DoF information; acquiring cutting information of the virtual cutting plane on the three-dimensional object according to the spatial position; obtaining internal virtual sectioning content of the three-dimensional object according to the cutting information and the model data; and displaying the internal virtual sectioning content. By determining the virtual cutting plane from the 6DoF information of the interactive device, the method obtains the internal virtual sectioning content of the three-dimensional object, so that a user can directly observe and control the information of the sectioned three-dimensional object through the interactive device, which improves the convenience of viewing the cutting information of the three-dimensional object.

Description

Content processing method and device based on augmented reality, terminal equipment and storage medium
Technical Field
The present application relates to the field of display technologies, and in particular, to a content processing method and device based on augmented reality, a terminal device, and a storage medium.
Background
In daily life, people can usually observe only the surface features of an object. For example, when observing a building in a real environment, only the external structure of the building can be seen; when observing an animal, only its external shape can be seen. The same is true for virtual objects displayed by an electronic device: typically the user can only see the surface features of the virtual object presented by the electronic device, which makes viewing the object inefficient.
Disclosure of Invention
The embodiment of the application provides a content processing method, a processing device, a terminal device and a storage medium based on augmented reality, which allow the internal structure of a three-dimensional object to be viewed in an augmented reality manner, improve observation efficiency, and provide high interactivity.
In a first aspect, an embodiment of the present application provides a content processing method based on augmented reality, which is applied to a terminal device, and the processing method includes: acquiring model data of a three-dimensional object; determining six-degree-of-freedom (6DoF) information of the interactive device according to the acquired marker image, wherein the marker image comprises a marker arranged on the interactive device; determining the spatial position of the virtual cutting plane according to the 6DoF information; acquiring cutting information of the virtual cutting plane on the three-dimensional object according to the spatial position; obtaining internal virtual sectioning contents of the three-dimensional object according to the sectioning information and the model data; and displaying the internal virtual cut content.
In a second aspect, an embodiment of the present application provides an augmented reality-based content processing apparatus, which includes a model data obtaining module, an information determining module, a virtual cutting plane determining module, a cutting content obtaining module, and a display module. The model data acquisition module is used for acquiring model data of the three-dimensional object, the information determination module is used for determining six-degree-of-freedom (6DoF) information of the interaction device according to the acquired marker image, the marker image comprises a marker, and the marker is arranged on the interaction device. The virtual cutting plane determining module is used for determining the space position of the virtual cutting plane according to the 6DoF information, and the sectioning content obtaining module is used for obtaining the cutting information of the virtual cutting plane on the three-dimensional object and obtaining the internal virtual sectioning content of the three-dimensional object according to the cutting information and the model data. The display module is used for displaying the internal virtual sectioning content.
In a third aspect, an embodiment of the present application provides a terminal device, including: one or more processors; a memory; one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the augmented reality based content processing method provided by the first aspect described above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code may be called by a processor to execute the method for processing augmented reality-based content according to the first aspect.
According to the content processing method and device based on augmented reality, the terminal device and the storage medium, the spatial position of the virtual cutting plane is determined from the 6DoF information of the interaction device, and the internal virtual sectioning content of the three-dimensional object is obtained and displayed according to the virtual cutting plane. A user can therefore control the information of the sectioned three-dimensional object with the interaction device and observe the internal structure of the three-dimensional object in an augmented reality manner, which improves the convenience of viewing the internal structure of the three-dimensional object, improves observation efficiency, and provides high interactivity.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
Fig. 1 is a schematic application diagram of an augmented reality-based content processing system according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a framework of an augmented reality-based content processing system according to an embodiment of the present application.
Fig. 3 is a schematic flowchart of a content processing method based on augmented reality according to an embodiment of the present application.
Fig. 4 is a schematic flowchart of another augmented reality-based content processing method according to an embodiment of the present application.
Fig. 5 is a schematic flow chart of creating a three-dimensional virtual object in the method shown in fig. 4.
FIG. 6 is a schematic illustration of a process of creating a three-dimensional virtual object as shown in FIG. 5.
FIG. 7 is a schematic diagram of another process for creating a three-dimensional virtual object as shown in FIG. 5.
FIG. 8 is a schematic diagram of another process for creating a three-dimensional virtual object as shown in FIG. 5.
Fig. 9 is a schematic flow chart illustrating the process of determining the virtual cutting plane in the method shown in fig. 4.
Fig. 10 is a block diagram illustrating a structure of an augmented reality-based content processing apparatus according to an embodiment of the present application.
Fig. 11 is a block diagram of a terminal device according to an embodiment of the present application.
Fig. 12 is a block diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions of some embodiments will be clearly and completely described below with reference to the accompanying drawings of those embodiments.
In recent years, with the development of Augmented Reality (AR) technology, AR-related electronic devices have gradually entered people's daily life. AR is a technology that augments the user's perception of the real world with information provided by a computer system; it superimposes computer-generated content such as virtual objects, scenes, or system cues onto a real scene to enhance or modify the perception of the real-world environment or of data representing that environment. In a traditional AR scene, a user generally needs to wear AR equipment such as AR glasses or an AR helmet, which acquires and recognizes a marker (also called a tag) in the real environment, so that the user sees the virtual object displayed by the AR equipment superimposed on the real environment. However, conventional AR generally only displays a virtual object, and the user can only see the surface features of the virtual object presented by the electronic device.
Through research, the inventor provides a content processing method, a content processing device, a terminal device and a storage medium based on augmented reality in some embodiments, so that the internal structure of a three-dimensional object can be viewed in an augmented reality mode, the observation efficiency is improved, and the interactivity is high.
Referring to fig. 1, fig. 1 illustrates a schematic diagram of an augmented reality based content processing system 10 according to some embodiments. The augmented reality based content processing system 10 includes a terminal device 100, an interaction device 200, and a three-dimensional object 300. The terminal device 100 is configured to determine a virtual cutting plane 401 according to the spatial position information of the interaction device 200, determine internal virtual sectioning content corresponding to the three-dimensional object 300 based on the virtual cutting plane 401, and control display of the internal virtual sectioning content according to a control instruction generated by the interaction device 200, so that a user can view the internal structure of the three-dimensional object 300 in an augmented reality manner.
Referring to fig. 2, in some embodiments, the terminal device 100 may be a head-mounted display device, or may be a mobile device such as a mobile phone or a tablet. When the terminal device 100 is a head-mounted display device, it may be an integrated head-mounted display device or an external head-mounted display device connected to a separate host. The terminal device 100 may also be an intelligent terminal such as a mobile phone connected to an external head-mounted display device, that is, the terminal device 100 may serve as the processing and storage device of the head-mounted display device and be inserted into or connected to the external head-mounted display device.
In some embodiments, the interaction device 200 is a tablet-shaped electronic device having a marker 210 disposed thereon. The specific shape of the interaction device 200 is not limited and may be, for example, square or circular. The number of markers 210 on the interaction device 200 may be one or more. In one embodiment, the marker 210 is disposed on a surface of the interaction device 200; in this case, the interaction device 200 is an electronic device integrated with the marker 210. In some embodiments, the interaction device 200 may be held and manipulated by a user, and the marker 210 may be integrated into the interaction device 200 or attached to it, for example by pasting; if the interaction device 200 is provided with a display screen, the marker 210 may also be displayed on that display screen. In some embodiments, the marker 210 can be connected to the interaction device 200 by means of an accessory; for example, the marker 210 can be disposed on a protective cover of the interaction device 200, or plugged into the interaction device 200 through a USB (Universal Serial Bus) interface.
When the interaction device 200 is used, the marker 210 can be located within the visual field of the terminal device 100, so that the terminal device 100 can acquire an image containing the marker 210 disposed on the interaction device 200 in order to recognize and track the marker 210, thereby locating and tracking the interaction device 200 or recognizing its spatial position and spatial attitude information. Further, the interaction device 200 and the terminal device 100 may be connected wirelessly, for example through Bluetooth, WiFi (Wireless Fidelity), or ZigBee, or in a wired manner such as through a data line; the connection method between the terminal device 100 and the interaction device 200 is not limited in the embodiments of the application.
In some embodiments, the marker 210 is a pattern having a topology, which refers to the connectivity between the sub-markers and the feature points, etc. in the marker 210. When the marker 210 is within the visual field of the terminal device 100, the terminal device 100 may use the marker 210 within the visual field as a target marker and acquire an image including the target marker. After the image including the target marker is acquired, the acquired image of the target marker can be recognized to obtain spatial position information such as the position and the posture of the target marker relative to the terminal device 100 and recognition results such as the identity information of the target marker, so as to obtain spatial position information such as the position and the posture of the interaction device 200 relative to the terminal device 100, that is, Six Degrees of Freedom (6 DoF) information of the interaction device 200, thereby realizing the positioning and tracking of the interaction device 200.
In some embodiments, the marker 210 may also be a light-spot type marker, and the terminal device tracks the light spot to obtain spatial position information such as relative position and posture. In a specific embodiment, a light spot and an Inertial Measurement Unit (IMU) may be disposed on the interaction device; the terminal device may acquire an image of the light spot on the interaction device through the image sensor, acquire measurement data through the inertial measurement unit, and determine the relative spatial position information between the interaction device and the terminal device according to the light-spot image and the measurement data, thereby realizing positioning and tracking of the interaction device. The light spots disposed on the interaction device may be visible or infrared, and there may be a single light spot or a light-spot sequence consisting of a plurality of light spots.
The 6DoF information of the interaction device 200 means that the interaction device has six degrees of freedom in space, namely translational freedom along the three orthogonal coordinate axes X, Y, and Z of a spatial coordinate system and rotational freedom about each of those three axes. In some embodiments, the 6DoF information of the interaction device 200 includes at least the moving direction, moving distance, rotating direction, and rotating angle of the interaction device 200 relative to the terminal device 100.
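As a purely illustrative sketch (not part of the disclosed embodiments), the 6DoF information described above can be held in a small structure containing a translation along the three coordinate axes and a rotation about them; all names below are hypothetical. Such a structure can also be converted into a 4x4 homogeneous transform, which is convenient for the plane and coordinate-set computations in the later steps.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SixDoF:
    """Hypothetical container for 6DoF information: translation along X/Y/Z
    and rotation about X/Y/Z, both expressed relative to the terminal device."""
    translation: np.ndarray  # shape (3,), metres along X, Y, Z
    rotation: np.ndarray     # shape (3,), radians about X, Y, Z (Euler angles)

    def as_matrix(self) -> np.ndarray:
        """Build a 4x4 homogeneous transform (device frame -> terminal frame)."""
        rx, ry, rz = self.rotation
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx
        T[:3, 3] = self.translation
        return T
```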
After the terminal device 100 acquires the 6DoF information of the interaction device 200, the spatial position of the virtual cutting plane 401 may be determined according to that 6DoF information. The virtual cutting plane 401 and the interaction device 200 may have a certain relative positional relationship: for example, the virtual cutting plane may be the plane in which the interaction device 200 lies, or may form a certain included angle with the interaction device 200. In some embodiments, after the terminal device 100 determines the spatial position of the virtual cutting plane 401, the virtual cutting plane 401 may be displayed according to that spatial position, and the user can see through the terminal device 100 that the virtual cutting plane 401 is displayed superimposed on the interaction device 200; in other embodiments, the terminal device 100 may not display the virtual cutting plane 401. The user may move the interaction device 200 to cut the three-dimensional object 300 with the virtual cutting plane 401.
The terminal device 100 may acquire cutting information of the virtual cutting plane 401 for the three-dimensional object 300 according to the spatial position of the virtual cutting plane 401, so that the internal virtual cutting content 403 of the three-dimensional object 300 may be acquired and displayed according to the cutting information. Therefore, a user can directly observe and control the information (such as a section, an internal structure and the like) of the three-dimensional object 300 after being cut through the interaction device 200, and the convenience of the user in viewing the cutting information of the three-dimensional object 300 is improved.
In some embodiments, the three-dimensional object 300 is a third-party object, that is, the three-dimensional object 300 is an object outside the terminal device 100 and the interaction apparatus 200, and may be any physical entity in the real world, for example, the three-dimensional object 300 may be any simulation model, conceptual model physical entity, building, vehicle, or animal, plant, etc. with life, and is not limited herein. In some embodiments, the three-dimensional object 300 may also be any three-dimensional virtual object displayed by the terminal device 100.
When a specified relative positional relationship is formed between the interaction device 200 and the three-dimensional object 300, the terminal device 100 may obtain the spatial position of the virtual cutting plane 401 according to the spatial position information of the interaction device 200 and obtain the cutting information of the three-dimensional object 300 corresponding to the virtual cutting plane 401. The terminal device 100 may further obtain the internal virtual sectioning content 403 of the three-dimensional object 300 according to the cutting information; the internal virtual sectioning content 403 may be displayed based on the spatial position information of the marker 210 relative to the terminal device and may be processed by the user through the interaction device 200.
In some embodiments, at least one manipulation region is provided on the interactive device 200, so that a user can perform a control operation on the manipulation region to process the internal virtual cutting content 403. Wherein, the manipulation area may include at least one of a key, a touch pad and a touch screen. The interactive device 200 may acquire control data corresponding to the control operation through the control operation detected by the manipulation area, and send the control data to the terminal device 100. When the terminal device 100 receives the control data transmitted by the interaction means 200, the internal virtual cut content 403 may be processed according to the control data. For example, the terminal device 100 may change attributes of contours, lines, colors, and the like of the internal virtual cut content 403 or reconstruct the internal virtual cut content 403 according to the control data.
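A minimal sketch of how the terminal device might dispatch the control data received from the manipulation area is shown below; the action names and the data layout are assumptions made for illustration and are not part of the disclosure.

```python
def apply_control_data(content: dict, control: dict) -> dict:
    """Hypothetical handler: update display attributes of the internal
    virtual sectioning content according to control data sent by the
    interaction device (e.g. colour, line width, contour visibility)."""
    handlers = {
        "set_color":   lambda c, v: c.update(color=v),
        "set_line":    lambda c, v: c.update(line_width=v),
        "set_contour": lambda c, v: c.update(show_contour=v),
    }
    action = control.get("action")
    if action in handlers:
        handlers[action](content, control.get("value"))
    return content
```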
In some embodiments, the interaction device 200 may also be a mobile terminal with a touch screen, such as a smart phone, a tablet computer, and the like, and the interaction device 200 may have a touch screen that can display a picture and can be manipulated.
Based on the processing system, some embodiments provide an augmented reality-based content processing method, which is applied to the terminal device and the interaction device of the augmented reality-based content processing system. A specific augmented reality-based content processing method is described below.
Referring to fig. 3, fig. 3 illustrates an augmented reality-based content processing method according to some embodiments, which can be applied to the terminal device. In this method, the spatial position of a virtual cutting plane is determined from the 6DoF information of the interaction device, and the terminal device acquires and displays the internal virtual sectioning content of the three-dimensional object according to the virtual cutting plane. The user can therefore use the interaction device to control the information of the sectioned three-dimensional object and observe its internal structure (such as a cross section or internal architecture) in an augmented reality manner, which improves the convenience of viewing the internal structure of the three-dimensional object. The steps of the method are executed by the terminal device; they may be performed sequentially in the order shown in the flowchart, or several steps may be performed simultaneously depending on the actual situation, which is not limited herein. In some embodiments, the augmented reality-based content processing method includes steps S110 to S160.
Step S110: model data of a three-dimensional object is acquired.
In some embodiments, the three-dimensional object may include at least one of a physical object in real space and a virtual object displayed by the terminal device.
The physical object in real space may be any object, for example, in the embodiment shown in fig. 2, the physical three-dimensional object is a simulated heart model 300 held in the hand of a user. The terminal device may acquire an image of the entity object through the image acquisition device, and identify the entity object in the image to obtain identity information of the entity object, so as to obtain model data of the entity object according to the identity information, where the identity information may include a name, a model number, an object number, and the like of the entity object, and is not limited herein.
The virtual object displayed by the terminal device may be a three-dimensional virtual object displayed using AR (such as the virtual object 400 in fig. 2), or a virtual object displayed by the terminal device using a mixed-reality display technology (such as holographic projection). The user can view both the virtual object and the real-space scene in front of the lens through the lens of the terminal device, and thus sees virtual objects such as a virtual human body, virtual organ, virtual animal, or virtual house displayed superimposed on the real space. As one implementation, the terminal device may directly obtain the model data according to identity information such as a number corresponding to the virtual object.
In some embodiments, the model data of the three-dimensional object includes at least external structure information and internal structure information of the three-dimensional object. The external structure information may be appearance information of the three-dimensional object, such as lines, shapes, colors, and sizes, and the internal structure information may describe the inside of the three-dimensional object, such as fillers, internal architecture, and colors. For example, in the embodiment shown in fig. 2, the external structure information of the three-dimensional object may include the external shape, color, vessel veins, and the arterial and venous orientation of the simulated heart model 300, and the internal structure information may include the structure of the left and right atria, the left and right ventricles, and the internal arteries and veins of the simulated heart model 300.
Further, the model data of the three-dimensional object is obtained by using the identity information of the three-dimensional object, and may be downloaded from a server by the terminal device according to the identity information of the three-dimensional object, or obtained from other devices by the terminal device according to the identity information of the three-dimensional object, or obtained from a local storage by the terminal device according to the identity information of the three-dimensional object.
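As an illustrative sketch of this look-up (local storage first, then a server), the snippet below assumes a hypothetical JSON model store; the URL and file layout are placeholders and not taken from the patent.

```python
import json
from pathlib import Path
from urllib.request import urlopen

def load_model_data(identity: str,
                    cache_dir: Path = Path("model_cache"),
                    server_url: str = "https://example.com/models") -> dict:
    """Hypothetical retrieval of model data (external + internal structure
    information) keyed by the identity information recognised from the
    three-dimensional object."""
    local = cache_dir / f"{identity}.json"
    if local.exists():                       # local storage first
        return json.loads(local.read_text())
    with urlopen(f"{server_url}/{identity}.json") as resp:   # fall back to a server
        data = json.loads(resp.read().decode("utf-8"))
    cache_dir.mkdir(exist_ok=True)
    local.write_text(json.dumps(data))       # cache for next time
    return data
```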
Step S120: six degree of freedom (6DoF) information of the interactive device is determined from the acquired marker image, the marker image including a marker disposed on the interactive device.
In some embodiments, the image of the marker including the marker may be captured by an image capturing device of the terminal device, wherein the marker may be integrated into the interactive device, or may be attached to the interactive device by being pasted on the interactive device, or may be a pattern selectively presented on the interactive device (e.g., a pattern displayed after the interactive device is powered on).
In some embodiments, an image of a marker on the interaction device may be acquired by the image acquisition device, the terminal device identifies the marker in the image, and according to the identification result of the marker, position and posture information of the interaction device relative to the terminal device, that is, Six Degrees of Freedom (6DoF) information of the interaction device, is acquired.
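The embodiments do not prescribe a particular recognition algorithm. One common way to obtain the position and posture of a planar marker from a single image is a perspective-n-point solve over the marker's known corner layout; the sketch below illustrates this with OpenCV and is an assumption for illustration rather than the patent's own method.

```python
import cv2
import numpy as np

def estimate_marker_6dof(image_points: np.ndarray,
                         marker_size_m: float,
                         camera_matrix: np.ndarray,
                         dist_coeffs: np.ndarray):
    """Illustrative marker pose estimation: solve the perspective-n-point
    problem between the marker's known corner layout and its detected
    image corners, yielding rotation and translation relative to the camera."""
    half = marker_size_m / 2.0
    # Marker corners in the marker's own coordinate frame (a flat square).
    object_points = np.array([[-half,  half, 0.0],
                              [ half,  half, 0.0],
                              [ half, -half, 0.0],
                              [-half, -half, 0.0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_points,
                                  image_points.astype(np.float32),
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    return rvec.ravel(), tvec.ravel()   # rotation (Rodrigues vector) + translation
```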
Step S130: and determining the spatial position of the virtual cutting plane according to the 6DoF information.
In some embodiments, the terminal device may determine a spatial position of a virtual cutting plane according to the acquired 6DoF information of the interactive apparatus, where the virtual cutting plane refers to a virtual plane used to cut the three-dimensional object.
The terminal device may be preset with a relative positional relationship between the virtual cutting plane and the interaction device, and the spatial position of the virtual cutting plane may be determined based on this relationship. Specifically, a coordinate set of the virtual cutting plane in the virtual space may be determined according to the acquired 6DoF information of the interaction device and the relative positional relationship, yielding the spatial position of the virtual cutting plane. The virtual cutting plane may be the plane in which the interaction device lies, or a plane determined by the terminal device based on the spatial position of the interaction device; for example, a preset included angle such as 90 degrees or 45 degrees may be formed between the virtual cutting plane and the plane of the interaction device. Referring to fig. 2, the virtual cutting plane 401 is a plane determined based on the spatial position of the interaction device 200 and is substantially at an angle of 90 degrees with respect to the plane of the interaction device 200.
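The following sketch illustrates step S130 under assumed conventions (the device plane is its local X-Y plane and the preset relationship is a tilt about the local X axis plus an optional offset); it is an illustration, not the disclosed implementation.

```python
import numpy as np

def cutting_plane_from_pose(device_pose: np.ndarray,
                            angle_deg: float = 90.0,
                            offset_m: float = 0.0):
    """Derive a cutting plane (point, normal) in virtual space from the
    interaction device's 4x4 pose and a preset included angle.
    angle_deg = 0 makes the plane coincide with the device plane."""
    R = device_pose[:3, :3]          # device orientation in virtual space
    origin = device_pose[:3, 3]      # device position in virtual space
    theta = np.radians(angle_deg)
    # Rotate the device-plane normal (local Z) about the local X axis.
    local_normal = np.array([0.0, -np.sin(theta), np.cos(theta)])
    normal = R @ local_normal
    point = origin + R @ np.array([0.0, 0.0, offset_m])
    return point, normal / np.linalg.norm(normal)
```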
Step S140: and acquiring cutting information of the virtual cutting plane on the three-dimensional object.
In some embodiments, the cutting information corresponding to the virtual cutting plane refers to the information data produced when the three-dimensional object is cut by the virtual cutting plane. The information data may include the content formed after the model data of the three-dimensional object is cut by the virtual cutting plane in the virtual space, for example the coordinate set corresponding to the cross section formed by the cut, the shape and appearance of that cross section, and the internal structure of the three-dimensional object at that cross section. The information data may also include data associated with the content at the cutting plane, such as, but not limited to, component parameters, attributes, and property descriptions.
Further, the cutting information of the virtual cutting plane for the three-dimensional object is determined, and the cutting information may be downloaded from a server by the terminal device according to the correspondence between the virtual cutting plane and the three-dimensional object and the model data of the three-dimensional object, or may be acquired from another device by the terminal device according to the correspondence between the virtual cutting plane and the three-dimensional object and the model data of the three-dimensional object, or may be acquired from a local storage by the terminal device according to the correspondence and the model data.
Step S150: and obtaining the internal virtual sectioning content of the three-dimensional object according to the sectioning information and the model data.
Since the terminal device has determined the cutting information of the virtual cutting plane for the three-dimensional object, it may obtain, according to the cutting information, a model coordinate set corresponding to the surface along which the three-dimensional object is cut (i.e., the cross section described above). The model coordinate set represents the position of the cross section within the model of the three-dimensional object, so the internal virtual sectioning content of the three-dimensional object can be obtained from the model coordinate set and the model data. The internal virtual sectioning content may be the three-dimensional model content of the object cut by the virtual cutting plane (for example, the object cut into two parts of a three-dimensional virtual model in the virtual space), or the section-model content of the object at the cutting plane, that is, content such as the shape and contour of the three-dimensional object model on the cross section.
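For illustration only, one way to derive such a cross-section contour is to intersect the edges of a triangle mesh with the cutting plane, as sketched below; the mesh representation is an assumption and not part of the disclosure.

```python
import numpy as np

def slice_mesh(vertices: np.ndarray, triangles: np.ndarray,
               plane_point: np.ndarray, plane_normal: np.ndarray):
    """Collect the line segments where triangle edges of the model cross
    the cutting plane; together they outline the cut cross-section."""
    d = (vertices - plane_point) @ plane_normal   # signed distance per vertex
    segments = []
    for tri in triangles:
        pts = []
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            da, db = d[a], d[b]
            if da * db < 0:                       # edge straddles the plane
                t = da / (da - db)
                pts.append(vertices[a] + t * (vertices[b] - vertices[a]))
        if len(pts) == 2:
            segments.append((pts[0], pts[1]))
    return segments                               # outline of the cut face
```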
Step S160: displaying the inner virtual cut content.
In some embodiments, after the terminal device obtains the internal virtual cut content, the internal virtual cut content may be displayed.
In some embodiments, if the interactive device is provided with a display area, the internal virtual cut content acquired by the terminal device may be directly displayed in the display area of the interactive device. The display area of the interaction device can be an area where a display screen of the interaction device is located, namely, the internal virtual sectioning content is displayed through the display screen of the interaction device.
In other embodiments, the terminal device may determine a designated display area based on the 6DoF information of the interaction device. The designated display area defines the region of real space in which the internal virtual sectioning content is displayed in a superimposed manner, which suits interaction devices without a display screen or situations where the display screen of the interaction device is not suitable for displaying the current content. After the terminal device obtains the internal virtual sectioning content, it may render the content based on the spatial position of the designated display area and display it superimposed in that area. The plane of the designated display area may be perpendicular to the touch plane of the interaction device, coincide with it, be parallel to it, or be inclined relative to it, and the relative positional relationship between the designated display area and the interaction device may be adjusted according to the viewing habits of the user for convenient viewing and operation. The size of the designated display area may be preset or adjusted according to the actual application requirements.
When the three-dimensional object is a three-dimensional virtual object displayed based on the terminal device, the display state of the three-dimensional virtual object may be directly controlled to display the internal virtual sectioning content.
The processing method for internal virtual sectioning content provided in the above embodiment determines the spatial position of the virtual cutting plane according to the 6DoF information of the interaction device, and acquires and displays the internal virtual sectioning content of the three-dimensional object according to the virtual cutting plane. Therefore, the user can utilize the interaction device to control the information of the dissected three-dimensional object, and observe the internal structure (such as a section, an internal structure and the like) of the three-dimensional object in an augmented reality mode, the convenience of the user in checking the internal structure of the three-dimensional object is improved, the observation efficiency is improved, and the interactivity is high.
Referring to fig. 4, another embodiment of the present application provides a content processing method based on augmented reality, which is applicable to the terminal device. In this method, the spatial position of a virtual cutting plane is determined from the 6DoF information of the interaction device, and the terminal device obtains and displays the internal virtual sectioning content of the three-dimensional object according to the virtual cutting plane. The user can therefore control the information of the sectioned three-dimensional object with the interaction device, observe the internal structure of the three-dimensional object in an augmented reality manner, and directly process the internal virtual sectioning content through the interaction device, which improves the convenience of viewing and editing the internal structure of the three-dimensional object, improves observation efficiency, and provides high interactivity.
The steps of the method are executed by the terminal device; they may be performed sequentially in the order shown in the flowchart, or several steps may be performed simultaneously depending on the actual situation, which is not limited herein. In some embodiments, referring to fig. 4, the method may include steps S210 to S290.
Step S210: model data of a three-dimensional object is acquired.
In some embodiments, the three-dimensional object may include at least one of a physical object in real space and a virtual object displayed by the terminal device.
In some embodiments, when the three-dimensional object is a three-dimensional virtual object displayed by the terminal device, the three-dimensional virtual object may be a three-dimensional virtual object created by a user using the interactive apparatus. At this time, referring to fig. 5, before obtaining the model data of the three-dimensional object, the method further includes: the step of creating a three-dimensional virtual object includes steps S211 to S214.
Step S211: and determining a drawing command corresponding to the control operation according to the control operation acquired by the interactive device.
In some embodiments, the manipulation area of the interaction device has a touch panel. When a user touches and moves on the manipulation area, the interaction device receives the control operation and generates a corresponding control instruction, and the terminal device generates a corresponding drawing command according to that control instruction. The drawing command may include attribute information of the drawn content, such as line, contour, and color.
Step S212: and 6DoF information of the interactive device is acquired, and the section parameters of the three-dimensional virtual object are determined based on the 6DoF information and the drawing command.
In some embodiments, the terminal device may obtain the 6DoF information of the interaction device and determine the cross-section parameters of the cross section of the three-dimensional virtual object corresponding to that 6DoF information according to the associated drawing command. Cross-section parameters of a three-dimensional virtual object include, but are not limited to, cross-sectional profile, material effects, dimensions, and line patterns. As a specific implementation, each distinct piece of 6DoF information of the interaction device may correspond to a different cross section of the three-dimensional virtual object: the plane in which the interaction device lies, determined from its 6DoF information, may be used as the plane of the corresponding cross section, i.e., the interaction device may be coplanar with the cross section of the three-dimensional virtual object, or may directly overlap it. The user can change the 6DoF information by moving the interaction device to select different cross sections of the three-dimensional virtual object and draw each cross section through control operations on the manipulation area; the terminal device then determines the cross-section parameters of the corresponding cross section according to the generated drawing command.
In some embodiments, the process of creating the three-dimensional virtual object may establish the contour data of the three-dimensional virtual object from an end contour and determine the three-dimensional structure data according to the 6DoF information of the interaction device. In this case, step S212 may include: determining a drawing reference plane; generating an end contour of the three-dimensional virtual object on the drawing reference plane according to a drawing command; and, when the interaction device moves, acquiring the 6DoF information of the moved interaction device and generating a cross-sectional contour of the three-dimensional virtual object corresponding to that 6DoF information according to the 6DoF information and the associated drawing command.
In some embodiments, the terminal device determines a drawing reference plane according to the 6DoF information of the interaction device and determines the drawing command corresponding to that reference plane according to the touch operation received by the interaction device, thereby determining the end contour of the three-dimensional virtual object. The drawing reference plane defines the starting plane for model drawing. For example, as shown in fig. 6, the plane in which the current interaction device 200 lies is taken as the drawing reference plane: the drawing reference plane 405, determined by the terminal device 100 based on the 6DoF information of the interaction device 200, is taken as the plane of the end section of the three-dimensional virtual object 400. The user draws a desired graphic (shown as a circle in the figure) in the manipulation area of the interaction device 200, and the terminal device 100 determines from the touch command transmitted by the interaction device 200 that the end contour of the three-dimensional virtual object 400 is circular.
When the interaction device moves, the changing 6DoF information of the moving interaction device and the drawing commands corresponding to each piece of 6DoF information are obtained, and the terminal device can generate the cross-sectional contour of the cross section of the three-dimensional virtual object corresponding to that 6DoF information.
Further, in some embodiments, the interactive device may be provided with a display screen, which may be a touch display screen, so that the cross-sectional profile of the three-dimensional virtual object drawn by the user can be displayed in the display screen of the interactive device in real time.
Step S213: and determining the stereo model data of the three-dimensional virtual object according to the section parameters.
When the interaction device moves, a plurality of different pieces of 6DoF information are generated, each of which may have a corresponding drawing command, and the terminal device may determine the cross-sectional profile corresponding to each piece of 6DoF information. In one embodiment, step S213 may include: determining the relative distance between the cross-sectional contour and the end contour according to the 6DoF information corresponding to the cross-sectional contour; and generating a three-dimensional model of the three-dimensional virtual object extending from the end contour to the cross-sectional contour according to that relative distance.
When the interaction device moves, the space distance between the two cross section outlines can be determined according to the 6DoF information corresponding to the two cross sections in the moving process, so that the thickness information of the three-dimensional virtual object between the two cross sections can be determined, and the terminal equipment can fill the three-dimensional model between the two cross sections according to the cross section outlines and the relative distance of the two cross sections to obtain corresponding three-dimensional model data. In one embodiment, after the terminal device generates the cross-section profile of the cross-section, the terminal device may determine the relative distance between the cross-section and the drawing reference plane, that is, the relative distance between the cross-section profile and the end profile, according to the 6DoF information of the cross-section, and generate the three-dimensional model extending from the end profile to the cross-section profile. The entire model of the three-dimensional virtual object can be regarded as a volumetric model consisting of a plurality of cross-sectional profiles.
For example, as shown in figs. 6 and 7, in fig. 6 the plane in which the current interaction device 200 lies is taken as the drawing reference plane 405, and the end section of the three-dimensional virtual object 400 is determined based on the interaction device 200. The user draws the end contour of the three-dimensional virtual object 400 as a circular ring by touching the interaction device 200. The user then moves the interaction device 200 upward while modifying the diameter of the ring contour on the interaction device 200, and the terminal device determines the spatial height of the three-dimensional virtual object 400 corresponding to the 6DoF information and the corresponding cross-sectional contour parameters, where the spatial height is determined by the displacement of the interaction device 200. As in fig. 7, the interaction device 200 moves upward by a distance D in the vertical direction, so the drawing cross section currently being edited moves from the drawing reference plane 405 of fig. 6 to the current drawing cross section 406. When the user draws the cross-sectional contour in the current drawing cross section 406, the current height of the spatial model of the three-dimensional virtual object 400 is determined by the distance D, that is, by the relative distance between the current cross-sectional contour and the end contour. The terminal device can thus determine the spatial structure data of the three-dimensional virtual object 400 according to the real-time 6DoF information and its corresponding drawing commands, and generate the three-dimensional model extending from the end contour to the cross-sectional contour.
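The sketch below illustrates, under simplifying assumptions (a single end contour that is uniformly rescaled at each height), how a stereoscopic model could be filled in between successive cross-sections; it is an illustration rather than the patent's own construction, and all names are hypothetical.

```python
import numpy as np

def loft_sections(end_contour_2d: np.ndarray, sections: list):
    """Build a lofted model: the end contour lies on the drawing reference
    plane (z = 0) and every subsequent cross-section is the rescaled contour
    lifted by the distance the interaction device has moved."""
    rings = [np.column_stack([end_contour_2d,
                              np.zeros(len(end_contour_2d))])]
    for height, scale in sections:        # e.g. [(0.05, 1.0), (0.10, 0.8)]
        ring = np.column_stack([end_contour_2d * scale,
                                np.full(len(end_contour_2d), height)])
        rings.append(ring)
    vertices = np.vstack(rings)
    # Connect ring i to ring i+1 with two triangles per contour edge.
    n = len(end_contour_2d)
    faces = []
    for i in range(len(rings) - 1):
        base0, base1 = i * n, (i + 1) * n
        for j in range(n):
            k = (j + 1) % n
            faces.append((base0 + j, base0 + k, base1 + j))
            faces.append((base0 + k, base1 + k, base1 + j))
    return vertices, np.array(faces)
```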
Step S214: a three-dimensional virtual object is generated and displayed from the stereoscopic model data.
After generating the stereoscopic model, the terminal device can display it superimposed on the real space in real time. In some embodiments, after the terminal device has built the complete stereoscopic model of the three-dimensional virtual object and receives a control instruction to display it, the constructed three-dimensional virtual object is displayed superimposed in the real space. For example, after the user has drawn a complete three-dimensional virtual model and chooses to "save" it, the terminal device displays the three-dimensional virtual object superimposed in the real space according to the user's control instruction.
In other embodiments, the terminal device may display the three-dimensional virtual object in the building process in an overlapping manner on the interactive device, and overlap the currently generated cross-sectional profile with the manipulation area of the interactive device, as shown in fig. 7 and 8, so as to improve convenience of the user in operating and editing the three-dimensional virtual object.
Step S220: six degree of freedom (6DoF) information of the interactive device is determined from the acquired marker image, the marker image including a marker disposed on the interactive device.
In some embodiments, the marker on the interactive device is identified, and the terminal device may first acquire an image including the marker through the image acquisition device, and then identify the marker in the image. Before the terminal device collects the image containing the marker, the spatial position of the terminal device in the real space can be adjusted, and the spatial position of the interaction device in the real space can also be adjusted, so that the marker on the interaction device is positioned in the visual range of the image collection device of the terminal device, and the terminal device can collect the image of the marker and recognize the image. The visual range of the image capturing device may be determined by the orientation and size of the field angle.
Step S230: and determining the spatial position of the virtual cutting plane according to the 6DoF information.
In some embodiments, step S230 may include the steps of: acquiring a first space coordinate set of the interactive device in a virtual space according to 6DoF information of the interactive device; acquiring a relative position relation between the virtual cutting plane and the interactive device; and obtaining a second space coordinate set of the virtual cutting plane in the virtual space according to the relation between the first space coordinate set and the relative position. The relative position relationship may include, but is not limited to, position information, a rotation direction, a rotation angle, and the like of the virtual cutting plane relative to the interactive device.
As a specific embodiment, a virtual space coordinate system may be established with the terminal device as the origin; based on the 6DoF information of the interaction device, a first spatial coordinate set of the interaction device in the virtual space coordinate system can then be obtained, containing the coordinates of each point of the interaction device in that coordinate system. The relative positional relationship between the virtual cutting plane and the interaction device may be preset in the terminal device; it may be the relative spatial relationship between the virtual cutting plane and the interaction device as a whole. From this relative positional relationship and the first spatial coordinate set, the second spatial coordinate set of the virtual cutting plane in the virtual space can be determined, and therefore the spatial position of the virtual cutting plane. For example, if the relative positional relationship is that the virtual cutting plane coincides with the interaction device, the first spatial coordinate set can be used directly as the second spatial coordinate set; the relative positional relationship is not limited herein. As the user moves the interaction device, the terminal device determines in real time the virtual cutting plane corresponding to the 6DoF information of the interaction device based on the relative positional relationship between the interaction device and the virtual cutting plane.
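A minimal sketch of this coordinate-set step, assuming the preset relative positional relationship is expressed as a 4x4 homogeneous transform (an assumption for illustration):

```python
import numpy as np

def plane_coordinates(first_coords: np.ndarray,
                      relative_transform: np.ndarray) -> np.ndarray:
    """Apply the preset relative transform (position, rotation direction and
    rotation angle of the cutting plane with respect to the device) to every
    point of the device's first spatial coordinate set, giving the plane's
    second spatial coordinate set in the virtual space coordinate system."""
    homogeneous = np.hstack([first_coords,
                             np.ones((len(first_coords), 1))])
    second = (relative_transform @ homogeneous.T).T
    return second[:, :3]
```

If the relative relationship is coincidence, the transform is the identity matrix and the second coordinate set equals the first, matching the example above.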
In some embodiments, referring to fig. 9, step S230 may include steps S231 to S233.
Step S231: and acquiring a reference axis, wherein the reference axis is a specified structure outline axis of the three-dimensional object or a specified coordinate axis of a specified space coordinate system.
In some embodiments, a structural contour axis of the three-dimensional object is obtained as a reference axis, where the structural contour axis may be a contour axis of the three-dimensional object closest to or farthest from the terminal device, for example, when the three-dimensional object is a geometric model and has an obvious outer contour edge, any outer contour edge may be used as the reference axis, e.g., an outer contour edge of the three-dimensional object closest to the terminal device is used as the reference axis; in some embodiments, the structure contour axis may also be a symmetry axis, a central axis, or the like that is referenced when modeling the three-dimensional object, for example, when the three-dimensional object is a rotation model, as shown in fig. 8, the three-dimensional virtual object 400 is a rotation model having a rotation central axis O, and the rotation central axis O may be used as a reference axis.
In some embodiments, the specified coordinate axis in the virtual space coordinate system is acquired as a reference axis. The virtual space coordinate system may be a virtual space coordinate system for rendering and displaying a three-dimensional virtual model, for example, a virtual space coordinate system established with the terminal device as an origin, and in this case, any one of the axes X, Y, Z in the virtual space coordinate system may be used as a reference axis.
Step S232: and establishing a virtual cutting plane according to the reference axis, wherein the virtual cutting plane and the reference axis form a specified included angle relationship.
In some embodiments, after the reference axis is determined, an axis coordinate set of the reference axis in the virtual space coordinate system is obtained, an included angle between the virtual cutting plane and the reference axis is determined according to a specified included angle relationship, and a plane coordinate set of the virtual cutting plane in the virtual space coordinate system is determined according to the included angle and the axis coordinate set. For example, the specified angular relationship may be perpendicular, parallel, or a predetermined angle therebetween. It should be understood that there may be numerous virtual cutting planes established based on the reference axis, for example, there may be numerous planes perpendicular to the reference axis, and at this time, the relative position relationship between the virtual cutting plane and the interaction device may be obtained to further determine the specific virtual cutting plane according to the relative position relationship.
In some embodiments, a first relative position relationship between the virtual cutting plane and the interaction device may be preset in the terminal device, and in some embodiments, the first relative position relationship may be a relative distance relationship between the virtual cutting plane and a reference point of the interaction device, or the first relative position relationship may be a relative distance relationship between the reference point of the virtual cutting plane and the reference point of the interaction device. For example, the reference point of the virtual cutting plane may be an intersection point between the virtual cutting plane and the reference axis, the reference point of the interaction device may be a centroid point of the marker of the interaction device, and the first relative positional relationship is a relative distance relationship between the intersection point between the virtual cutting plane and the reference axis and the centroid point of the marker of the interaction device, so that a large amplitude variation of the virtual cutting plane caused by a fine spatial motion of the interaction device may be prevented. Therefore, the terminal device obtains the first space coordinate set of the interactive device in the virtual space according to the 6DoF information of the interactive device, and can determine the second space coordinate set of the virtual cutting plane in the virtual space according to the first relative position relationship, the first space coordinate set of the interactive device, and the specified included angle relationship between the reference axis and the virtual cutting plane.
Step S233: when the interactive device moves, the virtual cutting plane is controlled to move according to the movement component of the interactive device in the direction of the reference axis, and the space position of the virtual cutting plane is determined according to the 6DoF information of the interactive device.
Furthermore, the relative angle between the virtual cutting plane and the reference axis can be fixed by limiting the specified included angle relationship between the virtual cutting plane and the reference axis, and then the virtual cutting plane can be controlled to move along the direction of the reference axis according to the movement component of the interaction device in the direction of the reference axis. For example, in the embodiment shown in fig. 8, when the rotation axis O is used as the reference axis, the virtual cutting plane 401 is perpendicular to the rotation axis O, and when the interaction device 200 moves, only the movement component of the interaction device 200 in the direction of the rotation axis O is taken as the effective movement, and the virtual cutting plane is controlled to move along the rotation axis O.
When the interaction device continues to move, the terminal device determines, according to the 6DoF information of the interaction device, the second space coordinate set of the virtual cutting plane corresponding to that 6DoF information in the virtual space.
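As an illustration only, the "movement component in the direction of the reference axis" can be sketched as a vector projection in which displacement perpendicular to the axis is discarded; the variable names and numpy usage below are assumptions for the example, not the patent's prescribed implementation.

```python
import numpy as np

def move_plane_along_axis(plane_point, axis_dir, device_prev_pos, device_new_pos):
    """Translate the cutting plane using only the component of the
    interaction device's displacement that lies along the reference axis."""
    n = np.asarray(axis_dir, dtype=float)
    n /= np.linalg.norm(n)
    displacement = np.asarray(device_new_pos, dtype=float) - np.asarray(device_prev_pos, dtype=float)
    effective = np.dot(displacement, n) * n     # motion perpendicular to the axis is ignored
    return np.asarray(plane_point, dtype=float) + effective
```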
Step S240: acquiring cutting information of the virtual cutting plane on the three-dimensional object.
In some embodiments, step S240 may include: acquiring a third space coordinate set of the three-dimensional object in the virtual space; acquiring the same space coordinates in the third space coordinate set and the second space coordinate set, and determining an intersection coordinate set; and acquiring cutting information of the three-dimensional object according to the intersection coordinate set. After the terminal device determines the second coordinate set of the virtual cutting plane corresponding to the 6DoF information of the interaction device, the terminal device determines the cutting information of the virtual cutting plane on the three-dimensional object according to the relationship between the second coordinate set and the coordinate set of the three-dimensional object in the virtual space.
When the three-dimensional object is a three-dimensional virtual object displayed in a virtual space based on the terminal device, the terminal device directly determines a third space coordinate set of the three-dimensional object according to rendering coordinates of the three-dimensional virtual object, performs Boolean operation on the second space coordinate set and the third space coordinate set, calculates to obtain the same space coordinates in the third space coordinate set and the second space coordinate set, and determines an intersection coordinate set, so that cutting information of the three-dimensional object is obtained according to the intersection coordinate set. The intersection coordinate set can be understood as a virtual space coordinate value occupied by a section element of the three-dimensional object when the three-dimensional object is cut by the virtual cutting plane.
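A minimal sketch of such a Boolean operation, assuming both the plane and the object have been discretized into voxel coordinates in the same virtual space coordinate system, is given below; the discretization itself and the function name are assumptions made for the example.

```python
def intersection_coordinate_set(plane_coords, object_coords):
    """Return the coordinates shared by the virtual cutting plane and the
    three-dimensional object; they approximate the section swept by the plane."""
    return set(map(tuple, plane_coords)) & set(map(tuple, object_coords))
```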
When the three-dimensional object is a physical object existing in real space, the terminal device may acquire the spatial position coordinates of the physical object through image recognition. In some embodiments, the terminal device acquires an image of the physical object through the image acquisition device, acquires the actual relative positional relationship between the physical object in the image and the terminal device, determines the actual coordinates of the physical object in real space according to the actual relative positional relationship, and maps the actual coordinates of the physical object to the virtual space coordinate system to obtain a third spatial coordinate set, in the virtual space coordinate system, of the three-dimensional object that exists as the physical object. The terminal device then performs a Boolean operation on the second space coordinate set and the third space coordinate set, calculates the space coordinates shared by the two sets, and determines the intersection coordinate set, so as to obtain the cutting information of the three-dimensional object according to the intersection coordinate set. In some embodiments, the physical object may further be provided with a physical marker, so that the terminal device determines the spatial posture of the physical object by recognizing the physical marker, thereby determining the spatial position information of the physical object. The specific features of the physical marker and the method by which the terminal device identifies it may refer to the above description of the marker of the interaction device, and are not repeated in this specification.
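Mapping the physical object's real-space coordinates into the virtual space coordinate system can be sketched, for illustration only, as applying a 4x4 homogeneous transform derived from the recognized relative pose; the transform argument and numpy usage are assumptions made for the example.

```python
import numpy as np

def map_to_virtual_space(points_world, T_virtual_from_world):
    """Map real-space points of the physical object into the virtual space
    coordinate system with a homogeneous transform (assumed to come from the
    recognized pose of the object or its marker relative to the terminal device)."""
    pts = np.asarray(points_world, dtype=float)           # (N, 3) real-space points
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])   # (N, 4) homogeneous coordinates
    mapped = homo @ np.asarray(T_virtual_from_world, dtype=float).T
    return mapped[:, :3]
```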
Step S250: obtaining the internal virtual sectioning content of the three-dimensional object according to the sectioning information and the model data.
In some embodiments, the internal virtual cut content includes an internal virtual cut plane that is a cross-section formed by an interior of the three-dimensional object when the three-dimensional object is cut by the virtual cutting plane. Further, the terminal device can acquire a coordinate set of the cross section in the virtual space according to the cutting information, and then render the cross section according to model data corresponding to each coordinate of the coordinate set so as to display internal virtual cutting content.
In some embodiments, the internal virtual sectioning content may be a stereoscopic model formed when the three-dimensional object is cut by the virtual cutting plane; the stereoscopic model includes at least a first stereoscopic model and a second stereoscopic model respectively located on two opposite sides of the virtual cutting plane, and the terminal device renders the first stereoscopic model and the second stereoscopic model so as to display the internal virtual sectioning content. At this time, step S250 may include: determining, according to the cutting information and the model data, first virtual sectioning content and second virtual sectioning content of the three-dimensional object cut by the virtual cutting plane, wherein the first virtual sectioning content is located on the side of the virtual cutting plane close to the terminal device, and the second virtual sectioning content is located on the side of the virtual cutting plane far from the terminal device. In this way, the section formed on the second virtual sectioning content by the virtual cutting plane faces the direction in which the terminal device is located, which is convenient for the user to observe.
For example, as shown in fig. 2, when the three-dimensional object is the simulated heart model 300, the terminal device 100 determines the virtual cutting plane 401 used for cutting the simulated heart model 300 according to the 6DoF information of the interaction device 200. After the simulated heart model 300 is cut by the virtual cutting plane 401, the corresponding virtual sectioning content may include a first stereoscopic model 4031 and a second stereoscopic model 4033, where the first stereoscopic model 4031 is located on the side of the cutting plane 401 close to the terminal device 100, and the second stereoscopic model 4033 is located on the side of the cutting plane 401 away from the terminal device 100, so that the user can observe the cut section of the three-dimensional object 300 on the second stereoscopic model 4033 from the perspective of the terminal device 100. In other embodiments, the relative positions of the first stereoscopic model 4031 and the second stereoscopic model 4033 may be interchanged and are not limited to those described in this specification; for example, the first stereoscopic model 4031 may be located on the side of the cutting plane 401 away from the terminal device 100 and the second stereoscopic model 4033 on the side close to the terminal device 100, so that the user can observe the cut section of the three-dimensional object 300 on the first stereoscopic model 4031 from the perspective of the terminal device 100.
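For illustration only, splitting a model into the near-side and far-side contents can be sketched as classifying its vertices by signed distance to the cutting plane, with the plane normal oriented toward the terminal device; representing the model as a plain vertex array is an assumption made for the example.

```python
import numpy as np

def split_by_cutting_plane(vertices, plane_point, plane_normal, device_pos):
    """Split model vertices into the half nearer the terminal device (first
    virtual sectioning content) and the half farther from it (second content)."""
    v = np.asarray(vertices, dtype=float)
    p = np.asarray(plane_point, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    # Orient the normal toward the terminal device so "first" means "near side".
    if np.dot(np.asarray(device_pos, dtype=float) - p, n) < 0:
        n = -n
    d = (v - p) @ n                  # signed distance of each vertex to the plane
    return v[d >= 0], v[d < 0]       # (near-side vertices, far-side vertices)
```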
Step S260: displaying the internal virtual sectioning content.
After the terminal device acquires the internal virtual sectioning content, it can display the internal virtual sectioning content. In some embodiments, after the terminal device determines the rendering coordinates for rendering the internal virtual sectioning content to be displayed in the virtual space, the terminal device may construct the internal virtual sectioning content of the three-dimensional object according to the obtained parameter data corresponding to the content to be displayed, and render it according to the rendering coordinates. The terminal device can obtain the RGB values and corresponding coordinates of each vertex of the internal virtual sectioning content from the parameter data.
In some embodiments, the internal virtual sectioning content is a stereoscopic model formed when the three-dimensional object is cut by the virtual cutting plane, that is, the internal virtual sectioning content includes the first virtual sectioning content and the second virtual sectioning content. When displaying the internal virtual sectioning content, the terminal device may respectively determine display parameters of the first virtual sectioning content and the second virtual sectioning content, and then respectively display them according to the display parameters. At this time, step S260 may include: acquiring a first display parameter of the first virtual sectioning content and a second display parameter of the second virtual sectioning content, displaying the first virtual sectioning content according to the first display parameter, and displaying the second virtual sectioning content according to the second display parameter, where the first display parameter includes a first transparency and the second display parameter includes a second transparency. Further, the terminal device may set the first transparency to a value greater than the second transparency, so that the first virtual sectioning content on the side of the virtual cutting plane close to the terminal device is more transparent than the second virtual sectioning content on the side far from the terminal device. In this way, when the first virtual sectioning content is located between the terminal device and the second virtual sectioning content, that is, when the second virtual sectioning content is shielded by the first virtual sectioning content from the viewpoint of the terminal device, the user can still observe the second virtual sectioning content through the first virtual sectioning content.
In some embodiments, for example, as shown in fig. 2, when the three-dimensional object is the simulated heart model 300 and the simulated heart model 300 is cut by the virtual cutting plane 401 to form the first stereoscopic model 4031 and the second stereoscopic model 4033, where the first stereoscopic model 4031 is located on the side of the cutting plane 401 close to the terminal device 100 and the second stereoscopic model 4033 is located on the side of the cutting plane 401 away from the terminal device 100, the terminal device 100 may set the first transparency of the first stereoscopic model 4031 to be greater than the second transparency of the second stereoscopic model 4033, for example, a first transparency of 70% and a second transparency of 5%. The user can then observe the second stereoscopic model 4033 through the first stereoscopic model 4031, and can observe the cut section of the three-dimensional object 300 on the second stereoscopic model 4033 from the perspective of the terminal device 100.
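A trivial sketch of this transparency setting, assuming materials are represented as dictionaries with an alpha channel (alpha = 1 - transparency), is shown below; the data structure and function name are assumptions made for the example.

```python
def apply_section_transparency(first_material, second_material,
                               first_transparency=0.70, second_transparency=0.05):
    """Make the near half mostly see-through so the far half's cut section
    remains visible from the terminal device's viewpoint."""
    first_material["alpha"] = 1.0 - first_transparency    # e.g. 70% transparent
    second_material["alpha"] = 1.0 - second_transparency  # nearly opaque
    return first_material, second_material
```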
In some embodiments, the display parameters of the internal virtual cut content may also include parameters other than transparency, for example, image parameters such as saturation, gray scale, brightness, contrast, and the like, and are not limited to those described in the examples of this specification.
Further, in some embodiments, when displaying the internal virtual sectioning content, the terminal device may also determine display parameters of the virtual cutting plane, including but not limited to: transparency, saturation, gray scale, brightness, contrast, and the like. The user can display the virtual cutting plane in different states by setting different parameters, for example, hiding or showing the virtual cutting plane, so as to facilitate observation of the internal virtual sectioning content.
Step S270: receiving an operation instruction sent by the interaction device.
In some embodiments, the operation instruction sent by the interaction device is generated based on a control operation input by the user at the interaction device. The control operation may include a touch operation, a key press operation, and the like, where the type of touch operation includes at least any one or more of a click, a slide, and a multi-touch. The interaction device may also detect different control operation parameters (such as a touch position parameter and a touch frequency parameter) and send different operation instructions accordingly. The interaction device converts the user's control operation information into an operation instruction and sends the operation instruction to the terminal device.
Step S280: based on the operation instruction, a corresponding content processing instruction is determined.
After receiving the operation instruction sent by the interaction device, the terminal device determines a corresponding content processing instruction according to preset data analysis and data processing. The types of content processing instructions include at least instructions for editing attributes, zooming, marking, moving, rotating, and the like.
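One simple way to realize such a mapping, given only as a sketch, is a dispatch table from operation instructions to content processing instructions; the instruction names below are hypothetical and not taken from the patent.

```python
# Hypothetical instruction names; the patent does not prescribe a concrete protocol.
OPERATION_TO_PROCESSING = {
    "double_click":   "mark_content",
    "pinch_in":       "zoom_out",
    "pinch_out":      "zoom_in",
    "drag":           "move_content",
    "rotate_gesture": "rotate_content",
}

def to_content_processing_instruction(operation_instruction):
    """Translate an operation instruction received from the interaction
    device into the corresponding content processing instruction."""
    return OPERATION_TO_PROCESSING.get(operation_instruction, "ignore")
```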
Step S290: processing the internal virtual sectioning content according to the content processing instruction.
Further, after generating a content processing instruction according to the control operation received by the interactive device, the terminal device processes the previously determined internal virtual cut content according to the content processing instruction, where the processing action for the internal virtual cut content according to the content processing instruction at least includes: editing the attribute of the internal virtual sectioning content, zooming the internal virtual sectioning content, marking the internal virtual sectioning content, moving the internal virtual sectioning content, rotating the internal virtual sectioning content and the like.
In some embodiments, when the terminal device is displaying the internal virtual sectioning content and the operation instruction received from the interaction device indicates a double-click or multi-click on certain internal virtual sectioning content, a content processing instruction for modifying or marking that content is generated. For example, the content processing instruction controls the terminal device to generate a data frame or a text frame around the selected internal virtual sectioning content, and the user's subsequent operations are further detected to modify parameters in the data frame or input characters into the text frame. The data in the data frame may be parameters related to the internal virtual sectioning content, such as element names (e.g., artery, vein, etc. in fig. 2), characteristics, and the like.
In some embodiments, when the terminal device is displaying the internal virtual sectioning content and the operation instruction received from the interaction device indicates that the distance between two fingers is contracting (a pinch-in gesture), a content processing instruction for reducing the currently displayed internal virtual sectioning content is generated, and the content processing instruction controls the terminal device to reduce the currently displayed content relative to the user's view angle. If the operation instruction indicates that the distance between the two fingers is increasing (a pinch-out gesture), a content processing instruction for enlarging the currently displayed internal virtual sectioning content is generated, and the content processing instruction controls the terminal device to enlarge the currently displayed content relative to the user's view angle.
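For illustration only, the zoom factor implied by such a two-finger gesture can be derived from the ratio of the new finger distance to the previous one, clamped to a sensible range; the clamp limits and function name are assumptions made for the example.

```python
def pinch_scale_factor(prev_finger_dist, new_finger_dist, min_scale=0.1, max_scale=10.0):
    """Derive a zoom factor from the change in distance between two touch points:
    a shrinking distance reduces the content, a growing distance enlarges it."""
    if prev_finger_dist <= 0:
        return 1.0
    scale = new_finger_dist / prev_finger_dist
    return max(min_scale, min(max_scale, scale))
```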
In some embodiments, the internal virtual sectioning content includes an internal virtual cutting plane, which is the cross section formed when the three-dimensional object is cut by the virtual cutting plane. When processing the internal virtual sectioning content, the terminal device may directly process parameters of the internal virtual cutting plane according to the user's control operation. At this time, step S290 may include: acquiring parameter data of the internal virtual cutting plane; and modifying the parameter data according to the touch instruction, and adjusting the internal virtual cutting plane according to the modified parameter data.
For example, in the embodiment shown in fig. 8, after the user determines the virtual cutting plane 401 by moving the interaction device 200 and the terminal device 100 displays the content of the internal virtual cutting plane 4035, the user may perform a control operation on the interaction device 200, and the terminal device 100 determines the corresponding content processing instruction according to the control operation. For example, when the user presses and drags the contour line of the circular cross section, the terminal device 100 generates, according to the user's control operation, a content processing instruction for changing the diameter of the circular ring, and changes the diameter of the circular ring of the three-dimensional virtual object 400 at the internal virtual cutting plane 4035 according to the content processing instruction.
In some embodiments, the user may input an instruction for rotating the internal virtual sectioning content through the interaction device; the terminal device then generates the rotation instruction according to the control operation received by the interaction device and controls the virtual sectioning content to perform the corresponding rotation, so that the user can observe the internal virtual sectioning content from different angles. For example, in the embodiment shown in fig. 2, the internal virtual sectioning content includes the first stereoscopic model 4031 and the second stereoscopic model 4033 formed after the simulated heart model 300 is cut; when the user needs to view the content of the second stereoscopic model 4033 from different angles, a rotation instruction can be input on the interaction device 200, and the terminal device 100 then controls the second stereoscopic model 4033 and the first stereoscopic model 4031 to rotate.
The user can control the rotation of the internal virtual sectioning content through the control area of the interactive device, and can also control the rotation of the internal virtual sectioning content according to the space posture of the interactive device so as to change the display angle of the internal virtual sectioning content. In some embodiments, the terminal device may determine the spatial attitude of the interaction device based on the marker of the interaction device, and in other embodiments, the terminal device may determine the spatial attitude of the interaction device based on the detection data of the inertial measurement sensor of the interaction device.
In some embodiments, the three-dimensional object is a physical object actually existing in real space, and the user may control the rotation of the internal virtual sectioning content by changing the spatial posture of the physical object, so as to change its display angle. At this time, the terminal device may acquire the spatial posture of the physical object through image recognition and determine the display angle of the internal virtual sectioning content according to that posture. Therefore, the above augmented reality-based content processing method may further include: collecting a picture containing the physical object, identifying the physical object in the picture, and determining posture information of the physical object; determining, according to the posture information of the physical object, a display angle of the internal virtual sectioning content corresponding to the posture information; and controlling the display of the internal virtual sectioning content at the display angle.
In some embodiments, the physical object may further be provided with a physical marker, so that the terminal device determines the spatial posture of the physical object by recognizing the physical marker, thereby determining the display angle of the internal virtual sectioning content. The specific features of the physical marker and the method by which the terminal device identifies it may refer to the foregoing description of the marker of the interaction device, and are not repeated in this specification. In other embodiments, the physical object may further be provided with an inertial measurement sensor, and the terminal device may determine the spatial attitude of the physical object according to data detected by the inertial measurement sensor. For example, in the embodiment shown in fig. 2, when the three-dimensional object is the simulated heart model 300, the user can rotate the simulated heart model 300 by holding it, and the terminal device can determine the display angle of the internal virtual sectioning content according to the rotation angle of the simulated heart model 300.
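A minimal sketch of deriving the display rotation from the recognized attitude of the physical object is given below; the rotation-matrix representation and the optional alignment offset are assumptions made for the example.

```python
import numpy as np

def display_rotation_from_object_pose(R_object, R_offset=None):
    """Derive the display rotation of the internal virtual sectioning content
    from the recognized 3x3 rotation of the physical object (e.g. from its
    marker or inertial measurement data); R_offset is an optional fixed
    alignment between the object and the virtual content."""
    R_object = np.asarray(R_object, dtype=float)
    R_offset = np.eye(3) if R_offset is None else np.asarray(R_offset, dtype=float)
    return R_object @ R_offset
```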
Further, in order to avoid the virtual cutting plane moving because of displacement of the interaction device while it receives the user's control operation, the terminal device may lock the current virtual cutting plane as required by the user. For example, when the user needs to edit the current internal virtual sectioning content, a locking instruction is input, and the terminal device locks the current virtual cutting plane according to the locking instruction, so that the relative spatial position relationship between the current virtual cutting plane and the three-dimensional object remains unchanged and the relative spatial position relationship between the virtual cutting plane and the interaction device is decoupled. At this time, the interaction device may undergo spatial displacement, but the terminal device does not change the spatial position of the virtual cutting plane based on the spatial position of the interaction device, which avoids the virtual cutting plane moving because of displacement generated when the interaction device receives the user's control operation. When the user needs to update the position of the virtual cutting plane, an unlocking instruction can be input through the interaction device; the terminal device unlocks the current virtual cutting plane according to the unlocking instruction, so that the relative spatial position relationship between the current virtual cutting plane and the three-dimensional object is decoupled, and the terminal device determines the spatial position of the virtual cutting plane again according to the 6DoF information of the interaction device. The process of determining the spatial position of the virtual cutting plane by the terminal device may refer to the foregoing steps and is not repeated here.
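The lock/unlock behaviour can be sketched, purely as an illustration, as a small controller that ignores 6DoF updates while locked; the class and method names are assumptions made for the example.

```python
class CuttingPlaneController:
    """While locked, 6DoF updates from the interaction device no longer move
    the virtual cutting plane; unlocking re-couples the plane to the device."""

    def __init__(self, plane_pose):
        self.plane_pose = plane_pose
        self.locked = False

    def on_lock_instruction(self):
        self.locked = True           # plane keeps its pose relative to the object

    def on_unlock_instruction(self):
        self.locked = False          # plane follows the interaction device again

    def on_6dof_update(self, new_plane_pose):
        if not self.locked:
            self.plane_pose = new_plane_pose
        return self.plane_pose
```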
In the content processing method based on augmented reality provided in the above embodiments, the spatial position of the virtual cutting plane for cutting the three-dimensional object is determined according to the 6DoF information of the interaction device, and the internal virtual sectioning content of the three-dimensional object is acquired and displayed according to the virtual cutting plane. In this way, the user can use the interaction device to control and view the information of the sectioned three-dimensional object and observe its internal structure in an augmented reality manner, which improves the convenience of viewing the internal structure of the three-dimensional object, improves observation efficiency, and provides strong interactivity.
The content processing method based on augmented reality can be applied to various fields, for example, the teaching field or the medical field. When the method is applied to the medical field, the three-dimensional object can be a three-dimensional virtual model established based on a patient's lesion; a virtual cutting plane for cutting the three-dimensional virtual model can be determined through the interaction device, and the virtual sectioning content of the three-dimensional virtual model is obtained according to the virtual cutting plane. That is, by controlling the movement of the interaction device, a user can check the internal structure and/or images of any part of the three-dimensional virtual model of the lesion, so as to gain a deeper understanding of the lesion's pathological changes and formulate a more precise and accurate treatment scheme, which has great practical significance.
Referring to fig. 10, a block diagram of an augmented reality-based content processing apparatus 500 according to some embodiments is shown. The apparatus 500 runs on the terminal device 100 shown in fig. 2 and is configured to execute the augmented reality-based content processing method described above. In some embodiments, the augmented reality-based content processing apparatus 500 is stored in a memory of the terminal device 100 and is configured to be executed by one or more processors of the terminal device 100.
In the embodiment illustrated in fig. 10, the augmented reality-based content processing apparatus 500 may include: a model data acquisition module 510, an information determination module 520, a virtual cutting plane determination module 530, a sectioning content acquisition module 540, and a display module 550. It is understood that the modules may be program modules stored in a computer-readable storage medium, and the purpose and operation of the modules are as follows:
the model data acquisition module 510 is used for acquiring model data of a three-dimensional object. The information determining module 520 is configured to determine six degrees of freedom (6DoF) information of the interactive device according to the acquired marker image, where the marker image includes a marker, and the marker is disposed on the interactive device. The virtual cutting plane determining module 530 is configured to determine a spatial position of the virtual cutting plane according to the 6DoF information. The sectioning content obtaining module 540 is configured to obtain cutting information of the virtual cutting plane on the three-dimensional object, and obtain internal virtual sectioning content of the three-dimensional object according to the cutting information and the model data. The display module 550 is used to display the internal virtual cut content.
The virtual cutting plane determining module 530 is further configured to obtain a first space coordinate set of the interaction device in the virtual space according to the 6DoF information of the interaction device, obtain the relative position relationship between the virtual cutting plane and the interaction device, and obtain a second space coordinate set of the virtual cutting plane in the virtual space according to the first space coordinate set and the relative position relationship. Further, in some embodiments, the virtual cutting plane determining module 530 is further configured to: acquire a reference axis, where the reference axis is a specified structural contour axis of the three-dimensional object or a specified coordinate axis of a specified space coordinate system, and establish the virtual cutting plane according to the reference axis, where the virtual cutting plane and the reference axis form a specified included angle relationship. When the interaction device moves, the virtual cutting plane determining module 530 controls the virtual cutting plane to move according to the movement component of the interaction device in the direction of the reference axis, and determines the spatial position of the virtual cutting plane according to the 6DoF information of the interaction device.
The sectioning content obtaining module 540 is further configured to obtain a third spatial coordinate set of the three-dimensional object in the virtual space, obtain the same spatial coordinates in the third spatial coordinate set and the second spatial coordinate set, and determine an intersection coordinate set; and acquiring cutting information of the three-dimensional object according to the intersection coordinate set.
In some embodiments, the sectioning content obtaining module 540 is further configured to obtain, according to the cutting information and the model data, the first virtual sectioning content and the second virtual sectioning content formed when the three-dimensional object is cut by the virtual cutting plane, where the first virtual sectioning content is located on the side of the virtual cutting plane close to the terminal device, and the second virtual sectioning content is located on the side of the virtual cutting plane far from the terminal device.
In some embodiments, the display module 550 is further configured to obtain a first display parameter of the first virtual cut content and a second display parameter of the second virtual cut content, display the first virtual cut content according to the first display parameter, and display the second virtual cut content according to the second display parameter; the first display parameter comprises a first transparency, and the second display parameter comprises a second transparency.
In some embodiments, the augmented reality-based content processing apparatus 500 further includes a content processing module 560, and the content processing module 560 is configured to receive an operation instruction sent by the interaction apparatus, determine a corresponding content processing instruction based on the operation instruction, and process the internal virtual cut content according to the content processing instruction. In some embodiments, the internal virtual sectioning content includes an internal virtual sectioning surface, which is a section formed inside the three-dimensional object when the three-dimensional object is sectioned by the virtual cutting plane; the content processing module 560 is further configured to obtain parameter data of the internal virtual cutting plane, modify the parameter data according to the touch instruction, and adjust the internal virtual cutting plane according to the modified parameter data.
In some embodiments, the three-dimensional object is a three-dimensional virtual object displayed in a virtual space, and the augmented reality based content processing apparatus 500 further comprises a virtual object creation module 570, the virtual object creation module 570 being configured to create the three-dimensional virtual object. In some embodiments, the virtual object creation module 570 is configured to determine, according to the control operation obtained by the interaction device, a drawing command corresponding to the control operation; acquiring 6DoF information of the interactive device, and determining section parameters of the three-dimensional virtual object based on the 6DoF information and the drawing command; determining three-dimensional model data of the three-dimensional virtual object according to the section parameters; and generating a three-dimensional virtual object from the stereo model data. The display module 550 is also used to display the three-dimensional virtual object created by the virtual object creation module.
In some embodiments, the virtual object creation module 570 may include a section creation unit 571 and a stereoscopic model creation unit 572. The section creation unit 571 is configured to obtain the 6DoF information of the interaction device and determine section parameters of the three-dimensional virtual object based on the 6DoF information and the drawing command. In some embodiments, the section creation unit 571 is configured to determine a drawing reference plane and generate an end contour of the three-dimensional virtual object on the drawing reference plane according to the drawing command; when the interaction device moves, the section creation unit 571 acquires the 6DoF information of the moved interaction device and generates a cross-sectional contour of the three-dimensional virtual object corresponding to that 6DoF information according to the 6DoF information and the determined drawing command. The stereoscopic model creation unit 572 is configured to determine stereoscopic model data of the three-dimensional virtual object according to the section parameters determined by the section creation unit 571. In some embodiments, the stereoscopic model creation unit 572 is configured to determine the relative distance between the cross-sectional contour and the end contour according to the 6DoF information corresponding to the cross-sectional contour, and to generate, according to the relative distance, a stereoscopic model of the three-dimensional virtual object extending from the end contour to the cross-sectional contour.
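As an informal sketch of the extrusion idea (not the module's actual implementation), the end contour and the cross-sectional contour can be connected by offsetting the latter along the extrusion direction by the relative distance; representing both contours as (N, 3) point arrays with matching point counts is an assumption made for the example, and a real implementation would also triangulate the side walls.

```python
import numpy as np

def extrude_between_contours(end_contour, section_contour, relative_distance, axis_dir):
    """Generate the vertex set of a simple swept solid between the end contour
    (on the drawing reference plane) and the cross-sectional contour placed at
    the given relative distance along the extrusion direction."""
    n = np.asarray(axis_dir, dtype=float)
    n /= np.linalg.norm(n)
    bottom = np.asarray(end_contour, dtype=float)
    top = np.asarray(section_contour, dtype=float) + relative_distance * n
    return np.vstack([bottom, top])
```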
In some embodiments, the three-dimensional object is an actual physical object existing in the real space, the augmented reality-based content processing apparatus 500 further includes a physical object monitoring module 580, the physical object monitoring module 580 is configured to collect a picture including the physical object, identify the physical object in the picture, determine posture information of the physical object, determine a display angle of the internal virtual cut content corresponding to the posture information according to the posture information of the physical object, and the display module 550 is further configured to control display of the internal virtual cut content at the display angle determined by the physical object monitoring module 580.
In summary, some embodiments provide a content processing method and apparatus based on augmented reality, which are applied to a terminal device and are used for determining, according to the 6DoF information of an interaction device, a virtual cutting plane for cutting a three-dimensional object, and for obtaining and displaying the internal virtual sectioning content of the three-dimensional object according to the virtual cutting plane. In this way, the user can use the interaction device to control and view the information of the sectioned three-dimensional object and observe its internal structure in an augmented reality manner, which improves the convenience of viewing the internal structure of the three-dimensional object, improves observation efficiency, and provides strong interactivity.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present application, the coupling or direct coupling or communication connection between the modules shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be in an electrical, mechanical or other form.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 11, a block diagram of a terminal device according to some embodiments is shown. The terminal device 100 may be a terminal device capable of running an application, such as a smart phone, a tablet computer, a head-mounted display device, and the like. The terminal device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, an image acquisition apparatus 130, and one or more applications, wherein the one or more applications may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs configured to perform a method as described in the aforementioned method embodiments.
Processor 110 may include one or more processing cores. The processor 110 connects various parts within the entire terminal device 100 using various interfaces and lines, and performs the various functions of the terminal device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 110 but implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The data storage area may store data created by the terminal device 100 in use, and the like.
In some embodiments, image capture device 130 is used to capture images of three-dimensional objects and to capture scene images of a target scene. The image capturing device 130 may be an infrared camera or a color camera, and the specific type of camera is not limited in some embodiments.
Referring to fig. 12, a block diagram of a computer-readable storage medium according to some embodiments is shown. The computer-readable storage medium 800 has stored therein program code that can be invoked by a processor to perform the methods described in the method embodiments above. The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 800 includes a non-volatile computer-readable storage medium. The computer readable storage medium 800 has storage space for program code 810 to perform any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 810 may be compressed, for example, in a suitable form.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (11)

1. A content processing method based on augmented reality is applied to terminal equipment and is characterized by comprising the following steps:
acquiring model data of a three-dimensional object;
determining six-degree-of-freedom (6DoF) information of an interactive device according to an acquired marker image, wherein the marker image comprises a marker arranged on the interactive device;
determining the spatial position of a virtual cutting plane according to the 6DoF information;
acquiring cutting information of the virtual cutting plane on the three-dimensional object according to the space position;
obtaining internal virtual sectioning contents of the three-dimensional object according to the cutting information and the model data; and
Displaying the internal virtual cut content.
2. The method of claim 1, wherein the internal virtual cut content comprises an internal virtual cut plane, the internal virtual cut plane being a cross-section of an interior of the three-dimensional object formed when the virtual cutting plane cuts the three-dimensional object; after the displaying the internal virtual sectioning content, the method further comprises:
acquiring parameter data of the internal virtual cutting plane;
receiving an operation instruction sent by the interaction device;
determining a corresponding content processing instruction based on the operation instruction; and
and modifying the parameter data according to the content processing instruction, and adjusting the internal virtual cutting plane according to the modified parameter data.
3. The method of claim 1, wherein said obtaining internal virtual cut content of the three-dimensional object from the cutting information and the model data comprises:
and acquiring first virtual sectioning content and second virtual sectioning content of the three-dimensional object sectioned by the virtual cutting plane according to the cutting information and the model data, wherein the first virtual sectioning content is positioned on one side of the virtual cutting plane close to the terminal equipment, and the second virtual sectioning content is positioned on one side of the virtual cutting plane far away from the terminal equipment.
4. The method of claim 3, wherein said displaying said internal virtual cut content comprises:
acquiring a first display parameter of the first virtual sectioning content and a second display parameter of the second virtual sectioning content; and
displaying the first virtual sectioning content according to the first display parameter, and displaying the second virtual sectioning content according to the second display parameter; wherein the first display parameter comprises a first transparency and the second display parameter comprises a second transparency.
5. The method of claim 1, wherein the determining the spatial location of the virtual cutting plane from the 6DoF information comprises:
acquiring a first space coordinate set of the interaction device in the virtual space according to the 6DoF information;
acquiring a relative position relation between a virtual cutting plane and the interaction device;
obtaining a second space coordinate set of the virtual cutting plane in the virtual space according to the relative position relation between the first space coordinate set and the virtual cutting plane;
the obtaining of the cutting information of the virtual cutting plane on the three-dimensional object includes:
acquiring a third space coordinate set of the three-dimensional object in the virtual space;
Acquiring the same space coordinates in the third space coordinate set and the second space coordinate set, and determining an intersection coordinate set;
and acquiring cutting information of the three-dimensional object according to the intersection coordinate set.
6. The method of claim 1, wherein said determining the spatial location of a virtual cutting plane from the 6DoF information comprises:
acquiring a reference axis, wherein the reference axis is a specified structure contour axis of the three-dimensional object or a specified coordinate axis of a specified space coordinate system;
establishing a virtual cutting plane according to the reference axis, wherein the virtual cutting plane and the reference axis form a specified included angle relationship;
when the interactive device moves, the virtual cutting plane is controlled to move according to the movement component of the interactive device in the direction of the reference axis, and the space position of the virtual cutting plane is determined according to the 6DoF information of the interactive device.
7. The method of claim 1, wherein the three-dimensional object comprises a three-dimensional virtual object; prior to the obtaining model data of the three-dimensional object, the method further comprises: creating a three-dimensional virtual object, comprising:
determining a drawing command corresponding to the control operation according to the control operation acquired by the interactive device;
Acquiring 6DoF information of the interaction device, and determining section parameters of the three-dimensional virtual object based on the 6DoF information and the drawing command;
determining stereo model data of the three-dimensional virtual object according to the section parameters; and
and generating and displaying the three-dimensional virtual object according to the stereo model data.
8. The method of claim 7, wherein the obtaining 6DoF information of the interactive device and determining cross-sectional parameters of a three-dimensional object based on the 6DoF information and the drawing command comprises: determining a drawing reference surface, and determining an end contour of a three-dimensional virtual object on the drawing reference surface according to the drawing command;
when the interactive device moves, 6DoF information of the moved interactive device is obtained, and a cross-sectional profile of the three-dimensional virtual object corresponding to the 6DoF information is determined according to the 6DoF information and the determined drawing command;
determining stereo model data of the three-dimensional virtual object according to the section parameters, comprising:
determining the relative distance between the section outline and the end outline according to the 6DoF information corresponding to the section outline;
and generating a stereoscopic model extending from the end contour to the section contour in the three-dimensional virtual object according to the relative distance.
9. An augmented reality based content processing apparatus, the apparatus comprising:
the model data acquisition module is used for acquiring model data of the three-dimensional object;
the information determining module is used for determining six-degree-of-freedom (6DoF) information of the interactive device according to the acquired marker image, wherein the marker image comprises a marker, and the marker is arranged on the interactive device;
the virtual cutting plane determining module is used for determining the spatial position of a virtual cutting plane according to the 6DoF information;
the sectioning content acquisition module is used for acquiring cutting information of the virtual cutting plane on the three-dimensional object and acquiring internal virtual sectioning content of the three-dimensional object according to the cutting information and the model data; and
and the display module is used for displaying the internal virtual sectioning content.
10. A terminal device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1 to 8.
11. A computer-readable storage medium having program code stored therein, the program code being invoked by a processor to perform the method of any one of claims 1 to 8.
CN201910352571.4A 2019-04-28 2019-04-28 Content processing method and device based on augmented reality, terminal equipment and storage medium Pending CN111862333A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910352571.4A CN111862333A (en) 2019-04-28 2019-04-28 Content processing method and device based on augmented reality, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910352571.4A CN111862333A (en) 2019-04-28 2019-04-28 Content processing method and device based on augmented reality, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111862333A true CN111862333A (en) 2020-10-30

Family

ID=72966299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910352571.4A Pending CN111862333A (en) 2019-04-28 2019-04-28 Content processing method and device based on augmented reality, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111862333A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6608628B1 (en) * 1998-11-06 2003-08-19 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration (Nasa) Method and apparatus for virtual interactive medical imaging by multiple remotely-located users
US20110107270A1 (en) * 2009-10-30 2011-05-05 Bai Wang Treatment planning in a virtual environment
US20180033210A1 (en) * 2014-03-17 2018-02-01 Nitin Vats Interactive display system with screen cut-to-shape of displayed object for realistic visualization and user interaction
CN107847289A (en) * 2015-03-01 2018-03-27 阿里斯医疗诊断公司 The morphology operation of reality enhancing
KR20180041890A (en) * 2016-10-17 2018-04-25 삼성전자주식회사 Method and apparatus for displaying virtual objects
US20180200018A1 (en) * 2016-03-21 2018-07-19 Washington University System and method for virtual reality data integration and visualization for 3d imaging and instrument position data
WO2018159867A1 (en) * 2017-02-28 2018-09-07 메디컬아이피 주식회사 Three-dimensional medical image control method and device therefor
WO2018200767A1 (en) * 2017-04-27 2018-11-01 Arthrology Consulting, Llc Method for augmenting a surgical with virtual guidance content
CN109284000A (en) * 2018-08-10 2019-01-29 西交利物浦大学 Three-dimensional geometry object visualization method and system under a kind of reality environment

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269782A (en) * 2021-04-21 2021-08-17 青岛小鸟看看科技有限公司 Data generation method and device and electronic equipment
CN113421343A (en) * 2021-05-27 2021-09-21 深圳市晨北科技有限公司 Method for observing internal structure of equipment based on augmented reality
CN114388059A (en) * 2022-01-13 2022-04-22 西湖大学 Protein section generation method based on three-dimensional force feedback controller
CN114170407A (en) * 2022-02-15 2022-03-11 北京所思信息科技有限责任公司 Model mapping method, device, equipment and storage medium of input equipment
WO2023155672A1 (en) * 2022-02-15 2023-08-24 Beijing Source Technology Co., Ltd. Input device model projecting method, apparatus and system
US11869145B2 (en) 2022-02-15 2024-01-09 Beijing Source Technology Co., Ltd. Input device model projecting method, apparatus and system
CN114706489A (en) * 2022-02-28 2022-07-05 北京所思信息科技有限责任公司 Virtual method, device, equipment and storage medium of input equipment
WO2024007704A1 (en) * 2022-07-08 2024-01-11 腾讯科技(深圳)有限公司 Virtual object display methods and apparatus, terminal device, and storage medium
CN116030228A (en) * 2023-02-22 2023-04-28 杭州原数科技有限公司 Method and device for displaying mr virtual picture based on web

Similar Documents

Publication Publication Date Title
CN111862333A (en) Content processing method and device based on augmented reality, terminal equipment and storage medium
US20210365492A1 (en) Method and apparatus for identifying input features for later recognition
EP3882861A2 (en) Method and apparatus for synthesizing figure of virtual object, electronic device, and storage medium
US20110292036A1 (en) Depth sensor with application interface
JP4768196B2 (en) Apparatus and method for pointing a target by image processing without performing three-dimensional modeling
CN110163942B (en) Image data processing method and device
WO2019041900A1 (en) Method and device for recognizing assembly operation/simulating assembly in augmented reality environment
KR101556992B1 (en) 3d scanning system using facial plastic surgery simulation
US11244511B2 (en) Augmented reality method, system and terminal device of displaying and controlling virtual content via interaction device
JP7026825B2 (en) Image processing methods and devices, electronic devices and storage media
CN110070556A (en) Use the structural modeling of depth transducer
US20190026935A1 (en) Method and system for providing virtual reality experience based on ultrasound data
US11508141B2 (en) Simple environment solver using planar extraction
CN110489027B (en) Handheld input device and display position control method and device of indication icon of handheld input device
CN104656893A (en) Remote interaction control system and method for physical information space
CN114026606A (en) Fast hand meshing for dynamic occlusion
CN111813214B (en) Virtual content processing method and device, terminal equipment and storage medium
US20230244354A1 (en) 3d models for displayed 2d elements
CN110675413B (en) Three-dimensional face model construction method and device, computer equipment and storage medium
CN110288714A (en) A kind of Virtual simulation lab system
CN111651031B (en) Virtual content display method and device, terminal equipment and storage medium
JPH11175765A (en) Method and device for generating three-dimensional model and storage medium
EP4336459A1 (en) Information processing device, information processing method, and program
CN117369233A (en) Holographic display method, device, equipment and storage medium
Yao et al. Design of a prototype for augmented reality defective bone repair simulation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination