CN117853662A - Method and device for realizing real-time interaction of three-dimensional model in demonstration text by player - Google Patents

Info

Publication number: CN117853662A
Application number: CN202410139765.7A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: dimensional model, interaction, display, text, instruction
Inventors: 唐兴波, 李智鹏
Current Assignee: Aidipu Technology Co ltd
Original Assignee: Aidipu Technology Co ltd
Application filed by Aidipu Technology Co ltd

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and a device for a player to realize real-time interaction with a three-dimensional model in presentation text. The method includes: acquiring an interaction instruction for a three-dimensional model displayed in a text presentation application; parsing the interaction instruction to determine an interaction result for the three-dimensional model; and rendering the interaction result in real time and displaying the rendered three-dimensional model in real time in the text presentation application. With the method and device, the three-dimensional model displayed in a text presentation application can be interacted with in real time, and the interaction results are rendered and displayed in real time, providing an all-round, vivid three-dimensional display, adding an immersive interactive experience to the process of information transmission and display, enriching the application scenarios of three-dimensional models, and making three-dimensional model interaction more convenient.

Description

Method and device for realizing real-time interaction of three-dimensional model in demonstration text by player
Technical Field
The application relates to the technical field of three-dimensional graphic image processing and video playing, and in particular to a method and a device for a player to realize real-time interaction with a three-dimensional model in presentation text.
Background
With the rise of new technologies such as 5G, VR, and the metaverse, modes of visual presentation are changing. In the past, video capture devices recorded the physical world as images and presented it in the form of pictures and videos, which belongs to image processing technology. In the digital or virtual world, however, presentation and expression rely on graphics technology and three-dimensional models: fields such as digital twins, digital media, and digital interaction in the evolution of the metaverse all require real-time rendering by a three-dimensional graphics engine, with three-dimensional visual presentation realized through the linkage of three-dimensional models and data.
In text presentation applications, three-dimensional models generally cannot be well presented or interacted with. To improve information expression capability, it is desirable to provide a three-dimensional model interaction method applicable to text presentation applications, expanding the application scenarios of three-dimensional models and making them more convenient to operate.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for realizing real-time interaction of a three-dimensional model in a demonstration text by a player.
In a first aspect, an embodiment of the present disclosure provides a method for implementing real-time interaction of a three-dimensional model in a presentation text by a player, including: acquiring an interaction instruction aiming at a three-dimensional model displayed in a text demonstration application; analyzing the interaction instruction to determine an interaction result of the three-dimensional model; and rendering the interaction result in real time, and displaying the rendered three-dimensional model in real time in a text demonstration application.
In a second aspect, an embodiment of the present disclosure provides an apparatus for implementing real-time interaction of a three-dimensional model in a presentation text by a player, including: an instruction acquisition unit configured to acquire an interaction instruction for a three-dimensional model presented in a text presentation application; the instruction analysis unit is configured to analyze the interaction instruction and determine an interaction result of the three-dimensional model; and the real-time rendering unit is configured to render the interaction result in real time and display the rendered three-dimensional model in real time in the text demonstration application.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow diagram of one embodiment of a method for a player of the present disclosure to implement three-dimensional model real-time interactions in a presentation text;
FIG. 2 is a flow chart of another embodiment of a method for implementing three-dimensional model real-time interaction in a presentation text by a player of the present disclosure;
FIG. 3 is a schematic illustration of the interaction results of displacing individual components of a three-dimensional snowman model;
FIG. 4 is a schematic illustration of interaction with an export item of a vehicle model;
FIG. 5 is a schematic illustration of interaction of a hub of a vehicle model;
FIG. 6 is a schematic illustration of interaction with a three-dimensional model through a control;
fig. 7 is a schematic structural diagram of an embodiment of an apparatus for implementing real-time interaction of three-dimensional models in presentation text by a player of the present disclosure.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the present disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the present disclosure. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise; furthermore, it is to be understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
In order to make the technical scheme and advantages of the present disclosure more apparent, the present disclosure will be further described in detail below with reference to the accompanying drawings and specific embodiments.
FIG. 1 illustrates a flow 100 of one embodiment of a method for a player of the present disclosure to implement three-dimensional model real-time interactions in presentation text. As shown in fig. 1, the method for implementing real-time interaction of the three-dimensional model in the presentation text by the player of the present embodiment may include the following steps:
step 101, obtaining an interaction instruction aiming at a three-dimensional model displayed in a text demonstration application.
In this embodiment, the three-dimensional model may be presented in a text presentation application. Text presentation applications here may include, but are not limited to, Word, PowerPoint, and the like. The three-dimensional model may be a model previously constructed by a three-dimensional model construction application, and may include a plurality of components or be accompanied by an animation.
The execution subject of the three-dimensional model interaction method may acquire the interaction instruction in various ways, for example, through a trigger button, a key combination, or a gesture. The interaction instruction can operate on the three-dimensional model: for example, it can change the viewing angle of the three-dimensional model, split the model into its individual components, operate on a single component (such as rotation, scaling, displacement, and reset), or control playback of an animation attached to the three-dimensional model.
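The instruction acquisition described above can be sketched as a small event-to-instruction mapping. The following Python sketch is purely illustrative; the event field names and instruction kinds are assumptions, not terms from the patent:

```python
# Hypothetical sketch: translating raw UI events (button, key combination,
# drag gesture) into interaction instruction dicts. Field names are invented.

def acquire_interaction_instruction(event):
    """Translate a raw UI event into an interaction instruction."""
    if event["type"] == "button":          # trigger button on a panel
        return {"kind": event["action"]}   # e.g. "hide", "reset", "play"
    if event["type"] == "hotkey":          # key combination
        return {"kind": "view_switch", "view": event["view"]}
    if event["type"] == "drag":            # mouse or touch gesture
        return {"kind": "rotate", "dx": event["dx"], "dy": event["dy"]}
    return {"kind": "noop"}                # unrecognized input is ignored
```

A dispatcher of this shape lets later steps (instruction parsing, rendering) work on a uniform instruction object regardless of how the input arrived.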
And 102, analyzing the interaction instruction to determine an interaction result of the three-dimensional model.
After the interaction instruction is acquired, the interaction instruction can be analyzed, and an interaction result of the three-dimensional model is determined. Specifically, according to the input mode of the interaction instruction, the state of the corresponding three-dimensional model can be determined, and the state is used as the interaction result. Or determining the interaction result of the three-dimensional model according to the corresponding relation between the interaction instruction and the interaction result.
If the interaction instruction is a playback control instruction for an animation in the three-dimensional model, the interaction result may be playback control of the animation. For example, the interaction instruction may specify that the entrance animation of the three-dimensional model plays automatically; the interaction result is then that the animation plays automatically when the three-dimensional model enters the scene.
If the interactive instruction is input through a button, the interactive result may be a state of a three-dimensional model corresponding to the button. For example, the button is a component hiding instruction, and the interaction result may be a state of the three-dimensional model after hiding a certain component.
If the interaction instruction is input in real time by the user through a mouse or a touch screen, the interaction result can be the state of the three-dimensional model after the three-dimensional model is operated by the input. For example, a user drags the three-dimensional model through a mouse to enable the bottom surface of the three-dimensional model to face upwards, and the interaction result is the state of visual angle change of the three-dimensional model in the process of dragging the three-dimensional model.
And step 103, rendering the interaction result in real time, and displaying the rendered three-dimensional model in real time in a text demonstration application.
After the interaction result of the three-dimensional model is determined, the interaction result can be rendered in real time, and the rendered three-dimensional model can be displayed in real time in the text presentation application. Specifically, a three-dimensional player that displays the three-dimensional model may be embedded in the text presentation application in the form of a plug-in. The three-dimensional player realizes real-time rendering of the interaction result by calling the underlying rendering engine. For example, a call button for the three-dimensional player is embedded in a panel of the text presentation application; clicking the call button inserts the three-dimensional model into the text presentation application and triggers real-time rendering of its interactions. The three-dimensional model rendered in real time may then be displayed in real time in the text presentation application. In some specific practices, the rendering viewport of the rendering engine carries an Alpha channel, so that transparency can be set for the three-dimensional model.
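The Alpha-channel viewport mentioned above amounts to standard alpha compositing: each rendered pixel carries a transparency value and is blended over the document background. A minimal per-pixel sketch (standard "over" blending, not code from the patent):

```python
def composite_over(fg_rgba, bg_rgb):
    """Alpha-blend one foreground RGBA pixel over an opaque background pixel.
    fg_rgba: (r, g, b, a) with components in 0..255; bg_rgb: (r, g, b)."""
    r, g, b, a = fg_rgba
    alpha = a / 255.0
    # Classic "over" operator: out = alpha * fg + (1 - alpha) * bg
    return tuple(round(alpha * f + (1 - alpha) * bg)
                 for f, bg in zip((r, g, b), bg_rgb))
```

With alpha 0 the presentation slide shows through untouched; with alpha 255 the rendered model fully covers it, which is what lets the 3D viewport sit inside a slide without an opaque rectangle.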
The method for realizing real-time interaction of the three-dimensional model in the presentation text by the player provided by this embodiment allows real-time interaction with the three-dimensional model displayed in a text presentation application, with the interaction results rendered and displayed in real time. This provides an all-round, vivid three-dimensional display, adds an immersive interactive experience to the process of information transmission and display, enriches the application scenarios of three-dimensional models, and makes three-dimensional model interaction more convenient.
Continuing with fig. 2, a flow 200 of another embodiment of a method for a player to implement three-dimensional model real-time interactions in presentation text according to the present disclosure is shown. As shown in fig. 2, the method of the present embodiment may include the steps of:
in step 201, in response to receiving a presentation request of a three-dimensional model through a text presentation application, a display area of the three-dimensional model is generated in a display interface of the text presentation application.
In this embodiment, the presentation request of the three-dimensional model may be received by the text presentation application. The presentation request may be triggered by clicking a button in the text presentation application or by entry of a combination key. Upon receiving the presentation request, a display area of the three-dimensional model may be generated in a display interface of the text presentation application. The size of the display area may be preset, and the position of the display area may be preset. In some specific application scenarios, the display area may use a text input position as a vertex, and a preset size as a length, so as to obtain a rectangular area as the display area.
In specific practice, the text presentation application may be PowerPoint, Word, WPS Writer, or WPS Presentation. The three-dimensional player plug-in may be developed based on a VSTO Office Add-in or on VBA. When developing the plug-in, controls such as custom menus, buttons, forms, and input boxes can be added to the text presentation application. The plug-in is deployed by writing information into the registry: registry parameters control whether the plug-in loads automatically at startup and store the path of the three-dimensional player's plug-in program file. The three-dimensional player plug-in calls the underlying real-time three-dimensional graphics rendering engine through an adaptation interface, performing real-time rendering and interaction control in the rendering viewport into which the three-dimensional model is inserted. In this embodiment, the types of three-dimensional model may include static models and dynamic models.
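The registry deployment described above can be illustrated with a hypothetical entry. The key path and value names below follow the common Office COM/VSTO add-in convention; the ProgId, file path, and display strings are invented for illustration and are not specified by the patent:

```reg
Windows Registry Editor Version 5.00

; Hypothetical deployment entry for the three-dimensional player plug-in.
; LoadBehavior = 3 is the conventional "load automatically at startup" value.
[HKEY_CURRENT_USER\Software\Microsoft\Office\PowerPoint\Addins\ThreeDPlayer.Addin]
"FriendlyName"="3D Player Plug-in"
"Description"="Real-time 3D model playback inside the presentation"
"LoadBehavior"=dword:00000003
"Manifest"="C:\\Plugins\\ThreeDPlayer.vsto|vstolocal"
```

The two registry roles the patent names map onto this: `LoadBehavior` controls automatic loading at startup, and `Manifest` stores the path of the plug-in program file.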
It will be appreciated that upon receipt of a presentation request, a source file of the three-dimensional model may be acquired to display the three-dimensional model in the display area. Source files may include, but are not limited to, the fbx, obj, 3mf, ply, stl, glb, and msd formats. The source file may be obtained in various ways: for example, through a path set when the plug-in was installed, by download from a network platform, or as a source file specified by the user. For instance, the user may drag the source file into the display area to specify it, or specify the source file by entering its storage path.
After the source file of the three-dimensional model is obtained, space coordinates, units, materials, illumination, hierarchical structures and the like can be converted, and initial display is performed. Here, coordinate conversion refers to converting coordinates in a three-dimensional model into coordinates of a canvas within a display area. The unit conversion refers to converting units of the three-dimensional model into units of canvas within the display area. It will be appreciated that the process of unit conversion may involve scaling of the three-dimensional model so that an overview of the three-dimensional model may be displayed within the display area. The material, illumination and hierarchical structure of the three-dimensional model can also be converted so as to be capable of restoring the material effect. It should be noted that, due to the difference of the rendering engines, there may be a difference in the display effects of the material, illumination, etc. of the three-dimensional model.
Step 202, obtaining display configuration information of the three-dimensional model.
Display configuration information of the three-dimensional model can also be obtained. The display configuration information may include a spatial position of the three-dimensional model, an initial display angle, camera parameters, a background picture, and the like. The display configuration information may be stored in the same path as the source file of the three-dimensional model, or may be obtained from a preset platform, or may be obtained from an input device of the user.
In this embodiment, different initial display configuration information may be set for different types of three-dimensional models. For example, for a static three-dimensional model, the initial display configuration information may include a preset animation mode, such as an effect of automatic rotation, floating up and down, left and right swing, and the like. For a dynamic three-dimensional model, the initial display configuration information may include automatic play information of an animation, etc.
In some alternative implementations of the present embodiment, the interface type corresponding thereto may be determined first from the text presentation application. And then determining the adapting interface of the corresponding three-dimensional model plug-in according to the interface type. For example, word and PPT are different in interface type, corresponding to different adaptation interfaces.
Meanwhile, the storage path of the three-dimensional model file can be determined according to the identification of the three-dimensional model file. Specifically, the identification of the three-dimensional model file may be the name of the three-dimensional model file. The storage path can be searched by the name.
Finally, according to the storage path and the adaptive interface, the display configuration information of the three-dimensional model can be obtained. Specifically, the storage path may be accessed through the adaptation interface to obtain display configuration information of the three-dimensional model.
And 203, rendering the three-dimensional model according to the display configuration information and displaying the three-dimensional model in a display area.
After the display configuration information is obtained, the three-dimensional model can be rendered according to the display configuration information, and the rendered three-dimensional model is displayed in the display area. Specifically, the underlying rendering engine may be invoked to render when rendering. It will be appreciated that the display configuration information may be modified and may be saved after the modification is completed. After the three-dimensional model is stored, when the three-dimensional model is opened next time, the three-dimensional model can be rendered according to the stored display configuration information.
In some optional implementations of this embodiment, the display configuration information includes an initial display angle. The rendering may be achieved specifically by: performing coordinate transformation on the three-dimensional model and setting the center of the three-dimensional model at the center of the display area; determining the display proportion of the three-dimensional model according to the display area and the three-dimensional model; and rendering the three-dimensional model according to the display proportion and the initial display angle, and displaying the three-dimensional model in a display area.
In this embodiment, when the three-dimensional model is displayed, coordinate conversion may be performed and the center of the three-dimensional model set at the center of the display area. It will be appreciated that if the three-dimensional model exists within a three-dimensional scene, the center of the three-dimensional scene may be placed at the center of the display area. In this way, the three-dimensional model and/or the three-dimensional scene is displayed in the center of the display area. In addition, the three-dimensional model and/or the three-dimensional scene can be reduced or enlarged according to the display scale, based on the size of the display area and the size of the three-dimensional model. The display scale here can be derived by comparing the size of the display area with the size of the three-dimensional model. Once the display scale is obtained, the three-dimensional model can be rendered and displayed at the initial display angle. In some specific practices, the initial display angle may be 45 degrees to the upper right.
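The display-scale derivation above can be sketched as a fit-to-viewport computation. This is one plausible reading of the description (the uniform-scale rule and the centering offsets are assumptions, not claimed by the patent):

```python
def fit_model_to_area(area_w, area_h, model_w, model_h):
    """Display scale: the largest uniform factor that fits the model's
    bounding box inside the display area, plus the centering offset."""
    scale = min(area_w / model_w, area_h / model_h)
    # Offsets place the scaled model at the center of the display area.
    offset_x = (area_w - model_w * scale) / 2
    offset_y = (area_h - model_h * scale) / 2
    return scale, (offset_x, offset_y)
```

For example, a 200x100 model in an 800x600 area is scaled by 4 and pushed down by 100 units so its center coincides with the area's center.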
In some optional implementations of this embodiment, the three-dimensional model is located in a three-dimensional scene. Then the display scale may be determined based on the size of the display area and the size of the three-dimensional scene.
It should be noted that, if the three-dimensional model is not located at the center of the three-dimensional scene, then after rendering is complete the three-dimensional scene may be displayed at the center of the display area while the three-dimensional model is not. In this case, a detailed display of the three-dimensional model can be achieved through operations such as moving or hiding the three-dimensional scene.
In some optional implementations of this embodiment, the display configuration information includes animation playback configuration information. The three-dimensional model and the attached animation can be rendered according to the animation playing configuration information, and the animation is played in the display area at the same time. For example, the animation playing configuration information may include information of when to play, what speed to play, where to play, how many times to play, etc. If the animation play configuration information is to be automatically played once at a 2-time speed, the above-described animation can be automatically played once at a 2-time speed while the three-dimensional model is displayed.
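The playback configuration described above can be expressed as a small dictionary-driven computation. The field names (`speed`, `repeats`) are illustrative assumptions standing in for "what speed to play" and "how many times to play":

```python
def playback_timeline(duration_s, config):
    """Wall-clock playback length for an attached animation, given its
    nominal duration and a playback configuration dict (names assumed)."""
    speed = config.get("speed", 1.0)     # e.g. 2.0 for double speed
    repeats = config.get("repeats", 1)   # how many times to play
    return duration_s / speed * repeats
```

Under the example in the text, a 10-second animation configured to play once at 2x speed occupies 5 seconds of presentation time.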
In some optional implementations of this embodiment, an interactive panel of the three-dimensional model may be displayed in a designated area of the display area. The interactive panel may include an entry for inputting interactive instructions. The user can input the interaction instruction through the interaction panel. Specifically, the interactive panel may be suspended at a designated area of the display area. Alternatively, the interactive panel may be hidden when the user does not operate the three-dimensional model, and the interactive panel may be displayed when a designated operation is detected. For example, when the mouse is detected to slide in the display area, the interactive panel may be displayed.
And 204, determining an interaction result of the three-dimensional model according to the interaction instruction and a preset mapping relation.
In this embodiment, when the interaction instruction is parsed, the interaction result of the three-dimensional model may be determined according to the interaction instruction and a preset mapping relation. The mapping relation represents the correspondence between interaction instructions and interaction results. For example, a "view switch" interaction instruction is used to quickly switch views of the three-dimensional model in three-dimensional space, enabling viewing of the front, back, left, right, top, and bottom details of the model. A "camera adjustment" interaction instruction is used to adjust the spatial position (X/Y/Z), lens orientation (Pan/Tilt/Roll), and zoom (Zoom/Focus) of the virtual camera, giving the three-dimensional model a stronger 3D perspective effect. Alternatively, the three-dimensional model can be dragged by mouse or touch to achieve rotation, movement, and scaling. In addition, through UI controls, the model's built-in animations or animation segments, such as skeletal animation and explosion animation, can be played, the playback position changed, the animation effect controlled, and the playback speed adjusted. The interaction result corresponding to a "hide" interaction instruction is that a certain component is hidden. The interaction result corresponding to a "reset" interaction instruction is that the three-dimensional model is restored to its initial state. The "reset" interaction instruction may be triggered by a specific operation, for example, a mouse double-click on a blank area of the 3D display window, or a one-touch reset through a UI control.
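The preset mapping relation can be sketched as a lookup table. The instruction names below mirror those in the description; the result strings are shorthand placeholders for the model states the patent describes:

```python
# Illustrative mapping between interaction instructions and interaction
# results. In a real plug-in the values would be state-changing callbacks.
INSTRUCTION_RESULTS = {
    "view_switch": "switch camera to the requested face (front/back/...)",
    "camera_adjust": "update X/Y/Z position, pan/tilt/roll and zoom/focus",
    "hide": "hide the targeted component",
    "reset": "restore the model to its initial state",
}

def resolve(instruction):
    """Look up the interaction result for an instruction name."""
    return INSTRUCTION_RESULTS.get(instruction, "no-op")
```

Keeping the correspondence in one table makes it straightforward to add new instruction types without touching the parsing or rendering steps.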
Alternatively, in this embodiment, the interaction instruction may be parsed in step 205.
Step 205, analyzing the interaction instruction to determine interaction operation information; according to the interactive operation information, the posture of the three-dimensional model is adjusted; and determining an interaction result according to the adjusted gesture.
The interactive instruction can be analyzed to determine the interactive operation information. The interaction instruction here may be an instruction for real-time interaction such as dragging, scaling, etc. of the three-dimensional model. For such interactive instructions, real-time interactive operation information may be determined. The interactive operation information here may include a drag position, a scaled displacement, and the like. According to the interactive operation information, the gesture of the three-dimensional model can be adjusted. For example, the three-dimensional model is rotated according to the position of the drag to change the posture of the three-dimensional model. Or determining the final scaling according to the scaled displacement and the scaling corresponding to the unit displacement. And adjusting the posture of the three-dimensional model according to the final scaling. And determining an interaction result according to the adjusted gesture. Here, the gesture corresponding to the three-dimensional model at the end of the interaction may be used as the interaction result.
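The two pose adjustments described above (scaling from displacement, rotation from drag position) can be sketched numerically. The coefficients below are assumptions for illustration; the patent only states that the final scale follows from the displacement and a per-unit scaling factor:

```python
def final_scale(base_scale, displacement, scale_per_unit=0.01):
    """Final zoom factor from a pinch/scroll displacement: the scaled
    displacement times an assumed per-unit scaling coefficient."""
    return base_scale * (1.0 + displacement * scale_per_unit)

def drag_to_rotation(dx, dy, degrees_per_pixel=0.5):
    """Map drag deltas to yaw/pitch changes (coefficient is an assumption)."""
    return dx * degrees_per_pixel, dy * degrees_per_pixel
```

The pose at the end of the gesture, produced by accumulating these adjustments, is what the method takes as the interaction result.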
In this embodiment, the three-dimensional model includes at least one component, each of which may correspond to an export item. In other implementations, the three-dimensional model as a whole may also correspond to the derivative term. The interaction instruction may also be parsed by step 206.
Step 206, determining a target component for which the interaction instruction is directed; and determining the interaction result of the target component according to the interaction instruction and the leading-out item of the target component.
In this embodiment, the target component for which the interaction instruction is intended may be determined first. Specifically, the component at the position indicated by the interaction instruction, or the component corresponding to the identifier indicated by the instruction, may be taken as the target component. The interaction result of the target component is then determined according to the interaction instruction and the export items of the target component. Export items may include, but are not limited to: text, data, pictures (Logo), background music (BGM), video, and the like. The function of an export item is that its original content can be replaced by modification.
In some alternative implementations of the present embodiment, the export items may also be modified. If modification information for an export item is received, the export item may be modified according to the modification information. For example, if the export item is a text header, the text in the header may be modified; or if the export item is a picture, the picture may be updated or replaced.
After the modification is completed, if a save request for the modified export item is received, the modified export item may be saved. The modified result can then be presented directly the next time the three-dimensional model is opened.
In this embodiment, elements in the three-dimensional model or scene may be set as export items, and the content of an export item can then be modified and rendered in real time using the player plug-in. For example, if the target component is the vehicle itself, the export item may be a data chart of vehicle configuration parameters, and the data or chart can be modified directly as desired. Or the target component is a hub, and the export item is text containing information such as the hub's name and price, which can be modified in real time. By modifying export items, the need for up-to-date dynamic presentation can be satisfied.
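The export-item mechanism above can be sketched as a tiny content-replacement object. The class and the sample hub label are invented for illustration; the patent describes only the behavior (content is replaced by modification and can be saved):

```python
# Hypothetical sketch of an export item attached to a model component:
# its content (text, picture path, etc.) can be replaced at runtime.
class ExportItem:
    def __init__(self, kind, content):
        self.kind = kind          # e.g. "text", "picture", "data"
        self.content = content    # current content shown by the player

    def modify(self, new_content):
        """Replace the original content; the player re-renders in real time."""
        self.content = new_content

hub_label = ExportItem("text", "Hub A, $120")
hub_label.modify("Hub B, $150")   # e.g. updating the hub's name and price
```

A save step would then persist `content` alongside the model's display configuration so the modified result appears directly the next time the model is opened.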
In some specific practices, for a three-dimensional model containing several objects, the three-dimensional objects therein may be selected and the selected state may be highlighted, and then a hiding or separate display operation may be performed; in this way, independent presentation of the critical components may be achieved.
Fig. 3 shows a schematic diagram of the interaction result of displacing an individual component of a three-dimensional snowman model. As can be seen from fig. 3, the interaction instruction is a move instruction, the target component is a leg, and the interaction result is that the snowman's leg is moved to the designated position.
FIG. 4 shows a schematic diagram of interaction with an export item of a vehicle model. After the three-dimensional model is displayed in the text presentation application, the text in the three-dimensional model can be updated or replaced directly through a UI input control. The car body color is treated as a texture material; when modifying it, a replacement picture can be preset, or a texture map can be selected externally. The figure shows a plurality of body colors; clicking a different body color renders its texture map in real time, realizing switching of the body color.
FIG. 5 shows a schematic diagram of interaction with the hub of a vehicle model. As can be seen from fig. 5, clicking the hub component switches among different types of hubs.
FIG. 6 shows a schematic diagram of interacting with a three-dimensional model through controls. In fig. 6, the user may click buttons on the panel, or buttons below the fan, to control the three-dimensional fan model. For example, clicking the power button turns the fan on or off, and clicking the oscillation button makes the fan oscillate.
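The button-driven fan control above reduces to mapping button presses onto model state. The following Python sketch is illustrative only; the button names and state fields are assumptions made for the example.

```python
class FanModel:
    """Minimal stand-in for the three-dimensional fan model's interactive state."""
    def __init__(self):
        self.on = False
        self.oscillating = False

    def handle(self, button):
        if button == "power":
            self.on = not self.on
            if not self.on:
                self.oscillating = False   # turning off also stops oscillation
        elif button == "oscillate" and self.on:
            self.oscillating = not self.oscillating

fan = FanModel()
fan.handle("power")      # switch the fan on
fan.handle("oscillate")  # start oscillating
```

Each `handle` call would, in the real player, also trigger the corresponding animation and a real-time render of the model.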
Step 207: rendering the interaction result in real time, and displaying the rendered three-dimensional model in real time in the text presentation application.
The real-time interactive operation is synchronized to the background 3D player through an interface, which parses the interaction data and calls the underlying rendering engine to render the three-dimensional model in real time according to the received interaction instruction, including details such as the material, texture, maps, lighting effects, and shadows of the three-dimensional model.
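The parse-then-render pipeline of this step can be sketched as follows. This is a minimal Python illustration under assumed instruction formats (`"rotate:45"`, `"scale:2.0"`); the actual instruction encoding and engine calls are not specified by the patent.

```python
def parse_instruction(raw):
    """Split an interaction instruction into an operation and its argument."""
    op, _, arg = raw.partition(":")
    return {"op": op, "arg": arg}

class Renderer:
    """Stand-in for the underlying rendering engine; records each render pass."""
    def __init__(self):
        self.frames = []

    def render(self, state):
        self.frames.append(dict(state))   # one frame per interaction

def apply_interaction(state, instruction, renderer):
    parsed = parse_instruction(instruction)
    if parsed["op"] == "rotate":
        state["angle"] = (state["angle"] + int(parsed["arg"])) % 360
    elif parsed["op"] == "scale":
        state["scale"] = float(parsed["arg"])
    renderer.render(state)                # re-render immediately after each interaction
    return state

renderer = Renderer()
state = {"angle": 0, "scale": 1.0}
apply_interaction(state, "rotate:45", renderer)
```

The key property mirrored here is that every parsed interaction is followed immediately by a render call, which is what makes the displayed model track the user's operations in real time.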
The real-time rendering result of the three-dimensional model may then be transmitted to the text presentation application via the interface for visual presentation in a presentation window of the text presentation application.
The method for realizing real-time interaction with a three-dimensional model in a presentation text by a player, provided by the embodiment of the invention, enables a presenter or audience to transmit and acquire information in a 3D, immersive, and visual manner through real-time interactive operations, to read three-dimensional digital content more intuitively, comprehensively, stereoscopically, and vividly, and promotes the dissemination of three-dimensional digital content.
With further reference to fig. 7, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of an apparatus for implementing real-time interaction of a three-dimensional model in a presentation text by a player, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 1, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 7, an apparatus 700 for implementing real-time interaction of a three-dimensional model in a presentation text by a player of the present embodiment includes: an instruction acquisition unit 701, an instruction parsing unit 702, and a real-time rendering unit 703.
An instruction acquisition unit 701 configured to acquire an interaction instruction for a three-dimensional model presented in a text presentation application.
The instruction parsing unit 702 is configured to parse the interaction instruction and determine an interaction result of the three-dimensional model.
The real-time rendering unit 703 is configured to render the interaction result in real time, and display the rendered three-dimensional model in real time in the text demonstration application.
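The three units of apparatus 700 can be sketched as cooperating objects. This Python sketch is illustrative only: the class names mirror the units above, the preset instruction-to-result mapping follows claim 6, and everything else (event shape, result strings) is an assumption.

```python
class InstructionAcquisitionUnit:
    """Unit 701: pulls the interaction instruction out of a UI event."""
    def acquire(self, event):
        return event["instruction"]

class InstructionParsingUnit:
    """Unit 702: resolves an instruction via a preset mapping (cf. claim 6)."""
    def __init__(self, mapping):
        self.mapping = mapping

    def parse(self, instruction):
        return self.mapping[instruction]

class RealTimeRenderingUnit:
    """Unit 703: renders the interaction result and exposes it for display."""
    def __init__(self):
        self.displayed = None

    def render(self, result):
        self.displayed = result

class Apparatus700:
    def __init__(self, mapping):
        self.acquisition = InstructionAcquisitionUnit()
        self.parser = InstructionParsingUnit(mapping)
        self.renderer = RealTimeRenderingUnit()

    def handle(self, event):
        instruction = self.acquisition.acquire(event)
        result = self.parser.parse(instruction)
        self.renderer.render(result)
        return result

apparatus = Apparatus700({"rotate_left": "model rotated left"})
apparatus.handle({"instruction": "rotate_left"})
```

The point of the sketch is the division of labor: acquisition, parsing, and rendering are separate units wired together in one pipeline, matching the apparatus embodiment.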
In summary, in the technical scheme of the disclosure, real-time interaction can be performed on the three-dimensional model displayed in the text presentation application, and the interaction result can be rendered and displayed in real time, thereby providing an all-round and vivid three-dimensional display, adding an immersive interactive experience to the process of transmitting and displaying information, enriching the application scenarios of three-dimensional models, and improving the convenience of three-dimensional model interaction.
The foregoing description of the preferred embodiments of the present disclosure is not intended to limit the disclosure; rather, all modifications, equivalents, improvements, and alternatives falling within the spirit and principles of the present disclosure are intended to be covered.

Claims (10)

1. A method for realizing real-time interaction of a three-dimensional model in a demonstration text by a player, which is characterized by comprising the following steps:
acquiring an interaction instruction aiming at a three-dimensional model displayed in a text demonstration application;
analyzing the interaction instruction to determine an interaction result of the three-dimensional model;
and rendering the interaction result in real time, and displaying the rendered three-dimensional model in real time in the text demonstration application.
2. The method according to claim 1, wherein the method further comprises:
generating a display area of the three-dimensional model in a display interface of a text presentation application in response to receiving a presentation request of the three-dimensional model through the text presentation application;
acquiring display configuration information of the three-dimensional model;
and rendering the three-dimensional model according to the display configuration information, and displaying the rendered three-dimensional model and an interactive panel in the display area, wherein the interactive panel comprises an inlet for inputting an interactive instruction.
3. The method of claim 2, wherein the obtaining display configuration information of the three-dimensional model comprises:
analyzing the display request, and determining an interface type of the text demonstration application and a three-dimensional model file identifier;
determining an adaptive interface of a corresponding three-dimensional model plug-in according to the interface type of the text demonstration application;
determining a storage path of the three-dimensional model file according to the three-dimensional model file identifier;
and acquiring display configuration information of the three-dimensional model according to the storage path and the adaptation interface.
4. The method of claim 2, wherein the display configuration information includes an initial display angle, the three-dimensional model being located in a three-dimensional scene; and
rendering the three-dimensional model according to the display configuration information and displaying the three-dimensional model in the display area, wherein the rendering comprises the following steps:
performing coordinate transformation on the three-dimensional model and setting the center of the three-dimensional model at the center of the display area;
determining the display proportion of the three-dimensional model according to the display area and the three-dimensional scene;
and rendering the three-dimensional model according to the display proportion and the initial display angle, and displaying the three-dimensional model in the display area.
5. The method of claim 2, wherein the display configuration information comprises animation playback configuration information; and
rendering the three-dimensional model according to the display configuration information and displaying the three-dimensional model in the display area, wherein the rendering comprises the following steps:
and rendering the three-dimensional model and the attached animation according to the animation playing configuration information, and playing the animation in the display area.
6. The method of claim 1, wherein the parsing the interaction instruction to determine the interaction result of the three-dimensional model comprises:
and determining an interaction result of the three-dimensional model according to the interaction instruction and a preset mapping relation, wherein the mapping relation is used for representing the corresponding relation between the interaction instruction and the interaction result.
7. The method of claim 1, wherein the parsing the interaction instruction to determine the interaction result of the three-dimensional model comprises:
analyzing the interactive instruction to determine interactive operation information;
according to the interactive operation information, adjusting the posture of the three-dimensional model;
and determining the interaction result according to the adjusted gesture.
8. The method of claim 1, wherein the three-dimensional model comprises at least one component, each component corresponding to an export item; and
the analyzing the interaction instruction to determine the interaction result of the three-dimensional model comprises the following steps:
determining a target component for which the interaction instruction is directed;
and determining an interaction result of the target component according to the interaction instruction and the export item of the target component.
9. The method according to claim 1, wherein the method further comprises:
modifying the export item according to modification information in response to receiving the modification information for the export item;
in response to receiving a save request for the modified export item, the modified export item is saved.
10. An apparatus for realizing real-time interaction of a three-dimensional model in a demonstration text by a player, the apparatus comprising:
an instruction acquisition unit configured to acquire an interaction instruction for a three-dimensional model presented in a text presentation application;
the instruction analysis unit is configured to analyze the interaction instruction and determine an interaction result of the three-dimensional model;
and the real-time rendering unit is configured to render the interaction result in real time and display the rendered three-dimensional model in the text demonstration application in real time.
CN202410139765.7A 2024-01-31 2024-01-31 Method and device for realizing real-time interaction of three-dimensional model in demonstration text by player Pending CN117853662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410139765.7A CN117853662A (en) 2024-01-31 2024-01-31 Method and device for realizing real-time interaction of three-dimensional model in demonstration text by player


Publications (1)

Publication Number Publication Date
CN117853662A true CN117853662A (en) 2024-04-09

Family

ID=90532500


Country Status (1)

Country Link
CN (1) CN117853662A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination