CN115202624A - Full-reality scene construction system, method, electronic device, and storage medium - Google Patents

Full-reality scene construction system, method, electronic device, and storage medium

Info

Publication number
CN115202624A
CN115202624A (application CN202210628634.6A)
Authority
CN
China
Prior art keywords
component
scene
combination
interface
attribute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210628634.6A
Other languages
Chinese (zh)
Inventor
刘慧珠 (Liu Huizhu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xintang Sichuang Educational Technology Co Ltd
Original Assignee
Beijing Xintang Sichuang Educational Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xintang Sichuang Educational Technology Co Ltd filed Critical Beijing Xintang Sichuang Educational Technology Co Ltd
Priority to CN202210628634.6A priority Critical patent/CN115202624A/en
Publication of CN115202624A publication Critical patent/CN115202624A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/20 Software design
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to a full-reality scene construction system, method, electronic device, and storage medium. The full-reality scene construction system comprises a component construction module, a scene combination module, and a scene construction module, wherein: the component construction module is configured to package at least one interface in a preset interface library into at least one component and to set editable component attributes for each of the at least one component; the scene combination module is configured to construct, based on the at least one component, at least one scene combination applicable to different educational scenes; and the scene construction module is configured to construct a target full-reality scene from the at least one scene combination. The system provided by the disclosure effectively shortens the development cycle of virtual educational scenes and improves development efficiency.

Description

Full-reality scene construction system, method, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a full-reality scene construction system and method, an electronic device, and a storage medium.
Background
At present, online teaching is widely used in the education field. With the development of full-reality scenes, online teaching based on virtual educational scenes has gradually become a focus of research and application, making the rapid development of such scenes crucial. Virtual educational scenes, however, are usually developed by directly calling the interfaces of a 3D editor, which leads to long development cycles and low development efficiency.
Disclosure of Invention
In order to solve the above technical problem, the present disclosure provides a full-reality scene construction system and method, an electronic device, and a storage medium, which effectively shorten the development cycle of virtual educational scenes and improve development efficiency.
According to one aspect of the present disclosure, there is provided a full-reality scene construction system comprising a component construction module, a scene combination module, and a scene construction module, wherein:
the component construction module is configured to package at least one interface in a preset interface library into at least one component, and to set an editable component attribute for each of the at least one component;
the scene combination module is configured to construct, based on the at least one component, at least one scene combination applicable to different educational scenes;
the scene construction module is configured to construct a target full-reality scene from the at least one scene combination.
According to another aspect of the present disclosure, there is provided a full-reality scene construction method, the method comprising:
packaging at least one interface in a preset interface library into at least one component, and setting editable component attributes for each of the at least one component;
constructing, based on the at least one component, at least one scene combination applicable to different educational scenes; and
constructing a target full-reality scene from the at least one scene combination.
According to another aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory storing a program, wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the full-reality scene construction method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the full-reality scene construction method described above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the full-reality scene construction method described above.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
the full-true scene construction system comprises a component construction module, a scene combination module and a scene construction module, wherein: the component construction module is specifically used for packaging at least one interface in the preset interface library into at least one component and setting editable component attributes for each component in the at least one component; the scene combination module is used for constructing at least one scene combination applied to different education scenes based on at least one component; the scene construction module is used for constructing the target total-real scene according to at least one scene combination. The system provided by the disclosure effectively reduces the development period of the virtual education scene and improves the development efficiency.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it will be apparent to those skilled in the art that other drawings can be derived from these drawings without inventive effort.
Fig. 1 is a schematic structural diagram of a full-reality scene construction system according to an embodiment of the present disclosure;
Fig. 2 is a schematic structural diagram of another full-reality scene construction system according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of an interface of a scene editor according to an embodiment of the present disclosure;
Fig. 4 is a flowchart of a full-reality scene construction method according to an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features, and advantages of the present disclosure may be more clearly understood, embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description. It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a" or "an" in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will appreciate that references to "one or more" are intended to be exemplary and not limiting unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Before explaining the embodiments of the present disclosure in detail, the related terms referred to in the present disclosure will be explained first.
1. Full-reality environment/full-reality scene
A full-reality scene is a scene displayed (or provided) by a client of an application program (such as an educational application) when the client runs on a terminal. The scene is an environment created for virtual objects to perform activities in (such as teaching and learning), and may be, for example, a full-reality classroom or a virtual teaching scene. The scene may be a simulation of the real world, a purely imaginary scene, or a scene that is partly real and partly imaginary. It should be understood that the full-reality scene provided by the present disclosure is three-dimensional, i.e., a three-dimensional virtual scene.
2. Virtual object
A virtual object is an object controlled by the terminal within the application program. Taking an educational application as an example, the educational full-reality scene displayed in the application may be a full-reality classroom, and a virtual object is a student and/or teacher controlled by the terminal in the application. A virtual object may take a human form, in which case it is referred to as a virtual character. Virtual objects are three-dimensional and may be understood as three-dimensional virtual objects or three-dimensional virtual models.
At present, in the field of 3D editors, existing editors expose only low-level interfaces to their basic capabilities and are poorly suited to educational scenes; moreover, interfaces must be called and developed anew for each application-layer requirement, so business requirements cannot be met at low cost, which wastes both time and labor.
Fig. 1 is a schematic structural diagram of a full-reality scene construction system provided in an embodiment of the present disclosure. The full-reality scene construction system 100 includes a component construction module 110, a scene combination module 120, and a scene construction module 130, wherein:
the component construction module 110 is configured to package at least one interface in a preset interface library into at least one component, and to set an editable component attribute for each of the at least one component.
It can be understood that, before the full-reality scene construction system 100 runs, an interface library needs to be constructed in advance; this pre-constructed interface library is referred to as the preset interface library. The preset interface library contains a large number of underlying interfaces, which may be the basic-capability interfaces provided by an editor, including but not limited to: a text display interface (UIText interface), an image display interface (UIImage interface), a label display interface (UILabel interface), a rigid body interface (rigidbody interface), a collision interface (collider interface), an audio interface (audio interface), a video interface (video interface), a sky box interface (skybox interface), a terrain interface (terrain interface), a skeleton interface (avatar interface), an animation interface, a trigger interface (trigger interface), an editable language logic interface (lua logic interface), a particle system interface, and a visual special effect interface (visual effect interface).
It can be understood that the component construction module 110 packages at least one interface of the preset interface library into at least one component; that is, each component is obtained by packaging at least one interface, and the interfaces involved in different components need not be identical. For example, component 1 may be packaged from interfaces 1 to 3, while component 2 is packaged from interfaces 2 to 4. The component construction module 110 may package a plurality of components based on the preset interface library, where a component can be understood as a general-purpose component suitable for educational scenes. During packaging, an editable component attribute is set for each component; an editable component attribute is a component-related attribute that can be selected while a full-reality scene is being constructed. For example, different special effect attributes may be set for a special effect component, so that different special effects can be selected from that component during scene construction. Specifically, a general-purpose component can be packaged by encapsulating several category functions related to multiple interfaces and embedding some of the attributes in those functions.
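As a rough illustration of this packaging pattern (the class, interface, and attribute names below are hypothetical stand-ins, not identifiers from this disclosure), a component can be modeled as a wrapper that bundles several underlying interfaces and exposes only the attributes declared editable at packaging time:

```python
class Component:
    """A general-purpose component wrapping one or more underlying interfaces."""

    def __init__(self, name, interfaces, editable_attributes):
        self.name = name
        self.interfaces = list(interfaces)           # e.g. interfaces 1 to 3
        self.attributes = dict(editable_attributes)  # attributes exposed to the editor

    def set_attribute(self, key, value):
        # Only attributes declared editable at packaging time may be changed.
        if key not in self.attributes:
            raise KeyError(f"'{key}' is not an editable attribute of {self.name}")
        self.attributes[key] = value


# A special effect component packaged from two interfaces, exposing one
# editable attribute that can be changed during scene construction.
effect = Component(
    "SpecialEffect",
    ["particle_system", "visual_effect"],
    {"effect_type": "light"})
effect.set_attribute("effect_type", "drop")
```

Attributes not embedded at packaging time are rejected, mirroring the idea that only the preset editable attributes are open to the scene builder.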
The scene combination module 120 is configured to construct, based on the at least one component, at least one scene combination applicable to different educational scenes.
It can be understood that the scene combination module 120 constructs, based on the at least one component built by the component construction module 110, at least one scene combination applicable to different full-reality educational scenes, which may be a full-reality classroom, a testing scene, or a particular teaching interaction scene. Each of the constructed scene combinations is composed of at least one component, and the components involved in different scene combinations need not be identical; for example, scene combination 1 may be constructed from components 1 to 3, while scene combination 2 is constructed from components 1, 2, and 4. Each scene combination is also configured with editable attributes: when a scene combination is applied, the components used to construct it can be individually selected. For example, if scene combination 1 is constructed from 3 components, then when a full-reality scene is built from scene combination 1, at least one required component can be selected from those 3 components. In addition, after the scene combination module 120 completes the construction of the at least one scene combination, the scene combinations are built into a studio editor, so that full-reality scenes can conveniently be constructed from the combinations built into the editor.
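The component-selection behavior described above can be sketched as follows (a minimal illustration; the names are assumed, not taken from this disclosure): a scene combination records the components it was built from, and a subset of them is selected when a concrete scene is constructed:

```python
class SceneCombination:
    """A reusable combination of components for one class of educational scenes."""

    def __init__(self, name, components):
        self.name = name
        self.components = list(components)  # components chosen at construction time

    def select(self, wanted):
        # When a concrete full-reality scene is built, only the needed
        # components of the combination are kept.
        unknown = set(wanted) - set(self.components)
        if unknown:
            raise ValueError(f"not part of {self.name}: {sorted(unknown)}")
        return [c for c in self.components if c in wanted]


# A combination built from three components; a scene that needs no
# audio/video selects only two of them.
podium_combo = SceneCombination(
    "podium_interaction",
    ["audio_video", "character_animation", "trigger_logic"])
chosen = podium_combo.select(["character_animation", "trigger_logic"])
```

Requesting a component that was never part of the combination fails, reflecting that selection happens among the components used at construction time.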
The scene construction module 130 is configured to construct a target full-reality scene from the at least one scene combination.
The scene construction module 130 is specifically configured to, in response to a trigger operation on a target scene combination of the at least one scene combination, edit the component attributes of each component included in the target scene combination and construct a target full-reality scene.
It can be understood that the scene construction module 130 constructs an educational scene using the above studio editor, in which the at least one scene combination built by the scene combination module 120 is embedded. A target scene combination is dragged into the scene editing area of the studio editor; the scene construction module 130 responds to the trigger operation on the target scene combination (here, the drag operation) by editing the component attributes of each component involved in the target scene combination. Specifically, the required target components are selected and the required functions are set for them, so as to construct the target full-reality scene.
In another embodiment, Fig. 2 is a schematic structural diagram of another full-reality scene construction system provided in an embodiment of the present disclosure, in which the full-reality scene construction system 100 further includes a preset interface library 140, wherein:
the at least one component packaged by the component construction module 110 includes at least one of the following: a dialog box component, a physics component, an audio/video component, an environmental atmosphere component, a character animation component, a trigger logic component, and a special effect component; other components may also be constructed according to user requirements.
The component construction module 110 is specifically configured to package the text display interface, the image display interface, and the label display interface in the preset interface library into the dialog box component, and to set an editable first component attribute for the dialog box component, where the first component attribute includes at least one of dialog box content, font, and color.
Understandably, the first component attribute refers to attributes related to the text display interface, the image display interface, and the label display interface, such as the selectable content of the dialog box, its size, and the font and color used within it. For example, based on the first component attribute of the dialog box component, one may choose whether an image or text is displayed in the dialog box and set the type, size, and color of the dialog box; the dialog box may, for instance, be of a scrolling style. The advantage of this arrangement is that text and images can be edited directly through the packaged dialog box component, which itself can be edited through the first component attribute, without separately calling a text box and an image box. This facilitates the subsequent construction of full-reality scenes, makes operation simpler, further improves development efficiency, and saves calling costs.
For example, taking the packaging of the dialog box component by the component construction module 110 as an illustration, the packaging process is as follows: (1) Create a text display interface (UiText) object and an image display interface (UiImage) object, and adjust their layouts (RectTransform) so that the UiText object and the UiImage object are aligned; then add a label display interface (UiLabel) object or a button display interface (UiButton) object to form the dialog box component. (2) Add default text attributes: the default text is "please input alternative characters", the font size is 14, and the font color (textcolor) is black. Add default image attributes: the default UiImage background color attribute is white and can be modified or covered by a locally uploaded picture; the default name attribute of the UiButton or UiLabel object is "button", and the name can be modified. (3) Package the layout objects of the UiText and UiImage into a dialog box component (UiTextModel), which is a directly callable dialog box in which text, images, buttons, and the like can be freely edited.
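The three packaging steps above can be sketched roughly as follows (the classes are heavily simplified stand-ins for the editor objects named above, with layout handling reduced to a single tuple):

```python
class UiText:
    def __init__(self):
        # step (2): default text attributes
        self.text = "please input alternative characters"
        self.size = 14
        self.textcolor = "black"
        self.rect = (0, 0)  # simplified stand-in for a RectTransform layout


class UiImage:
    def __init__(self):
        self.background = "white"  # default, may be covered by an uploaded picture
        self.rect = (0, 0)


class UiLabel:
    def __init__(self):
        self.name = "button"  # default name, supports modification


class UiTextModel:
    """Dialog box component: aligned text and image plus a label, directly callable."""

    def __init__(self):
        self.text, self.image, self.label = UiText(), UiImage(), UiLabel()
        self.image.rect = self.text.rect  # step (1): align the two layouts

    def edit(self, **attrs):
        # step (3): the packaged dialog box can be freely edited in one call,
        # without separately calling a text box and an image box
        for key, value in attrs.items():
            for part in (self.text, self.image, self.label):
                if hasattr(part, key):
                    setattr(part, key, value)


dialog = UiTextModel()
dialog.edit(text="2 + 3 = ?", textcolor="blue", name="submit")
```

Each attribute name is routed to whichever packaged sub-object declares it, so the caller interacts with a single component rather than with three separate interfaces.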
It can be understood that the process by which the component construction module 110 builds other general-purpose components from at least one interface is similar to the dialog box component packaging process above and is not repeated here.
The component construction module 110 is specifically configured to package the rigid body interface and the collision interface in the preset interface library into the physics component, and to set an editable second component attribute for the physics component, where the second component attribute includes at least one of a collision attribute and a moment attribute.
It can be understood that the physics component may be used to construct force-based answering scenes: for example, an answer object is pushed or pulled toward a question box, and after the answer collides with the question box, it is judged whether the answer is correct. The second component attribute includes physics-related attributes such as collision and moment, which are not detailed here; the attributes of the physics component can be edited through the second component attribute.
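The force-based answering check can be illustrated with a minimal one-dimensional collision test (a simplified stand-in for the rigid body and collision interfaces; the field names are assumptions, not identifiers from this disclosure):

```python
def check_answer_by_collision(answer_box, question_box, correct_answer):
    """Grade an answer once its box collides with the question box.

    Boxes are dicts with a center position "x" and width "w"; the answer box
    additionally carries the "value" being submitted.
    """
    # 1-D axis-aligned overlap test: the boxes collide when the distance
    # between their centers is less than the sum of their half-widths.
    half_widths = (answer_box["w"] + question_box["w"]) / 2
    if abs(answer_box["x"] - question_box["x"]) >= half_widths:
        return None  # the boxes have not collided yet; nothing to grade
    return answer_box["value"] == correct_answer


# An answer pushed next to the question box collides and is graded.
result = check_answer_by_collision(
    {"x": 10.0, "w": 4.0, "value": "5"},
    {"x": 12.0, "w": 4.0},
    correct_answer="5")  # result is True
```

Returning `None` before any collision separates "not yet answered" from "answered incorrectly", which is the distinction the push-and-collide interaction relies on.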
The component construction module 110 is specifically configured to package the video interface and the audio interface in the preset interface library into the audio/video component, and to set an editable third component attribute for the audio/video component, where the third component attribute includes at least one of an audio/video type attribute and an audio/video effect attribute.
It can be understood that the audio/video component may be used to configure audio and/or video for an educational scene. Through the third component attribute, one may choose whether audio or video is used and select among different audio/video resources; in particular, audio/video may be loaded locally, for example text-reading audio.
The component construction module 110 is specifically configured to package the sky box interface and the terrain interface in the preset interface library into the environmental atmosphere component, and to set an editable fourth component attribute for the environmental atmosphere component, where the fourth component attribute includes an educational scene attribute.
It can be understood that the environmental atmosphere component is used to set up different educational scenes, such as indoor or outdoor educational scenes, through the editable fourth component attribute.
The component construction module 110 is specifically configured to package the skeleton interface and the animation interface in the preset interface library into the character animation component, and to set an editable fifth component attribute for the character animation component, where the fifth component attribute includes a character action attribute.
It can be understood that the character animation component is used to provide virtual objects for an educational scene and to edit different action attributes for those virtual objects, such as running, jumping, and walking.
The component construction module 110 is specifically configured to package the trigger interface and the editable language logic interface in the preset interface library into the trigger logic component, and to set an editable sixth component attribute for the trigger logic component, where the sixth component attribute includes at least one of a click attribute, a proximity attribute, and a trigger logic attribute.
It can be understood that the trigger logic component is used to provide trigger logic for other components in an educational scene. For example, clicking a button in the educational scene may display a teaching question, and the trigger logic may specify that the next teaching question is displayed after a preset time has elapsed since the previous question ended.
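The delayed-question trigger logic described above can be sketched as follows (a minimal illustration with assumed names; times are plain numbers rather than engine clock values):

```python
class TriggerLogicComponent:
    """Decides when the next teaching question is shown: immediately on a
    click trigger, or after a preset delay since the previous question ended."""

    def __init__(self, preset_delay=5.0):
        self.preset_delay = preset_delay
        self.previous_ended_at = None

    def question_ended(self, now):
        # record when the previous teaching question ended
        self.previous_ended_at = now

    def should_show_next(self, now, clicked=False):
        if clicked:  # click trigger fires immediately
            return True
        if self.previous_ended_at is None:
            return False  # no previous question has ended yet
        return now - self.previous_ended_at >= self.preset_delay


trigger = TriggerLogicComponent(preset_delay=5.0)
trigger.question_ended(now=100.0)
# At now=105.0 the preset delay has elapsed; at now=104.0 it has not.
```

The same object handles both the click attribute and the time-based trigger logic attribute, mirroring how one component serves several trigger styles.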
The component construction module 110 is specifically configured to package the particle system interface and the visual special effect interface in the preset interface library into the special effect component, and to set an editable seventh component attribute for the special effect component, where the seventh component attribute includes at least one of a light special effect attribute and a drop special effect attribute.
It can be understood that the special effect component may provide reward-related effects after a task is completed in an educational scene; for example, after a teaching question is answered, a drop special effect may be displayed together with the dropped reward.
The at least one scene combination constructed by the scene combination module 120 includes at least one of the following: a podium interaction combination, a teaching task interaction combination, a task completion reward combination, and a three-dimensional collision answering combination.
The scene combination module 120 is specifically configured to construct the podium interaction combination based on the audio/video component, the character animation component, and the trigger logic component.
Understandably, the scene combination module 120 constructs the podium interaction combination from these three components and embeds it into the studio editor.
Illustratively, the flow by which the scene combination module 120 constructs the podium interaction combination is as follows: (1) Package the audio component: call the AudioClip interface of an Audio Source to set the podium's music resource (the AudioClip interface is a simple abstraction for playing an audio clip), then play the music through an Audio Listener, with the propagation effect (spread) preset to a 3D sound-mixing effect; dragging in a playable music file resource (mp3) automatically matches the default effect logic. (2) Package the character animation: load a move-to-podium action asset (.fbx) into the animation file of the character (player), and switch the player's standing and walking action states with an animation controller; after packaging, the character's pose can be edited simply by dragging in character action assets. (3) Package the logic trigger: code is written for the character's trigger actions and music logic, and the player determines whether to trigger different actions and music according to the trigger hit distance. (4) Package the audio/video component, the character animation component, and the trigger logic component into a speech class, forming the final, directly usable podium component.
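Steps (1) to (4) above can be sketched as follows (a heavily simplified illustration: the classes stand in for the audio source, animation controller, and trigger logic, and the hit distance value is an assumption):

```python
class AudioComponent:
    def __init__(self, clip="podium_music.mp3", spread="3D"):
        self.clip, self.spread = clip, spread  # step (1): music resource + 3D mix
        self.playing = False

    def play(self):
        self.playing = True


class CharacterAnimation:
    def __init__(self):
        self.state = "standing"  # step (2): default action state

    def walk_to_podium(self):
        self.state = "walking"


class Speech:
    """Step (4): the packaged, directly usable podium component."""

    HIT_DISTANCE = 2.0  # step (3): assumed trigger distance

    def __init__(self):
        self.audio = AudioComponent()
        self.animation = CharacterAnimation()

    def update(self, player_distance):
        # trigger the walking action and the music once the player is close enough
        if player_distance <= self.HIT_DISTANCE:
            self.animation.walk_to_podium()
            self.audio.play()


speech = Speech()
speech.update(player_distance=1.5)  # within hit distance: walking + music playing
```

The point of step (4) is visible here: the scene builder calls a single `Speech` object, while the audio, animation, and distance-trigger details stay packaged inside.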
It can be understood that the manner in which the scene combination module 120 constructs the other scene combinations is similar to that of the podium interaction combination described above, and is not repeated here.
Wherein the scene combination module 120 is specifically configured to construct the teaching task interaction combination based on the dialog component, the environment atmosphere component, and the trigger logic component.
Understandably, the scene combination module 120 is specifically configured to construct teaching task interaction combinations based on the dialog component, the environment atmosphere component, and the trigger logic component, where the teaching task interaction combinations are used to construct different teaching tasks, for example, an explanation task for a certain article.
Wherein the scene combination module 120 is specifically configured to construct the task-completion reward combination based on the special effect component and the trigger logic component.
Wherein the scene combination module 120 is specifically configured to construct the three-dimensional collision question-answering combination based on the dialog component and the physical component.
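The patent does not detail how the dialog component and the physical component cooperate in this combination. A plausible reading is that a physics collision with an answer object pops up a question dialog; the hypothetical sketch below illustrates that composition. The class names, the radius-based collision test, and the question text are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DialogComponent:
    """Dialog-box component with the editable attributes named in the patent."""
    content: str = ""
    font: str = "default"
    color: str = "black"

@dataclass
class PhysicsComponent:
    """Physical component wrapping rigid-body/collision interfaces (simplified)."""
    radius: float = 1.0

    def collides(self, distance_to_object: float) -> bool:
        return distance_to_object <= self.radius

@dataclass
class CollisionAnswering:
    """Three-dimensional collision question-answering combination (dialog + physics)."""
    dialog: DialogComponent
    physics: PhysicsComponent

    def on_player_move(self, distance: float) -> Optional[str]:
        # When the player's avatar collides with the answer object,
        # pop up the question dialog; otherwise nothing happens.
        return self.dialog.content if self.physics.collides(distance) else None

quiz = CollisionAnswering(DialogComponent(content="What is 2 + 2?"),
                          PhysicsComponent(radius=0.5))
print(quiz.on_player_move(0.3))  # collision detected, question dialog shown
```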
It is understood that the scene combination module 120 may also construct different scene combinations based on different components, and further possible scene combinations are not described herein.
Illustratively, referring to fig. 3, fig. 3 is an interface schematic diagram of a scene editor provided in the embodiment of the present disclosure. The scene editor may be a studio editor with the at least one scene combination constructed by the scene combination module 120 built in. Fig. 3 includes a studio editor interface 300; the studio editor interface 300 includes a podium interaction combination 310 and a scene editing region 320, and also includes other built-in scene combinations, scene combination 1 and scene combination 2. The podium interaction combination 310 includes an audio/video component 311, a character animation component 312, and a trigger logic component 313. A user drags the podium interaction combination 310 in the studio editor interface 300 to the scene editing region 320 and edits the attributes of each component in the scene editing region 320, thereby implementing a business requirement without secondary development; at the same time, the user is supported in replacing and selecting different special effects, trigger logic, and the like. For example, some of the components in the podium interaction combination 310 may be changed: only the character animation component 312 and the trigger logic component 313 are used while the audio/video component 311 is not; the action of the virtual object in the character animation component 312 is changed, for example, walking is changed to running; or the trigger logic in the trigger logic component 313 is changed, for example, proximity trigger is changed to click trigger. The edited podium interaction combination 310 may be saved for subsequent use after editing is completed.
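The editing workflow of fig. 3 — disable the audio/video component, change walking to running, change proximity trigger to click trigger — can be sketched as attribute edits over a plain data structure, with no secondary development of the components themselves. The dictionary layout and attribute keys below are hypothetical, chosen only to mirror the example edits in the text.

```python
# Hypothetical in-memory form of a combination dragged into the scene editing region.
combination = {
    "podium_interaction": {
        "audio_video":   {"enabled": True, "clip": "lecture_bgm.mp3"},
        "animation":     {"enabled": True, "action": "walking"},
        "trigger_logic": {"enabled": True, "mode": "proximity"},
    }
}

def edit_attribute(combo, combo_name, component, attr, value):
    """Edit one editable attribute of one component in the scene editing region."""
    combo[combo_name][component][attr] = value
    return combo

# The three edits described for fig. 3:
edit_attribute(combination, "podium_interaction", "audio_video", "enabled", False)
edit_attribute(combination, "podium_interaction", "animation", "action", "running")
edit_attribute(combination, "podium_interaction", "trigger_logic", "mode", "click")
print(combination["podium_interaction"]["trigger_logic"]["mode"])  # "click"
```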
The embodiment of the present disclosure provides a full-real scene construction system, which includes a component construction module, a scene combination module, and a scene construction module. The component construction module is used for packaging at least one interface in a preset interface library into at least one component and setting an editable component attribute for each component in the at least one component, that is, packaging a plurality of interfaces in the preset interface library into universal components that can be directly called according to the requirements of an education scene. The scene combination module is used for constructing at least one scene combination applied to different education scenes based on the at least one component, that is, constructing scene combinations suitable for various education scenes from the constructed universal components, flexibly combining the generated components into templates suitable for educational interaction scenes, and directly embedding the scene combinations into an editor, so that education scenes can be conveniently constructed based on the editor. The scene construction module is used for constructing a target full-real scene according to the at least one scene combination: it utilizes the editor to construct a target education scene, specifically by dragging a scene combination to the scene editing region of the editor and modifying the attributes of its components there, completing the construction of the target education scene. The system provided by the present disclosure effectively shortens the development period of virtual education scenes, improves development efficiency, and effectively reduces the waste of manpower and material resources.
On the basis of the foregoing embodiment, fig. 4 is a flowchart of a full-real scene construction method provided by the embodiment of the present disclosure, which is applied to a terminal configured with the full-real scene construction system and specifically includes the following steps S410 to S430 shown in fig. 4:
S410, packaging at least one interface in a preset interface library into at least one component, and setting an editable component attribute for each component in the at least one component.
It is understood that the implementation step of S410 is specifically referred to the implementation step of the component building module 110, and is not described herein again.
And S420, constructing at least one scene combination applied to different education scenes based on the at least one component.
It can be understood that, on the basis of the foregoing S410, the implementation step of S420 specifically refers to the implementation step of the foregoing scene combination module 120, and details are not described herein.
And S430, constructing a target full-real scene according to the at least one scene combination.
It can be understood that, on the basis of the foregoing S420, the implementation step of S430 specifically refers to the implementation step of the foregoing scene building module 130, and is not described herein again.
Optionally, the constructing of a target full-real scene according to the at least one scene combination in S430 specifically includes the following step:
In response to a trigger operation on a target scene combination in the at least one scene combination, editing the component attribute of each component in the at least one component included in the target scene combination, to construct the target full-real scene.
It can be understood that, based on an editor with a plurality of built-in scene combinations, in response to a trigger operation on a target scene combination among the scene combinations, the component attributes of each component included in the target scene combination are edited, attributes suitable for the target education scene are selected, and the target full-real scene is constructed. The target scene combination may be at least one scene combination; scene combinations may be used together, and the components in each scene combination may be enabled or disabled as required. The specific manner of constructing an education scene based on the scene combinations is not limited to the above and may be chosen according to user requirements.
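Step S430 as described above — select one or more built-in combinations, edit their component attributes, and assemble the result — can be sketched as follows. The combination library, component names, and attribute keys are hypothetical; the sketch only shows that the library templates stay untouched while the target scene carries the edited copies.

```python
def build_target_scene(selected_combinations, attribute_edits):
    """Assemble a target scene from selected combinations plus attribute edits.

    selected_combinations: {combination_name: {component: {attr: value}}}
    attribute_edits: {(combination_name, component, attr): new_value}
    """
    # Copy each combination so the built-in templates remain reusable.
    scene = {name: {comp: dict(attrs) for comp, attrs in components.items()}
             for name, components in selected_combinations.items()}
    for (name, comp, attr), value in attribute_edits.items():
        scene[name][comp][attr] = value
    return scene

library = {
    "podium_interaction": {"trigger_logic": {"mode": "proximity"}},
    "task_reward": {"special_effect": {"effect": "light"}},
}
scene = build_target_scene(library,
                           {("podium_interaction", "trigger_logic", "mode"): "click"})
print(scene["podium_interaction"]["trigger_logic"]["mode"])  # "click"
print(library["podium_interaction"]["trigger_logic"]["mode"])  # still "proximity"
```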
The embodiment of the present disclosure provides a full-real scene construction method. At least one interface in a preset interface library is packaged into at least one component, and an editable component attribute is set for each component in the at least one component; packaging a plurality of interfaces into universal components saves calling costs and greatly improves development efficiency. Then, at least one scene combination applied to different education scenes is constructed based on the at least one component; different universal components are built into interaction component combinations usable by education scenes, and more personalized requirements are met by editing each component in a scene combination, improving the flexibility of education scene construction. Finally, the target full-real scene is constructed according to the at least one scene combination; because the scene combinations can be edited directly, the construction of the full-real scene is accelerated, development efficiency is improved, and development costs are further reduced.
An exemplary embodiment of the present disclosure also provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor, and the computer program, when executed by the at least one processor, causes the electronic device to perform a method according to an embodiment of the present disclosure.
The exemplary embodiments of the present disclosure also provide a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, is adapted to cause the computer to perform a method according to an embodiment of the present disclosure.
Referring to fig. 5, a block diagram of the structure of an electronic device 500, which may be a server or a client of the present disclosure and is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. The term electronic device is intended to represent various forms of digital computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the electronic device 500 includes a computing unit 501, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 502 or a computer program loaded from a storage unit 508 into a random access memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in the electronic device 500 are connected to the I/O interface 505, including: an input unit 506, an output unit 507, a storage unit 508, and a communication unit 509. The input unit 506 may be any type of device capable of inputting information to the electronic device 500; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device. The output unit 507 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 508 may include, but is not limited to, magnetic or optical disks. The communication unit 509 allows the electronic device 500 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers, and/or chipsets, such as Bluetooth™ devices, Wi-Fi devices, WiMAX devices, cellular communication devices, and/or the like.
The computing unit 501 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 501 performs the respective methods and processes described above. For example, in some embodiments, the full-real scene construction method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 500 via the ROM 502 and/or the communication unit 509. In some embodiments, the computing unit 501 may be configured in any other suitable way (e.g., by means of firmware) to perform the full-real scene construction method.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As used in this disclosure, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The previous description is only for the purpose of describing particular embodiments of the present disclosure, so as to enable those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A full-real scene construction system, comprising a component construction module, a scene combination module, and a scene construction module, wherein:
the component construction module is used for packaging at least one interface in a preset interface library into at least one component and setting editable component attributes for each component in the at least one component;
the scene combination module is used for constructing at least one scene combination applied to different education scenes based on the at least one component;
the scene construction module is used for constructing a target full-real scene according to the at least one scene combination.
2. The system of claim 1, wherein the at least one component comprises at least one of: the system comprises a dialog box component, a physical component, an audio and video component, an environment atmosphere component, a character animation component, a trigger logic component and a special effect component.
3. The system of claim 2, wherein the component construction module is specifically configured to package a text display interface, an image display interface, and an identification display interface in a preset interface library as the dialog component, and set an editable first component attribute for the dialog component, wherein the first component attribute at least includes one of dialog content, font, and color;
the component construction module is specifically configured to encapsulate a rigid body interface and a collision interface in the preset interface library as the physical component, and set an editable second component attribute for the physical component, where the second component attribute at least includes one attribute of a collision and a moment;
the component construction module is specifically used for packaging a video interface and an audio interface in the preset interface library into the audio and video component and setting an editable third component attribute for the audio and video component, wherein the third component attribute at least comprises one attribute of an audio and video type and an audio and video effect;
the component construction module is specifically configured to encapsulate a sky box interface and a terrain interface in the preset interface library as the environmental atmosphere component, and set an editable fourth component attribute for the environmental atmosphere component, where the fourth component attribute includes an educational scene attribute;
the component construction module is specifically used for packaging a skeleton interface and an animation interface in the preset interface library into the character animation component and setting an editable fifth component attribute for the character animation component, wherein the fifth component attribute comprises a character action attribute;
the component construction module is specifically configured to package a trigger interface and an editable language logic interface in the preset interface library as the trigger logic component, and set an editable sixth component attribute for the trigger logic component, wherein the sixth component attribute at least comprises one of a click trigger, a proximity trigger, and trigger logic;
the component construction module is specifically configured to encapsulate a particle system interface and a visual special effect interface in the preset interface library as the special effect component, and set an editable seventh component attribute for the special effect component, where the seventh component attribute at least includes one attribute of a light effect and a drop effect.
4. The system of claim 2, wherein the at least one scene combination comprises at least one of the following combinations: a podium interaction combination, a teaching task interaction combination, a task-completion reward combination, and a three-dimensional collision question-answering combination.
5. The system of claim 4, wherein the scene combination module is specifically configured to construct the podium interaction combination based on the audio-video component, the character animation component, and the trigger logic component;
the scene combination module is specifically used for constructing the teaching task interactive combination based on the dialog box component, the environment atmosphere component and the trigger logic component;
the scene combination module is specifically configured to construct the task-completion reward combination based on the special effect component and the trigger logic component;
the scene combination module is specifically configured to construct the three-dimensional collision question-answering combination based on the dialog component and the physical component.
6. The system according to claim 1, wherein the scene construction module is specifically configured to, in response to a trigger operation on a target scene combination in the at least one scene combination, edit a component attribute of each component in the at least one component included in the target scene combination to construct a target full-real scene.
7. A full-real scene construction method, characterized by comprising:
packaging at least one interface in a preset interface library into at least one component, and setting editable component attributes for each component in the at least one component;
constructing at least one scene combination applied to different educational scenes based on the at least one component;
and constructing a target full-real scene according to the at least one scene combination.
8. The method of claim 7, wherein the constructing a target full-real scene according to the at least one scene combination comprises:
in response to a trigger operation on a target scene combination in the at least one scene combination, editing the component attribute of each component in the at least one component included in the target scene combination, to construct the target full-real scene.
9. An electronic device, characterized in that the electronic device comprises:
a processor; and
a memory for storing a program, wherein the program is stored in the memory,
wherein the program comprises instructions which, when executed by the processor, cause the processor to carry out the full-real scene construction method according to any one of claims 7 to 8.
10. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the full-real scene construction method according to any one of claims 7 to 8.
CN202210628634.6A 2022-06-06 2022-06-06 Full-real scene construction system, method, electronic equipment and storage medium Pending CN115202624A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210628634.6A CN115202624A (en) 2022-06-06 2022-06-06 Full-real scene construction system, method, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115202624A true CN115202624A (en) 2022-10-18

Family

ID=83576756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210628634.6A Pending CN115202624A (en) 2022-06-06 2022-06-06 Full-real scene construction system, method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115202624A (en)

Similar Documents

Publication Publication Date Title
KR20120045744A (en) An apparatus and method for authoring experience-based learning content
CN102609991A (en) Volume-reduction optimization method for three-dimensional solid model
WO2018058811A1 (en) Virtual reality scene loading method and device
KR102224785B1 (en) Method And Apparatus for Providing Coding Education Service
WO2023124055A1 (en) Digital-twin-based artificial enhancement method and apparatus, and medium
WO2020186934A1 (en) Method, apparatus, and electronic device for generating animation containing dynamic background
CN109391848A (en) A kind of interactive advertisement system
CN107861928A (en) One kind makes demonstration courseware method and apparatus
CN110062290A (en) Video interactive content generating method, device, equipment and medium
CN113935868A (en) Multi-courseware teaching demonstration system based on Unity3D engine
CN112218130A (en) Control method and device for interactive video, storage medium and terminal
CN108762878A (en) A kind of application program interactive interface creating method, device, equipment and storage medium
CN114237540A (en) Intelligent classroom online teaching interaction method and device, storage medium and terminal
WO2023241369A1 (en) Question answering method and apparatus, and electronic device
Gao et al. [Retracted] Realization of Music‐Assisted Interactive Teaching System Based on Virtual Reality Technology
Rankin et al. Training systems design: bridging the gap between users and developers using storyboards
CN111167119A (en) Game development display method, device, equipment and storage medium
CN115202624A (en) Full-real scene construction system, method, electronic equipment and storage medium
Clark Extended reality in informal learning environments
Song et al. Hands-on: rapid interactive application prototyping for media arts and stage production
Jin et al. Volumivive: An Authoring System for Adding Interactivity to Volumetric Video
CN114827703B (en) Queuing playing method, device, equipment and medium for views
WO2023231553A1 (en) Prop interaction method and apparatus in virtual scene, electronic device, computer readable storage medium, and computer program product
CN117671204A (en) Interactive business application loading method and device of meta universe
CN115964111A (en) Information display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination