CN112987921A - VR scene explanation scheme generation method - Google Patents

VR scene explanation scheme generation method

Info

Publication number
CN112987921A
CN112987921A · Application CN202110195568.3A
Authority
CN
China
Prior art keywords
explanation
action
file
scene
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110195568.3A
Other languages
Chinese (zh)
Other versions
CN112987921B (en)
Inventor
战立涛
李广朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHEZHI HULIAN (BEIJING) SCIENCE & TECHNOLOGY CO LTD
Original Assignee
CHEZHI HULIAN (BEIJING) SCIENCE & TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHEZHI HULIAN (BEIJING) SCIENCE & TECHNOLOGY CO LTD
Priority to CN202110195568.3A
Publication of CN112987921A
Application granted
Publication of CN112987921B
Legal status: Active
Anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 — Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/63 — Querying
    • G06F 16/638 — Presentation of query results
    • G06F 16/639 — Presentation of query results using playlists
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 — Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/68 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/683 — Retrieval characterised by using metadata automatically derived from the content
    • G06F 16/685 — Retrieval using an automatically derived transcript of audio data, e.g. lyrics
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 — Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/68 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/686 — Retrieval using manually generated information, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 — Sound input; Sound output
    • G06F 3/167 — Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Library & Information Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Management Or Editing Of Information On Record Carriers (AREA)

Abstract

The invention discloses a method for generating a VR scene explanation scheme, comprising the following steps: creating an explanation file and generating a timeline for it; selecting, from a database, explanation audio clips related to the theme content of the display scene and adding them to the timeline without overlapping in time; playing the selected explanation audio clip in the VR scene and executing the actions of the explanation audio clip during its playback to obtain action information; binding the action information to the timeline; and storing the timeline, the explanation audio clips added to it, and the action information bound to it in the explanation file, thereby obtaining an explanation scheme. This allows the corresponding audio and actions to be recorded separately for each display scene. Compared with recording one complete explanation file for all display scenes at once, with audio and actions recorded simultaneously, the generation of the explanation scheme is more flexible and supports post-editing.

Description

VR scene explanation scheme generation method
Technical Field
The invention relates to the technical field of VR (virtual reality), in particular to a VR scene explanation scheme generation method and a VR scene explanation scheme playing method.
Background
With the continuous development of VR technology, it is now common to explain a specific target, such as a vehicle, in a VR scene. At present, a method for explaining a target based on a VR scene mainly comprises two stages: recording an explanation file and playing it. In the recording stage, an interpreter records explanation audio for all scenes while performing specific actions in step with the audio, for example clicking or dragging the VR picture with a mouse to display a specific angle; the explanation audio, action data and action times are recorded. In the playing stage, the explanation audio and the action data are played back synchronously, completely reproducing the explanation audio and the viewing angles of all explained scenes.
However, recording audio and action data in this way usually requires the interpreter to speak and operate simultaneously and to complete the whole explanation in one take. As a result, recording is inflexible, fault tolerance is poor, and post-editing is impossible; moreover, the user cannot perform specific actions while the recorded audio and actions are played back, so interactivity is poor.
Therefore, a more flexible and highly interactive VR scene interpretation scheme generation and playing method needs to be provided for users.
Disclosure of Invention
To this end, the present invention provides a solution to, or at least alleviate, at least one of the problems identified above.
According to an aspect of the present invention, there is provided a VR scene interpretation scheme generation method, executed in a computing device, the computing device being communicatively connected to a database, where pre-recorded interpretation audio segments are stored in the database, where an interpretation file is used to interpret display scenes, each display scene corresponds to one interpretation file, each interpretation file corresponds to one or more interpretation audio segments, and each interpretation audio segment interprets a document, the method including the steps of:
creating an explanation file and generating a time line for the created explanation file;
selecting an explanation audio clip related to the theme content of the display scene from the database, and adding the selected explanation audio clip into the timeline without overlapping time;
playing the selected explanation audio clip in the VR scene, and executing the action of the explanation audio clip in the process of playing the explanation audio clip to obtain action information;
binding the action information to the timeline;
and storing the time line, the explanation audio clips added into the time line and the action information bound to the time line into the explanation file, thereby obtaining an explanation scheme.
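The steps above can be sketched in Python. This is an illustrative sketch only — the patent describes no code, and the class and field names (ExplanationFile, add_clip, etc.) are assumptions — but it shows the core constraint of the method: audio clips are added to the timeline without overlapping in time.

```python
class ExplanationFile:
    """Holds the timeline of explanation audio clips for one display scene."""

    def __init__(self, name):
        self.name = name          # corresponds to a display scene, e.g. "driver's seat"
        self.timeline = []        # list of (start, end, clip_id), kept non-overlapping
        self.actions = []         # action information bound to the timeline

    def add_clip(self, start, duration, clip_id):
        """Add an explanation audio clip; reject any overlap in time."""
        end = start + duration
        for s, e, _ in self.timeline:
            if start < e and s < end:   # the two intervals intersect
                raise ValueError("clips must not overlap on the timeline")
        self.timeline.append((start, end, clip_id))
        self.timeline.sort()

file = ExplanationFile("driver's seat")
file.add_clip(0, 30, "steering_wheel")
file.add_clip(30, 20, "dashboard")   # back-to-back clips are allowed
```

Attempting `file.add_clip(10, 5, "overlap")` would raise, enforcing the non-overlap rule from the method.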
Optionally, the method further comprises the steps of:
and creating an action ID for executing the action explaining the audio clip, wherein the action ID represents the sequence of the action.
Optionally, the action information includes an action time point, an action ID, action data and an action type, the action time point records an action occurrence time, the action data records the content of the action, and the action type represents an action category;
wherein the step of binding the action information to the timeline comprises:
and binding action information to the time line according to the action time points and the sequence of the action IDs.
Optionally, the method further comprises the steps of:
assigning a name to the explanation file, wherein the name corresponds to the presentation scene;
and storing the time line, the explanation audio clip and the explanation file of the action information and the name of the explanation file into the database.
Optionally, the action types include one or more of playing an audio clip, switching a display scene, switching a color, opening a door, closing a door, switching a view angle, switching a viewpoint, opening a picture anchor, closing picture browsing, opening a video anchor, closing video playing, clicking a price inquiry anchor, closing price inquiry, opening a chat robot anchor, and closing a chat robot page.
Optionally, the display scene comprises one or more of a driving seat, a co-driving seat, a rear row and a trunk.
Optionally, the document covers one or more of a center console, a steering wheel, a dashboard, large screen content, navigation, reverse image, voice control, mobile phone interconnection, air conditioning, vehicle television, multi-function rearview mirror, seat adjustment, seat heating and ventilation, air conditioning control, mobile phone charging, shift knob, driving mode, parking, hill descent, engine start and stop, parking brake, storage space, front row space, rear row space, door panel function, door and window control, speaker, airbag, center armrest, sunroof, vanity mirror sun visor, cup holder, wireless charging, seat folding, trunk space, and maintenance.
Optionally, the database stores materials in advance, where the materials include texts, pictures and videos, and the method further includes:
acquiring the materials from the database;
creating a VR scene based on pictures in the material;
adding an anchor point in the created VR scene;
and associating the added anchor points with the material to finish the editing of the VR scene.
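The VR-scene editing steps above can be sketched as follows. The material and anchor structures here are assumptions for illustration (the patent does not specify data formats): a scene is built from the picture materials, and each added anchor is associated with a text, picture, or video material.

```python
def build_vr_scene(materials):
    """Create a VR scene from the picture materials fetched from the database."""
    return {"panoramas": [m for m in materials if m["kind"] == "picture"],
            "anchors": []}

def add_anchor(scene, position, material):
    """Place an anchor in the scene and associate it with a material item."""
    anchor = {"position": position, "material": material}
    scene["anchors"].append(anchor)
    return anchor

materials = [{"kind": "picture", "name": "cockpit_pano"},
             {"kind": "video", "name": "sunroof_demo"}]
scene = build_vr_scene(materials)
add_anchor(scene, (0.4, 0.1), materials[1])   # a video anchor, e.g. on the sunroof
```

Clicking such an anchor during playback would then open the associated material, which is how the anchor-related action types (opening a picture anchor, opening a video anchor, etc.) come into play.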
According to another aspect of the present invention, there is provided a VR scene interpretation scheme playing method, executed in a server, where the server is respectively in communication connection with a client and a database, the database stores an interpretation scheme list, the interpretation scheme list includes a plurality of interpretation file names, each interpretation file name corresponds to an interpretation file, and each interpretation file is generated by the VR scene interpretation file generating method according to any one of claims 1 to 8, and the method includes:
responding to the operation that a user opens a page for playing the explanation file at the client, and displaying a VR scene;
acquiring an explanation scheme list from the database, and displaying the explanation scheme list in a VR scene;
responding to the user clicking the name of the explanation file in the explanation scheme list, and acquiring the corresponding explanation file from the database;
and playing the acquired explanation file content in the VR scene.
Optionally, the explanation file includes a timeline, an explanation audio clip added to the timeline, and action information bound to the timeline, and the step of playing the acquired explanation file content in the VR scene includes:
in the VR scene, the explanation audio clip and the action information are played according to the time line of the acquired explanation file.
Optionally, the action information includes an action time point, an action ID and action data, the action time point records an action occurrence time, the action ID represents an order of occurrence of the action, the action data records content of the action, and the playing and explaining the audio clip and the action information includes:
playing the explanation audio clip;
recording the playing time of the explaining audio clip;
reading an action ID of a currently playing action, wherein the currently playing action is executed based on the action data;
traversing the timeline of the playing explanation audio clip at intervals of a first preset time; judging whether an action ID in the timeline is larger than the currently playing action ID; if so, further judging whether that action's time point is smaller than the sum of the playing time of the explanation file and the first preset time; if so, playing the action with that action ID in the timeline, and if not, continuing to play the current action.
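The traversal described above can be sketched as a polling step, run once per "first preset time". This is an illustrative Python sketch (function and key names assumed): it selects the next action whose ID exceeds the currently playing one and whose time point falls within the upcoming polling window, and otherwise keeps the current action.

```python
def next_action(timeline, current_id, play_time, poll_interval):
    """Per the traversal rule: return the next action whose action_id is
    greater than the current one and whose time_point is smaller than
    play_time + poll_interval; return None to keep the current action."""
    for action in sorted(timeline, key=lambda a: a["action_id"]):
        if (action["action_id"] > current_id
                and action["time_point"] < play_time + poll_interval):
            return action
    return None  # continue playing the current action

timeline = [
    {"action_id": 0, "time_point": 0.0, "type": "switch_scene"},
    {"action_id": 1, "time_point": 4.5, "type": "switch_view"},
    {"action_id": 2, "time_point": 9.0, "type": "open_door"},
]
# At playback time 4.0 with a 1.0 s polling interval, action 1 qualifies:
# its ID (1) exceeds the current ID (0) and 4.5 < 4.0 + 1.0.
nxt = next_action(timeline, current_id=0, play_time=4.0, poll_interval=1.0)
```

Action 2 would not yet qualify at that point, since its time point (9.0) lies outside the next polling window.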
Optionally, the action information further includes an action type, and the method further includes the steps of:
determining the category of the action according to its action type;
and playing the action of the corresponding category according to the playing method for that category.
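Playing actions "of the corresponding category according to different playing methods" amounts to dispatching on the action type. A minimal sketch, with handler names and messages that are purely illustrative:

```python
def play_action(action, handlers):
    """Dispatch an action to the player registered for its action type."""
    handler = handlers.get(action["type"])
    if handler is None:
        raise KeyError(f"no player for action type {action['type']!r}")
    return handler(action.get("data", {}))

# One playing method per action category (a small subset of the types listed above).
handlers = {
    "switch_view":  lambda d: f"rotate camera to {d.get('angle')} degrees",
    "open_door":    lambda d: f"play door-open animation for {d.get('door')}",
    "switch_color": lambda d: f"repaint body to {d.get('color')}",
}
result = play_action({"type": "open_door", "data": {"door": "front_left"}}, handlers)
```

New action categories (opening anchors, switching scenes, and so on) are then supported by registering additional handlers rather than changing the playback loop.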
Optionally, the action types include one or more of playing an audio clip, switching a display scene, switching a color, opening a door, closing a door, switching a view angle, switching a viewpoint, opening a picture anchor, closing picture browsing, opening a video anchor, closing video playing, clicking a price inquiry anchor, closing price inquiry, opening a chat robot anchor, and closing a chat robot page.
Optionally, the method further comprises the steps of:
responding to the user triggering a pause anchor point at the client, pausing the playing of the explanation audio clip, and stopping recording the playing time of the explanation audio clip;
responding to the operation that the user moves the mouse in the VR scene, pausing the playing of actions;
counting the time for which the mouse is moved;
and judging whether the time for moving the mouse is greater than a second preset time; if not, continuing to pause the playing of actions, and if so, playing the explanation audio clip and the action information in the VR scene according to the timeline of the acquired explanation file.
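The pause-on-interaction behaviour can be sketched as a small state machine. As the method is written, mouse movement pauses action playback, the movement time is accumulated, and playback resumes along the timeline once that time exceeds the second preset time. This is an illustrative Python sketch under that reading; the class and method names are assumptions.

```python
class PlaybackController:
    """Pause action playback while the user drags the view; resume once the
    accumulated mouse-movement time exceeds the second preset time."""

    def __init__(self, resume_threshold):
        self.resume_threshold = resume_threshold  # the "second preset time"
        self.paused = False
        self.mouse_time = 0.0                     # accumulated movement time

    def on_mouse_move(self, elapsed):
        """Called while the user moves the mouse in the VR scene."""
        self.paused = True
        self.mouse_time += elapsed
        if self.mouse_time > self.resume_threshold:
            # per the method: resume playing along the explanation file's timeline
            self.paused = False
            self.mouse_time = 0.0
        return self.paused

ctrl = PlaybackController(resume_threshold=3.0)
ctrl.on_mouse_move(1.0)                 # paused: 1.0 s of movement so far
ctrl.on_mouse_move(1.0)                 # still paused: 2.0 s <= 3.0 s
resumed = not ctrl.on_mouse_move(1.5)   # 3.5 s > 3.0 s, playback resumes
```

This preserves the interactivity claim: the user can drag the viewing angle freely, and synchronized playback picks up again afterwards.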
According to yet another aspect of the invention, there is provided a computing device comprising at least one processor; and a memory storing program instructions configured for execution by the at least one processor, the program instructions comprising instructions for performing the above-described method according to the invention.
According to yet another aspect of the present invention, there is provided a readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the above-described method of the present invention.
According to the VR scene explanation scheme generation method provided by the invention, corresponding audio and actions can be recorded separately for different display scenes. Compared with recording one complete explanation file for all display scenes at once, with audio and actions recorded simultaneously, the explanation scheme is generated more flexibly and can be edited afterwards.
In addition, according to the VR scene explanation scheme playing method based on the above generation method, a user can freely select an explanation file by clicking its name in the explanation scheme list in the VR scene, and can interrupt the explanation audio and actions at any time while watching, freely drag the viewing angle, and click any anchor point of interest. Interactivity during playback of the explanation scheme is thus preserved, improving the user experience.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
Fig. 1 shows a schematic diagram of a VR scene interpretation scheme generation system 100 according to an embodiment of the invention;
FIG. 2 shows a schematic diagram of a computing device 200, according to one embodiment of the invention;
fig. 3 shows a schematic diagram of a VR scene interpretation scheme playback system 300 according to an embodiment of the invention;
FIG. 4 illustrates a flow diagram of a VR scene interpretation scheme generation method 400 in accordance with one embodiment of the present invention;
fig. 5 shows a flowchart of a VR scene interpretation scheme playing method 500 according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
With the continuous development of VR technology, it is now common to explain a specific target, such as a vehicle, in a VR scene. At present, a method for explaining a target based on a VR scene mainly comprises two stages: recording an explanation file and playing it. In the recording stage, an interpreter records explanation audio for all scenes while operating specific actions according to the audio, for example clicking or dragging the VR picture with a mouse to display a specific angle; the explanation audio, action data and action times are recorded. In the playing stage, the explanation audio and the action data are played back synchronously according to the action times, completely reproducing the explanation audio and the interpreter's operating actions for all scenes.
However, in the process of recording audio and action data, the interpreter is generally required to record one complete explanation scheme for all display scenes, performing the explanation audio and the operation actions simultaneously and completing the whole explanation in one take. This makes recording inflexible, gives poor fault tolerance, and prevents later editing. In addition, because the audio and actions of the complete flow are stored together, the data volume is large, loading is slow during playback, stuttering occurs easily, and the user experience is poor. During playback of the explanation audio and operation actions, the user can neither operate on the playing page nor select which display scene to play, so interactivity and user experience are poor.
Therefore, in order to solve the above technical problems, the present invention provides a more flexible VR scene interpretation scheme generating system 100 and a VR scene interpretation scheme playing system 300 with strong interactivity.
Fig. 1 shows a schematic diagram of a VR scene interpretation scheme generation system 100 according to an embodiment of the invention.
As shown in fig. 1, the VR scene explanation scheme generation system 100 includes a computing device 200 and a database 110, the computing device 200 being communicatively coupled to the database 110. The database 110 stores pre-recorded explanation audio segments, each of which explains a document. The database 110 also stores materials in advance, including text, pictures and videos, which are used for generating and editing VR scenes.
According to one embodiment of the invention, the document includes one or more of a center console, a steering wheel, a dashboard, large screen content, navigation, reverse image, voice control, cell phone interconnection, air conditioning, vehicle television, multi-function rearview mirror, seat conditioning, seat heating and ventilation, air conditioning control, cell phone charging, gear shift, driving mode, parking, hill descent, engine start and stop, parking brake, storage space, front space, rear space, door panel function, door and window control, speaker, airbag, parking brake, center armrest, sunroof, vanity mirror visor, cup holder, wireless charging, storage space, seat down, back room space, and maintenance.
For example, the steering-wheel explanation for the Great Wall Motor Tank 300 is: the 12.3-inch full-LCD instrument panel has a clear display, but since this panel is not the final version, the UI style is for reference only. At present, the instrument panel can display information such as altitude, air pressure and heading.
It should be noted that the present invention is not limited by the manner in which computing device 200 is connected to database 110. For example, the computing device 200 may access the internet through a wired or wireless manner and connect with the database 110 through a data interface, so that the computing device 200 may acquire the lecture audio clip from the database 110 based on a network, and the computing device 200 may upload the generated lecture file to the database 110 based on the network.
The explanation files are used for explaining the display scenes, each display scene corresponds to one explanation file, and each explanation file corresponds to one or more explanation audio clips. According to one embodiment of the invention, the display scenario includes one or more of a driver's seat, a co-driver's seat, a rear row, and a trunk.
The database 110 may be a relational database such as MySQL or ACCESS, or a non-relational database such as NoSQL; it may be deployed across multiple geographic locations as a distributed database, such as HBase, or may reside in the computing device 200, so that the computing device 200 can obtain explanation audio clips directly from the database 110 and upload generated explanation files directly to it. In summary, the database 110 is used to store the explanation audio segments and the explanation files generated by the computing device 200, and the present invention does not limit its specific deployment and configuration.
The computing device 200 is used to generate an explanation scheme, and the explanation scheme generation system 100 of the present invention operates as follows: first, an explanation file is created and a timeline is generated for it; an explanation audio clip relevant to the theme content of a display scene is selected from the database 110 and added to the timeline without overlapping in time; the selected explanation audio clip is played in the VR scene, and the actions of the explanation audio clip are executed during playback to obtain action information; the action information is bound to the timeline; finally, the timeline, the explanation audio clip added to it, and the action information bound to it are stored in the explanation file, which is then uploaded to the database 110 for storage, yielding an explanation scheme. Explanation files are thus recorded separately for different display scenes, and the explanation audio and actions within an explanation file can also be recorded separately, which increases recording flexibility and allows the explanation audio clips and actions recorded in different display scenes to be edited afterwards.
FIG. 2 shows a block diagram of a computing device 200, according to one embodiment of the invention. As shown in FIG. 2, in a basic configuration 202, a computing device 200 typically includes a system memory 206 and one or more processors 204. A memory bus 208 may be used for communication between the processor 204 and the system memory 206.
Depending on the desired configuration, the processor 204 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a Digital Signal Processor (DSP), or any combination thereof. The processor 204 may include one or more levels of cache, such as a level one cache 210 and a level two cache 212, a processor core 214, and registers 216. Example processor cores 214 may include Arithmetic Logic Units (ALUs), Floating Point Units (FPUs), digital signal processing cores (DSP cores), or any combination thereof. An example memory controller 218 may be used with the processor 204, or in some implementations the memory controller 218 may be an internal part of the processor 204.
Depending on the desired configuration, system memory 206 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. Physical memory in the computing device is usually volatile RAM, and data on disk must be loaded into physical memory before it can be read by the processor 204. System memory 206 may include an operating system 220, one or more applications 222, and program data 224. In some implementations, the application 222 can be arranged to execute instructions on the operating system with the program data 224 by the one or more processors 204. Operating system 220 may be, for example, Linux, Windows, or the like, and includes program instructions for handling basic system services and for performing hardware-dependent tasks. The application 222 includes program instructions for implementing various user-desired functions; the application 222 may be, for example, but not limited to, a browser, instant messenger, or a software development tool (e.g., an integrated development environment IDE, a compiler, etc.). When the application 222 is installed into the computing device 200, a driver module may be added to the operating system 220.
When computing device 200 is started, processor 204 reads program instructions for operating system 220 from system memory 206 and executes them. Applications 222 run on top of operating system 220, utilizing the interface provided by operating system 220 and the underlying hardware to implement various user-desired functions. When a user launches an application 222, the application 222 is loaded into the system memory 206, and the processor 204 reads and executes the program instructions of the application 222 from the system memory 206.
Computing device 200 also includes storage device 232, storage device 232 including removable storage 236 and non-removable storage 238, each of removable storage 236 and non-removable storage 238 being connected to storage interface bus 234. Computing device 200 may also include an interface bus 240 that facilitates communication from various interface devices (e.g., output devices 242, peripheral interfaces 244, and communication devices 246) to the basic configuration 202 via the bus/interface controller 230. The example output device 242 includes a graphics processing unit 248 and an audio processing unit 250. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 252. Example peripheral interfaces 244 can include a serial interface controller 254 and a parallel interface controller 256, which can be configured to facilitate communications with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 258. An example communication device 246 may include a network controller 260, which may be arranged to facilitate communications with one or more other computing devices 262 over a network communication link via one or more communication ports 264.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, program modules, and may include any information delivery media, such as carrier waves or other transport mechanisms, in a modulated data signal. A "modulated data signal" may be a signal that has one or more of its data set or its changes made in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or private-wired network, and various wireless media such as acoustic, Radio Frequency (RF), microwave, Infrared (IR), or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
In a computing device 200 according to the present invention, the application 222 includes a plurality of program instructions that perform a VR scene interpretation scheme generation method that can instruct the processor 204 to perform the VR scene interpretation scheme generation method 400 of the present invention such that the computing device 200 generates an interpretation scheme by performing the VR scene interpretation scheme generation method 400 of the present invention.
Fig. 3 shows a schematic diagram of a VR scene interpretation scheme playback system 300 according to an embodiment of the present invention.
As shown in fig. 3, the VR scene interpretation scheme playing system 300 includes a database 310, a client 320, and a server 330. Server 330 is communicatively coupled to database 310 and client 320, respectively. It should be noted that the present invention does not limit the manner in which server 330 connects to database 310 and client 320. For example, the server 330 may access the internet in a wired or wireless manner and connect with the database 310 and the client 320 through data interfaces, respectively.
The database 310 stores an explanation file generated by the VR scene explanation scheme generating system 100 and an explanation scheme list including a plurality of explanation file names, each explanation file name corresponding to an explanation file, and each explanation file explaining one display scene. The server 330 can thus obtain the explanation scheme list and the explanation files from the database 310 over the network, and the client 320 can send a request for an explanation file to the server 330 over the network and receive the explanation file sent by the server 330. For example, when the VR scene explanation scheme playing system 300 is applied to the automotive field, the explanation scheme list is shown in table 1:
TABLE 1
Explanation file name
Driver's seat
Co-driver's seat
Rear row
Trunk
In one embodiment of the present invention, the database 310 may be a relational database such as MySQL or ACCESS, or a non-relational (NoSQL) database; it may be deployed across multiple geographic locations as a distributed database such as HBase, or may reside in server 330, so that server 330 can retrieve the explanation scheme list and the explanation files directly from database 310. In short, the database 310 is used to store the explanation files and the explanation scheme list, and the present invention does not limit its specific deployment and configuration.
The client 320 is a terminal device used by a user, and may be a personal computer such as a desktop or notebook computer, or a mobile phone, tablet computer, multimedia device, smart wearable device, or the like, but is not limited thereto. In one embodiment, the client 320 is a mobile terminal, such as a mobile phone or tablet computer, in which one or more mobile applications are installed that can play explanation files in a VR scene. After the application is installed, the client 320 may send an HTTP request to the server 330 (implemented as the computing device 200) for the explanation file corresponding to a presentation scene, and receive the response. The present invention is not limited to a specific use of the application. It should be noted that the number of clients 320 may be one or more, and the present invention is not limited in this respect.
Server 330 may be an application server, a Web server, or the like, and may also be implemented as a desktop computer, notebook computer, processor chip, tablet computer, etc., but is not limited thereto. Server 330 may retrieve data stored in database 310. For example, the server 330 may directly read the explanation files and the explanation scheme list in the database 310 (when the database 310 is a local database of the computing device 200), or may access the internet in a wired or wireless manner and obtain them through a data interface. The server 330 of the present invention may be implemented as a computing device 200, so that the VR scene interpretation scheme playing method of the present invention may be executed in the computing device 200; the structure of the computing device 200 is shown in fig. 2 and is not described again here.
The working process of the VR scene interpretation scheme playing system 300 of the present invention is as follows: in response to a user opening the explanation-file playing page in the mobile application of the client 320, the server 330 displays a VR scene, acquires the explanation scheme list from the database 310, and displays the list in the VR scene; then, in response to the user clicking an explanation file name in the list, the server 330 acquires the corresponding explanation file from the database 310 and plays its content in the VR scene.
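The workflow above can be sketched as a minimal request handler. The class and method names are assumptions for illustration only (the patent specifies no API), and a plain dict stands in for database 310.

```python
# Hypothetical sketch of the playing-system workflow: the server exposes the
# explanation scheme list on page open, then serves the file the user clicks.
class PlaybackServer:
    def __init__(self, database):
        # database maps explanation file name -> explanation file content
        self.database = database

    def get_scheme_list(self):
        # On opening the playing page: return the explanation scheme list
        return sorted(self.database.keys())

    def get_explanation_file(self, name):
        # On clicking a name in the list: fetch the corresponding file
        return self.database[name]

db = {"Driver's seat": {"timeline": []}, "Trunk": {"timeline": []}}
server = PlaybackServer(db)
scheme_list = server.get_scheme_list()
file = server.get_explanation_file("Trunk")
```

In a real deployment the two methods would be HTTP endpoints and the dict a database query, but the two-step shape (list, then file) is what the text describes.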
Therefore, based on the VR scene explanation scheme playing system, a user can freely select an explanation file — that is, choose a display scene — by clicking the name of the explanation file in the explanation scheme list on the VR scene page of the mobile application. While watching the content of an explanation file, the user can interrupt the explanation audio and actions at any time, freely drag the picture to change the viewing angle, and click any anchor point of interest, which improves interactivity while watching the explanation scheme. Moreover, each display scene corresponds to one explanation file, rather than all display scenes being explained through a single file; that is, the explanation file of each display scene is stored independently, which increases the speed at which the server 330 loads explanation files, avoids stalling while reading them, and improves the user experience.
Fig. 4 shows a schematic flow chart of a VR scene interpretation scheme generation method 400 according to an embodiment of the present invention. The method 400 can be applied to the field of vehicle explanation, that is, explaining vehicles, where each vehicle corresponds to one or more interpretation schemes that can be generated by the method 400. As shown in fig. 4, the method begins at step S410. Before step S410 is executed, the VR scene needs to be generated and edited as follows: obtain pre-stored materials from the database 110, create a VR scene based on the pictures in the materials, add anchor points to the created VR scene, and associate the added anchor points with the materials, thereby finishing editing of the VR scene.
An anchor point is a graphic button in the VR scene whose position is generally fixed; when a user clicks an anchor point, materials such as text, pictures, or videos are displayed in the VR scene. It should be noted that the present invention does not limit the implementation of creating a VR scene based on pictures in the material; any method in the prior art that can create a VR scene is within the scope of the present invention.
Then, in step S410, an explanation file is created, and a timeline is generated for it; that is, each explanation file corresponds to one timeline, and at this point the created explanation file contains only the timeline. Each created explanation file is assigned a name; the names must be distinct, but the naming convention is otherwise arbitrary. In one embodiment, the name corresponds to the presentation scene: for example, if the presentation scene is the driver's seat, the explanation file may be named "driver's seat". Each vehicle may include one or more presentation scenes and thus one or more explanation files; for example, a vehicle with two presentation scenes corresponds to two explanation files.
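Step S410 can be illustrated with a short sketch. The field names (`name`, `timeline`) are assumptions; only the distinct-name rule stated in the text is enforced.

```python
# Hypothetical model of step S410: a newly created explanation file holds only
# its timeline, and explanation file names must be distinct.
def create_explanation_file(name, existing_names):
    if name in existing_names:
        raise ValueError("explanation file names must be distinct")
    return {"name": name, "timeline": []}  # timeline is filled in later steps

f1 = create_explanation_file("Driver's seat", set())
f2 = create_explanation_file("Trunk", {"Driver's seat"})
```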
Next, in step S420, explanation audio clips related to the subject matter of the exhibition scene are selected from the database 110, and the selected clips are added to the timeline without overlapping in time. The order in which the explanation audio clips are added to the timeline is not limited, but the times the clips occupy on the timeline must not overlap.
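The no-overlap rule of step S420 can be sketched as an interval check. Representing clips as half-open `(start, end)` intervals is an assumption made for illustration.

```python
# Hypothetical sketch of step S420: a clip may join the timeline only if it
# does not overlap in time with any clip already added.
def add_clip(timeline, start, end):
    for s, e in timeline:
        if start < e and s < end:  # half-open intervals intersect
            raise ValueError("explanation audio clips must not overlap in time")
    timeline.append((start, end))
    timeline.sort()  # addition order is unrestricted, so keep the line sorted

tl = []
add_clip(tl, 10, 25)
add_clip(tl, 0, 10)   # touching endpoints do not overlap; adding order is free
```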
Subsequently, in step S430, the selected explanation audio clip is played in the VR scene, and the actions of the explanation audio clip are executed during playback to obtain action information. Specifically, while the selected clip plays, each action in it is executed, an action ID is created for the executed action, and the action time point, action data, and action type are recorded, yielding the action information. The action information thus includes: an action time point, which records when the action occurred; an action ID, which represents the order in which actions occur; action data, which records the content of the action so that the action can later be shown to the user during audio playback; and an action type, which represents the action category.
In one embodiment, the action types include one or more of playing an audio clip, showing scene switching, color switching, opening a door, closing a door, viewing angle switching, viewpoint switching, opening a picture anchor, closing picture browsing, opening a video anchor, closing video playing, clicking a price inquiry anchor, closing price inquiry, opening a chat robot anchor, and closing a chat robot page.
In one embodiment, the action ID is a code for the action that indicates the order of occurrence; the code may be arbitrary as long as the order of the actions can be distinguished. For example, suppose an explanation audio clip includes five actions — door opening, door closing, view switching, viewpoint switching, and picture anchor opening — occurring in the order: viewpoint switching, door opening, view switching, picture anchor opening, door closing. The codes for these five actions are then: door opening (2), door closing (5), view switching (3), viewpoint switching (1), picture anchor opening (4). Of course, the present invention does not limit the specific encoding of actions; any encoding realizable in the prior art is within the protection scope of the present invention, as long as it distinguishes the order of the actions.
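The five-action example above can be reproduced by numbering actions in order of occurrence. Sequential 1-based codes are just one valid choice, since the text only requires that the order be distinguishable.

```python
# Assign action IDs by order of occurrence (1-based), matching the example:
# viewpoint switching happens first, door closing last.
occurrence_order = [
    "viewpoint switching",
    "door opening",
    "view switching",
    "picture anchor opening",
    "door closing",
]
action_ids = {name: i + 1 for i, name in enumerate(occurrence_order)}
```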
Next, in step S440, the action information is bound to the timeline; in one embodiment, it is bound in the order given by the action time points and action IDs. Finally, in step S450, the timeline, the explanation audio clips added to it, and the action information bound to it are stored in the explanation file, thereby obtaining an explanation scheme, and the explanation file together with its name is stored in the database 110.
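The binding order of step S440 can be sketched as a sort keyed first on the action time point and then on the action ID; the record layout is an assumption carried over from the earlier description of the action information.

```python
# Hypothetical sketch of step S440: bind actions to the timeline in the order
# of their time points, breaking ties by action ID.
def bind_actions(actions):
    return sorted(actions, key=lambda a: (a["time_point"], a["action_id"]))

bound = bind_actions([
    {"time_point": 5.0, "action_id": 3, "action_type": "view switching"},
    {"time_point": 1.2, "action_id": 1, "action_type": "viewpoint switching"},
    {"time_point": 5.0, "action_id": 2, "action_type": "door opening"},
])
```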
According to the VR scene explanation scheme generation method 400 provided by the invention, multiple explanation files can be recorded for different display scenes, and the audio and actions of each file can be recorded separately. Compared with recording one complete explanation file for all display scenes with audio and actions recorded simultaneously, this makes generation of explanation schemes more flexible and effectively speeds up their production. Moreover, because each explanation file has its own timeline, the audio and actions added to the timeline can be edited and replaced, enabling later editing of the explanation file.
Fig. 5 shows a schematic flow diagram of a VR scene interpretation scheme playing method 500 according to an embodiment of the invention; the method 500 can be applied to playing vehicle-explanation content in the field of vehicle explanation. As shown in fig. 5, the method starts in step S510: in response to a user opening the explanation-file playing page on the client — specifically, in the mobile application of the client 320 — a VR scene is displayed.
It should be noted that the database 310 also stores materials, where the materials include text, pictures, and videos. A VR scene is created based on the pictures in the materials, anchor points are added to the created VR scene, and the added anchor points are associated with the materials, completing editing of the VR scene.
Next, in step S520, the explanation scheme list is obtained from the database 310 and displayed in the VR scene. The explanation scheme list includes a plurality of explanation file names, each corresponding to an explanation file that includes a timeline, the explanation audio clips added to the timeline, and the action information bound to the timeline; the explanation file is generated by the VR scene explanation scheme generation method 400 provided by the present invention. While the explanation scheme list is displayed, the explanation audio clips are read and loaded from the database 310 via their URLs. Because the clips are loaded in advance, they need not be loaded when an audio playing action is executed later, so the clips play more smoothly and the user experience is improved. Each display scene corresponds to one explanation file, and each explanation file corresponds to one or more explanation audio clips rather than the complete audio of all scenes, so reading and loading the clips is fast.
The list of the explanation schemes is shown in table 1, and will not be described herein.
Subsequently, in step S530, in response to the user clicking an explanation file name in the explanation scheme list, the corresponding explanation file is acquired from the database 310. Finally, in step S540, the acquired explanation file content is played in the VR scene. The explanation file includes a timeline, the explanation audio clips added to the timeline, and the action information bound to the timeline, where the action information includes an action time point, an action ID, an action type, and action data. The action time point records when the action occurred, the action ID represents the order of occurrence, the action data records the content of the action, and the action type represents the action category.
In one embodiment, the step of playing the acquired explanation file content in the VR scene includes: in the VR scene, playing the explanation audio clips through a player according to the timeline of the acquired explanation file, recording the playing time of the clip, and reading the action ID of the currently playing action, where the currently playing action is executed based on its action data. At intervals of a first preset time, the timeline of the currently playing explanation audio clip is traversed, and it is judged whether an action ID in the timeline is larger than the currently playing action ID. If so, it is further judged whether that action's time point is smaller than the sum of the playing time of the explanation file and the first preset time; if it is, the action category is judged according to the action type, and the action with that ID is played according to the playing method of its category; otherwise, the current action continues to play. Checking the currently executed action at every interval of the first preset time ensures that, while the explanation audio plays, the synchronization error between audio and actions stays below the first preset time, realizing complete playback of the actions.
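The periodic check described above can be sketched as a pure function run once every `first_preset_time`. The record layout and names are assumptions, and a real player would additionally dispatch each due action by its action type.

```python
# Hypothetical sketch of the step S540 synchronization check: an action is due
# when its ID is past the currently playing one and its time point falls
# before play_time + first_preset_time.
def due_actions(timeline, current_id, play_time, first_preset_time):
    due = []
    for action in timeline:
        if (action["action_id"] > current_id
                and action["time_point"] < play_time + first_preset_time):
            due.append(action)
    return due

timeline = [
    {"action_id": 1, "time_point": 0.00, "action_type": "play audio"},
    {"action_id": 2, "time_point": 1.50, "action_type": "door opening"},
    {"action_id": 3, "time_point": 4.00, "action_type": "view switching"},
]
# 1.495 s into playback with a 10 ms check interval, action 2 becomes due,
# so the audio/action synchronization error stays below 10 ms.
ready = due_actions(timeline, current_id=1, play_time=1.495,
                    first_preset_time=0.01)
```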
The first preset time is set according to specific situations, and is not limited herein, and in an embodiment of the present invention, the first preset time may be 10 ms. The action types here are the same as those in the VR scene interpretation scheme generation method 400, and are not described here again.
If the action type is playing audio, the already loaded explanation audio clip is played through the player. Because the clip was read and loaded from the database 310 in advance, while the explanation scheme list was displayed, it plays without delay, the user perceives no stalling, and the user experience is improved.
If the action type is scene switching, color switching, or vehicle door opening or closing, the corresponding action is played back in the VR scene and the corresponding picture in the material is loaded. If the action type is view angle switching or viewpoint switching, the view switches to a different angle or viewpoint, and a transition animation lasting 1 second is played during the switch so that the user has a natural and smooth experience. If the action type is opening an anchor point, closing an anchor point, price inquiry, or opening or closing the chat robot, different page elements are loaded, and different blocks and dialog boxes are displayed in the VR scene.
During playback of the explanation audio clips and actions, the user can operate the keyboard and mouse at any time. Specifically: the server 330 may respond to the user triggering the pause anchor point at the client 320 by pausing (or resuming) the playing of the explanation audio clip and stopping the recording of its playing time. In response to the user moving the mouse in the VR scene, the playing of actions is paused while the explanation audio continues to play; the time the mouse has been moving is counted, and it is judged whether this time is greater than a second preset time. If not, the actions remain paused; if so, step S540 is executed. Through this process, user interactivity is preserved while watching the explanation scheme. The second preset time is set according to the actual situation and is not limited by the present invention; for example, it may be 5 seconds.
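The interaction rules above can be sketched as a tiny state holder. The class and method names are assumptions, and the 5-second default is taken from the one embodiment mentioned in the text.

```python
# Hypothetical sketch of the interaction handling: the pause anchor pauses the
# audio; moving the mouse pauses only the actions, and full playback resumes
# once the move time exceeds the second preset time.
class PlaybackState:
    def __init__(self, second_preset_time=5.0):
        self.second_preset_time = second_preset_time
        self.audio_playing = True
        self.actions_playing = True

    def on_pause_anchor(self):
        self.audio_playing = False   # also stop recording the playing time

    def on_mouse_move(self, move_time):
        if move_time > self.second_preset_time:
            self.actions_playing = True   # resume full playback (step S540)
        else:
            self.actions_playing = False  # actions pause, audio keeps playing

state = PlaybackState()
state.on_mouse_move(2.0)            # brief drag: actions stay paused
paused = state.actions_playing
state.on_mouse_move(6.0)            # drag longer than 5 s: playback resumes
resumed = state.actions_playing
```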
According to the VR scene explanation scheme playing method based on the above generation method, a user can freely select an explanation file by clicking its name in the explanation scheme list in the VR scene, and while watching the file's content can interrupt the explanation audio and actions at any time, freely drag the view angle, and click any anchor point of interest, which improves interactivity while watching the explanation scheme and improves the user experience.
Each display scene corresponds to one explanation file, rather than all display scenes being explained through a single file; that is, the explanation file of each display scene is stored independently, which increases the speed at which the server 330 loads explanation files, avoids stalling while reading them, and improves the user experience. In addition, because the explanation audio clips are read and loaded in advance, they do not need to be loaded again while the actions of an explanation audio clip are being played, so no stalling occurs during playback of the clips, further improving the user experience.
A8 the method of A1, wherein the database stores materials in advance, the materials including text, pictures and videos, the method further comprising the steps of:
acquiring the materials from the database;
creating a VR scene based on pictures in the material;
adding an anchor point in the created VR scene;
and associating the added anchor points with the material to finish the editing of the VR scene.
A10 the method of A9, wherein the explanation file includes a timeline, an explanation audio clip added to the timeline, and action information bound to the timeline, and the playing the content of the obtained explanation file in the VR scene includes:
in the VR scene, the explanation audio clip and the action information are played according to the time line of the acquired explanation file.
A11 the method of A10, wherein the action information includes action time points, action IDs and action data, the action time points recording action occurrence times, the action IDs indicating the order of action occurrence, and the action data recording the content of actions, and the step of playing the explanation audio clip and the action information includes:
playing the explanation audio clip;
recording the playing time of the explaining audio clip;
reading an action ID of a currently playing action, wherein the currently playing action is executed based on the action data;
traversing the timeline of the currently playing explanation audio clip at intervals of a first preset time, judging whether an action ID in the timeline is larger than the currently playing action ID, if so, continuing to judge whether the action time point is smaller than the sum of the playing time of the explanation file and the first preset time, if so, playing the action of that action ID in the timeline, and if not, continuing to play the current action.
A12 the method of A11, wherein the action information further includes an action type, the method further comprising the steps of:
judging the action category according to the action type;
and playing the action of the corresponding category according to different playing methods.
A13 the method of A12, wherein the action types include one or more of playing an audio clip, showing scene switching, color switching, opening a door, closing a door, viewing angle switching, viewpoint switching, opening a picture anchor, closing picture browsing, opening a video anchor, closing video playback, clicking a price inquiry anchor, closing a price inquiry, opening a chat robot anchor, and closing a chat robot page.
A14 the method of any one of A9 to A13, further comprising the steps of:
responding to the user to trigger a pause anchor point at the client, pausing the playing of the explained audio clip, and stopping recording the playing time of the explained audio clip;
responding to the operation that a user moves a mouse in a VR scene, and pausing playing action;
counting the time for moving the mouse;
and judging whether the time for moving the mouse is greater than second preset time, if not, continuing to pause the playing action, and if so, playing the explanation audio clip and the action information in the VR scene according to the acquired time line of the explanation file.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to execute the VR scene interpretation scheme generation method and the VR scene interpretation scheme playing method of the present invention according to instructions in the program code stored in the memory.
By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embody computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with examples of this invention. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.

Claims (10)

1. A VR scene interpretation scheme generation method implemented in a computing device, the computing device communicatively coupled to a database, the database storing pre-recorded interpretation audio clips, wherein an interpretation file is used to interpret display scenes, each display scene corresponds to an interpretation file, each interpretation file corresponds to one or more interpretation audio clips, and each interpretation audio clip interprets a document, the method comprising the steps of:
creating an explanation file and generating a timeline for the created explanation file;
selecting, from the database, explanation audio clips related to the theme content of the display scene, and adding the selected explanation audio clips to the timeline without overlap in time;
playing the selected explanation audio clips in the VR scene, and executing actions associated with the explanation audio clips while they are playing, to obtain action information;
binding the action information to the timeline; and
storing the timeline, the explanation audio clips added to the timeline, and the action information bound to the timeline in the explanation file, thereby obtaining an explanation scheme.
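The timeline construction of claim 1 can be sketched as follows. This is a minimal illustration, not the claimed implementation; the names `AudioClip`, `ExplanationFile`, and `add_clip` are hypothetical, and the overlap check encodes the "without overlap in time" requirement:

```python
from dataclasses import dataclass, field

@dataclass
class AudioClip:
    document: str      # the component the clip explains, e.g. "steering wheel"
    start: float       # seconds from the timeline origin
    duration: float    # clip length in seconds

@dataclass
class ExplanationFile:
    scene: str                                    # display scene, e.g. "driver's seat"
    timeline: list = field(default_factory=list)  # non-overlapping AudioClips

    def add_clip(self, clip):
        # Reject any clip whose interval overlaps a clip already on the timeline.
        for other in self.timeline:
            if clip.start < other.start + other.duration and \
               other.start < clip.start + clip.duration:
                raise ValueError("clips on a timeline must not overlap in time")
        self.timeline.append(clip)
        self.timeline.sort(key=lambda c: c.start)

f = ExplanationFile(scene="driver's seat")
f.add_clip(AudioClip("steering wheel", 0.0, 5.0))
f.add_clip(AudioClip("dashboard", 5.0, 4.0))
print([c.document for c in f.timeline])
```

A clip starting exactly when the previous one ends (here at 5.0 s) is accepted, since the claimed constraint only forbids overlap, not adjacency.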
2. The method of claim 1, further comprising the step of:
creating an action ID for each action executed for an explanation audio clip, wherein the action ID represents the sequential order of the action.
3. The method of claim 2, wherein the action information includes an action time point, an action ID, action data, and an action type, the action time point recording when the action occurred, the action data recording the content of the action, and the action type representing the category of the action;
wherein the step of binding the action information to the timeline comprises:
binding the action information to the timeline in the order of the action time points and the action IDs.
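The ordered binding of claim 3 amounts to a sort keyed on time point first and action ID second. A sketch under assumed names (`ActionInfo` and `bind_actions` are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class ActionInfo:
    time_point: float   # when the action occurred on the timeline
    action_id: int      # creation order of the action (claim 2)
    action_type: str    # category, e.g. "color switch"
    action_data: dict   # content of the action

def bind_actions(timeline_actions, new_actions):
    """Merge new action info into the timeline's action list,
    ordered by action time point first, then by action ID."""
    timeline_actions.extend(new_actions)
    timeline_actions.sort(key=lambda a: (a.time_point, a.action_id))
    return timeline_actions

bound = bind_actions([], [
    ActionInfo(3.0, 2, "open door", {"door": "front-left"}),
    ActionInfo(1.5, 1, "color switch", {"color": "red"}),
    ActionInfo(3.0, 1, "switch perspective", {"view": "rear row"}),
])
print([a.action_type for a in bound])
```

Ties at the same time point (both actions at 3.0 s above) are broken by the action ID, so simultaneous actions replay in the order they were recorded.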
4. The method of claim 1, further comprising the steps of:
assigning a name to the explanation file, wherein the name corresponds to the display scene; and
storing the explanation file, which contains the timeline, the explanation audio clips, and the action information, together with the name of the explanation file in the database.
5. The method of any one of claims 1 to 4, wherein the action types include one or more of: playing an audio clip, switching the display scene, switching colors, opening a door, closing a door, switching the perspective, switching the viewpoint, opening a picture anchor, closing a picture view, opening a video anchor, closing video playback, clicking an inquiry anchor, closing a price inquiry, opening a chat robot anchor, and closing a chat robot page.
6. The method of any one of claims 1 to 5, wherein the display scene comprises one or more of: a driver's seat, a front passenger's seat, a rear row, and a trunk.
7. The method of any one of claims 1 to 6, wherein the document comprises one or more of: a center console, steering wheel, dashboard, large-screen content, navigation, reversing camera, voice control, cell phone interconnection, air conditioning, on-board television, multi-function rearview mirror, seat adjustment, seat heating and ventilation, air conditioning control, cell phone charging, shift knob, driving mode, parking, hill descent, engine start-stop, parking brake, storage space, front-row space, rear-row space, door panel function, door control, speaker, airbag, center armrest, sunroof, vanity mirror visor, cup holder, wireless charging, seat folding, trunk space, and maintenance.
8. A VR scene explanation method executed in a server, wherein the server is communicatively connected to a client and to a database, the database stores an explanation scheme list, the explanation scheme list includes a plurality of explanation file names, each explanation file name corresponds to one explanation file, and each explanation file is generated by the VR scene explanation scheme generation method of any one of claims 1 to 7, the method comprising the steps of:
displaying a VR scene in response to an operation of the user opening, at the client, a page for playing an explanation file;
acquiring the explanation scheme list from the database, and displaying the explanation scheme list in the VR scene;
acquiring the corresponding explanation file from the database in response to the user clicking an explanation file name in the explanation scheme list; and
playing the content of the acquired explanation file in the VR scene.
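The server-side flow of claim 8 can be sketched as follows. This is an assumed, minimal stand-in: `db` is a hypothetical in-memory substitute for the database, and `get_scheme_list` and `play` are illustrative names; the real system would query a database and stream events to the client:

```python
# Hypothetical in-memory stand-in for the database of claim 8:
# explanation file name -> explanation file content.
db = {
    "driver's seat tour": {
        "scene": "driver's seat",
        "clips": [{"start": 0.0, "document": "steering wheel"}],
        "actions": [{"time_point": 1.5, "action_type": "color switch"}],
    },
    "trunk tour": {"scene": "trunk", "clips": [], "actions": []},
}

def get_scheme_list(database):
    """Fetch the explanation scheme list (explanation file names) for display."""
    return sorted(database.keys())

def play(explanation_file):
    """Replay the file's clips and bound actions in time order,
    as (time, kind, payload) events a client-side player would render."""
    events = [(c["start"], "clip", c["document"])
              for c in explanation_file["clips"]]
    events += [(a["time_point"], "action", a["action_type"])
               for a in explanation_file["actions"]]
    return sorted(events)

print(get_scheme_list(db))
print(play(db["driver's seat tour"]))
```

Merging clips and actions into one time-sorted event stream is one natural way to honor the timeline binding of claims 1 and 3 at playback time.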
9. A computing device, comprising:
at least one processor; and
a memory storing program instructions configured to be executed by the at least one processor, the program instructions comprising instructions for performing the method of any one of claims 1 to 8.
10. A readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the method of any one of claims 1 to 8.
CN202110195568.3A 2021-02-19 2021-02-19 VR scene explanation scheme generation method Active CN112987921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110195568.3A CN112987921B (en) 2021-02-19 2021-02-19 VR scene explanation scheme generation method


Publications (2)

Publication Number Publication Date
CN112987921A true CN112987921A (en) 2021-06-18
CN112987921B CN112987921B (en) 2024-03-15

Family

ID=76394259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110195568.3A Active CN112987921B (en) 2021-02-19 2021-02-19 VR scene explanation scheme generation method

Country Status (1)

Country Link
CN (1) CN112987921B (en)


Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1696923A (en) * 2004-05-10 2005-11-16 北京大学 Networked, multimedia synchronous composed storage and issuance system, and method for implementing the system
CN101213606A (en) * 2005-07-01 2008-07-02 微软公司 Synchronization aspects of interactive multimedia presentation management
CN102142215A (en) * 2011-03-15 2011-08-03 南京师范大学 Adaptive geographic information voice explanation method based on position and speed
CN102449680A (en) * 2009-05-26 2012-05-09 松下电器产业株式会社 Information presentation device
CN105261252A (en) * 2015-11-18 2016-01-20 闫健 Panoramic learning platform system-based real-time action rendering method
CN105306861A (en) * 2015-10-15 2016-02-03 深圳市时尚德源文化传播有限公司 Online teaching recording and playing method and system
US20160269712A1 (en) * 2010-06-30 2016-09-15 Lewis S. Ostrover Method and apparatus for generating virtual or augmented reality presentations with 3d audio positioning
CN106530392A (en) * 2016-10-20 2017-03-22 中国农业大学 Method and system for interactive display of cultivation culture virtual scene
CN106790498A (en) * 2016-12-15 2017-05-31 深圳市金溢科技股份有限公司 Vehicle-mounted voice intercommunication method, V2X car-mounted terminals and voice inter-speaking system
CN108241461A (en) * 2016-12-26 2018-07-03 北京奇虎科技有限公司 A kind of online method and apparatus for making the manuscript containing audio presentation
CN108241596A (en) * 2016-12-26 2018-07-03 北京奇虎科技有限公司 The production method and device of a kind of PowerPoint
CN110209957A (en) * 2019-06-06 2019-09-06 北京猎户星空科技有限公司 Explanation method, apparatus, equipment and storage medium based on intelligent robot
CN110216693A (en) * 2019-06-21 2019-09-10 北京猎户星空科技有限公司 Explanation method, apparatus, equipment and storage medium based on intelligent robot
CN110488979A (en) * 2019-08-23 2019-11-22 北京枭龙科技有限公司 A kind of automobile showing system based on augmented reality
CN111638796A (en) * 2020-06-05 2020-09-08 浙江商汤科技开发有限公司 Virtual object display method and device, computer equipment and storage medium
CN111640171A (en) * 2020-06-10 2020-09-08 浙江商汤科技开发有限公司 Historical scene explaining method and device, electronic equipment and storage medium
CN111798544A (en) * 2020-07-07 2020-10-20 江西科骏实业有限公司 Visual VR content editing system and using method
WO2021008479A1 (en) * 2019-07-18 2021-01-21 乐播新瑞(北京)文化传媒有限公司 Audio generation method and system, audio playing method and system, and central control system and audio playing device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113449122A (en) * 2021-07-09 2021-09-28 广州浩传网络科技有限公司 Method and device for generating explanation content of three-dimensional scene graph
CN113449122B (en) * 2021-07-09 2023-01-17 广州浩传网络科技有限公司 Method and device for generating explanation content of three-dimensional scene graph


Similar Documents

Publication Publication Date Title
US10545636B2 (en) Taskbar media player
CN109275028B (en) Video acquisition method, device, terminal and medium
US7987423B2 (en) Personalized slide show generation
JP4638913B2 (en) Multi-plane 3D user interface
CN110908625A (en) Multi-screen display method, device, equipment, system, cabin and storage medium
US20030160814A1 (en) Slide show presentation and method for viewing same
GB2590545A (en) Video photographing method and apparatus, electronic device and computer readable storage medium
US7903110B2 (en) Photo mantel view and animation
CN114746941A (en) System and method for adding virtual audio tags to video
US20150278181A1 (en) Method and system for creating multimedia presentation prototypes
US11392287B2 Method, device, and storage medium for switching among multimedia resources
CN111800668A (en) Bullet screen processing method, device, equipment and storage medium
US9773524B1 (en) Video editing using mobile terminal and remote computer
US11341096B2 (en) Presenting and editing recent content in a window during an execution of a content application
CN112987921B (en) VR scene explanation scheme generation method
US20220303459A1 (en) Enhancing quality of multimedia
US20220303457A1 (en) Multimedia quality evaluation
US20210035583A1 (en) Smart device and method for controlling same
CN115328364A (en) Information sharing method and device, storage medium and electronic equipment
US20090113507A1 (en) Media System for Facilitating Interaction with Media Data Across a Plurality of Media Devices
CN113392260B (en) Interface display control method, device, medium and electronic equipment
KR101827863B1 (en) System and method for providing multimedia contents
KR20160115024A (en) System and method for generating cartoon data
KR20200022995A (en) Content production system
US11716531B2 (en) Quality of multimedia

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant