CN112987921B - VR scene explanation scheme generation method - Google Patents

VR scene explanation scheme generation method

Info

Publication number
CN112987921B
CN112987921B (application CN202110195568.3A)
Authority
CN
China
Prior art keywords
explanation
action
playing
scene
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110195568.3A
Other languages
Chinese (zh)
Other versions
CN112987921A (en)
Inventor
战立涛
李广朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chezhi Interconnection Beijing Technology Co ltd
Original Assignee
Chezhi Interconnection Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chezhi Interconnection Beijing Technology Co ltd filed Critical Chezhi Interconnection Beijing Technology Co ltd
Priority to CN202110195568.3A priority Critical patent/CN112987921B/en
Publication of CN112987921A publication Critical patent/CN112987921A/en
Application granted granted Critical
Publication of CN112987921B publication Critical patent/CN112987921B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F16/639 — Information retrieval of audio data; presentation of query results using playlists
    • G06F16/685 — Retrieval of audio data characterised by using metadata automatically derived from the content, e.g. transcripts of audio data
    • G06F16/686 — Retrieval of audio data characterised by using manually generated metadata, e.g. tags, keywords, comments, title or artist information
    • G06F3/167 — Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Abstract

The invention discloses a VR scene explanation scheme generation method, which comprises the following steps: creating an explanation file and generating a timeline for it; selecting explanation audio clips related to the subject matter of the display scene from a database, and adding the selected clips to the timeline so that they do not overlap; in the VR scene, playing each selected explanation audio clip and executing its actions during playback to obtain action information; binding the action information to the timeline; and storing the timeline, the explanation audio clips added to it, and the action information bound to it in the explanation file, thereby obtaining an explanation scheme. Because the corresponding audio and actions are recorded separately for each display scene, rather than recording one complete explanation file for all display scenes with audio and actions captured simultaneously, generation of the explanation scheme is more flexible and the result can be edited afterwards.

Description

VR scene explanation scheme generation method
Technical Field
The invention relates to the technical field of VR (virtual reality), in particular to a VR scene explanation scheme generating method and a VR scene explanation scheme playing method.
Background
With the continuous development of VR technology, specific targets, such as vehicles, are increasingly explained in VR scenes. Current methods for explaining a target in a VR scene mainly comprise two stages: recording an explanation file and playing it back. In the recording stage, a narrator records the explanation audio for all scenes while performing the corresponding actions, for example clicking or dragging the VR picture with a mouse to show a specific angle; the explanation audio, the action data, and the action times are recorded together. In the playback stage, the explanation audio and the action data are played synchronously, completely reproducing the narrator's audio and the viewing angles for all scenes.
However, this requires the narrator to record the audio and perform the actions simultaneously and to complete the whole explanation in a single take, which makes recording inflexible, leaves little room for error, and rules out later editing. Moreover, while the audio and actions are being played back, the user cannot perform any actions of their own, so interactivity is poor.
Therefore, a more flexible and more interactive method for generating and playing VR scene explanation schemes needs to be provided to users.
Disclosure of Invention
To this end, the present invention provides a solution to, or at least an alleviation of, at least one of the problems presented above.
According to one aspect of the present invention, there is provided a VR scene explanation scheme generation method, executed in a computing device, where the computing device is communicatively connected to a database, and the database stores prerecorded explanation audio clips. Explanation files are used to explain display scenes; each display scene corresponds to one explanation file, each explanation file corresponds to one or more explanation audio clips, and each explanation audio clip explains one topic. The method includes the steps of:
creating an explanation file, and generating a timeline for the created explanation file;
selecting explanation audio clips related to the subject matter of the display scene from the database, and adding the selected explanation audio clips to the timeline so that they do not overlap;
in the VR scene, playing each selected explanation audio clip, and executing the actions of the explanation audio clip during playback to obtain action information;
binding the action information to the timeline;
and storing the timeline, the explanation audio clips added to the timeline, and the action information bound to the timeline in the explanation file, thereby obtaining an explanation scheme.
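As a concrete illustration, the following TypeScript sketch shows one possible shape for the data these steps produce; all type and field names are assumptions, since the patent does not specify a serialization format:

```typescript
// Hypothetical data model for an explanation file; field names are
// illustrative, not taken from the patent text.
interface ActionInfo {
  actionId: number;      // order in which the action occurs
  timePoint: number;     // action occurrence time on the timeline, in ms
  actionType: string;    // category, e.g. "openDoor", "switchViewpoint"
  actionData: unknown;   // content needed to replay the action
}

interface AudioClipEntry {
  clipId: string;        // reference to a prerecorded clip in the database
  start: number;         // position on the timeline, in ms
  duration: number;      // clip length, in ms; entries must not overlap
}

interface ExplanationFile {
  name: string;          // corresponds to one display scene
  timeline: {
    audioClips: AudioClipEntry[];
    actions: ActionInfo[];
  };
}
```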
Optionally, the method further comprises the steps of:
creating an action ID for each action performed during the explanation audio clip, wherein the action ID indicates the order in which the actions occur.
Optionally, the action information includes an action time point, an action ID, action data, and an action type, where the action time point records when the action occurs, the action data records the content of the action, and the action type represents the action's category;
wherein the step of binding the action information to the timeline comprises:
binding the action information to the timeline according to the action time points and the order of the action IDs.
Optionally, the method further comprises the steps of:
assigning names to the explanation files, wherein each name corresponds to a display scene;
and storing the names of the explanation files, which hold the timeline, the explanation audio clips, and the action information, in the database.
Optionally, the action type includes one or more of: playing an audio clip, switching the display scene, switching a color, opening a door, closing a door, switching the view angle, switching the viewpoint, opening a picture anchor point, closing picture browsing, opening a video anchor point, closing video playback, clicking a price-inquiry anchor point, closing the price inquiry, opening a chat-robot anchor point, and closing the chat-robot page.
Optionally, the display scene includes one or more of the driver's seat, the front passenger seat, the rear row, and the trunk.
Optionally, the topics include one or more of: center console, steering wheel, instrument panel, large-screen content, navigation, reversing camera, voice control, mobile phone interconnection, air-conditioning adjustment, in-car television, multifunction rear-view mirror, seat adjustment, seat heating and ventilation, air-conditioning control, mobile phone charging, gear lever, driving mode, parking, hill descent control, engine start-stop, parking brake, storage space, front-row space, rear-row space, door panel functions, door and window controls, speakers, airbags, center armrest, sunroof, vanity-mirror sun visor, cup holder, wireless charging, seat folding, trunk space, and maintenance.
Optionally, the database stores materials in advance, the materials including text, pictures, and videos, and the method further includes the steps of:
acquiring the materials from the database;
creating a VR scene based on the pictures in the materials;
adding anchor points to the created VR scene;
and associating the added anchor points with the materials to finish editing the VR scene.
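A minimal sketch of this scene-editing flow follows; the type and function names are assumptions, since the patent does not name a scene API:

```typescript
// Hypothetical scene-editing flow: create a scene from pictures, add an
// anchor point, and associate the anchor with a material.
interface Material {
  id: string;
  kind: "text" | "picture" | "video";
  url: string;
}

interface Anchor {
  id: string;
  position: { x: number; y: number; z: number }; // fixed spot in the scene
  materialId?: string;                            // material shown on click
}

class VrScene {
  readonly anchors: Anchor[] = [];
  // the scene is created from the pictures in the materials
  constructor(readonly panoramaPictures: Material[]) {}

  addAnchor(anchor: Anchor): void {
    this.anchors.push(anchor);
  }

  associate(anchorId: string, material: Material): void {
    const anchor = this.anchors.find(a => a.id === anchorId);
    if (anchor) anchor.materialId = material.id; // clicking shows the material
  }
}

function editScene(materials: Material[]): VrScene {
  const pictures = materials.filter(m => m.kind === "picture");
  const scene = new VrScene(pictures);           // create the VR scene
  scene.addAnchor({ id: "anchor-1", position: { x: 0, y: 1.5, z: -2 } });
  if (materials.length > 0) {
    scene.associate("anchor-1", materials[0]);   // associate anchor with material
  }
  return scene;                                  // editing finished
}
```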
According to another aspect of the present invention, there is provided a VR scene explanation scheme playing method, executed in a server, the server being communicatively connected to a client and to a database, the database storing an explanation scheme list comprising a plurality of explanation file names, each explanation file name corresponding to an explanation file, each explanation file being generated by the VR scene explanation scheme generation method of any one of claims 1 to 8, the method comprising the steps of:
in response to a user opening the explanation-file playing page at the client, displaying a VR scene;
acquiring the explanation scheme list from the database, and displaying it in the VR scene;
in response to the user clicking the name of an explanation file in the explanation scheme list, acquiring the corresponding explanation file from the database;
and playing the acquired explanation file content in the VR scene.
Optionally, the explanation file includes a timeline, the explanation audio clips added to the timeline, and the action information bound to the timeline, and the step of playing the acquired explanation file content in the VR scene comprises:
in the VR scene, playing the explanation audio clips and the action information according to the timeline of the acquired explanation file.
Optionally, the action information includes an action time point, an action ID, and action data, where the action time point records when the action occurs, the action ID indicates the order in which the actions occur, and the action data records the content of the action, and the step of playing the explanation audio clips and the action information comprises:
playing the explanation audio clip;
recording the playing time of the explanation audio clip;
reading the action ID of the currently playing action, wherein the currently playing action is executed based on the action data;
traversing the timeline of the currently playing explanation audio clip at every first preset time interval, and judging whether an action ID in the timeline is larger than the currently playing action ID; if so, further judging whether that action's time point is smaller than the sum of the playing time of the explanation file and the first preset time; if so, playing the action with that action ID in the timeline, and if not, continuing to play the current action.
Optionally, the action information further includes an action type, and the method further comprises the steps of:
judging the action category, wherein the action category is determined according to the action type;
and playing the actions of each category according to a different playing method.
Optionally, the action type includes one or more of: playing an audio clip, switching the display scene, switching a color, opening a door, closing a door, switching the view angle, switching the viewpoint, opening a picture anchor point, closing picture browsing, opening a video anchor point, closing video playback, clicking a price-inquiry anchor point, closing the price inquiry, opening a chat-robot anchor point, and closing the chat-robot page.
Optionally, the method further comprises the steps of:
in response to the user triggering a pause anchor point at the client, pausing the playback of the explanation audio clip and stopping the recording of its playing time;
in response to the user moving the mouse in the VR scene, suspending the playback of actions;
counting the time for which the mouse has been moving;
and judging whether the mouse-movement time is longer than a second preset time; if not, continuing to suspend the playback of actions, and if so, executing the step of playing the explanation audio clips and the action information in the VR scene according to the timeline of the acquired explanation file.
According to yet another aspect of the present invention, there is provided a computing device comprising at least one processor; and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor, the program instructions comprising instructions for performing the above-described method according to the invention.
According to yet another aspect of the present invention, there is provided a readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to perform the above-described method of the present invention.
According to the VR scene explanation scheme generation method provided by the invention, the corresponding audio and actions can be recorded separately for each display scene. Compared with recording one complete explanation file for all display scenes, with the audio and actions in that file captured simultaneously, generation of the explanation scheme is more flexible and the result can be edited afterwards.
In addition, with the VR scene explanation scheme playing method based on this generation method, a user can freely choose an explanation file by clicking its name in the explanation scheme list in the VR scene; while watching the content of the explanation file, the user can interrupt the explanation audio and actions at any time, freely drag the picture to change the viewing angle, and click any anchor point of interest. Interactivity is thus preserved while the user watches the explanation scheme, improving the user experience.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which set forth the various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to fall within the scope of the claimed subject matter. The above, as well as additional objects, features, and advantages of the present disclosure will become more apparent from the following detailed description when read in conjunction with the accompanying drawings. Like reference numerals generally refer to like parts or elements throughout the present disclosure.
FIG. 1 shows a schematic diagram of a VR scene explanation scheme generation system 100 according to one embodiment of the present invention;
FIG. 2 shows a schematic diagram of a computing device 200 according to one embodiment of the invention;
FIG. 3 shows a schematic diagram of a VR scene explanation scheme playing system 300 according to one embodiment of the present invention;
FIG. 4 shows a flow diagram of a VR scene explanation scheme generation method 400 according to one embodiment of the present invention;
FIG. 5 shows a flow diagram of a VR scene explanation scheme playing method 500 according to one embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
With the continuous development of VR technology, specific targets, such as vehicles, are increasingly explained in VR scenes. Current methods for explaining a target in a VR scene mainly comprise two stages: recording an explanation file and playing it back. In the recording stage, a narrator records the explanation audio for all scenes while performing the corresponding actions, for example clicking or dragging the VR picture with a mouse to show a specific angle; the explanation audio, the action data, and the action times are recorded together. In the playback stage, the explanation audio and the action data are played synchronously according to the action times, completely reproducing the narrator's audio and operation actions for all scenes.
However, the narrator is usually required to record one complete explanation scheme for all display scenes, performing the explanation audio and the operation actions simultaneously and finishing the whole explanation in a single take, so recording is inflexible, fault tolerance is poor, and later editing is impossible. In addition, because the audio and actions of the complete flow are stored together, the data volume is large, so loading is slow at playback time, stuttering easily occurs, and the user experience is poor. Moreover, while the explanation audio and operation actions are being played, the user can neither operate on the playback page nor choose which display scene to play, so interactivity and user experience are poor.
Accordingly, to solve the above technical problems, the present invention proposes a more flexible VR scene explanation scheme generation system 100 and a highly interactive VR scene explanation scheme playing system 300.
Fig. 1 shows a schematic diagram of a VR scene explanation scheme generation system 100 according to one embodiment of the invention.
As shown in fig. 1, the VR scene explanation scheme generation system 100 includes a computing device 200 and a database 110, the computing device 200 being communicatively coupled to the database 110. The database 110 stores prerecorded explanation audio clips, each explanation audio clip explaining one topic. The database 110 also stores materials in advance, the materials comprising text, pictures, and videos, which are used for generating and editing VR scenes.
According to one embodiment of the invention, the topics include one or more of: center console, steering wheel, instrument panel, large-screen content, navigation, reversing camera, voice control, mobile phone interconnection, air-conditioning adjustment, in-car television, multifunction rear-view mirror, seat adjustment, seat heating and ventilation, air-conditioning control, mobile phone charging, gear lever, driving mode, parking, hill descent control, engine start-stop, parking brake, storage space, front-row space, rear-row space, door panel functions, door and window controls, speakers, airbags, center armrest, sunroof, vanity-mirror sun visor, cup holder, wireless charging, seat folding, trunk space, and maintenance.
For example, the steering wheel explanation text for the Great Wall Tank 300 reads: the 12.3-inch full-LCD instrument panel has a clear display, but since this is not the final version, the UI style is for reference only. At present, the instrument panel can display information such as altitude, air pressure, and heading.
It should be noted that the present invention does not limit the manner in which the computing device 200 is connected to the database 110. For example, the computing device 200 may access the internet in a wired or wireless manner and connect to the database 110 through a data interface, so that the computing device 200 can obtain explanation audio clips from the database 110 over the network and upload the generated explanation files to the database 110 over the network.
The explanation files are used for explaining display scenes; each display scene corresponds to one explanation file, and each explanation file corresponds to one or more explanation audio clips. According to one embodiment of the invention, the display scene includes one or more of the driver's seat, the front passenger seat, the rear row, and the trunk.
The database 110 may be a relational database such as MySQL or ACCESS, or a non-relational database such as a NoSQL database; it may be deployed as a distributed database, such as HBase, across multiple geographic locations, or it may reside in the computing device 200 so that the computing device 200 can obtain explanation audio clips from, and upload generated explanation files to, the database 110 directly. In short, the database 110 is configured to store the explanation audio clips and the explanation files generated by the computing device 200, and the present invention does not limit the specific deployment and configuration of the database 110.
The computing device 200 is used to generate an explanation scheme, and the explanation scheme generation system 100 of the present invention operates as follows. First, an explanation file is created and a timeline is generated for it. Explanation audio clips related to the subject matter of the display scene are selected from the database 110 and added to the timeline so that they do not overlap. In the VR scene, the selected explanation audio clips are played, and the actions of each explanation audio clip are executed during its playback to obtain action information, which is bound to the timeline. Finally, the timeline, the explanation audio clips added to it, and the action information bound to it are saved in the explanation file, and the explanation file is uploaded to the database 110 for storage, yielding an explanation scheme. Explanation files are thus recorded separately for different display scenes, and the explanation audio and the actions within an explanation file can also be recorded separately, which makes recording more flexible and allows the audio clips and actions recorded for different display scenes to be edited later.
FIG. 2 illustrates a block diagram of a computing device 200 according to one embodiment of the invention. As shown in FIG. 2, in a basic configuration 202, computing device 200 typically includes a system memory 206 and one or more processors 204. A memory bus 208 may be used for communication between the processor 204 and the system memory 206.
Depending on the desired configuration, the processor 204 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 204 may include one or more levels of cache, such as a first-level cache 210 and a second-level cache 212, a processor core 214, and registers 216. The example processor core 214 may include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 218 may be used with the processor 204, or in some implementations the memory controller 218 may be an internal part of the processor 204.
Depending on the desired configuration, system memory 206 may be any type of memory including, but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. Physical memory in a computing device is often referred to as volatile memory, RAM, and data in disk needs to be loaded into physical memory in order to be read by processor 204. The system memory 206 may include an operating system 220, one or more applications 222, and program data 224. In some implementations, the application 222 may be arranged to execute instructions on an operating system by the one or more processors 204 using the program data 224. The operating system 220 may be, for example, linux, windows or the like, which includes program instructions for handling basic system services and performing hardware-dependent tasks. The application 222 includes program instructions for implementing various user desired functions, and the application 222 may be, for example, a browser, instant messaging software, a software development tool (e.g., integrated development environment IDE, compiler, etc.), or the like, but is not limited thereto. When an application 222 is installed into computing device 200, a driver module may be added to operating system 220.
When the computing device 200 is started up, the processor 204 reads and executes program instructions of the operating system 220 from the system memory 206. Applications 222 run on top of operating system 220, utilizing interfaces provided by operating system 220 and underlying hardware, to implement various user-desired functions. When a user launches the application 222, the application 222 is loaded into the system memory 206, and the processor 204 reads and executes the program instructions of the application 222 from the system memory 206.
Computing device 200 also includes a storage device 232, where storage device 232 includes removable storage 236 and non-removable storage 238, where removable storage 236 and non-removable storage 238 are each connected to storage interface bus 234. Computing device 200 may also include an interface bus 240 that facilitates communication from various interface devices (e.g., output devices 242, peripheral interfaces 244, and communication devices 246) to basic configuration 202 via bus/interface controller 230. The example output device 242 includes a graphics processing unit 248 and an audio processing unit 250. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 252. The example peripheral interface 244 may include a serial interface controller 254 and a parallel interface controller 256, which may be configured to facilitate communication via one or more I/O ports 258 and external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device) or other peripherals (e.g., printer, scanner, etc.). The example communication device 246 may include a network controller 260 that may be arranged to facilitate communication with one or more other computing devices 262 over a network communication link via one or more communication ports 264.
The network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, program modules, and may include any information delivery media in a modulated data signal, such as a carrier wave or other transport mechanism. A "modulated data signal" may be a signal that has one or more of its data set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or special purpose network, and wireless media such as acoustic, radio Frequency (RF), microwave, infrared (IR) or other wireless media. The term computer readable media as used herein may include both storage media and communication media.
In the computing device 200 according to the present invention, the application 222 includes program instructions for performing the VR scene explanation scheme generation method 400, which can instruct the processor 204 to perform the method so that the computing device 200 generates an explanation scheme by executing the VR scene explanation scheme generation method 400 of the present invention.
Fig. 3 shows a schematic diagram of a VR scene explanation scheme playing system 300 according to one embodiment of the invention.
As shown in fig. 3, the VR scene explanation scheme playing system 300 includes a database 310, a client 320, and a server 330. The server 330 is communicatively coupled to the database 310 and to the client 320. It should be noted that the present invention does not limit the manner in which the server 330 is connected to the database 310 and the client 320. For example, the server 330 may access the internet in a wired or wireless manner and connect to the database 310 and the client 320 through data interfaces.
The database 310 stores explanation files and an explanation scheme list. The explanation files are generated by the VR scene explanation scheme generation system 100; the explanation scheme list includes a plurality of explanation file names, each explanation file name corresponds to one explanation file, and each explanation file explains one display scene. The server 330 can thus acquire the explanation scheme list and the explanation files from the database 310 over the network, and the client 320 can likewise send a request for an explanation file to the server 330 over the network and receive the explanation file returned by the server 330. For example, if the VR scene explanation scheme playing system 300 is applied to the automotive field, the explanation scheme list is as shown in Table 1:
TABLE 1
Driver's seat
Front passenger seat
Rear row
Trunk
In one embodiment of the present invention, the database 310 may be a relational database such as MySQL or ACCESS, or a non-relational database such as a NoSQL database; it may be deployed across multiple geographic locations as a distributed database, such as HBase, or it may reside in the server 330 so that the server 330 can obtain the explanation scheme list and the explanation files from the database 310 directly. In short, the database 310 is configured to store the explanation files and the explanation scheme list, and the present invention does not limit its specific deployment and configuration.
The client 320, i.e. the terminal device used by the user, may be a personal computer such as a desktop or notebook computer, or a mobile phone, tablet computer, multimedia device, or smart wearable device, but is not limited thereto. In one embodiment, the client 320 is a mobile terminal, such as a mobile phone or tablet computer, with one or more mobile applications installed, and these applications can provide the function of playing an explanation file in a VR scene. After such an application is installed, the client 320 can send an HTTP request over the network for the explanation file corresponding to a display scene and receive the explanation file in response. The present invention does not limit the specific use of the application. It should be noted that there may be one or more clients 320, which is not limited by the present invention.
The server 330 may be an application server, a web server, or the like, and may be implemented as a desktop computer, notebook computer, processor chip, tablet computer, etc., but is not limited thereto. The server 330 can retrieve the data stored in the database 310. For example, the server 330 may read the explanation files and the explanation scheme list directly from the database 310 (when the database 310 is local to the server 330), or it may access the internet in a wired or wireless manner and obtain them through a data interface. The server 330 of the present invention may be implemented as a computing device 200, so that the VR scene explanation scheme playing method of the present invention can be executed in the computing device 200; the structure of the computing device 200 is shown in fig. 2 and is not repeated here.
The VR scene explanation scheme playing system 300 of the present invention works as follows: in response to the user opening the explanation-file playing page in the mobile application of the client 320, the server 330 displays the VR scene, acquires the explanation scheme list from the database 310, and displays the list in the VR scene; in response to the user clicking the name of an explanation file in the list, the server 330 acquires the corresponding explanation file from the database 310 and plays its content in the VR scene.
Therefore, with the VR scene explanation scheme playing system provided by the invention, the user freely chooses an explanation file, i.e. a display scene, by clicking the explanation file's name in the explanation scheme list on the VR scene page of the mobile application. While watching the content of the explanation file, the user can interrupt the explanation audio and actions at any time, freely drag the picture to change the viewing angle, and click any anchor point of interest, which improves interactivity while watching the explanation scheme. Moreover, each display scene corresponds to one explanation file, rather than all display scenes being explained through a single file; since the explanation file of each display scene is stored independently, the server 330 loads explanation files faster, stuttering when reading them is avoided, and the user experience is improved.
Fig. 4 shows a flow diagram of a VR scene explanation scheme generation method 400 according to an embodiment of the present invention. The method 400 can be applied to the field of vehicle explanation, i.e. to explaining vehicles, with each vehicle corresponding to one or more explanation schemes, each of which may be generated by the method 400. As shown in fig. 4, the method starts at step S410. Before step S410 is executed, the VR scene must be generated and edited, as follows: prestored materials are acquired from the database 110, a VR scene is created based on the pictures in the materials, anchor points are added to the created VR scene, and the added anchor points are associated with the materials, finishing the editing of the VR scene.
An anchor point is a graphic button in the VR scene, generally at a fixed position; when the user clicks an anchor point, material such as text, a picture, or a video is displayed in the VR scene. It should be noted that the present invention does not limit how a VR scene is created from the pictures in the materials; all methods of creating VR scenes in the prior art fall within the scope of the present invention.
Thereafter, in step S410, an explanation file is created and a timeline is generated for it; that is, each explanation file corresponds to one timeline, and the newly created explanation file contains only the timeline. A name is assigned to each created explanation file; the names must differ from one another, but may otherwise be chosen freely. In one embodiment, the name corresponds to the display scene: for example, if the display scene is the driver's seat, the explanation file may be named "driver's seat". Each vehicle may include one or more display scenes and therefore one or more explanation files; for example, a vehicle with two display scenes corresponds to two explanation files.
Next, in step S420, explanation audio clips related to the subject matter of the display scene are selected from the database 110, and the selected clips are added to the timeline without overlapping in time. The clips may be added to the timeline in any order, but the time spans they occupy on the timeline must not coincide, as illustrated below.
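The following sketch illustrates the non-overlap rule; it is one possible check, not the patent's actual algorithm, and the type names are assumptions:

```typescript
// Overlap check when adding an explanation audio clip to the timeline.
interface Clip {
  clipId: string;
  start: number;    // position on the timeline, in ms
  duration: number; // clip length, in ms
}

function addClipToTimeline(timeline: Clip[], clip: Clip): boolean {
  const clipEnd = clip.start + clip.duration;
  const overlaps = timeline.some(existing => {
    const existingEnd = existing.start + existing.duration;
    // two clips coincide if their half-open intervals intersect
    return clip.start < existingEnd && existing.start < clipEnd;
  });
  if (overlaps) return false;                 // reject: spans must not coincide
  timeline.push(clip);
  timeline.sort((a, b) => a.start - b.start); // addition order is free; times are not
  return true;
}
```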
Then, in step S430, the selected explanation audio clip is played in the VR scene, and the actions of the explanation audio clip are executed during its playback, so as to obtain the action information. Specifically, while the selected clip plays, the actions belonging to it are performed; an action ID is created for each performed action, and the action time point, action data, and action type are recorded, yielding the action information. The action information thus includes: the action time point, which records when the action occurs; the action ID, which indicates the order in which the actions occur; the action data, which records the content of the action and is what allows the action to be reproduced for the user during audio playback; and the action type, which represents the action's category.
In one embodiment, the action types include one or more of: playing an audio clip, switching the display scene, switching a color, opening a door, closing a door, switching the view angle, switching the viewpoint, opening a picture anchor point, closing picture browsing, opening a video anchor point, closing video playback, clicking a price-inquiry anchor point, closing the price inquiry, opening a chat-robot anchor point, and closing the chat-robot page.
In one embodiment, the action ID is a code for the actions, used to indicate the order in which they occur; the code may be arbitrary as long as it distinguishes that order. For example, suppose an explanation audio clip includes five actions: door opening, door closing, view-angle switching, viewpoint switching, and picture-anchor opening, occurring in the order viewpoint switching, door opening, view-angle switching, picture-anchor opening, door closing. The codes for the five actions are then: door opening (2), door closing (5), view-angle switching (3), viewpoint switching (1), picture-anchor opening (4). Of course, the present invention does not limit the specific implementation of action coding; any coding realizable in the prior art falls within the scope of the present invention, as long as it distinguishes the order of the actions.
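A hypothetical recorder for step S430 is sketched below; all names are assumptions, and the sequential IDs are just one coding that preserves the order of occurrence:

```typescript
// Each operation performed while the clip plays is stamped with its
// playback time (the action time point) and a sequence number (the ID).
interface RecordedAction {
  actionId: number;   // order of occurrence; any order-preserving code works
  timePoint: number;  // ms since the clip started playing
  actionType: string; // e.g. "openDoor", "switchViewpoint"
  actionData: unknown;
}

class ActionRecorder {
  private nextId = 1;
  private startedAt = 0;
  private readonly actions: RecordedAction[] = [];

  startWithClip(): void {
    this.startedAt = Date.now(); // call when the explanation clip starts
  }

  record(actionType: string, actionData: unknown): void {
    this.actions.push({
      actionId: this.nextId++,                // encodes the order of occurrence
      timePoint: Date.now() - this.startedAt, // the action time point
      actionType,
      actionData,
    });
  }

  finish(): RecordedAction[] {
    return [...this.actions]; // bound to the timeline in step S440
  }
}
```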
Next, in step S440, the action information is bound to the timeline; in one embodiment it is bound according to the action time points and the order of the action IDs. Finally, in step S450, the timeline, the explanation audio clips added to the timeline, and the action information bound to the timeline are saved in the explanation file, thereby obtaining an explanation scheme, and the explanation file, together with its name, is stored in the database 110.
With the VR scene explanation scheme generation method 400 provided by the invention, multiple explanation files can be recorded for different display scenes, and the audio and actions of each explanation file can be recorded separately. Compared with recording one complete explanation file for all display scenes with audio and actions captured simultaneously, generation of the explanation scheme is more flexible and its preparation is effectively accelerated. Moreover, each explanation file has its own timeline; when the timeline is edited, the audio and actions added to it can be edited and replaced, enabling later editing of the explanation file.
Fig. 5 shows a flow diagram of a VR scene explanation scheme playing method 500 according to one embodiment of the present invention. The method 500 can be applied in the field of vehicle explanation to playing content that explains a vehicle. As shown in fig. 5, the method starts in step S510: the VR scene is displayed in response to the user opening the explanation-file playing page at the client, specifically in a mobile application of the client 320.
It should be noted that the database 310 also stores materials, including text, pictures, and videos; a VR scene is created based on the pictures in the materials, anchor points are added to the created VR scene, and the added anchor points are associated with the materials, finishing the editing of the VR scene.
Next, in step S520, the explanation scheme list is acquired from the database 310 and displayed in the VR scene. The explanation scheme list includes a plurality of explanation file names; each explanation file name corresponds to one explanation file, and each explanation file includes a timeline, the explanation audio clips added to the timeline, and the action information bound to the timeline, generated by the VR scene explanation scheme generation method 400 provided by the present invention. While the explanation scheme list is displayed, the explanation audio clips are read and loaded from the database 310 by reading their URLs; because the clips are preloaded, they need not be loaded when an audio-playing action is later executed, so playback is smoother and the user experience is improved. And because each display scene corresponds to one explanation file, and each explanation file corresponds to one or more explanation audio clips rather than the complete audio of all scenes, reading and loading the clips is fast.
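A minimal sketch of this preloading step follows, assuming each clip is addressed by a URL stored in its database record (the function name and caching strategy are assumptions):

```typescript
// While the scheme list is shown, fetch every clip URL ahead of playback.
async function preloadClips(clipUrls: string[]): Promise<Map<string, ArrayBuffer>> {
  const cache = new Map<string, ArrayBuffer>();
  await Promise.all(
    clipUrls.map(async url => {
      const response = await fetch(url);           // load before playback starts
      cache.set(url, await response.arrayBuffer());
    }),
  );
  return cache; // the audio-playing action reads this cache, not the network
}
```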
The explanation scheme list is as shown in Table 1 and is not repeated here.
Then, in step S530, in response to the user clicking the name of an explanation file in the explanation scheme list, the corresponding explanation file is acquired from the database 310. Finally, in step S540, the acquired explanation file content is played in the VR scene. The explanation file comprises a timeline, the explanation audio clips added to the timeline, and the action information bound to the timeline, where the action information includes the action time point, action ID, action type, and action data. The action time point records when the action occurs, the action ID indicates the order in which the actions occur, the action data records the content of the action, and the action type represents the action's category.
In one embodiment, the step of playing the acquired explanation file content in the VR scene proceeds as follows. In the VR scene, according to the timeline of the acquired explanation file, the explanation audio clip is played through a player, its playing time is recorded, and the action ID of the currently playing action is read (the currently playing action being executed based on the action data). At every first preset time interval, the timeline of the currently playing explanation audio clip is traversed, and it is judged whether an action ID in the timeline is larger than the currently playing action ID. If so, it is further judged whether that action's time point is smaller than the sum of the playing time of the explanation file and the first preset time; if it is, the action category is determined from the action type and the action with that ID is played according to the playing method for its category; if not, the current action continues to play. Checking the currently executing action at every first preset time interval ensures that, while the explanation audio plays, the synchronization error between audio and actions stays below the first preset time, so the actions are replayed completely.
The first preset time is set according to the specific situation and is not limited here; in one embodiment of the present invention it may be 10 ms. The action types here are the same as in the VR scene explanation scheme generation method 400 and are not repeated.
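The following is one possible reading of that synchronization loop in TypeScript, using 10 ms as the first preset time; the helper signatures are assumptions:

```typescript
// Periodic traversal that keeps action playback within 10 ms of the audio.
const FIRST_PRESET_MS = 10;

interface TimelineAction {
  actionId: number;
  timePoint: number;   // action time point on the timeline, in ms
  play(): void;        // replays the recorded action in the VR scene
}

function startSyncLoop(
  timeline: TimelineAction[],
  getPlayTime: () => number, // current playing time of the clip, in ms
): () => void {
  let currentActionId = 0;   // ID of the action currently playing
  const timer = setInterval(() => {
    for (const action of timeline) {                    // traverse each tick
      if (action.actionId <= currentActionId) continue; // already played
      if (action.timePoint < getPlayTime() + FIRST_PRESET_MS) {
        action.play();                 // due within this tick: play it now
        currentActionId = action.actionId;
      }
      // otherwise the current action keeps playing; skew stays under 10 ms
    }
  }, FIRST_PRESET_MS);
  return () => clearInterval(timer);   // stop on pause or end of clip
}
```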
If the action type is audio playing, the already loaded audio clip is played through the player. Because the clip was read and loaded from the database 310 in advance, while the explanation scheme list was being displayed, it plays without lag, the user perceives no stuttering, and the user experience is improved.
If the action type is display scene switching, color switching, or door opening or closing, the corresponding action is replayed in the VR scene and the corresponding picture from the materials is loaded. If the action type is view-angle or viewpoint switching, the scene switches to the new view angle or viewpoint, and a transition animation lasting one second is played during the switch, giving the user a natural, smooth experience. If the action type is opening an anchor point, closing an anchor point, inquiring a price, or opening or closing the chat robot, the corresponding page elements are loaded and the corresponding panels and dialog boxes are displayed in the VR scene.
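An illustrative dispatch on the action category is sketched below; the SceneEngine interface and its methods stand in for engine calls the patent does not name:

```typescript
// Each action category is played back by a different method.
interface SceneEngine {
  playAudio(data: unknown): void;                    // plays a preloaded clip
  transitionCamera(data: unknown, durationMs: number): void;
  showPanel(data: unknown): void;                    // panels, dialogs, pages
  replay(actionType: string, data: unknown): void;   // scene/color/door changes
}

function playActionByType(engine: SceneEngine, actionType: string, data: unknown): void {
  switch (actionType) {
    case "playAudioClip":
      engine.playAudio(data);               // clip was preloaded, so no lag
      break;
    case "switchViewAngle":
    case "switchViewpoint":
      engine.transitionCamera(data, 1000);  // 1 s transition animation
      break;
    case "openPictureAnchor":
    case "openVideoAnchor":
    case "clickPriceInquiryAnchor":
    case "openChatRobotAnchor":
      engine.showPanel(data);               // load page elements, show dialog
      break;
    default:
      engine.replay(actionType, data);      // scene switch, color, doors, etc.
  }
}
```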
While the explanation audio clips and actions are playing, the user can operate the keyboard and mouse at any time. Specifically, in response to the user triggering a pause anchor point at the client 320, the server 330 pauses the playback of the explanation audio clip, lets the actions continue to play, and stops recording the clip's playing time. In response to the user moving the mouse in the VR scene, the playback of actions is suspended while the explanation audio clip continues to play; the mouse-movement time is counted, and it is judged whether it exceeds a second preset time. If not, the playback of actions remains suspended; if so, step S540 is executed. Through this process, user interactivity is preserved while watching the explanation scheme. The second preset time is set according to the actual situation and is not limited by the invention; for example, it may be 5 seconds.
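A sketch of these interaction rules follows, under one plausible reading in which action playback resumes once the mouse has been still for the second preset time (5 s here); the callback names are assumptions:

```typescript
// Wire the pause anchor and mouse movement to the playback controls.
const SECOND_PRESET_MS = 5000;

function wireInteraction(
  pauseAnchorEl: HTMLElement,
  sceneEl: HTMLElement,
  pauseAudio: () => void,      // pause the clip, stop recording its play time
  suspendActions: () => void,  // actions stop; audio keeps playing
  resumePlayback: () => void,  // re-enter step S540 (timeline playback)
): void {
  pauseAnchorEl.addEventListener("click", pauseAudio);

  let stillTimer: number | undefined;
  sceneEl.addEventListener("mousemove", () => {
    suspendActions();          // user is dragging the picture freely
    if (stillTimer !== undefined) window.clearTimeout(stillTimer);
    stillTimer = window.setTimeout(resumePlayback, SECOND_PRESET_MS);
  });
}
```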
With the VR scene explanation scheme playing method based on the above generation method, the user freely chooses an explanation file by clicking its name in the explanation scheme list in the VR scene; while watching the content of the explanation file, the user can interrupt the explanation audio and actions at any time, freely drag the picture to change the viewing angle, and click any anchor point of interest, which improves interactivity while watching the explanation scheme and thus the user experience.
Furthermore, each display scene corresponds to one explanation file, rather than all display scenes being explained through a single file; since the explanation file of each display scene is stored independently, the server 330 loads explanation files faster, stuttering when reading them is avoided, and the user experience is improved. In addition, because the explanation audio clips are read and loaded in advance, they need not be loaded again during playback, so no stuttering occurs while the clips play, further improving the user experience.
A8. The method of A1, wherein the database stores materials in advance, the materials including text, pictures, and videos, and the method further comprises the steps of:
acquiring the materials from the database;
creating a VR scene based on the pictures in the materials;
adding anchor points to the created VR scene;
and associating the added anchor points with the materials to finish editing the VR scene.
A10. The method of A9, wherein the explanation file comprises a timeline, the explanation audio clips added to the timeline, and the action information bound to the timeline, and the step of playing the acquired explanation file content in the VR scene comprises:
in the VR scene, playing the explanation audio clips and the action information according to the timeline of the acquired explanation file.
A11. The method of A10, wherein the action information comprises an action time point, an action ID, and action data, where the action time point records when the action occurs, the action ID indicates the order in which the actions occur, and the action data records the content of the action, and the step of playing the explanation audio clips and the action information comprises:
playing the explanation audio clip;
recording the playing time of the explanation audio clip;
reading the action ID of the currently playing action, wherein the currently playing action is executed based on the action data;
traversing the timeline of the currently playing explanation audio clip at every first preset time interval, and judging whether an action ID in the timeline is larger than the currently playing action ID; if so, further judging whether that action's time point is smaller than the sum of the playing time of the explanation file and the first preset time; if so, playing the action with that action ID in the timeline, and if not, continuing to play the current action.
A12. The method of A11, wherein the action information further includes an action type, the method further comprising the steps of:
judging the action category, wherein the action category is determined according to the action type;
and playing the actions of each category according to a different playing method.
A13. The method of A12, wherein the action types include one or more of: playing an audio clip, switching the display scene, switching a color, opening a door, closing a door, switching the view angle, switching the viewpoint, opening a picture anchor point, closing picture browsing, opening a video anchor point, closing video playback, clicking a price-inquiry anchor point, closing the price inquiry, opening a chat-robot anchor point, and closing the chat-robot page.
A14. The method of any one of A9 to A13, further comprising the steps of:
in response to the user triggering a pause anchor point at the client, pausing the playback of the explanation audio clip and stopping the recording of its playing time;
in response to the user moving the mouse in the VR scene, suspending the playback of actions;
counting the time for which the mouse has been moving;
and judging whether the mouse-movement time is longer than a second preset time; if not, continuing to suspend the playback of actions, and if so, executing the step of playing the explanation audio clips and the action information in the VR scene according to the timeline of the acquired explanation file.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions of the methods and apparatus of the present invention, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to execute the VR scenario interpretation scheme generation method and the VR scenario interpretation scheme play method of the present invention according to instructions in the program code stored in the memory.
By way of example, and not limitation, computer readable media comprise computer storage media and communication media. Computer storage media store information such as computer readable instructions, data structures, program modules, or other data. Communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer readable media.
In the description provided herein, algorithms and displays are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with examples of the invention. The structure required for constructing such a system is apparent from the description above. In addition, the present invention is not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment, or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into a plurality of sub-modules.
Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any way, except where at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Furthermore, some of the embodiments are described herein as methods, or as combinations of method elements, that may be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the necessary instructions for implementing such a method or method element forms a means for implementing the method or method element. Furthermore, the elements of the apparatus embodiments described herein are examples of apparatus for carrying out the functions performed by the elements in order to carry out the objects of the invention.
As used herein, unless otherwise specified, the use of the ordinal terms "first," "second," "third," etc., to describe a common object merely denotes different instances of like objects, and is not intended to imply that the objects so described must be in a given order, whether temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments are contemplated within the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The disclosure of the present invention is illustrative of, rather than limiting on, the scope of the invention, which is defined by the appended claims.

Claims (14)

1. A VR scene explanation scheme generation method, executed in a computing device, wherein the computing device is communicatively connected to a database, the database stores prerecorded explanation audio clips, explanation files are used to explain display scenes, each display scene corresponds to one explanation file, each explanation file corresponds to one or more explanation audio clips, and each explanation audio clip explains one explanation item, the method comprising the steps of:
creating an explanation file, and generating a timeline for the created explanation file;
selecting, from the database, explanation audio clips related to the subject content of the display scene, and adding the selected explanation audio clips into the timeline in a non-overlapping manner;
playing the selected explanation audio clip in a VR scene, and executing actions of the explanation audio clip while the explanation audio clip is playing, to obtain action information, wherein the action information comprises an action time point, an action ID, action data and an action type, the action time point records the time at which the action occurs, the action data records the content of the action, and the action type represents the category of the action;
binding the action information to the timeline;
storing the timeline, the explanation audio clips added into the timeline, and the action information bound to the timeline into the explanation file, thereby obtaining an explanation scheme; and
creating an action ID for each action executed for the explanation audio clip, wherein the action ID represents the sequence of the actions;
wherein the step of binding the action information to the timeline comprises:
binding the action information to the timeline according to the action time points and the sequence of the action IDs.
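As a non-normative illustration of the data involved in claim 1, the following TypeScript sketch shows one possible shape for the explanation scheme; every field name here is hypothetical, not taken from the patent.

```typescript
// Hypothetical data model for an explanation scheme (claim 1).
interface ActionInfo {
  timePoint: number; // when the action occurred, ms from the start of the clip
  id: number;        // the sequence in which the action occurred
  data: unknown;     // the recorded content of the action
  type: string;      // the category of the action
}

interface TimelineEntry {
  audioClipId: string;   // a prerecorded explanation audio clip from the database
  startMs: number;       // clips are added to the timeline without overlap
  actions: ActionInfo[]; // action information bound to this part of the timeline
}

interface ExplanationFile {
  name: string;              // one explanation file per display scene
  timeline: TimelineEntry[]; // the stored timeline constitutes the scheme
}

// Binding step: order actions by time point, breaking ties by action ID.
function bindActions(entry: TimelineEntry, recorded: ActionInfo[]): void {
  entry.actions = [...recorded].sort(
    (a, b) => a.timePoint - b.timePoint || a.id - b.id,
  );
}
```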
2. The method of claim 1, further comprising the steps of:
assigning a name to each explanation file, wherein the name corresponds to the display scene; and
storing the names of the explanation files, in which the timeline, the explanation audio clips and the action information are stored, into the database.
3. The method of claim 1, wherein the action types include one or more of: playing an audio clip, display scene switching, color switching, door opening, door closing, view angle switching, viewpoint switching, opening a picture anchor point, closing picture browsing, opening a video anchor point, closing video playing, clicking a price inquiry anchor point, closing price inquiry, opening a chat robot anchor point, and closing a chat robot page.
4. The method according to any one of claims 1 to 3, wherein the display scene comprises one or more of a driver's seat, a front passenger seat, a rear row, and a trunk.
5. The method according to any one of claims 1 to 3, wherein the explanation items include one or more of: center console, steering wheel, dashboard, large-screen content, navigation, reverse imaging, voice control, cell phone interconnection, air conditioning adjustment, on-board television, multi-function rearview mirror, seat adjustment, seat heating and ventilation, air conditioning control, cell phone charging, gear handle, driving mode, parking, steep descent control, engine start-stop, parking brake, storage space, front row space, rear row space, door panel functions, door and window control, speakers, air bags, center armrest, sunroof, vanity mirror, cup holder, wireless charging, seat recline, trunk space, and maintenance.
6. The method of claim 1, wherein the database stores materials in advance, the materials including text, pictures and videos, the method further comprising the steps of:
acquiring the materials from the database;
creating a VR scene based on the pictures in the materials;
adding anchor points in the created VR scene; and
associating the added anchor points with the materials to complete editing of the VR scene.
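For illustration, the editing steps of claim 6 might look as follows; the types and function names are hypothetical, and the use of panorama pictures to build the scene is an assumption.

```typescript
// Sketch of the scene-editing steps in claim 6 (names hypothetical).
interface Material {
  id: string;
  kind: "text" | "picture" | "video";
  url: string;
}

interface Anchor {
  position: [number, number, number]; // placement inside the VR scene
  materialId: string;                 // the material this anchor is associated with
}

interface VrScene {
  panoramaUrls: string[];
  anchors: Anchor[];
}

// Create the VR scene based on the picture materials from the database.
function createScene(materials: Material[]): VrScene {
  return {
    panoramaUrls: materials.filter(m => m.kind === "picture").map(m => m.url),
    anchors: [],
  };
}

// Adding an anchor and associating it with a material completes the editing.
function addAnchor(
  scene: VrScene,
  position: [number, number, number],
  material: Material,
): void {
  scene.anchors.push({ position, materialId: material.id });
}
```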
7. A VR scene explanation scheme playing method, executed in a server, wherein the server is communicatively connected to a client and to a database, the database stores an explanation scheme list, the explanation scheme list includes a plurality of explanation file names, each explanation file name corresponds to one explanation file, and each explanation file is generated by the VR scene explanation scheme generation method according to any one of claims 1 to 6, the method comprising the steps of:
responding to a user opening, at the client, a page for playing an explanation file, displaying a VR scene;
acquiring the explanation scheme list from the database, and displaying the explanation scheme list in the VR scene;
responding to the user clicking the name of an explanation file in the explanation scheme list, acquiring the corresponding explanation file from the database; and
playing the content of the acquired explanation file in the VR scene.
8. The method of claim 7, wherein the explanation file includes a timeline, explanation audio clips added to the timeline, and action information bound to the timeline, and the step of playing the acquired explanation file content in the VR scene comprises:
playing the explanation audio clip and the action information in the VR scene according to the timeline of the acquired explanation file.
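A hedged sketch of this client-side flow (claims 7 and 8) follows; the endpoint path and the response shape are assumptions, not from the patent.

```typescript
// Sketch of fetching and playing an explanation file (claims 7-8).
interface FetchedFile {
  timeline: { audioUrl: string; actions: unknown[] }[];
}

async function playExplanation(fileName: string): Promise<void> {
  // The user clicked a name in the displayed explanation scheme list.
  const res = await fetch(`/api/explanation-files/${encodeURIComponent(fileName)}`);
  const file: FetchedFile = await res.json(); // timeline, clips, bound actions

  for (const entry of file.timeline) {
    const audio = new Audio(entry.audioUrl); // one explanation audio clip
    await audio.play();
    // While the clip plays, a polling loop (see claim 9) would replay
    // entry.actions against audio.currentTime.
    await new Promise<void>((resolve) =>
      audio.addEventListener("ended", () => resolve(), { once: true }),
    );
  }
}
```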
9. The method of claim 8, wherein the action information includes an action time point, an action ID and action data, the action time point records the time at which the action occurs, the action ID indicates the sequence in which actions occur, and the action data records the content of the actions, and the step of playing the explanation audio clip and the action information comprises:
playing the explanation audio clip;
recording the playing time of the explanation audio clip;
reading the action ID of the currently playing action, wherein the currently playing action is executed based on the action data; and
traversing the timeline of the playing explanation audio clip at each first preset time interval, and judging whether an action ID in the timeline is larger than the currently played action ID; if so, further judging whether the corresponding action time point is smaller than the sum of the playing time of the explanation file and the first preset time; if so, playing the action of that action ID in the timeline, and if not, continuing to play the current action.
10. The method of claim 9, wherein the action information further includes an action type, the method further comprising the steps of:
judging the action category, wherein the action category is determined according to the action type; and
playing actions of the corresponding category according to different playing methods.
11. The method of claim 10, wherein the action types include one or more of: playing an audio clip, display scene switching, color switching, door opening, door closing, view angle switching, viewpoint switching, opening a picture anchor point, closing picture browsing, opening a video anchor point, closing video playing, clicking a price inquiry anchor point, closing price inquiry, opening a chat robot anchor point, and closing a chat robot page.
12. The method according to any one of claims 7 to 11, further comprising the steps of:
responding to the user triggering a pause anchor point at the client, pausing the playing of the explanation audio clip, and stopping recording the playing time of the explanation audio clip;
responding to the user moving the mouse in the VR scene, pausing the playing of actions;
counting the time for which the mouse has been moving; and
judging whether the mouse-moving time is longer than a second preset time; if not, keeping the playing of actions paused, and if so, executing the step of playing the explanation audio clip and the action information in the VR scene according to the timeline of the acquired explanation file.
13. A computing device, comprising:
at least one processor; and
a memory storing program instructions, wherein the program instructions are adapted to be executed by the at least one processor, the program instructions comprising instructions for performing the method of any one of claims 1-12.
14. A readable storage medium storing program instructions which, when read and executed by a client, cause the client to perform the method of any one of claims 1-12.
CN202110195568.3A 2021-02-19 2021-02-19 VR scene explanation scheme generation method Active CN112987921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110195568.3A CN112987921B (en) 2021-02-19 2021-02-19 VR scene explanation scheme generation method

Publications (2)

Publication Number Publication Date
CN112987921A CN112987921A (en) 2021-06-18
CN112987921B true CN112987921B (en) 2024-03-15

Family

ID=76394259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110195568.3A Active CN112987921B (en) 2021-02-19 2021-02-19 VR scene explanation scheme generation method

Country Status (1)

Country Link
CN (1) CN112987921B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113449122B (en) * 2021-07-09 2023-01-17 广州浩传网络科技有限公司 Method and device for generating explanation content of three-dimensional scene graph

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1696923A (en) * 2004-05-10 2005-11-16 北京大学 Networked, multimedia synchronous composed storage and issuance system, and method for implementing the system
CN101213606A (en) * 2005-07-01 2008-07-02 微软公司 Synchronization aspects of interactive multimedia presentation management
CN102142215A (en) * 2011-03-15 2011-08-03 南京师范大学 Adaptive geographic information voice explanation method based on position and speed
CN102449680A (en) * 2009-05-26 2012-05-09 松下电器产业株式会社 Information presentation device
CN105261252A (en) * 2015-11-18 2016-01-20 闫健 Panoramic learning platform system-based real-time action rendering method
CN105306861A (en) * 2015-10-15 2016-02-03 深圳市时尚德源文化传播有限公司 Online teaching recording and playing method and system
CN106530392A (en) * 2016-10-20 2017-03-22 中国农业大学 Method and system for interactive display of cultivation culture virtual scene
CN106790498A (en) * 2016-12-15 2017-05-31 深圳市金溢科技股份有限公司 Vehicle-mounted voice intercommunication method, V2X car-mounted terminals and voice inter-speaking system
CN108241461A (en) * 2016-12-26 2018-07-03 北京奇虎科技有限公司 A kind of online method and apparatus for making the manuscript containing audio presentation
CN108241596A (en) * 2016-12-26 2018-07-03 北京奇虎科技有限公司 The production method and device of a kind of PowerPoint
CN110209957A (en) * 2019-06-06 2019-09-06 北京猎户星空科技有限公司 Explanation method, apparatus, equipment and storage medium based on intelligent robot
CN110216693A (en) * 2019-06-21 2019-09-10 北京猎户星空科技有限公司 Explanation method, apparatus, equipment and storage medium based on intelligent robot
CN110488979A (en) * 2019-08-23 2019-11-22 北京枭龙科技有限公司 A kind of automobile showing system based on augmented reality
CN111638796A (en) * 2020-06-05 2020-09-08 浙江商汤科技开发有限公司 Virtual object display method and device, computer equipment and storage medium
CN111640171A (en) * 2020-06-10 2020-09-08 浙江商汤科技开发有限公司 Historical scene explaining method and device, electronic equipment and storage medium
CN111798544A (en) * 2020-07-07 2020-10-20 江西科骏实业有限公司 Visual VR content editing system and using method
WO2021008479A1 (en) * 2019-07-18 2021-01-21 乐播新瑞(北京)文化传媒有限公司 Audio generation method and system, audio playing method and system, and central control system and audio playing device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10326978B2 (en) * 2010-06-30 2019-06-18 Warner Bros. Entertainment Inc. Method and apparatus for generating virtual or augmented reality presentations with 3D audio positioning

Also Published As

Publication number Publication date
CN112987921A (en) 2021-06-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant