WO2022048113A1 - Collaborative performance method and system, terminal device, and storage medium - Google Patents

Collaborative performance method and system, terminal device, and storage medium

Info

Publication number
WO2022048113A1
Authority
WO
WIPO (PCT)
Prior art keywords
performance
user
information
terminal
virtual reality
Prior art date
Application number
PCT/CN2021/076155
Other languages
French (fr)
Chinese (zh)
Inventor
段新 (Duan Xin)
段拙然 (Duan Zhuoran)
Original Assignee
佛山创视嘉科技有限公司 (Foshan Chuangshijia Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 佛山创视嘉科技有限公司 (Foshan Chuangshijia Technology Co., Ltd.)
Publication of WO2022048113A1 publication Critical patent/WO2022048113A1/en

Classifications

    • H04N21/2335: Processing of audio elementary streams involving reformatting operations of audio signals, e.g. by converting from one coding standard to another
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G10H1/0008: Details of electrophonic musical instruments; associated control or indicating means
    • H04N21/2343: Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/2355: Processing of additional data involving reformatting operations of additional data, e.g. HTML pages
    • H04N21/2387: Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • H04N21/2393: Interfacing the upstream path of the transmission network involving handling client requests
    • H04N21/4355: Processing of additional data involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • H04N21/437: Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • H04N21/4398: Processing of audio elementary streams involving reformatting operations of audio signals
    • H04N21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/44218: Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H04N21/4788: Supplemental services communicating with other users, e.g. chatting
    • G10H2220/091: Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; details of user interactions therewith
    • G10H2220/201: User input interfaces for electrophonic musical instruments for movement interpretation, i.e. capturing and recognizing a gesture or a specific kind of movement, e.g. to control a musical instrument
    • G10H2220/211: User input interfaces for electrophonic musical instruments for microphones, i.e. control of musical parameters either directly from microphone signals or by physically associated peripherals, e.g. karaoke control switches or rhythm sensing accelerometer within the microphone casing
    • G10H2220/321: Garment sensors, i.e. musical control means with trigger surfaces or joint angle sensors, worn as a garment by the player, e.g. bracelet, intelligent clothing
    • G10H2220/326: Control glove or other hand or palm-attached control device

Definitions

  • the present application belongs to the field of computer technology, and in particular, relates to a collaborative performance method, system, terminal device and storage medium.
  • Embodiments of the present application provide a collaborative performance method, system, terminal device, and storage medium to solve the problem in the prior art of how to easily and effectively enable people in different regions to perform music collaboratively.
  • a first aspect of the embodiments of the present application provides a collaborative performance method, where the method is applied to a first terminal corresponding to a first user, including: sending collaborative performance invitation information, where the collaborative performance invitation information includes a performance piece and performance scene information;
  • after acceptance invitation information returned by the second user through the second terminal is obtained, loading the target virtual reality scene corresponding to the performance scene information from the server together with the second terminal, where the target virtual reality scene is the virtual reality scene determined by the server according to the collaborative performance invitation information; and
  • displaying the target virtual reality scene, and instructing the first user to perform the performance piece together with the second user in the target virtual reality scene.
  • a second aspect of the embodiments of the present application provides another collaborative performance method, where the method is applied to the second terminal corresponding to the second user, including: receiving the collaborative performance invitation information sent by the first terminal, where the collaborative performance invitation information includes a performance piece and performance scene information;
  • loading the target virtual reality scene from the server together with the first terminal, where the target virtual reality scene is the virtual reality scene determined by the server according to the collaborative performance invitation information; and
  • displaying the target virtual reality scene, and instructing the second user to perform the performance piece together with the first user in the target virtual reality scene.
  • a third aspect of the embodiments of the present application provides a collaborative performance system, where the system includes a first terminal corresponding to a first user, a second terminal corresponding to a second user, and a server;
  • the first terminal is configured to execute the collaborative performance method described in the first aspect;
  • the second terminal is configured to execute the collaborative performance method described in the second aspect; and
  • the server is configured to receive the collaborative performance invitation information, determine a target virtual reality scene according to the collaborative performance invitation information, and transmit data of the target virtual reality scene to the first terminal and the second terminal.
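The three-party interaction described in these aspects (first terminal invites, second terminal accepts, server determines and serves the scene) can be sketched as below. This is a minimal illustration only; every class and method name is an assumption of this sketch, not part of the application.

```python
from dataclasses import dataclass


@dataclass
class Invitation:
    """Collaborative performance invitation (fields per the first aspect)."""
    performance_piece: str
    scene_info: str  # e.g. "karaoke hall", "grassland", "seaside"


class Server:
    """Relays invitations and serves the target virtual reality scene."""

    def __init__(self, scene_library):
        self.scene_library = scene_library  # scene_info -> scene data

    def determine_scene(self, invitation):
        # Determine the VR scene matching the performance scene information.
        return self.scene_library[invitation.scene_info]


class Terminal:
    def __init__(self, user, server):
        self.user, self.server, self.scene = user, server, None

    def load_scene(self, invitation):
        # Load the target scene the server determined for this invitation.
        self.scene = self.server.determine_scene(invitation)


# First terminal invites; second terminal accepts; both load the same scene.
server = Server({"karaoke hall": {"env": "karaoke hall", "avatars": []}})
first, second = Terminal("user A", server), Terminal("user B", server)
invite = Invitation("duet piece", "karaoke hall")
accepted = True  # acceptance invitation information returned via the second terminal
if accepted:
    first.load_scene(invite)
    second.load_scene(invite)
assert first.scene is second.scene  # both terminals render one shared target scene
```

The key property the claims rely on is that both terminals end up rendering the same server-determined scene, which the final assertion checks.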
  • a fourth aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, causes the terminal device to implement the steps of the collaborative performance method described in the first aspect or the second aspect.
  • a fifth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes a terminal device to implement the steps of the collaborative performance method described in the first aspect or the second aspect.
  • a sixth aspect of the embodiments of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to execute the steps of the collaborative performance method described in the first aspect or the second aspect.
  • in the embodiments of the present application, the first user sends a collaborative performance invitation through the first terminal and, after obtaining the acceptance invitation information returned by the second user through the second terminal, loads from the server, together with the second terminal, the virtual reality scene determined according to the collaborative performance invitation information (that is, the target virtual reality scene); the first terminal then displays the target virtual reality scene and instructs the first user to perform the repertoire in collaboration with the second user in that scene. Since the second user can be invited according to the collaborative performance invitation information and the target virtual reality scene can be loaded from the server, the first user and the second user can collaboratively play the specified performance piece in the target virtual reality scene. This conveniently and effectively realizes remote musical entertainment interaction: people in different regions do not need to travel to the same place and, based on virtual reality technology, can easily and effectively achieve an immersive remote collaborative music performance.
  • FIG. 1 is a schematic diagram of an application scenario of a collaborative performance method provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of the implementation of the first collaborative performance method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a virtual electronic striking pad provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a virtual MIDI keyboard provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a virtual string provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a virtual hole provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of musical instrument operation prompt information corresponding to a virtual hole, provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the extraction of rhythm tones and melody tones provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of the implementation of the second collaborative performance method provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a collaborative performance system provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another collaborative performance system provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a first terminal provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a second terminal provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a terminal device provided by an embodiment of the present application.
  • the collaborative performance method provided by the embodiments of the present application includes: the first user sends a collaborative performance invitation through the first terminal; after obtaining the acceptance invitation information returned by the second user through the second terminal, the first terminal loads the target virtual reality scene constructed according to the collaborative performance invitation information from the server together with the second terminal, and then displays the target virtual reality scene and instructs the first user to perform the repertoire in collaboration with the second user in the target virtual reality scene.
  • since the second user can be invited according to the collaborative performance invitation information and the target virtual reality scene can be loaded from the server, the first user and the second user can collaboratively play the specified performance piece in the target virtual reality scene, conveniently and effectively realizing remote musical entertainment interaction, so that people in different regions can communicate musically based on virtual reality technology.
  • FIG. 1 is a schematic diagram of an application scenario of a collaborative performance method provided by an embodiment of the present application, including a server and multiple users with their corresponding terminals and interactive devices (which may include a head-mounted display device, earphones, a microphone, a handle/data gloves, and the like).
  • when users want to perform collaboratively, the first user (that is, the inviter of the collaborative performance) sends collaborative performance invitation information through the first terminal to the server, so that the collaborative performance invitation information is transmitted directly or indirectly through the server to the terminal (referred to as the second terminal) corresponding to at least one second user (that is, the other users invited by the first user); then, if the second user accepts the invitation and sends acceptance invitation information through the second terminal, the first terminal loads the target virtual reality scene from the server together with the second terminal;
  • the first terminal shows the first user the target virtual reality scene and instructs the first user to play the preset performance repertoire in collaboration with the second user in the target virtual reality scene (the first terminal may specifically output the information of the target virtual reality scene and the instruction information to a head-mounted display device worn by the first user, so as to display the scene and instruct the first user); similarly, the second terminal displays the target virtual reality scene to the second user and instructs the second user to perform the repertoire collaboratively with the first user in the target virtual reality scene, thereby completing the remote collaborative performance of the first user and the second user, so that users in different regions can realize remote musical entertainment communication based on virtual reality technology.
  • FIG. 2 shows a schematic flowchart of a first collaborative performance method provided by an embodiment of the present application. The method is applied to a first terminal, and the details are as follows:
  • in this embodiment, the first user is the inviter of the current collaborative performance, the second user is a user invited by the first user, the terminal device used by the first user is the first terminal, and the terminal device used by the second user is the second terminal.
  • it should be noted that the first user, the second user, the first terminal, and the second terminal are described only for distinction: any user who wants to become an inviter can act as the first user and, with his or her corresponding terminal device as the first terminal, perform the following steps S201 to S203.
  • any one or more users other than the first user can act as the second user and, through their corresponding second terminals, accept the collaborative performance invitation information sent by the first user through the first terminal, so as to realize the remote collaborative performance of the first user and the second user.
  • in S201, collaborative performance invitation information is sent, where the collaborative performance invitation information includes a performance piece and performance scene information.
  • the first user can log in to the client program preset on the first terminal with his or her own user account, so that the first terminal can access the collaborative performance system. The first user can then configure the collaborative performance on the first terminal, generate collaborative performance invitation information, and send the collaborative performance invitation information to the server or to the second terminal corresponding to the second user.
  • the collaborative performance invitation information includes at least a performance piece and performance scene information, where the performance scene information sets the virtual environment of the virtual reality scene; the virtual environment may include a karaoke hall, grassland, seaside, and the like.
  • the first terminal can load the repertoire library and the virtual environment library pre-stored on the server, select the current performance piece from the repertoire library, and select the current virtual environment from the virtual environment library, thereby generating the collaborative performance invitation information.
  • in addition, the collaborative performance invitation information may also include performance difficulty information, account information of the invited second user, the number of users in this collaborative performance, the authority of the second user, the users' avatar setting information, performance form setting information, and so on.
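The fields enumerated above could be grouped into a single invitation structure; the sketch below is illustrative only, and every field name is an assumption of this sketch rather than a term defined by the application.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class CollaborativeInvitation:
    # Required fields (always present per the embodiment):
    performance_piece: str   # chosen from the repertoire library
    performance_scene: str   # chosen from the virtual environment library
    # Optional fields the invitation information may also carry:
    difficulty: Optional[str] = None
    invited_accounts: List[str] = field(default_factory=list)
    user_count: int = 2
    second_user_authority: Optional[str] = None
    avatar_settings: dict = field(default_factory=dict)
    performance_form_settings: dict = field(default_factory=dict)


# A minimal invitation carrying only the required fields plus one invitee.
invite = CollaborativeInvitation(
    performance_piece="duet piece",
    performance_scene="seaside",
    invited_accounts=["second_user"],
)
```

Keeping the two mandatory fields required and the rest optional mirrors the split the description draws between what the invitation "includes at least" and what it "may also include".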
  • the second terminal corresponding to the second user may directly establish a communication connection with the first terminal and directly receive the collaborative performance invitation information sent by the first terminal; alternatively, the first terminal may send the collaborative performance invitation information to the server, which then forwards it to the second terminal. After the second terminal obtains the collaborative performance invitation information, if the second user accepts the invitation, the second terminal sends the acceptance invitation information directly to the first terminal, or returns it to the first terminal indirectly through the server.
  • after the first terminal obtains the acceptance invitation information returned by the second user through the second terminal, the first terminal loads the target virtual reality scene corresponding to the above-mentioned performance scene information from the server together with the second terminal. Specifically, after acquiring the acceptance invitation information, the first terminal may instruct the second terminal to load the target virtual reality scene while itself loading the scene from the server; alternatively, the second terminal automatically loads the target virtual reality scene from the server after sending the acceptance invitation information, and the first terminal loads the scene from the server after acquiring the acceptance invitation information.
  • the target virtual reality scene is a virtual reality scene determined by the server according to the collaborative performance invitation information.
  • in some embodiments, the server pre-stores a variety of virtual reality scenes including virtual environments such as a karaoke hall, grassland, and seaside. The server selects a virtual reality scene from the pre-stored scenes according to the performance scene information in the collaborative performance invitation information, and adds the virtual avatar and virtual performance device corresponding to the first user and the virtual avatar and virtual performance device corresponding to the second user, thereby generating the target virtual reality scene.
  • in some embodiments, the collaborative performance invitation information includes, in addition to the performance piece and performance scene information, the users' avatar setting information and performance form setting information. The avatar setting information includes the virtual avatars selected for the first user and the second user, and the performance form setting information includes the performance form selected for the first user with the information of its corresponding virtual performance device, and the performance form selected for the second user with the information of its corresponding virtual performance device, where a virtual performance device may include any one or more of a virtual microphone, a virtual musical instrument, and a virtual simple performance device. After the first terminal sends the collaborative performance invitation information to the server in step S201, the server selects the current virtual reality scene from its pre-stored virtual reality scene library according to the performance scene information contained in the invitation information, adds the corresponding virtual avatars to the current virtual reality scene according to the avatar setting information, and adds the corresponding virtual performance devices according to the performance form setting information, so as to obtain the target virtual reality scene; after receiving loading requests from the first terminal and the second terminal, the server sends the data of the target virtual reality scene.
  • in other embodiments, the collaborative performance invitation information includes only the performance piece and performance scene information. The server selects the current virtual reality scene from the pre-stored virtual reality scene library according to the collaborative performance invitation information. After the first terminal obtains the acceptance invitation information returned by the second terminal, the first user sends the information of his or her own virtual avatar and virtual performance device to the server through the first terminal, and the second user sends the information of his or her own virtual avatar and virtual performance device to the server through the second terminal; the server then adds to the selected current virtual reality scene the virtual avatar and virtual performance device corresponding to the first user and the virtual avatar and virtual performance device corresponding to the second user, thereby obtaining the target virtual reality scene for subsequent loading by the first terminal and the second terminal.
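Both embodiments end with the server assembling the target scene from a base environment plus per-user avatars and virtual performance devices. A minimal sketch of that assembly step follows; the library contents, function name, and field names are assumptions of this sketch.

```python
# Hypothetical pre-stored virtual reality scene library on the server.
SCENE_LIBRARY = {
    "karaoke hall": {"environment": "karaoke hall"},
    "grassland": {"environment": "grassland"},
    "seaside": {"environment": "seaside"},
}


def build_target_scene(scene_info, user_settings):
    """Select a base scene by performance scene information, then add each
    user's virtual avatar and virtual performance device (e.g. a virtual
    microphone or virtual instrument) to produce the target scene."""
    scene = dict(SCENE_LIBRARY[scene_info])  # copy the base environment
    scene["performers"] = [
        {"user": user, "avatar": s["avatar"], "device": s["device"]}
        for user, s in user_settings.items()
    ]
    return scene


target = build_target_scene("grassland", {
    "first_user": {"avatar": "avatar_a", "device": "virtual MIDI keyboard"},
    "second_user": {"avatar": "avatar_b", "device": "virtual microphone"},
})
```

In the first embodiment `user_settings` would come bundled inside the invitation information; in the second, each terminal uploads its own entry after the invitation is accepted. The assembly itself is the same either way.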
  • the target virtual reality scene is displayed, and the first user is instructed to perform the performance piece in collaboration with the second user in the target virtual reality scene.
  • in this embodiment, the first terminal and the second terminal are both connected to corresponding interactive devices such as a head-mounted display device, earphones, a microphone, a handle, and data gloves.
• the first terminal outputs the visual image information of the target virtual reality scene to the head-mounted display device worn by the first user, so as to show the first user the target virtual reality scene containing the virtual environment, the virtual performance devices, and the virtual avatars of the first user and the second user; by adding, to the visual image information, instruction information for instructing the first user to play, the first user is instructed to perform the performance actions through an interactive device such as a handle, a data glove, or a microphone, so that the first user and the second user can perform the performance piece collaboratively in the target virtual reality scene.
  • the collaborative performance invitation information further includes performance form setting information for setting the performance form of the first user and the performance form of the second user, and the performance forms include singing and musical instrument performance;
  • step S203 includes:
  • S20301 Display the target virtual reality scene, and display performance prompt information according to the performance form setting information and the performance repertoire;
• S20302 Acquire feedback information generated by the first user and the second user performing performance actions in the target virtual reality scene according to the performance prompt information, and output the feedback information, so that the first user can perform the repertoire in collaboration with the second user in the target virtual reality scene.
• the collaborative performance invitation information specifically includes performance form setting information, which sets the performance form of the first user and the performance form of the second user; the performance forms may include singing and musical instrument performance.
• when the performance forms of the first user and the second user are both singing, the current collaborative performance form is specifically the chorus form; when both are musical instrument performance, the current collaborative performance form is specifically the ensemble form; when the performance forms of the first user and the second user include both singing and musical instrument performance, the current collaborative performance form is specifically the accompaniment form.
• In step S20301, after the first terminal presents the target virtual reality scene to the first user, the corresponding performance prompt information is determined and displayed according to the performance form setting information and the performance repertoire. If the performance form of the first user set in the performance form setting information is singing, the lyrics information corresponding to the performance piece is obtained as the current performance prompt information; if the performance form of the first user set in the performance form setting information is musical instrument performance, the musical instrument operation prompt information corresponding to the performance piece is obtained as the current performance prompt information, thereby instructing the first user to perform the corresponding performance actions.
• the first terminal obtains, through the interactive device, the performance action performed by the first user according to the performance prompt information, generates feedback information corresponding to the performance action of the first user through its own calculation or with the help of the server's calculation and outputs it, and receives and outputs feedback information generated by the server or the second terminal according to the performance action performed by the second user.
• the feedback information includes at least visual feedback information and auditory feedback information, and may also include force feedback information; the visual feedback information can be output through a head-mounted display device, the auditory feedback information can be output through an earphone, and the force feedback information can be output through a data glove or a handle.
• the first terminal acquires the information of the performance actions of the first user, the second terminal acquires the information of the performance actions of the second user, and both are uploaded to the server; the server performs calculations according to the information of the performance actions (including position, gesture, acceleration and other information) to generate the corresponding feedback information and sends it to the first terminal and the second terminal; the first terminal outputs the feedback information to the interactive device worn by the first user, and the second terminal outputs the feedback information to the interactive device worn by the second user.
• the steps for the server to generate the feedback information according to the information of the performance actions may be as follows: (1) according to the information of the performance action, perform collision detection and calculation by the spatial decomposition method or the hierarchical bounding box method to obtain a collision detection result; the result includes the contact position of the user's virtual hand and the virtual object in the virtual reality scene, calculated according to the position of the playing action, and the force on the virtual object, calculated according to the gesture and acceleration of the playing action; (2) perform deformation calculation of the virtual object according to the collision detection result to obtain the visual feedback information; (3) determine the user's playing effect on the virtual performance device according to the collision detection result, and obtain the corresponding audio information as the auditory feedback information; (4) according to the collision detection result, combined with Newton's third law and the physical properties of the virtual objects, perform force feedback calculations to obtain the force feedback information.
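The four-step pipeline above can be sketched in a few lines. This is a hedged, minimal illustration only: the class and function names, the axis-aligned bounding box used as a stand-in for the hierarchical bounding-box method, and all constants are assumptions for the sketch, not details from the patent.

```python
from dataclasses import dataclass

@dataclass
class PlayAction:
    position: tuple      # (x, y, z) of the user's virtual hand
    acceleration: float  # magnitude of the striking motion

@dataclass
class VirtualObject:
    aabb_min: tuple      # axis-aligned bounding box: a simplified
    aabb_max: tuple      # stand-in for the hierarchical bounding-box method
    stiffness: float     # physical property used in the force calculation

def collides(action: PlayAction, obj: VirtualObject) -> bool:
    """Step (1): collision test between the hand position and the object box."""
    return all(lo <= p <= hi for p, lo, hi in
               zip(action.position, obj.aabb_min, obj.aabb_max))

def generate_feedback(action: PlayAction, obj: VirtualObject):
    """Steps (2)-(4): derive visual, auditory, and force feedback from contact."""
    if not collides(action, obj):
        return None
    force = obj.stiffness * action.acceleration          # contact force estimate
    visual = {"deform_depth": force / (obj.stiffness + 1.0)}    # step (2)
    auditory = {"volume": min(1.0, action.acceleration / 10)}   # step (3)
    haptic = {"reaction_force": force}  # step (4): equal, opposite reaction
    return visual, auditory, haptic
```

In a real system each step would be far more involved (mesh deformation, audio lookup, device limits); the sketch only shows the data flow from one detected collision to the three feedback channels.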
• the user can be guided by the performance prompt information to accurately execute the corresponding performance actions, thereby reducing the professional performance requirements for users and improving the universal applicability and ease of operation of the collaborative performance method; and, because the feedback information generated by the performance actions performed by the first user and the second user can be obtained and output, the user can obtain the feedback effect of the current collaborative performance in time, thus enhancing the sense of realism and immersion of the first user when performing the collaborative performance with the second user.
  • the performance prompt information includes lyrics prompt information
• A1 Acquire the first response data generated by the first user singing in the target virtual reality scene according to the lyrics prompt information and transmit it to the second terminal, and acquire the second response data, sent by the second terminal, generated by the second user performing the performance action;
  • A2 Generate feedback information according to the first response data and the second response data, and output the feedback information; the feedback information includes auditory feedback information and visual feedback information.
• when the performance form of the first user set in the performance form setting information is singing, the currently displayed performance prompt information includes lyrics prompt information.
• the first terminal loads the lyrics data corresponding to the current performance piece from the server, and outputs the lyrics data to the head-mounted display device worn by the first user in time sequence according to the current performance progress.
  • the lyric data includes at least lyric text information, and may also include prompt information such as vocalization, pitch, and rhythm corresponding to the lyrics.
• Step S20302 includes steps A1 to A2; specifically:
• the first user sings the above-mentioned performance piece according to the lyrics prompt information; the first terminal captures the sound of the first user singing the performance piece through the microphone and generates corresponding singing audio data; while the sound is captured, the pre-stored singing dynamic image data for displaying the virtual avatar of the first user is acquired, and the singing audio data and the dynamic image data are used as the first response data.
• the first terminal also receives the second response data, sent by the second terminal, generated according to the performance action of the second user; the second response data may include audio data generated by the second user singing or playing the virtual musical instrument, and dynamic image data of the second user's virtual avatar singing or performing on a musical instrument.
• the first terminal generates and outputs the corresponding feedback information according to the acquired first response data and second response data. Specifically, the auditory feedback information is generated according to the audio data in the first response data and the audio data in the second response data, and is output to the earphone worn by the first user; the visual feedback information is generated according to the singing dynamic image data of the first user's virtual avatar in the first response data and the singing dynamic image data or musical instrument performance dynamic image data of the second user in the second response data, and is output to the head-mounted display device worn by the first user.
• when the performance form is specifically singing, the user can be guided to sing and provided with corresponding feedback information through the lyrics prompt information in the target virtual reality scene, so as to accurately and effectively realize remote chorus and accompaniment of users in the virtual reality scene.
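Generating the auditory feedback from the two audio sources amounts to mixing the first user's captured stream with the second user's received stream. A minimal sketch, assuming both streams are normalized PCM sample lists in [-1.0, 1.0] (the representation and the hard-clipping policy are illustrative assumptions, not from the patent):

```python
def mix_streams(first: list, second: list) -> list:
    """Mix two normalized PCM sample streams into one auditory-feedback
    stream by zero-padding to equal length, summing, and hard-clipping."""
    n = max(len(first), len(second))
    pad = lambda s: s + [0.0] * (n - len(s))  # pad the shorter stream
    return [max(-1.0, min(1.0, a + b))
            for a, b in zip(pad(first), pad(second))]
```

A production mixer would resample, buffer against network jitter, and apply gain control rather than clipping, but the shape of the operation is the same.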
• the target virtual reality scene includes a virtual musical instrument or a virtual simple performance device corresponding to the target musical instrument, and the performance prompt information includes the musical instrument operation prompt information marked on the virtual musical instrument; step S20302 includes:
• B1 Acquire the first response data generated by the first user operating the virtual musical instrument or the virtual simple performance device in the target virtual reality scene according to the musical instrument operation prompt information and transmit it to the second terminal, and acquire the second response data, sent by the second terminal, generated by the second user performing the performance action;
  • B2 Generate feedback information according to the first response data and the second response data, and output the feedback information, where the feedback information includes auditory feedback information, visual feedback information, and force feedback information.
  • the performance form of the first user specifically set by the performance form setting information is musical instrument performance.
  • the performance form setting information also sets the target musical instrument to be played by the first user.
• the target musical instruments may include percussion instruments such as gongs and drums; keyboard instruments such as the piano and the electronic organ; stringed instruments such as the violin, cello, guqin, and zither; and wind instruments.
  • the current target virtual reality scene includes a virtual musical instrument or a virtual simple performance device corresponding to the target musical instrument.
  • the virtual musical instrument is a virtual object having the form of a target musical instrument in the target virtual reality scene (that is, a three-dimensional model constructed by imitating the actual target musical instrument).
  • the virtual simple performance device is a virtual object in the form of a simple performance device for playing the target musical instrument in the target virtual reality scene.
• the simple performance device can simplify the user's operation of the target musical instrument, so that users who do not understand musical instruments can also perform them simply and effectively.
  • the simple performance device can be an electronic percussion pad
  • the virtual simple performance device is a virtual electronic percussion pad, the schematic diagram of which is shown in Figure 3;
• when the target musical instrument selected by the user is a drum kit, multiple hitting positions on the drum kit are mapped to the virtual electronic percussion pad, so that the user can tap the virtual electronic percussion pad in the target virtual reality scene to produce the same sound effects as operating the drum kit; compared with the drum kit, the percussion pad has fewer operating positions and lower requirements on the striking action, so the complexity of the user's operation of the musical instrument can be reduced.
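The position mapping described above can be sketched as a simple lookup table. The drum names and pad numbers below are assumptions made for the illustration (the patent only says the pads are numbered, e.g. 1 to 6), not a mapping taken from the source:

```python
# Illustrative mapping of drum-kit hitting positions onto the six pads
# of the virtual electronic percussion pad.
DRUM_TO_PAD = {
    "snare": 1, "hi_hat": 2, "kick": 3,
    "tom_high": 4, "tom_low": 5, "crash": 6,
}

def pad_hit_to_sound(pad_number: int) -> str:
    """Resolve a tap on a numbered pad back to the drum sound it stands for."""
    pad_to_drum = {v: k for k, v in DRUM_TO_PAD.items()}
    return pad_to_drum.get(pad_number, "unknown")
```

The same table shape would serve the piano-to-MIDI-keyboard mapping mentioned below: fewer virtual operating positions, each standing for one position on the real instrument.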
  • the simple performance device may be a Musical Instrument Digital Interface (MIDI) keyboard
  • the virtual simple performance device is a virtual MIDI keyboard, the schematic diagram of which is shown in FIG. 4 ;
• when the target musical instrument is a piano, multiple playing positions on the piano are mapped to the virtual MIDI keyboard, so that the user can play the virtual MIDI keyboard in the target virtual reality scene to produce sound effects consistent with playing the piano.
  • the virtual MIDI keyboard has fewer keys than a piano, which can reduce the complexity of the user's operation of the musical instrument.
• the virtual simple performance device may be composed of virtual strings in the form of several strings; exemplarily, if the current virtual stringed instrument is specifically a virtual violin, the virtual simple performance device is shown in FIG. 5 and is composed of four strings corresponding to the four tone names G, D, A, and E respectively; exemplarily, when the virtual stringed instrument is specifically a guitar, the virtual simple performance device is composed of six strings corresponding to the six tone names E, A, D, G, B, and E respectively.
  • the virtual simple performance device may be a virtual hole in the form of several round holes, including blow holes and finger holes, as shown in FIG. 6 .
• the blow hole is only used for indication, not for user operation; the position of a finger hole can be pressed by the user through an interactive device such as a data glove, and the blow hole is taken to have already taken effect by default; that is, the user can achieve the blowing-and-pressing coordination effect equivalent to an actual wind instrument only by performing the pressing operation, thereby further simplifying the user's musical instrument performance operation.
• its corresponding virtual simple performance device can be a virtual sound board, the form of which is similar to the virtual electronic percussion pad shown in Figure 3; the user only needs to tap or press the sound board to produce the sound effects of stringed instruments and wind instruments in the virtual reality scene.
• the performance prompt information in the embodiment of the present application includes the musical instrument operation prompt information marked on the virtual musical instrument or the virtual simple performance device; the musical instrument operation prompt information specifically includes the time point and the operation mode of each operation, and the operation time points follow the progress of the repertoire being played.
• the musical instrument operation prompt information can be the percussion instruction information displayed on the virtual electronic percussion pad in time sequence; for example, graphical information such as a highlight or a spark is displayed on one of the pads numbered 1 to 6 to instruct the user to strike that pad.
• the musical instrument operation prompt information can be the pressing instruction information displayed on the virtual MIDI keyboard in time sequence; for example, a highlighted or raised graphic is displayed on a key of the MIDI keyboard to instruct the user to press that key.
• the musical instrument operation prompt information can be the strumming/plucking/pressing instruction information displayed on the virtual strings in time sequence; for example, the virtual string that currently needs to be operated is highlighted, and the text prompt "play", "pluck", or "press" is displayed to instruct the user to strum/pluck/press the corresponding virtual string.
• the musical instrument operation prompt information can specifically be that the original hollow state of a finger hole is transformed into a solid state in time sequence; FIG. 7 shows a schematic diagram of the finger hole pressing corresponding to the seven roll calls (do, re, mi, fa, sol, la, si).
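All of the prompt variants above share one structure: a time-sequenced schedule pairing each operation time point with a target element and an operation mode. A minimal sketch, with illustrative timings and element names (none of these values come from the patent):

```python
# Each entry: (time point in seconds into the repertoire, target element,
# operation mode) — covering the pad, string, and finger-hole prompt forms.
PROMPTS = [
    (1.0, "pad 3", "strike"),        # highlight pad 3 of the percussion pad
    (2.5, "string G", "pluck"),      # show "pluck" on the G string
    (4.0, "finger hole 2", "press"), # render hollow finger hole 2 as solid
]

def active_prompts(now: float, window: float = 0.5):
    """Return the prompts that should be displayed at playback time `now`;
    each prompt stays visible for `window` seconds."""
    return [(elem, mode) for t, elem, mode in PROMPTS
            if t <= now < t + window]
```

The terminal would call `active_prompts` each frame with the current performance progress and render the returned highlights, text, or state changes.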
• In step B1, the first user operates the virtual musical instrument in the target virtual reality scene through the interactive device (a handle or a data glove) according to the musical instrument operation prompt information marked on the virtual musical instrument or the virtual simple performance device, so as to perform the musical instrument performance.
• the first terminal captures the action information of the first user operating the virtual musical instrument or the virtual simple performance device through the interactive device, and calculates the corresponding first response data from the action information through its own response data determination algorithm, or sends the action information to the server, which calculates the corresponding first response data through the response data determination algorithm and returns it to the first terminal.
  • the first terminal also receives second response data generated by the second user performing the performance action and sent by the second terminal or the server.
• the first terminal combines the first response data generated by the first user operating the virtual musical instrument with the second response data generated by the second user performing the performance action to determine and output the feedback information that needs to be fed back to the first user.
• the feedback information also includes information on the reaction force of the first user operating the virtual musical instrument or the virtual simple performance device, that is, force feedback information.
• the first terminal outputs the auditory feedback information to the earphone worn by the first user, outputs the visual feedback information to the head-mounted display device worn by the first user, and outputs the force feedback information to the handle held, or the data glove worn, by the first user, so as to accurately deliver the feedback information to the first user.
• when the performance form is musical instrument performance, the user can be instructed, through the musical instrument operation prompt information marked on the virtual musical instrument or the virtual simple performance device, to operate the virtual musical instrument or the virtual simple performance device in the target virtual reality scene through the interactive device, which enables non-professional users to conveniently and accurately perform in the target virtual reality scene; and, since auditory feedback information, visual feedback information, and force feedback information are fed back to the user, the sense of realism and immersion of the first user when performing the collaborative performance with the second user can be enhanced.
• obtaining the first response data generated by the first user operating the virtual musical instrument or the virtual simple performance device in the virtual reality scene according to the musical instrument operation prompt information, and transmitting it to the second terminal, includes:
  • B11 Acquire the action information of the first user operating the virtual musical instrument or the virtual simple performance device in the target virtual reality scene according to the musical instrument operation prompt information;
• B12 Generate the first visual response data and the first force-sense response data according to the action information;
• B13 When the action information matches the musical instrument operation prompt information, generate the first sound response data according to pre-stored sound source information; the pre-stored sound source information is the sound effect performed by the target musical instrument, and the sound effect includes a rhythm sound or a melody sound;
• B14 Transmit the first visual response data and the first sound response data to the second terminal;
• step B2 includes: generating the auditory feedback information, the visual feedback information, and the force feedback information according to the first response data and the second response data, and outputting them.
  • the first response data specifically includes first sound response data, first visual response data, and first force-sensing response data.
  • the first terminal obtains the sensor data on the interactive device to determine the action information of the first user operating the virtual musical instrument or the virtual simple performance device in the target virtual reality scene through the interactive device according to the musical instrument operation prompt information .
  • the motion information includes motion position, motion gesture, and motion acceleration information.
• the first terminal determines, according to the action information and through its own collision detection algorithm or that of the server (for example, the spatial decomposition method or the hierarchical bounding box method), the collision detection result generated by the first user's performance with virtual objects such as the virtual musical instrument or the virtual simple performance device in the target virtual reality scene; then, according to the collision detection result and a deformation algorithm, the first visual response data is determined, which represents the deformation result brought to the virtual object in the target virtual reality scene by the first user operating the virtual musical instrument or the virtual simple performance device, and the deformation can be drawn with the NURBS interface provided by the OpenGL API; and, according to the collision detection result and a force feedback algorithm (such as the mass-spring model algorithm or the finite element method), the first force-sense response data is determined, which is data representing the reaction force of the virtual musical instrument obtained after the first user operates the virtual musical instrument and applies force to it.
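For a single contact point, the mass-spring force feedback algorithm named above reduces to Hooke's law: the reaction force opposing the hand grows linearly with the penetration depth into the virtual object. A hedged sketch, with illustrative stiffness and clamping constants (real haptic devices have hard output limits, so the force is capped):

```python
def spring_reaction_force(penetration_depth: float,
                          stiffness: float = 50.0,
                          max_force: float = 10.0) -> float:
    """First force-sense response for one contact: reaction force (in N)
    opposing the user's hand, clamped to the device's output limit.
    Constants are illustrative, not values from the patent."""
    if penetration_depth <= 0.0:   # no contact, no force
        return 0.0
    return min(max_force, stiffness * penetration_depth)
```

The finite element method mentioned as the alternative would compute the same reaction force from a full deformable mesh rather than a single spring, at much higher cost.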
  • the pre-stored sound source information is a pre-stored sound effect played by the target musical instrument, and the sound effect may be a rhythm sound of a percussion instrument, or a melody sound of a keyboard instrument, a stringed instrument, a wind instrument, or the like.
• the pre-stored sound source information is obtained by extracting, in time sequence, the rhythm sound and/or the melody sound from the pre-stored audio file corresponding to the performance piece; the pre-stored audio file is an audio file obtained by collecting in advance the sound of at least one actual musical instrument playing the performance piece, and the at least one actual instrument includes at least the target instrument.
• the server stores the pre-stored audio file of the currently played repertoire, extracts in time sequence the rhythm sounds or melody sounds played by each instrument at each moment in the pre-stored audio file, and obtains the pre-stored sound source information arranged in time sequence; when the action information performed by the user matches the musical instrument operation prompt information, the pre-stored sound source information corresponding to the target musical instrument at that moment is acquired as the current first sound response data.
• exemplarily, suppose the pre-stored audio file of the current performance piece includes the rhythm sound of a drum kit playing the piece and the melody sound of a piano playing the piece, and the rhythm sound and melody sound at the performance time of 1'20" (1 minute 20 seconds) are extracted from the pre-stored audio file and saved separately, obtaining the rhythm sound that appears at 1'20" when the drum kit plays the piece and the melody sound that appears at 1'20" when the piano plays the piece. Thereafter, when the user selects this piece and selects the drum kit as the target instrument, if the user's action information at 1'20" matches the striking position indicated by the musical instrument operation prompt information, the rhythm sound of 1'20" stored in advance is taken as the current pre-stored sound source information to generate the first sound response data; or, when the user selects the piano as the target instrument, if the user's action information at 1'20" matches the playing position indicated by the musical instrument operation prompt information, the melody sound of 1'20" stored in advance is taken as the current pre-stored sound source information to generate the first sound response data.
• the process of extracting and saving the rhythm sound and melody sound at 1'30" (1 minute 30 seconds) shown in FIG. 8 is the same as that at 1'20". It should be understood that FIG. 8 only illustrates the rhythm sound and melody sound at two performance moments; the extraction and saving process for the rhythm sounds and melody sounds of the other performance moments not shown in the figure is similar.
• the method of saving the rhythm sound and melody sound as pre-stored sound source information and subsequently using the pre-stored sound source information as the generated first sound response information can be called a "keying restoration" method, through which the sound of an actual musical instrument playing the current piece can be accurately restored.
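The "keying restoration" lookup described above amounts to a time-indexed table of pre-extracted clips, keyed by instrument and performance moment, from which a clip is released only when the user's action matches the prompt. A minimal sketch; the timestamps, instrument keys, and clip file names are all illustrative assumptions:

```python
# Pre-stored sound source information: rhythm/melody sounds extracted from
# the pre-stored audio file, indexed by (instrument, performance second).
PRESTORED_SOURCE = {
    ("drum_kit", 80): "rhythm_0120.wav",   # 1'20" = 80 s into the piece
    ("piano",    80): "melody_0120.wav",
    ("drum_kit", 90): "rhythm_0130.wav",   # 1'30" = 90 s
    ("piano",    90): "melody_0130.wav",
}

def first_sound_response(instrument: str, t_seconds: int,
                         action_matches_prompt: bool):
    """Return the pre-stored clip for this instrument and moment, or None
    when the action missed the prompt (no sound is restored)."""
    if not action_matches_prompt:
        return None
    return PRESTORED_SOURCE.get((instrument, t_seconds))
```

Because the clips come from a recording of an actual instrument playing the piece, a matched action reproduces the real instrument's sound exactly, which is the point of the "keying restoration" method.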
• the above-mentioned pre-stored sound source information can also be downloaded from a sound source database on the network, or can be obtained by synthesizing the sound effect of the target instrument playing the piece through music editing software such as the professional audio editing software Adobe Audition or Fruity Loops Studio (FL Studio).
• In step B14, since the second user performing collaboratively with the first user needs to acquire the visual and auditory effects of the first user playing the virtual musical instrument to enhance the realism of the collaborative performance, the first visual response data and the first sound response data generated above are transmitted, directly or indirectly through the server, to the second terminal, so that the second terminal generates corresponding feedback information to feed back to the second user; the first force-sense response data, being the reaction force data received by the first user operating the virtual musical instrument, only needs to be fed back to the first user subsequently and does not need to be transmitted to the second user.
• the first terminal combines the first sound response data determined in step B13 with the second sound response data contained in the second response data (that is, the sound response data generated by the second user performing the performance action) to obtain the auditory feedback information, and outputs it to the earphone worn by the first user, so that the user can obtain the auditory feedback information; combines the first visual response data determined in step B12 with the second visual response data contained in the second response data to obtain the visual feedback information, and outputs it to the head-mounted display device worn by the first user, so that the user can obtain the visual feedback information; and outputs the first force-sense response data determined in step B12 directly, as the force feedback information, to the handle operated or the data glove worn by the first user, so that the user can obtain the force feedback information.
• by acquiring the action information of the first user operating the virtual musical instrument through the interactive device, the corresponding first visual response data and first force-sense response data are accurately generated, and the first sound response data is generated in combination with the pre-stored sound source information; these are then combined with the second response data received by the first terminal to accurately output the auditory feedback information, the visual feedback information, and the force feedback information respectively, so the accuracy of the feedback information output can be improved, further enhancing the first user's sense of realism when performing the collaborative performance with the second user.
• since the pre-stored sound source information is specifically the rhythm sound or melody sound extracted in time sequence from the pre-stored audio file corresponding to the performance piece, the sound of the target musical instrument playing the current piece can be accurately restored through the "keying restoration" method.
• After step S203, the method further includes:
  • the first terminal statistically generates and outputs performance evaluation data of the first user according to the performance information recorded during the collaborative performance of the first user.
• alternatively, the first terminal uploads the recorded information on the performance actions of the first user during the collaborative performance to the server, so that the server performs statistical analysis on the information on the performance actions of the first user recorded by the first terminal and the information on the performance actions of the second user recorded by the second terminal, obtaining the overall performance evaluation data of the collaborative performance; the first terminal and the second terminal then respectively obtain and output the overall performance evaluation data from the server.
• when the performance form of the user is singing, the recorded performance action information includes the pitch information and rhythm information recorded in time sequence while the user sings, which are compared with the pre-stored pitch information and pre-stored rhythm information of the repertoire to obtain the user's singing score as the performance evaluation data.
• when the performance form of the user is musical instrument performance, the recorded performance action information includes the action information and action frequency when the user operates the virtual musical instrument; the operation accuracy (the percentage of the user's actions operating the virtual musical instrument that are consistent with the musical instrument operation prompt information) is calculated statistically, and this accuracy rate is used as the performance evaluation data.
• the first terminal may convert the performance evaluation data into image information and output it to the head-mounted display device worn by the first user, or convert it into audio information and output it to the earphone worn by the first user, so that the first user can obtain the performance evaluation data.
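The operation-accuracy statistic described above can be sketched as the share of prompts answered by a matching action. The timing tolerance and the (time, element) pair representation are assumptions for the illustration, not details from the patent:

```python
def operation_accuracy(actions: list, prompts: list,
                       tolerance: float = 0.3) -> float:
    """Percentage of prompts answered by an action on the right element
    within `tolerance` seconds; serves as the performance evaluation data.
    Each action and each prompt is a (time_seconds, element) pair."""
    if not prompts:
        return 100.0   # nothing was required, nothing was missed
    hits = sum(
        1 for pt, pe in prompts
        if any(abs(at - pt) <= tolerance and ae == pe for at, ae in actions)
    )
    return 100.0 * hits / len(prompts)
```

The singing score would follow the same pattern, comparing recorded pitch and rhythm sequences against the repertoire's pre-stored ones instead of comparing action positions against prompts.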
  • in this way, the first user can obtain evaluation feedback on the collaborative performance in time, grasp the result of each collaborative performance, and make corresponding improvements and targeted training, improving the intelligence and user experience of collaborative performance.
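As a rough illustration of the scoring described above (comparing recorded pitch/rhythm against the repertoire's pre-stored values, and computing an operation accuracy rate), the comparison might look like the following. This is a hedged Python sketch, not the patent's implementation; all function names, data shapes, and tolerances are hypothetical.

```python
def singing_score(recorded, reference, pitch_tol=0.5, rhythm_tol=0.05):
    """Score a sung take against pre-stored repertoire data.

    `recorded` and `reference` are lists of (pitch_in_semitones,
    onset_time_in_seconds) tuples recorded in time sequence. The score is
    the percentage of reference notes whose recorded pitch and onset both
    fall within the given tolerances (tolerances are illustrative).
    """
    if not reference:
        return 0.0
    hits = 0
    for (p, t), (rp, rt) in zip(recorded, reference):
        if abs(p - rp) <= pitch_tol and abs(t - rt) <= rhythm_tol:
            hits += 1
    return 100.0 * hits / len(reference)


def operation_accuracy(actions, prompts):
    """Accuracy rate: percentage of recorded virtual-instrument operation
    actions consistent with the instrument operation prompt information."""
    if not prompts:
        return 0.0
    hits = sum(1 for a, p in zip(actions, prompts) if a == p)
    return 100.0 * hits / len(prompts)
```

Either number could serve as the "performance evaluation data" the bullets describe; a real system would need time alignment of the recorded take before comparison.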
  • in summary, the first user sends a collaborative performance invitation through the first terminal and, after obtaining the invitation acceptance information returned by the second user through the second terminal, loads from the server, together with the second terminal, the virtual reality scene determined according to the collaborative performance invitation information (that is, the target virtual reality scene).
  • the first terminal then displays the target virtual reality scene and instructs the first user to perform the repertoire in collaboration with the second user in that scene. Since the second user can be invited according to the collaborative performance invitation information and the target virtual reality scene can be loaded from the server, the first user and the second user can collaboratively play the specified piece in the target virtual reality scene. This conveniently and effectively realizes remote music entertainment interaction, so that people in different regions can communicate musically based on virtual reality technology.
  • Embodiment 2:
  • FIG. 9 shows a schematic flowchart of a second collaborative performance method provided by an embodiment of the present application. The method is applied to a second terminal, and the details are as follows:
  • the definitions of the first user, the second user, the first terminal, and the second terminal are exactly the same as those in the previous embodiment, and will not be repeated here.
  • the collaborative performance invitation information is received, and invitation acceptance information is returned to the first terminal corresponding to the first user; the collaborative performance invitation information includes the performance piece and performance scene information.
  • the second terminal receives the collaborative performance invitation information either directly from the first terminal or indirectly through the server; after the second user inputs confirmation of acceptance, the second terminal generates the invitation acceptance information and sends it, either directly to the first terminal corresponding to the first user or indirectly back to the first terminal through the server.
  • the collaborative performance invitation information includes at least the performance piece and performance scene information, and may also include performance difficulty information, the account information of the invited second user, the number of users for this collaborative performance, the permissions of the second user, user avatar setting information, performance form setting information, and the like.
  • the specific meaning of the collaborative performance invitation information is the same as in the first embodiment; for details, refer to the relevant description in Embodiment 1.
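As a hedged illustration of the invitation payload enumerated above, the fields might be carried in a structure like the following (a Python sketch; the class name, field names, and defaults are hypothetical, not from the patent):

```python
from dataclasses import dataclass, field, asdict
from typing import Optional
import json

@dataclass
class CollaborativeInvitation:
    """One possible shape for the collaborative performance invitation
    information: only the performance piece and scene are mandatory in the
    text above; the rest are the optional fields it enumerates."""
    repertoire: str                          # performance piece
    scene: str                               # performance scene information
    difficulty: Optional[str] = None         # performance difficulty information
    invitees: list = field(default_factory=list)       # invited account(s)
    num_users: int = 2                       # number of users for this performance
    permissions: dict = field(default_factory=dict)    # per-user permissions
    avatar_settings: dict = field(default_factory=dict)
    performance_forms: dict = field(default_factory=dict)  # e.g. {"user_a": "singing"}

    def to_json(self) -> str:
        """Serialize for transmission from the first terminal."""
        return json.dumps(asdict(self))
```

The first terminal would send `to_json()` output (or an equivalent encoding) to the server or directly to the second terminal.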
  • the target virtual reality scene is the virtual reality scene determined by the server according to the collaborative performance invitation information.
  • the second terminal and the first terminal together load the target virtual reality scene corresponding to the performance scene information in the collaborative performance invitation information from the server.
  • the specific meaning of the target virtual reality scene is the same as in the first embodiment; for details, refer to the relevant description in Embodiment 1.
  • the target virtual reality scene is displayed, and the second user is instructed to perform the performance piece together with the first user in the target virtual reality scene.
  • after loading the target virtual reality scene, the second terminal outputs the scene data to the interactive devices worn by the second user and instructs the second user to perform performance actions through those devices. Specifically, the second terminal outputs the visual image information of the target virtual reality scene to the head-mounted display worn by the second user, showing the second user the target virtual reality scene containing the virtual environment, the virtual performance equipment, and the virtual avatars of the first user and the second user; instruction information added to the visual image information then directs the second user to perform performance actions through an interactive device such as a handle, data glove, or microphone, so that the first user and the second user can perform the piece together in the target virtual reality scene.
  • the collaborative performance invitation information further includes performance form setting information for setting the performance form of the first user and the performance form of the second user, and the performance forms include singing and musical instrument performance;
  • step S903 includes:
  • the performance prompt information includes lyrics prompt information
  • the acquiring feedback information generated by the first user and the second user performing performance actions in the target virtual reality scene according to the performance prompt information, and outputting the feedback information includes:
  • the feedback information includes auditory feedback information and visual feedback information.
  • when the performance form is musical instrument performance, the target virtual reality scene includes a virtual musical instrument or a virtual simple performance device corresponding to the target musical instrument, and the performance prompt information includes musical instrument operation prompt information marked on the virtual musical instrument or the virtual simple performance device;
  • the acquiring feedback information generated by the first user and the second user performing performance actions in the target virtual reality scene according to the performance prompt information, and outputting the feedback information includes:
  • the second response data includes second sound response data, second visual response data, and second force-sensing response data, and the acquisition of the first user according to the musical instrument operation prompt information in the target virtual
  • the pre-stored sound source information is the pre-stored sound effect played by the target musical instrument, and the sound effect includes a rhythm sound or a melody sound;
  • the generating feedback information according to the first response data and the second response data, and outputting the feedback information includes:
  • the force sense feedback information is output.
  • after step S903, the method further includes:
  • through the above steps, the second terminal can obtain the collaborative performance invitation information sent by the first terminal, load and display the corresponding target virtual reality scene, and instruct the second user to perform collaboratively with the first user in that scene.
  • the interaction between the second terminal and the first terminal thus effectively realizes the collaborative performance of the predetermined piece by the second user and the first user in the target virtual reality scene, conveniently and effectively enabling remote music entertainment interaction so that music lovers in different regions can communicate musically based on virtual reality technology.
  • FIG. 10 shows a schematic structural diagram of a collaborative performance system provided by an embodiment of the present application. For the convenience of description, only the part related to the embodiment of the present application is shown:
  • the collaborative performance system includes a first terminal, at least one second terminal and a server.
  • the first terminal is used to execute the collaborative performance method described in the first embodiment
  • the second terminal is used to execute the collaborative performance method described in the second embodiment.
  • the server is used for receiving the collaborative performance invitation information and determining a target virtual reality scene according to it, and for transmitting the data of the target virtual reality scene to the first terminal and the second terminal.
  • the server is a device that constructs and stores virtual reality scenes.
  • the server can pre-build, through tools such as the Virtual Reality Modeling Language (VRML), Java 3D (a set of application programming interfaces extending the Java language into 3D graphics), or the Open Graphics Library (OpenGL), and store various virtual reality scenes including virtual environments such as a karaoke hall, grassland, or seaside, as well as a pre-stored virtual performance equipment database and a virtual avatar database.
  • the server determines the corresponding virtual reality scene according to the performance scene information contained in the collaborative performance invitation information, and, according to the selections of the first user and/or the second user, selects the corresponding virtual performance equipment from the virtual performance equipment database and the corresponding virtual avatars from the virtual avatar database to add to the virtual reality scene, thereby obtaining the target virtual reality scene.
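The server-side assembly just described (a base scene chosen from the performance scene information, plus selected equipment and avatars from the two databases) can be sketched roughly as follows. This is a Python sketch with hypothetical in-memory stand-ins; the patent itself describes VRML files and real databases.

```python
# Hypothetical stand-ins for the server's stored scene, performance-equipment
# and avatar databases described in the text.
SCENE_DB = {"karaoke_hall": {"environment": "karaoke_hall"},
            "grassland":    {"environment": "grassland"},
            "seaside":      {"environment": "seaside"}}
EQUIPMENT_DB = {"piano":  {"type": "virtual_piano"},
                "guitar": {"type": "virtual_guitar"}}
AVATAR_DB = {"avatar_1": {"model": "model_1"},
             "avatar_2": {"model": "model_2"}}

def build_target_scene(scene_name, equipment_choices, avatar_choices):
    """Assemble the target virtual reality scene: take the base scene named
    by the invitation's performance scene information, then add the virtual
    performance equipment and avatars the users selected."""
    scene = dict(SCENE_DB[scene_name])          # copy the base environment
    scene["equipment"] = [EQUIPMENT_DB[e] for e in equipment_choices]
    scene["avatars"] = [AVATAR_DB[a] for a in avatar_choices]
    return scene
```

The assembled structure corresponds to what the server would serialize (e.g. as a VRML file) and transmit to the first and second terminals on a loading request.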
  • when the server receives loading requests from the first terminal and the second terminal, it transmits the data of the target virtual reality scene (specifically, the VRML file of the target virtual reality scene) to both terminals, so that the first terminal and the second terminal can display the target virtual reality scene to the corresponding first user and second user.
  • in addition to constructing, storing, and transmitting virtual reality scenes, the server can also serve as the data transmission medium between the first terminal and the second terminal, so that all interaction between the two terminals is realized through data relayed by the server.
  • the server is also used to monitor the performance action information of the first user recorded by the first terminal and the performance action information of the second user recorded by the second terminal, calculate and generate the corresponding feedback information from it, and return that feedback information to the first terminal and the second terminal, so that the first terminal outputs it to the first user and the second terminal outputs it to the second user. In this way, the first user and the second user can timely and accurately perceive the effect of their own and each other's performance actions, enhancing the sense of realism and immersion of their collaborative performance.
  • the server is also used to calculate and generate the performance evaluation data of the collaborative performance according to the performance action information of the first user and of the second user, and to output it to the first terminal and the second terminal for feedback to the corresponding users, so that each user can grasp the evaluation feedback of the current collaborative performance in time, improving the intelligence and user experience of collaborative performance.
  • the collaborative performance system in this embodiment of the present application can be implemented on a Web3D virtual reality network platform, with the first terminal and the second terminal (collectively, user terminals) and the server establishing communication connections over a 5G network.
  • the server may include a collaboration server and a web server, where the web server stores the files of the virtual reality scenes in advance, specifically VRML files with the suffix .wrl and compiled Java class files with the suffix .class.
  • a user can log in to the corresponding user account through the browser on the user terminal, establish a communication connection with the server, load the files of the corresponding target virtual reality scene, display the scene according to its VRML file, and, from its Java class file, instantiate the Java Applet (a small application written in Java that can be embedded directly in a web page) used to realize interaction between the inside and the outside of the target virtual reality scene.
  • a monitoring thread in the collaboration server monitors in real time the change information inside the target virtual reality scene recorded by the Java Applet on each user terminal (for example, collision detection results produced by a user performing performance actions in the scene), and a communication thread of the collaboration server passes that change information to the Java Applets of the other user terminals, which apply it to their locally displayed copies of the target virtual reality scene and generate the corresponding feedback information for their users, so that every user can observe the changes other users make to the target virtual reality scene.
  • FIG. 11 shows the process in which user terminal a loads the VRML file with the suffix .wrl and the Java class file with the suffix .class from the web server, and the process in which the server monitors change information on user terminal a through the monitoring thread and feeds it back to the other user terminals through the communication thread.
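The monitor-and-relay behavior attributed to the collaboration server might look, in outline, like this. This is a Python sketch of the relay logic only (the patent describes Java Applets, a monitoring thread, and a communication thread, which are abstracted away here); terminal inboxes are hypothetical stand-ins for network delivery.

```python
class CollaborationServer:
    """Minimal sketch of the relay described above: a change recorded on one
    user terminal is forwarded to every other registered terminal, which
    would apply it to its own copy of the target virtual reality scene.
    Here each terminal is simply a list collecting the changes it receives."""
    def __init__(self):
        self.terminals = {}   # terminal id -> list of pending changes

    def register(self, terminal_id):
        self.terminals[terminal_id] = []

    def push_change(self, sender_id, change):
        # The monitoring thread observes `change` on `sender_id`; the
        # communication thread then relays it to all other terminals.
        for tid, inbox in self.terminals.items():
            if tid != sender_id:
                inbox.append(change)
```

A real deployment would replace the inboxes with network delivery to each terminal's Java Applet, which applies the change to its displayed scene.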
  • the collaborative performance system of this embodiment of the present application realizes, through the interaction of the first terminal, the second terminal, and the server, the transmission of collaborative performance invitation information, the determination, loading, and display of the target virtual reality scene, and the output of feedback information, so that different users can perform collaboratively and immersively through the target virtual reality scene, conveniently and effectively realizing remote music entertainment interaction.
  • Embodiment 4:
  • FIG. 12 shows a schematic structural diagram of a first terminal provided by an embodiment of the present application. For convenience of description, only parts related to the embodiment of the present application are shown:
  • the first terminal includes: a collaborative performance invitation information sending unit 121, an invitation acceptance information obtaining unit 122, and a first display unit 123, wherein:
  • the collaborative performance invitation information sending unit 121 is configured to send collaborative performance invitation information, where the collaborative performance invitation information includes performance pieces and performance scene information.
  • the invitation acceptance information obtaining unit 122 is configured to, if the invitation acceptance information returned by the second user through the second terminal is obtained, load the target virtual reality scene corresponding to the performance scene information from the server together with the second terminal;
  • the target virtual reality scene is a virtual reality scene determined by the server according to the collaborative performance invitation information.
  • the first display unit 123 is configured to display the target virtual reality scene, and instruct the first user to perform the performance piece together with the second user in the target virtual reality scene.
  • FIG. 13 shows a schematic structural diagram of a second terminal provided by the present application. For the convenience of description, only the parts related to the embodiments of the present application are shown:
  • the second terminal includes: a collaborative performance invitation information receiving unit 131, a loading unit 132, and a second display unit 133, wherein:
  • the collaborative performance invitation information receiving unit 131 is configured to receive collaborative performance invitation information and return invitation acceptance information to the first terminal corresponding to the first user; the collaborative performance invitation information includes the performance piece and performance scene information;
  • the loading unit 132 is configured to load the target virtual reality scene corresponding to the performance scene information from the server together with the first terminal; the target virtual reality scene is the virtual reality scene determined by the server according to the collaborative performance invitation information;
  • the second display unit 133 is configured to display the target virtual reality scene, and instruct the second user to perform the performance piece together with the first user in the target virtual reality scene.
  • Embodiment 5:
  • FIG. 14 is a schematic diagram of a terminal device provided by an embodiment of the present application.
  • the terminal device 14 of this embodiment includes: a processor 140 , a memory 141 , and a computer program 142 stored in the memory 141 and executable on the processor 140 , such as a collaborative performance program.
  • when the processor 140 executes the computer program 142, the steps in each of the above embodiments of the collaborative performance method are implemented, such as steps S201 to S203 shown in FIG. 2 or steps S901 to S903 shown in FIG. 9.
  • alternatively, when the computer program 142 is executed, the functions of the modules/units in the above device embodiments are implemented, for example the functions of units 121 to 123 shown in FIG. 12 or of units 131 to 133 shown in FIG. 13.
  • the computer program 142 may be divided into one or more modules/units, which are stored in the memory 141 and executed by the processor 140 to complete this application.
  • the one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 142 in the terminal device 14 .
  • the computer program 142 can be divided into a collaborative performance invitation information sending unit, an invitation acceptance information obtaining unit, and a first display unit; or into a collaborative performance invitation information receiving unit, a loading unit, and a second display unit.
  • the terminal device 14 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, the processor 140 and the memory 141 .
  • FIG. 14 is only an example of the terminal device 14 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, combine certain components, or arrange components differently.
  • the terminal device may further include an input and output device, a network access device, a bus, and the like.
  • the processor 140 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 141 may be an internal storage unit of the terminal device 14 , such as a hard disk or a memory of the terminal device 14 .
  • the memory 141 may also be an external storage device of the terminal device 14, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) equipped on the terminal device 14.
  • the memory 141 may also include both an internal storage unit of the terminal device 14 and an external storage device.
  • the memory 141 is used to store the computer program and other programs and data required by the terminal device.
  • the memory 141 may also be used to temporarily store data that has been output or will be output.
  • the disclosed apparatus/terminal device and method may be implemented in other manners.
  • the apparatus/terminal device embodiments described above are only illustrative.
  • the division into modules or units is only a logical functional division; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the mutual coupling, direct coupling, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
  • the units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated modules/units if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • all or part of the processes in the methods of the above embodiments of the present application may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a computer-readable storage medium, and when the program is executed by a processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form, and the like.
  • the computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in computer-readable media may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunication signals.

Abstract

The present application is applicable to the field of computer technology and provides a collaborative performance method and system, a terminal device, and a storage medium, comprising: a first user sending collaborative performance invitation information by means of a first terminal, the collaborative performance invitation information comprising performance scene information; when invitation acceptance information returned by a second terminal is acquired, the first terminal and the second terminal together loading a target virtual reality scene corresponding to the performance scene information from a server; and displaying the target virtual reality scene and instructing the first user to perform collaboratively with a second user in the target virtual reality scene. The embodiments of the present application enable people in different regions to engage in remote musical communication.

Description

Collaborative performance method, system, terminal device and storage medium
This application claims priority to Chinese patent application No. 202010927502.4, entitled "Collaborative performance method, system, terminal device and storage medium" and filed with the China Patent Office on September 7, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technology, and in particular to a collaborative performance method, system, terminal device, and storage medium.
Background
With the development of society and the improvement of living standards, entertainment options have become increasingly rich, and musical entertainment such as singing and instrument playing is widely loved. In real life, however, performing a musical work often requires cooperation, and people in different regions usually find it difficult to gather in one place to cooperate on a musical work, making collaborative performance hard to achieve.
Technical Problem
The embodiments of the present application provide a collaborative performance method, system, terminal device, and storage medium to solve the prior-art problem of how to enable people in different regions to perform music collaboratively in a simple and effective way.
Technical Solution
To solve the above technical problem, the embodiments of the present application adopt the following technical solutions:
A first aspect of the embodiments of the present application provides a collaborative performance method applied to a first terminal corresponding to a first user, including:
sending collaborative performance invitation information, where the collaborative performance invitation information includes a performance piece and performance scene information;
if invitation acceptance information returned by a second user through a second terminal is obtained, loading, together with the second terminal, a target virtual reality scene corresponding to the performance scene information from a server, the target virtual reality scene being a virtual reality scene determined by the server according to the collaborative performance invitation information; and
displaying the target virtual reality scene, and instructing the first user to perform the performance piece in collaboration with the second user in the target virtual reality scene.
A second aspect of the embodiments of the present application provides another collaborative performance method applied to a second terminal corresponding to a second user, including:
receiving collaborative performance invitation information and returning invitation acceptance information to a first terminal corresponding to a first user, where the collaborative performance invitation information includes a performance piece and performance scene information;
loading, together with the first terminal, a target virtual reality scene corresponding to the performance scene information from a server, the target virtual reality scene being a virtual reality scene determined by the server according to the collaborative performance invitation information; and
displaying the target virtual reality scene, and instructing the second user to perform the performance piece in collaboration with the first user in the target virtual reality scene.
A third aspect of the embodiments of the present application provides a collaborative performance system including a first terminal corresponding to a first user, a second terminal corresponding to a second user, and a server;
the first terminal is configured to execute the collaborative performance method of the first aspect;
the second terminal is configured to execute the collaborative performance method of the second aspect; and
the server is configured to receive the collaborative performance invitation information, determine a target virtual reality scene according to it, and transmit the data of the target virtual reality scene to the first terminal and the second terminal.
本申请实施例的第四方面提供了一种终端设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,当所述处理器执行所述计算机程序时,使得终端设备实现如第一方面或者第二方面所述协同演奏方法的步骤。A fourth aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, when the processor executes the computer program At the time, the terminal device is made to implement the steps of the collaborative performance method described in the first aspect or the second aspect.
本申请实施例的第五方面提供了一种计算机可读存储介质，所述计算机可读存储介质存储有计算机程序，当所述计算机程序被处理器执行时，使得终端设备实现如第一方面或者第二方面所述协同演奏方法的步骤。A fifth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, a terminal device is caused to implement the steps of the collaborative performance method described in the first aspect or the second aspect.
本申请实施例的第六方面提供了一种计算机程序产品,当计算机程序产品在终端设备上运行时,使得终端设备执行如第一方面或者第二方面所述协同演奏方法的步骤。A sixth aspect of the embodiments of the present application provides a computer program product that, when the computer program product runs on a terminal device, enables the terminal device to execute the steps of the collaborative performance method described in the first aspect or the second aspect.
有益效果 Beneficial Effects
本申请实施例中，第一用户通过第一终端发送协同演奏邀请，并在获取到第二用户通过第二终端返回的接受邀请信息后，与第二终端一同从服务端加载根据协同演奏邀请信息确定的虚拟现实场景（即目标虚拟现实场景），并在之后展示目标虚拟现实场景以及指示第一用户在目标虚拟现实场景中与所述第二用户协同演奏演奏曲目。由于能够根据协同演奏邀请信息邀请第二用户以及从服务端加载目标虚拟现实场景，使得第一用户和第二用户能够通过在该目标虚拟现实场景中协同演奏指定的演奏曲目，因此能够方便有效地实现远程音乐娱乐交互，使得处于不同地域的人们无需抵达同一地点，也能够基于虚拟现实技术，简便有效地实现身临其境的远程音乐协同演奏。In the embodiments of the present application, the first user sends a collaborative performance invitation through the first terminal and, after obtaining the invitation acceptance information returned by the second user through the second terminal, loads, together with the second terminal, the virtual reality scene determined according to the collaborative performance invitation information (that is, the target virtual reality scene) from the server; the target virtual reality scene is then displayed, and the first user is instructed to perform the performance piece in collaboration with the second user in the target virtual reality scene. Since the second user can be invited according to the collaborative performance invitation information and the target virtual reality scene can be loaded from the server, the first user and the second user can collaboratively perform the specified piece in that scene; remote music entertainment interaction can therefore be realized conveniently and effectively, so that people in different regions, without having to travel to the same place, can simply and effectively achieve an immersive remote collaborative music performance based on virtual reality technology.
附图说明Description of drawings
为了更清楚地说明本申请实施例中的技术方案，下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍。In order to explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below.
图1是本申请实施例提供的一种协同演奏方法的应用场景示意图;FIG. 1 is a schematic diagram of an application scenario of a collaborative performance method provided by an embodiment of the present application;
图2是本申请实施例提供的第一种协同演奏方法的实现流程示意图;FIG. 2 is a schematic flowchart of the implementation of a first collaborative performance method provided by an embodiment of the present application;
图3是本申请实施例提供的一种虚拟电子打击板的示意图;FIG. 3 is a schematic diagram of a virtual electronic percussion pad provided by an embodiment of the present application;
图4是本申请实施例提供的一种虚拟MIDI键盘的示意图;FIG. 4 is a schematic diagram of a virtual MIDI keyboard provided by an embodiment of the present application;
图5是本申请实施例提供的一种虚拟弦的示意图;FIG. 5 is a schematic diagram of a virtual string provided by an embodiment of the present application;
图6是本申请实施例提供的一种虚拟孔的示意图;FIG. 6 is a schematic diagram of a virtual hole provided by an embodiment of the present application;
图7是本申请实施例提供的一种虚拟孔对应的乐器操作提示信息的示意图;FIG. 7 is a schematic diagram of musical instrument operation prompt information corresponding to a virtual hole provided by an embodiment of the present application;
图8是本申请实施例提供的节奏音和旋律音的提取示意图;FIG. 8 is a schematic diagram of the extraction of rhythm tones and melody tones provided by an embodiment of the present application;
图9是本申请实施例提供的第二种协同演奏方法的实现流程示意图;FIG. 9 is a schematic flowchart of the implementation of a second collaborative performance method provided by an embodiment of the present application;
图10是本申请实施例提供的一种协同演奏系统的示意图;FIG. 10 is a schematic diagram of a collaborative performance system provided by an embodiment of the present application;
图11是本申请实施例提供的另一种协同演奏系统的示意图;FIG. 11 is a schematic diagram of another collaborative performance system provided by an embodiment of the present application;
图12是本申请实施例提供的一种第一终端的示意图。FIG. 12 is a schematic diagram of a first terminal provided by an embodiment of the present application.
图13是本申请实施例提供的一种第二终端的示意图。FIG. 13 is a schematic diagram of a second terminal provided by an embodiment of the present application.
图14是本申请实施例提供的终端设备的示意图。FIG. 14 is a schematic diagram of a terminal device provided by an embodiment of the present application.
本发明的实施方式Embodiments of the present invention
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、技术之类的具体细节,以便透彻理解本申请实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本申请。在其它情况中,省略对众所周知的系统、装置、电路以及方法的详细说明,以免不必要的细节妨碍本申请的描述。In the following description, for the purpose of illustration rather than limitation, specific details such as a specific system structure and technology are set forth in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to those skilled in the art that the present application may be practiced in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
为了说明本申请所述的技术方案,下面通过具体实施例来进行说明。In order to illustrate the technical solutions described in the present application, the following specific embodiments are used for description.
在目前的音乐娱乐方式中，需要多个音乐爱好者到达约定的场所才能实现，而处于不同地域、距离较远的音乐爱好者难以同步进行相互的音乐娱乐交流。为了解决该技术问题，本申请实施例提供了一种协同演奏方法、系统、终端设备及存储介质，该协同演奏方法包括第一用户通过第一终端发送协同演奏邀请，并在获取到第二用户通过第二终端返回的接受邀请信息后，与第二终端一同从服务端加载根据协同演奏邀请信息构建的目标虚拟现实场景，并在之后展示目标虚拟现实场景以及指示第一用户在目标虚拟现实场景中与所述第二用户协同演奏演奏曲目。由于能够根据协同演奏邀请信息邀请第二用户以及从服务端加载目标虚拟现实场景，使得第一用户和第二用户能够通过在该目标虚拟现实场景中协同演奏指定的演奏曲目，因此能够方便有效地实现远程音乐娱乐交互，使得处于不同地域的人们能够基于虚拟现实技术，身临其境地实现远程音乐娱乐交流。With current forms of music entertainment, several music lovers must gather at an agreed place, and music lovers who are in different, distant regions find it difficult to exchange music entertainment with one another synchronously. To solve this technical problem, the embodiments of the present application provide a collaborative performance method, system, terminal device, and storage medium. In the collaborative performance method, a first user sends a collaborative performance invitation through a first terminal and, after obtaining invitation acceptance information returned by a second user through a second terminal, loads, together with the second terminal, a target virtual reality scene constructed according to the collaborative performance invitation information from a server; the target virtual reality scene is then displayed, and the first user is instructed to perform a performance piece in collaboration with the second user in the target virtual reality scene. Since the second user can be invited according to the collaborative performance invitation information and the target virtual reality scene can be loaded from the server, the first user and the second user can collaboratively perform a specified piece in that scene; remote music entertainment interaction can thus be realized conveniently and effectively, enabling people in different regions to achieve immersive remote music entertainment exchange based on virtual reality technology.
示例性地，图1为本申请实施例提供的一种协同演奏方法的应用场景示意图，包括服务端、多个用户及其对应的终端和交互设备（可以包括头显设备、耳机、麦克风、手柄/数据手套等）。当多个用户需要通过协同演奏实现音乐娱乐交流时，由其中一个用户作为第一用户（即协同演奏的邀请方），通过第一用户对应的终端（称为第一终端）发送协同演奏邀请信息至服务端，以使该协同演奏邀请信息直接或者通过服务端间接地传送至至少一个第二用户（即被第一用户邀请的其它用户）对应的终端（称为第二终端）；接着，若第二用户接受邀请，则通过第二终端发送接受邀请信息，第一终端在接收到该接受邀请信息后，与第二终端一同从服务端加载目标虚拟现实场景；之后，第一终端向第一用户展示目标虚拟现实场景，并指示第一用户在目标虚拟现实场景中与第二用户协同演奏预设的演奏曲目（第一终端具体可以将目标虚拟现实场景的信息和指示信息输出至第一用户佩戴的头显设备来实现目标虚拟现实场景的展示以及对第一用户的指示），同样地，第二终端向第二用户展示目标虚拟现实场景，并指示第二用户在目标虚拟现实场景中与第一用户协同演奏该演奏曲目，从而完成第一用户和第二用户远程协同演奏，使得处于不同地域的用户能够基于虚拟现实技术，身临其境地实现远程音乐娱乐交流。Exemplarily, FIG. 1 is a schematic diagram of an application scenario of a collaborative performance method provided by an embodiment of the present application, including a server and multiple users with their corresponding terminals and interactive devices (which may include head-mounted display devices, earphones, microphones, handles/data gloves, etc.). When multiple users want to exchange music entertainment through a collaborative performance, one of the users acts as the first user (that is, the inviter of the collaborative performance) and sends collaborative performance invitation information to the server through the terminal corresponding to the first user (referred to as the first terminal), so that the invitation information is transmitted, directly or indirectly via the server, to the terminal (referred to as the second terminal) corresponding to at least one second user (that is, another user invited by the first user). Then, if the second user accepts the invitation, invitation acceptance information is sent through the second terminal; after receiving it, the first terminal loads the target virtual reality scene from the server together with the second terminal. Afterwards, the first terminal displays the target virtual reality scene to the first user and instructs the first user to perform the preset performance piece in collaboration with the second user in that scene (specifically, the first terminal may output the information of the target virtual reality scene and the instruction information to the head-mounted display device worn by the first user to realize the display of the scene and the instruction of the first user); likewise, the second terminal displays the target virtual reality scene to the second user and instructs the second user to perform the piece in collaboration with the first user in that scene. The remote collaborative performance of the first user and the second user is thereby completed, so that users in different regions can achieve immersive remote music entertainment exchange based on virtual reality technology.
实施例一:Embodiment 1:
图2示出了本申请实施例提供的第一种协同演奏方法的流程示意图，该方法应用于第一终端，详述如下:FIG. 2 shows a schematic flowchart of a first collaborative performance method provided by an embodiment of the present application. The method is applied to a first terminal and is detailed as follows:
本申请实施例中，第一用户为当前协同演奏方法的邀请者，第二用户为被第一用户所邀请的用户；第一用户所使用的终端设备为第一终端，第二用户所使用的终端设备为第二终端。第一用户、第二用户、第一终端、第二终端仅作为区别描述，任意一个用户均可以在想成为邀请者时作为第一用户，以其对应的终端设备作为第一终端执行以下S201至S203的步骤；除该第一用户以外的其它任意一个或者多个用户均可以作为第二用户，通过其对应的第二终端接受第一用户通过第一终端发送的协同演奏邀请信息，实现第一用户和第二用户的远程协同演奏。In the embodiments of the present application, the first user is the inviter in the current collaborative performance method, and the second user is a user invited by the first user; the terminal device used by the first user is the first terminal, and the terminal device used by the second user is the second terminal. "First user", "second user", "first terminal", and "second terminal" are merely distinguishing descriptions: any user who wants to become an inviter can act as the first user, with that user's corresponding terminal device acting as the first terminal and executing the following steps S201 to S203; any one or more users other than the first user can act as second users and, through their corresponding second terminals, accept the collaborative performance invitation information sent by the first user through the first terminal, thereby realizing the remote collaborative performance of the first user and the second user.
在S201中,发送协同演奏邀请信息,所述协同演奏邀请信息包括演奏曲目及演奏场景信息。In S201, collaborative performance invitation information is sent, where the collaborative performance invitation information includes performance pieces and performance scene information.
第一用户可以通过自身的用户账号登录第一终端预设的客户端程序,以使第一终端接入协同演奏系统。之后,第一用户可以在第一终端上进行协同演奏的设定,生成协同演奏邀请信息,并将该协同演奏邀请信息发送至服务端或者第二用户对应的第二终端。The first user can log in to the client program preset by the first terminal through his own user account, so that the first terminal can access the collaborative performance system. After that, the first user can set the collaborative performance on the first terminal, generate collaborative performance invitation information, and send the collaborative performance invitation information to the server or the second terminal corresponding to the second user.
具体地，该协同演奏邀请信息至少包括演奏曲目和演奏场景信息，演奏场景信息为用于设定虚拟现实场景的虚拟环境的信息，该虚拟环境可以包括K厅、草原、海边等。具体地，第一终端可以从服务端加载服务端预存的曲目库、虚拟环境库，从该曲目库中选择当前的演奏曲目，从虚拟环境库中选择当前的虚拟环境，从而生成该协同演奏邀请信息。可选地，该协同演奏邀请信息还可以包括演奏难度信息、被邀请的第二用户的账号信息、此次协同演奏的用户数量、第二用户的权限、用户的虚拟化身设置信息、演奏形式设置信息等。Specifically, the collaborative performance invitation information includes at least the performance piece and the performance scene information, where the performance scene information is information for setting the virtual environment of the virtual reality scene, and the virtual environment may include a K hall (karaoke hall), a grassland, a seaside, and the like. Specifically, the first terminal may load the repertoire library and the virtual environment library pre-stored by the server, select the current performance piece from the repertoire library, and select the current virtual environment from the virtual environment library, thereby generating the collaborative performance invitation information. Optionally, the collaborative performance invitation information may further include performance difficulty information, the account information of the invited second user, the number of users for this collaborative performance, the authority of the second user, the users' avatar setting information, the performance form setting information, and the like.
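For illustration only, the required and optional fields enumerated in the paragraph above can be pictured as one record. The following Python sketch is not part of the disclosed embodiments; every name in it (`CollabInvite`, `form_settings`, the sample values, etc.) is a hypothetical label for the corresponding piece of invitation information.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CollabInvite:
    # Required fields of the invitation (S201).
    piece: str                  # performance piece chosen from the server's repertoire library
    scene: str                  # performance scene info, e.g. "K hall", "grassland", "seaside"
    # Optional extensions mentioned in the embodiment.
    difficulty: Optional[str] = None                    # performance difficulty information
    invitees: List[str] = field(default_factory=list)   # account info of the invited second users
    user_count: Optional[int] = None                    # number of users for this session
    avatar_settings: dict = field(default_factory=dict) # avatar chosen for each user
    form_settings: dict = field(default_factory=dict)   # performance form chosen for each user

# Example: a first user invites one friend to perform at the virtual seaside.
invite = CollabInvite(piece="Song A", scene="seaside", invitees=["user_b"],
                      form_settings={"user_a": "singing", "user_b": "instrument"})
```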
在S202中,若获取到第二用户通过第二终端返回的接受邀请信息,则与所述第二终端一同从服务端加载所述演奏场景信息对应的目标虚拟现实场景;所述目标虚拟现实场景为所述服务端根据所述协同演奏邀请信息确定的虚拟现实场景。In S202, if the acceptance invitation information returned by the second user through the second terminal is obtained, load the target virtual reality scene corresponding to the performance scene information from the server together with the second terminal; the target virtual reality scene It is the virtual reality scene determined by the server according to the collaborative performance invitation information.
本申请实施例中,第二用户对应的第二终端可以直接与第一终端建立通讯连接,直接接收第一终端发送的协同演奏邀请信息,或者,第一终端将协同演奏邀请信息发送至服务端后,由服务端转发至第二终端。第二终端获取该协同演奏邀请信息后,若第二用户接受协同演奏邀请,则通过第二终端直接向第一终端发出接受邀请信息,或者通过服务端间接地向第一终端返回接受邀请信息。In this embodiment of the present application, the second terminal corresponding to the second user may directly establish a communication connection with the first terminal, and directly receive the collaborative performance invitation information sent by the first terminal, or the first terminal may send the collaborative performance invitation information to the server. After that, it is forwarded by the server to the second terminal. After the second terminal obtains the collaborative performance invitation information, if the second user accepts the collaborative performance invitation, the second terminal directly sends the acceptance invitation information to the first terminal, or indirectly returns the acceptance invitation information to the first terminal through the server.
第一终端若获取到第二用户通过第二终端返回的该接受邀请信息,则与第二终端一同从服务端加载上述的演奏场景信息对应的目标虚拟现实场景。具体地,第一终端在获取到该接受邀请信息后,在从服务端加载目标虚拟现实场景的同时也指示第二终端加载该目标虚拟现实场景;或者,第二终端在发送接受邀请信息后,自动从服务端加载目标虚拟现实场景,而第一终端在获取到接受邀请信息后,从服务端加载该目标虚拟现实场景。If the first terminal obtains the invitation acceptance information returned by the second user through the second terminal, the first terminal loads the target virtual reality scene corresponding to the above-mentioned performance scene information from the server together with the second terminal. Specifically, after acquiring the invitation acceptance information, the first terminal instructs the second terminal to load the target virtual reality scene while loading the target virtual reality scene from the server; or, after the second terminal sends the invitation acceptance information, The target virtual reality scene is automatically loaded from the server, and the first terminal loads the target virtual reality scene from the server after acquiring the invitation acceptance information.
具体地，该目标虚拟现实场景为服务端根据协同演奏邀请信息确定的虚拟现实场景。服务端预存了包含K厅、草原、海边等虚拟环境的多种虚拟现实场景，服务端根据该协同演奏邀请信息中的演奏场景信息从预存的多个虚拟现实场景中选择一个虚拟现实场景，并添加第一用户对应的虚拟化身及虚拟演奏设备、第二用户对应的虚拟化身及虚拟演奏设备，从而生成目标虚拟现实场景。在一个实施例中，协同演奏邀请信息中除了包含演奏曲目和演奏场景信息外，还包括用户的虚拟化身设置信息和演奏形式设置信息，该用户的虚拟化身设置信息包括为第一用户选择的虚拟化身及为第二用户选择的虚拟化身的信息，该演奏形式设置信息包括为第一用户选择的演奏形式及对应的虚拟演奏设备的信息，以及为第二用户选择的演奏形式及对应的虚拟演奏设备的信息，其中该虚拟演奏设备可以包括虚拟麦克风、虚拟乐器、虚拟简易演奏装置中的任意一项或者多项；在第一终端通过步骤S201将该协同演奏邀请信息发送至服务端后，服务端根据该协同邀请信息中包含的演奏场景信息从预存的虚拟现实场景库中选择当前的虚拟现实场景，根据用户的虚拟化身设置信息在该当前的虚拟现实场景中添加对应的虚拟化身，根据演奏形式设置信息在该当前的虚拟现实场景中添加对应的虚拟演奏设备，从而得到目标虚拟现实场景，并在接收到第一终端、第二终端的加载请求后，将该目标虚拟现实场景的数据发送至该第一终端、第二终端，从而完成第一终端、第二终端对目标虚拟现实场景的加载。在另一个实施例中，协同演奏邀请信息中只包含了演奏曲目和演奏场景信息，在第一终端通过步骤S201将该协同演奏邀请信息发送至服务端后，服务端根据该协同邀请信息中包含的演奏场景信息从预存的虚拟现实场景库中选择当前的虚拟现实场景；在第一终端获取到第二终端返回的接受邀请信息后，第一用户通过第一终端向服务端发送自身设定的虚拟化身和虚拟演奏设备的信息，第二用户通过第二终端向服务端发送自身设定的虚拟化身和虚拟演奏设备的信息，之后，服务端再相应地从已选择的当前的虚拟现实场景添加第一用户对应的虚拟化身和虚拟演奏设备，以及第二用户对应的虚拟化身和虚拟演奏设备，从而得到目标虚拟现实场景，以供后续第一终端和第二终端的加载。Specifically, the target virtual reality scene is the virtual reality scene determined by the server according to the collaborative performance invitation information. The server pre-stores a variety of virtual reality scenes containing virtual environments such as the K hall, the grassland, and the seaside; according to the performance scene information in the collaborative performance invitation information, the server selects one virtual reality scene from the pre-stored scenes and adds the avatar and virtual performance device corresponding to the first user and the avatar and virtual performance device corresponding to the second user, thereby generating the target virtual reality scene. In one embodiment, in addition to the performance piece and the performance scene information, the collaborative performance invitation information further includes the users' avatar setting information and performance form setting information; the avatar setting information includes the avatar selected for the first user and the avatar selected for the second user, and the performance form setting information includes the performance form selected for the first user with the corresponding virtual performance device, as well as the performance form selected for the second user with the corresponding virtual performance device, where a virtual performance device may include any one or more of a virtual microphone, a virtual musical instrument, and a simple virtual performance apparatus. After the first terminal sends the collaborative performance invitation information to the server in step S201, the server selects the current virtual reality scene from the pre-stored virtual reality scene library according to the performance scene information contained in the invitation information, adds the corresponding avatars to the current virtual reality scene according to the users' avatar setting information, and adds the corresponding virtual performance devices according to the performance form setting information, thereby obtaining the target virtual reality scene; after receiving the loading requests of the first terminal and the second terminal, the server sends the data of the target virtual reality scene to them, completing the loading of the target virtual reality scene by the first terminal and the second terminal. In another embodiment, the collaborative performance invitation information contains only the performance piece and the performance scene information. After the first terminal sends the invitation information to the server in step S201, the server selects the current virtual reality scene from the pre-stored virtual reality scene library according to the performance scene information contained in it; after the first terminal obtains the invitation acceptance information returned by the second terminal, the first user sends the information of the avatar and virtual performance device set by himself to the server through the first terminal, and the second user sends the information of the avatar and virtual performance device set by himself to the server through the second terminal; the server then correspondingly adds, to the selected current virtual reality scene, the avatar and virtual performance device corresponding to the first user and the avatar and virtual performance device corresponding to the second user, thereby obtaining the target virtual reality scene for subsequent loading by the first terminal and the second terminal.
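As a non-limiting sketch of the server-side behavior described above (select a base scene by the performance scene information, then attach each user's avatar and virtual performance device), one might write the following. The scene library contents, function name, and field names are assumptions for illustration, not taken from the embodiments.

```python
# Hypothetical pre-stored virtual reality scene library on the server.
SCENE_LIBRARY = {
    "K hall": {"environment": "K hall"},
    "grassland": {"environment": "grassland"},
    "seaside": {"environment": "seaside"},
}

def build_target_scene(scene_info, avatar_settings, form_settings):
    # (1) Select a base virtual reality scene by the performance scene info.
    scene = dict(SCENE_LIBRARY[scene_info])
    # (2) Add each user's avatar according to the avatar setting information.
    scene["avatars"] = dict(avatar_settings)
    # (3) Add each user's virtual performance device according to the form settings.
    scene["devices"] = dict(form_settings)
    return scene

# Example: a duet at the virtual seaside.
target = build_target_scene(
    "seaside",
    {"user_a": "avatar_1", "user_b": "avatar_2"},
    {"user_a": "virtual microphone", "user_b": "virtual MIDI keyboard"},
)
```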
在S203中,展示所述目标虚拟现实场景,并指示所述第一用户在所述目标虚拟现实场景中与所述第二用户协同演奏所述演奏曲目。In S203, the target virtual reality scene is displayed, and the first user is instructed to perform the performance piece in collaboration with the second user in the target virtual reality scene.
本申请实施例中，第一终端和第二终端均连接了对应的头显设备、耳机、麦克风、手柄、数据手套等交互设备。第一终端根据加载的目标虚拟现实场景的数据，向第一用户佩戴的头显设备输出该目标虚拟现实场景的视觉图像信息来向第一用户展示该包含了虚拟环境、虚拟演奏设备、第一用户和第二用户的虚拟化身的目标虚拟现实场景，并通过在该视觉图像信息中添加用于指示第一用户进行演奏的指示信息来指示第一用户通过手柄、数据手套或者麦克风等交互设备执行演奏动作，从而实现第一用户与第二用户在目标虚拟现实场景中协同演奏该演奏曲目。In the embodiments of the present application, the first terminal and the second terminal are both connected to corresponding interactive devices such as a head-mounted display device, earphones, a microphone, a handle, and data gloves. According to the data of the loaded target virtual reality scene, the first terminal outputs the visual image information of that scene to the head-mounted display device worn by the first user, so as to show the first user the target virtual reality scene containing the virtual environment, the virtual performance devices, and the avatars of the first user and the second user, and instructs the first user, by adding instruction information for performance to the visual image information, to execute performance actions through an interactive device such as the handle, the data gloves, or the microphone, so that the first user and the second user perform the performance piece collaboratively in the target virtual reality scene.
具体地,所述协同演奏邀请信息还包括用于设置所述第一用户的演奏形式及所述第二用户的演奏形式的演奏形式设置信息,所述演奏形式包括唱歌和乐器演奏;Specifically, the collaborative performance invitation information further includes performance form setting information for setting the performance form of the first user and the performance form of the second user, and the performance forms include singing and musical instrument performance;
对应地,所述步骤S203,包括:Correspondingly, the step S203 includes:
S20301:展示所述目标虚拟现实场景,并根据所述演奏形式设置信息及所述演奏曲目显示演奏提示信息;S20301: Display the target virtual reality scene, and display performance prompt information according to the performance form setting information and the performance repertoire;
S20302:获取所述第一用户及所述第二用户根据所述演奏提示信息在所述目标虚拟现实场景中执行演奏动作生成的反馈信息，并输出所述反馈信息，实现所述第一用户在所述目标虚拟现实场景中协同所述第二用户演奏所述演奏曲目。S20302: Acquire feedback information generated by the first user and the second user executing performance actions in the target virtual reality scene according to the performance prompt information, and output the feedback information, so that the first user performs the performance piece in the target virtual reality scene in collaboration with the second user.
本申请实施例中,协同演奏邀请信息具体包括演奏形式设置信息,该演奏形式设置信息具体设定了第一用户的演奏形式和第二用户的演奏形式。具体地,该演奏形式可以包括唱歌和乐器演奏。示例性地,当第一用户的演奏形式和第二用户演奏形式均为唱歌时,当前协同演奏的形式具体为合唱形式;当第一用户的演奏形式和第二用户的演奏形式均为乐器演奏时,当前协同演奏的形式具体为合奏形式;当第一用户的演奏形式和第二用户演奏形式既包含唱歌,也包含乐器演奏,则当前的协同演奏的形式具体为伴奏形式。In the embodiment of the present application, the collaborative performance invitation information specifically includes performance form setting information, and the performance form setting information specifically sets the performance form of the first user and the performance form of the second user. Specifically, the performance form may include singing and musical instrument performance. Exemplarily, when both the first user's performance form and the second user's performance form are singing, the current collaborative performance form is specifically a chorus form; when both the first user's performance form and the second user's performance form are musical instrument performances , the current form of collaborative performance is specifically the ensemble form; when the first user's performance form and the second user's performance form include both singing and musical instrument performance, the current form of collaborative performance is specifically the accompaniment form.
在步骤S20301中，在第一终端向第一用户展示目标虚拟现实场景后，根据演奏形式设置信息和演奏曲目确定并显示对应的演奏提示信息，具体地，若演奏形式设置信息中设定的第一用户的演奏形式为唱歌，则获取该演奏曲目对应的歌词信息作为当前的演奏提示信息；若演奏形式设置信息中设定的第一用户的演奏形式为乐器演奏，则获取该演奏曲目对应的乐器操作提示信息作为当前的演奏提示信息，从而指示第一用户执行对应的演奏动作。In step S20301, after the first terminal presents the target virtual reality scene to the first user, the corresponding performance prompt information is determined and displayed according to the performance form setting information and the performance piece. Specifically, if the performance form of the first user set in the performance form setting information is singing, the lyric information corresponding to the performance piece is acquired as the current performance prompt information; if the performance form of the first user set in the performance form setting information is musical instrument performance, the instrument operation prompt information corresponding to the performance piece is acquired as the current performance prompt information, thereby instructing the first user to execute the corresponding performance action.
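The selection rule in the paragraph above (singing yields lyric prompts, instrument performance yields operation hints for the piece) reduces to a single branch. In the sketch below the function name and the returned strings are illustrative placeholders only; a real terminal would load the actual lyric or hint data from the server.

```python
def select_prompt(form: str, piece: str) -> str:
    # Singing -> lyric prompt info for the piece (optionally with pitch/rhythm cues).
    if form == "singing":
        return f"lyrics for {piece}"
    # Instrument performance -> instrument operation prompt info for the piece.
    if form == "instrument":
        return f"instrument operation hints for {piece}"
    raise ValueError(f"unknown performance form: {form}")
```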
在步骤S20302中，第一终端通过交互设备获取第一用户根据演奏提示信息执行的演奏动作，并通过第一终端自身的运算或者借助服务端的运算，生成第一用户的演奏动作对应的反馈信息并输出，以及接收服务端或者第二终端根据第二用户执行的演奏动作生成的反馈信息并输出。该反馈信息至少包括视觉反馈信息、听觉反馈信息，还可以包括力觉反馈信息，具体可以通过头显设备输出该视觉反馈信息、通过耳机输出该听觉反馈信息，通过数据手套或者手柄输出该力觉反馈信息，从而实时地向第一用户反馈自身或者第二用户执行演奏动作后在该目标虚拟现实场景产生的视觉、听觉、力觉效果，以增强第一用户在与第二用户进行协同演奏时的真实感和沉浸感。In step S20302, the first terminal acquires, through the interactive devices, the performance actions executed by the first user according to the performance prompt information, generates and outputs the feedback information corresponding to those actions through its own computation or with the help of the server's computation, and also receives and outputs the feedback information generated by the server or the second terminal according to the performance actions executed by the second user. The feedback information includes at least visual feedback information and auditory feedback information, and may further include force feedback information; specifically, the visual feedback information may be output through the head-mounted display device, the auditory feedback information through the earphones, and the force feedback information through the data gloves or the handle, so that the visual, auditory, and force effects produced in the target virtual reality scene after the first user or the second user executes a performance action are fed back to the first user in real time, enhancing the first user's sense of realism and immersion while performing collaboratively with the second user.
可选地，第一终端获取第一用户的演奏动作的信息、第二终端获取第二用户的演奏动作的信息后，均上传至服务端；服务端根据获取的演奏动作的信息（演奏动作的位置、手势、加速度等信息）生成对应的反馈信息，并发送至第一终端和第二终端，再由第一终端将该反馈信息输出至第一用户佩戴的交互设备、由第二终端将该反馈信息输出至第二用户佩戴的交互设备。其中，服务端根据该演奏动作的信息生成反馈信息的步骤可以如下：(1)根据演奏动作的信息，通过空间分解法或者层次包围盒法进行碰撞检测计算，得到碰撞检测结果，该碰撞检测结果包含根据演奏动作的位置计算出的用户的虚拟手与虚拟现实场景中的虚拟物体的接触位置以及根据演奏动作的手势和加速度计算出的对虚拟物体的作用力度等信息；(2)根据碰撞检测结果进行虚拟物体的形变计算，得到视觉反馈信息；(3)根据碰撞检测结果确定用户对虚拟演奏设备的弹奏效果，获取对应的音频信息作为听觉反馈信息；(4)根据碰撞检测结果，结合牛顿第三定律及虚拟物体的物理属性，进行力反馈计算，得到力觉反馈信息。Optionally, after the first terminal acquires the information of the first user's performance actions and the second terminal acquires the information of the second user's performance actions, both upload it to the server; the server generates corresponding feedback information according to the acquired performance action information (the position, gesture, acceleration, and other information of the performance action) and sends it to the first terminal and the second terminal, after which the first terminal outputs the feedback information to the interactive devices worn by the first user, and the second terminal outputs it to the interactive devices worn by the second user. The steps by which the server generates the feedback information according to the performance action information may be as follows: (1) according to the performance action information, perform a collision detection calculation by a spatial decomposition method or a hierarchical bounding-box method to obtain a collision detection result, which contains information such as the contact position between the user's virtual hand and a virtual object in the virtual reality scene, calculated from the position of the performance action, and the force applied to the virtual object, calculated from the gesture and acceleration of the performance action; (2) perform a deformation calculation of the virtual object according to the collision detection result to obtain the visual feedback information; (3) determine, according to the collision detection result, the user's playing effect on the virtual performance device, and acquire the corresponding audio information as the auditory feedback information; (4) according to the collision detection result, combined with Newton's third law and the physical properties of the virtual object, perform a force feedback calculation to obtain the force feedback information.
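Steps (1) to (4) above can be illustrated in miniature. In the sketch below, a simple axis-aligned bounding-box test stands in for the spatial decomposition and hierarchical bounding-box methods named in the text, and the reaction force is the equal-and-opposite force of Newton's third law; the numeric scalings and dictionary keys are invented for the example and are not part of the embodiments.

```python
def compute_feedback(hand_pos, strike_force, key_box):
    lo, hi = key_box
    # (1) Collision detection: axis-aligned bounding-box test as a stand-in
    # for the spatial-decomposition / hierarchical bounding-box methods.
    if not all(lo[i] <= hand_pos[i] <= hi[i] for i in range(3)):
        return None  # no contact, hence no feedback to generate
    # (2) Deformation calculation -> visual feedback (depth capped for display).
    visual = {"deformation": min(0.1 * strike_force, 1.0)}
    # (3) Playing effect -> auditory feedback (a harder strike sounds louder).
    audio = {"volume": strike_force}
    # (4) Newton's third law: the hand feels an equal and opposite reaction force.
    force = {"reaction": -strike_force}
    return {"visual": visual, "audio": audio, "force": force}

# A strike of force 3.0 landing inside the unit-cube "key".
fb = compute_feedback((0.5, 0.2, 0.1), 3.0, ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))
```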
本申请实施例中，由于能够根据协同演奏邀请信息中的演奏形式设置信息和演奏曲目准确地显示对应的演奏提示信息，即使用户不懂乐理和乐器的专业演奏方法，也可以根据该演奏提示信息准确地执行相应的演奏动作，从而降低了对用户的专业演奏要求，提高协同演奏方法的普遍适用性和操作简易性；并且，由于能够获取第一用户、第二用户执行演奏动作生成的反馈信息并输出，使得用户能够及时获得当前协同演奏的反馈效果，因此能够增强第一用户在与第二用户进行协同演奏时的真实感和沉浸感。In the embodiments of the present application, since the corresponding performance prompt information can be accurately displayed according to the performance form setting information and the performance piece in the collaborative performance invitation information, even a user who does not understand music theory or the professional playing technique of an instrument can accurately execute the corresponding performance actions according to the prompt information, which lowers the professional performance requirements on the user and improves the general applicability and ease of operation of the collaborative performance method. Moreover, since the feedback information generated by the performance actions of the first user and the second user can be acquired and output, the user can obtain the feedback effect of the current collaborative performance in time, which enhances the first user's sense of realism and immersion while performing collaboratively with the second user.
可选地,若根据所述演奏形式设置信息确定所述第一用户的演奏形式包括唱歌,则所述演奏提示信息包括歌词提示信息;所述步骤S20302,包括:Optionally, if it is determined according to the performance form setting information that the performance form of the first user includes singing, the performance prompt information includes lyrics prompt information; the step S20302 includes:
A1:获取所述第一用户根据所述歌词提示信息在所述目标虚拟现实场景中演唱生成的第一响应数据并传递至所述第二终端，以及获取第二终端发送的所述第二用户执行演奏动作生成的第二响应数据；A1: Acquire first response data generated by the first user singing in the target virtual reality scene according to the lyric prompt information and transmit it to the second terminal, and acquire second response data, sent by the second terminal, generated by the second user executing performance actions;
A2:根据所述第一响应数据及所述第二响应数据生成反馈信息,并输出所述反馈信息;所述反馈信息包括听觉反馈信息及视觉反馈信息。A2: Generate feedback information according to the first response data and the second response data, and output the feedback information; the feedback information includes auditory feedback information and visual feedback information.
In this embodiment of the present application, the performance form set for the first user by the performance form setting information is singing, so the currently displayed performance prompt information includes lyrics prompt information. Specifically, the first terminal loads from the server the lyrics data corresponding to the current performance piece and outputs it, in time sequence according to the current performance progress, to the head-mounted display worn by the first user. The lyrics data includes at least the lyric text, and may further include prompt information such as pronunciation, pitch, and rhythm corresponding to the lyrics.

Correspondingly, step S20302 includes steps A1 and A2. Specifically:

In A1, the first user sings the above performance piece as prompted by the lyrics prompt information, and the first terminal captures the sound of the first user singing the piece through a microphone to generate corresponding singing audio data; at the same time, when the sound is captured, pre-stored singing animation data for displaying the first user's virtual avatar may be acquired, and the singing audio data and the animation data together serve as the first response data. In addition, the first terminal receives second response data, sent by the second terminal, that is generated according to the second user's performance actions; the second response data may include audio data generated by the second user singing or playing a virtual instrument, as well as singing animation data or instrument-playing animation data of the second user's virtual avatar.

In A2, the first terminal generates and outputs corresponding feedback information according to the acquired first and second response data. Specifically, auditory feedback information is generated from the audio data in the first response data and the audio data in the second response data, and output to the earphones worn by the first user; visual feedback information is generated from the first user's avatar singing animation data in the first response data and the second user's singing animation data or instrument-playing animation data in the second response data, and output to the head-mounted display worn by the first user.
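The auditory side of step A2 can be pictured as mixing the two users' audio streams into one feedback stream. A minimal sketch, assuming (hypothetically, not stated in the specification) that the audio in both response payloads arrives as equal-length arrays of 16-bit PCM samples:

```python
# Sketch of step A2's auditory-feedback mixing: the two users' audio
# streams (assumed here to be equal-length 16-bit PCM samples) are
# summed and clipped before being routed to the first user's earphones.
def mix_auditory_feedback(first_pcm, second_pcm):
    """Combine both users' audio samples into one feedback stream."""
    mixed = []
    for a, b in zip(first_pcm, second_pcm):
        s = a + b                       # sum the two performances
        s = max(-32768, min(32767, s))  # clip to the 16-bit PCM range
        mixed.append(s)
    return mixed

if __name__ == "__main__":
    singing = [1000, -2000, 30000]    # first user's captured vocals
    instrument = [500, -500, 10000]   # second user's instrument audio
    print(mix_auditory_feedback(singing, instrument))  # [1500, -2500, 32767]
```

A real terminal would of course mix continuously and resample for network jitter; the clipping step simply keeps the summed signal inside the sample range.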
In this embodiment of the present application, when the performance form is singing, the user can be guided to sing in the target virtual reality scene through the lyrics prompt information and the corresponding feedback information is output, so that remote chorus and backing vocals are realized accurately and effectively in the virtual reality scene.
Optionally, if it is determined from the performance form setting information that the performance form of the first user includes instrument playing, the target virtual reality scene includes a virtual instrument or a virtual simplified performance device corresponding to the target instrument, and the performance prompt information includes instrument operation prompt information marked on the virtual instrument; step S20302 includes:

B1: acquiring first response data generated by the first user operating the virtual instrument or the virtual simplified performance device in the target virtual reality scene according to the instrument operation prompt information and transmitting it to the second terminal, and acquiring second response data, sent by the second terminal, that is generated by the second user performing a performance action;

B2: generating feedback information according to the first response data and the second response data, and outputting the feedback information, the feedback information including auditory feedback information, visual feedback information, and force feedback information.
In this embodiment of the present application, the performance form set for the first user by the performance form setting information is instrument playing; specifically, the setting information also designates the target instrument the first user is to play. The target instrument may include percussion instruments such as gongs, drums, and drum kits; keyboard instruments such as the piano and electronic organ; stringed instruments such as the violin, cello, guqin, and guzheng; or wind instruments such as the flute, piccolo, xiao, oboe, and saxophone. Correspondingly, the current target virtual reality scene includes a virtual instrument or a virtual simplified performance device corresponding to the target instrument.

Specifically, the virtual instrument is a virtual object in the target virtual reality scene that takes the form of the target instrument (i.e., a three-dimensional model built to imitate the actual instrument); for example, if the current target instrument is a gong-and-drum, a virtual gong-and-drum of that form may be placed in the scene. The virtual simplified performance device is a virtual object in the scene in the form of a simplified device for playing the target instrument; it simplifies the user's operation of the target instrument so that even users unfamiliar with the instrument can play it simply and effectively.

If the target instrument is a complex percussion instrument, the simplified performance device may be an electronic percussion pad, and the virtual simplified performance device a virtual electronic percussion pad, shown schematically in FIG. 3. For example, when the user selects a drum kit as the target instrument, the multiple striking positions on the drum kit are mapped onto the virtual electronic percussion pad, so that striking the pad in the target virtual reality scene produces the same sound effects as operating the drum kit; because the pad has fewer striking positions and lower demands on the striking motion than a drum kit, the complexity of operating the instrument is reduced.

If the target instrument is a keyboard instrument, the simplified performance device may be a Musical Instrument Digital Interface (MIDI) keyboard, and the virtual simplified performance device a virtual MIDI keyboard, shown schematically in FIG. 4. For example, when the user selects a piano as the target instrument, the multiple playing positions on the piano are mapped onto the virtual MIDI keyboard, so that playing the virtual MIDI keyboard in the target virtual reality scene produces sound effects consistent with playing a piano; because the virtual MIDI keyboard has fewer keys than a piano, the complexity of operating the instrument is reduced.

If the target instrument is a stringed instrument, the virtual simplified performance device may consist of a few virtual strings. For example, if the virtual stringed instrument is a violin, the virtual simplified performance device, as shown in FIG. 5, consists of four strings corresponding to the note names G, D, A, and E; when the virtual stringed instrument is a guitar, it consists of six strings corresponding to the note names E, A, D, G, B, and E.

If the target instrument is a wind instrument, the virtual simplified performance device may consist of several virtual round holes, including a blow hole and finger holes, as shown in FIG. 6. Optionally, the blow hole serves only as an indication and is not operated by the user, while the finger holes can be pressed through an interactive device such as a data glove; when the user presses at a finger-hole position, the blow hole is deemed to have taken effect by default. In other words, finger pressing alone is treated as equivalent to the blow-and-press coordination of a real wind instrument, further simplifying the user's performance operation.

Further, because stringed and wind instruments are relatively complicated to operate, their corresponding virtual simplified performance device may also be a virtual sound-effect board, similar in form to the virtual electronic percussion pad of FIG. 3; the user only needs to tap or press the board to produce the playing sounds of stringed or wind instruments in the virtual reality scene.
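The "simplified performance device" idea is essentially a many-to-few mapping from real instrument positions onto a small set of virtual pads. A sketch under assumed values (the pad numbering and the drum-to-pad assignment below are illustrative, not taken from the patent):

```python
# Hypothetical mapping for a drum kit collapsed onto the six pads of
# the virtual electronic percussion pad of FIG. 3.
DRUM_TO_PAD = {
    "kick": 1, "snare": 2, "hi_hat": 3,
    "tom_high": 4, "tom_low": 5, "crash": 6,
}

def pad_for(drum_part):
    """Return the virtual pad number that triggers the given drum sound."""
    return DRUM_TO_PAD[drum_part]

if __name__ == "__main__":
    print(pad_for("snare"))  # 2
```

Striking pad 2 in the scene would then trigger the snare sound, which is how a six-pad board can stand in for a full kit.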
Correspondingly, in this embodiment of the present application the performance prompt information includes instrument operation prompt information marked on the virtual instrument or the virtual simplified performance device. The instrument operation prompt information specifies the time point and manner of each operation: the time point is the moment at which the target instrument's sound occurs in the performance piece, and the operation manner may be striking, pressing, plucking, and so on. For example, when the virtual simplified performance device is the virtual electronic percussion pad of FIG. 3, the instrument operation prompt information may be striking instructions displayed on the pad in time sequence, such as a highlight or spark image shown on one of pads 1 to 6 to instruct the user to strike that pad. When the virtual simplified performance device is the virtual MIDI keyboard of FIG. 4, the prompt information may be pressing instructions displayed on the keyboard in time sequence, such as a highlighted or raised image on one of the keys to instruct the user to press it. When the virtual simplified performance device consists of virtual strings as in FIG. 5, the prompt information may be strum/pluck/press instructions displayed on the strings in time sequence; for example, the string to be operated is highlighted and a text prompt of "strum," "pluck," or "press" is displayed to instruct the user to perform the corresponding action on that string. When the virtual simplified performance device consists of a blow hole and finger holes as in FIG. 6, the prompt information may change certain finger holes from their hollow initial state to a solid state in time sequence to instruct the user to press them; FIG. 7 illustrates the finger-hole pressing patterns corresponding to the seven solfège syllables 1 (do), 2 (re), 3 (mi), 4 (fa), 5 (sol), 6 (la), and 7 (si).
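The time-sequenced prompts described above amount to a prompt track: a list of (time point, target, action) entries replayed against the performance clock. A minimal sketch with illustrative timeline values (none of the numbers below come from the patent):

```python
# Each entry pairs a moment in the performance piece with the pad to
# highlight and the required action.
PROMPT_TRACK = [
    (0.0, 2, "strike"),   # at 0.0 s, highlight pad 2
    (0.5, 4, "strike"),
    (1.0, 2, "strike"),
]

def prompts_due(track, now, window=0.05):
    """Return prompts whose time point falls inside the display window."""
    return [(pad, action) for t, pad, action in track
            if now - window <= t <= now + window]

if __name__ == "__main__":
    print(prompts_due(PROMPT_TRACK, 0.5))  # [(4, 'strike')]
```

The `window` parameter models the small tolerance a renderer would need, since the display loop never samples the clock at exactly the scheduled instant.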
In step B1, the first user, following the instrument operation prompt information marked on the virtual instrument or virtual simplified performance device, operates the virtual instrument in the target virtual reality scene through an interactive device (a handle or data glove) to play it. The first terminal captures, through the interactive device, the motion information of the first user operating the virtual instrument or virtual simplified performance device, and either computes the corresponding first response data from the motion information with its own response-data determination algorithm, or sends the motion information to the server, which computes the first response data with its response-data determination algorithm and returns it to the first terminal. The first terminal also receives the second response data, generated by the second user's performance actions, sent by the second terminal or the server.
In step B2, the first terminal combines the first response data generated by the first user operating the virtual instrument with the second response data generated by the second user's performance actions to determine and output the feedback information currently to be returned to the first user. Besides auditory and visual feedback information, this feedback includes force feedback information, i.e., information on the reaction force of the virtual instrument or virtual simplified performance device against the first user's operation. Specifically, the first terminal outputs the auditory feedback information to the earphones worn by the first user, the visual feedback information to the head-mounted display worn by the first user, and the force feedback information to the handle held, or data glove worn, by the first user, so that the feedback is delivered to the first user accurately.
In this embodiment of the present application, when the performance form is instrument playing, the instrument operation prompt information marked on the virtual instrument or virtual simplified performance device guides the user to operate it in the target virtual reality scene through the interactive device, so that even non-professional users can conveniently and accurately play an instrument in the scene. Moreover, because auditory, visual, and force feedback information can be generated from the response data produced by the user's operations and fed back to the user, the first user's sense of realism and immersion during collaborative performance with the second user is enhanced. Further, when virtual simplified performance devices such as the virtual electronic percussion pad, virtual MIDI keyboard, or virtual sound-effect board serve as the virtual performance equipment in the target virtual reality scene, the user's operations are simplified still further, so that even users unfamiliar with instruments can play simply and effectively in a virtual reality scene.
Optionally, acquiring the first response data generated by the first user operating the virtual instrument or virtual simplified performance device in the virtual reality scene according to the instrument operation prompt information, and transmitting it to the second terminal, includes:

B11: acquiring motion information of the first user operating the virtual instrument or the virtual simplified performance device in the target virtual reality scene according to the instrument operation prompt information;

B12: generating first visual response data and first force response data according to the motion information;

B13: acquiring corresponding pre-stored sound source information according to the motion information and generating first sound response data, the pre-stored sound source information being the sound of the target instrument being played, the sound including rhythm tones or melody tones;

B14: transmitting the first visual response data and the first sound response data to the second terminal.

Correspondingly, step B2 includes:

generating auditory feedback information according to the first sound response data and the second sound response data in the second response data;

generating visual feedback information according to the first visual response data and the second visual response data in the second response data;

generating force feedback information according to the first force response data; and

outputting the auditory feedback information, the visual feedback information, and the force feedback information.
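The force response of step B12 is computed from the collision result by a force-feedback algorithm such as a mass-spring model. A minimal sketch of the simplest such rule, Hooke's law, with an illustrative stiffness value (the patent names the model family but not its parameters):

```python
# Hedged sketch of mass-spring force feedback: the reaction force
# returned to the haptic device grows with how far the user's fingertip
# penetrates the virtual instrument's surface.
def reaction_force(penetration_m, stiffness_n_per_m=400.0):
    """Hooke's-law reaction force for a given surface penetration depth."""
    if penetration_m <= 0:   # no contact, no force
        return 0.0
    return stiffness_n_per_m * penetration_m

if __name__ == "__main__":
    print(reaction_force(0.01))   # 4.0 (newtons, for 1 cm penetration)
    print(reaction_force(-0.01))  # 0.0 (finger not touching the surface)
```

A full mass-spring model would chain many such springs over a deformable mesh; this single-spring form conveys the shape of the force data fed back to the handle or data glove.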
In this embodiment of the present application, the first response data specifically includes first sound response data, first visual response data, and first force response data.

Specifically, in step B11 the first terminal reads the sensor data of the interactive device to determine the motion information of the first user operating the virtual instrument or virtual simplified performance device in the target virtual reality scene according to the instrument operation prompt information; the motion information includes motion position, motion gesture, and motion acceleration.

In step B12, the first terminal uses a collision detection algorithm, either its own or the server's (for example, spatial decomposition or hierarchical bounding boxes), to determine from the motion information the collision result between the first user's performance action and virtual objects in the target virtual reality scene such as the virtual instrument or virtual simplified performance device. From the collision result and a deformation algorithm, it then determines the first visual response data, i.e., the deformation that the first user's operation imposes on the virtual objects in the scene; this deformation result may be drawn with the NURBS interface provided by the OpenGL API. From the collision result and a force-feedback algorithm (for example, a mass-spring model or the finite element method), it determines the first force response data, i.e., data representing the reaction force of the virtual instrument obtained after the first user's operation applies force to it.

In step B13, it is determined from the motion position in the motion information whether the user's current operation matches the position indicated by the instrument operation prompt information; if it does, the pre-stored sound source information stored in time sequence is acquired and the corresponding first sound response data is generated. The pre-stored sound source information is the pre-stored sound of the target instrument being played, which may be the rhythm tones of a percussion instrument or the melody tones of a keyboard, stringed, or wind instrument. Optionally, the pre-stored sound source information is rhythm tones and/or melody tones extracted in time sequence from a pre-stored audio file corresponding to the performance piece, the pre-stored audio file being obtained by recording, in advance, at least one actual instrument playing the piece, where the at least one actual instrument includes at least the target instrument. The server stores the pre-stored audio file of the current performance piece and extracts, in time sequence, the rhythm or melody tones played by each instrument at each moment, obtaining pre-stored sound source information arranged in time order; thereafter, if at some moment the motion information of the user's action matches the instrument operation prompt information, the pre-stored sound source information of the target instrument at that moment is acquired as the current first sound response data.

As an example, as shown in FIG. 8, suppose the pre-stored audio file of the current performance piece contains both the rhythm tones of a drum kit playing the piece and the melody tones of a piano playing it. From the pre-stored audio file, the rhythm tone and the melody tone occurring at playing time 1'20" (1 minute 20 seconds) are extracted and saved separately, yielding the rhythm tone the drum kit produces at 1'20" and the melody tone the piano produces at 1'20". Later, when the user selects this piece and selects the drum kit as the target instrument, if the user's motion information at 1'20" matches the striking position indicated by the instrument operation prompt information, the rhythm tone stored in advance for 1'20" serves as the current pre-stored sound source information and the first sound response data is generated from it; likewise, when the user selects the piano as the target instrument, if the user's motion at 1'20" matches the indicated key position, the melody tone stored in advance for 1'20" serves as the current pre-stored sound source information. The extraction and saving of the rhythm and melody tones at 1'30" (1 minute 30 seconds) shown in FIG. 8 proceeds in the same way as for 1'20". It should be understood that FIG. 8 illustrates only two playing moments; the extraction and saving of rhythm and melody tones at other playing moments not shown in the figure is similar. This method of extracting rhythm and melody tones from the pre-stored audio file of a performance piece, saving them as pre-stored sound source information, and later using that information as the generated first sound response data may be called "tone extraction and restoration"; through it, the sound of an actual instrument playing the current piece can be reproduced accurately.

Optionally, the pre-stored sound source information may also be downloaded from an online sound source library, or obtained by synthesizing the target instrument's performance of the piece with music editing software such as Adobe Audition or FL Studio (Fruity Loops Studio).

In step B14, because the second user performing collaboratively with the first user needs the visual and auditory effects of the first user playing the virtual instrument in order to enhance the realism of the collaborative performance, the first visual response data and first sound response data generated above are transmitted to the second terminal, directly or indirectly via the server, so that the second terminal generates corresponding feedback information for the second user. The first force response data, being the reaction-force data the first user receives from operating the virtual instrument, only needs to be fed back to the first user later and need not be transmitted to the second user.
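The "tone extraction and restoration" lookup of step B13 can be sketched as a table of pre-extracted tones keyed by instrument and timestamp, consulted only when the user's action matches the prompt. All timestamps and file names below are illustrative placeholders, not values from the specification:

```python
# Per-instrument tones pre-extracted at known timestamps from a
# recording of the piece (e.g., the 1'20" = 80 s entries of FIG. 8).
PRESTORED_TONES = {
    ("drum_kit", 80.0): "drum_0120.wav",   # rhythm tone at 1'20"
    ("piano",    80.0): "piano_0120.wav",  # melody tone at 1'20"
    ("drum_kit", 90.0): "drum_0130.wav",   # rhythm tone at 1'30"
}

def first_sound_response(target_instrument, t_seconds, action_matches_prompt):
    """Return the pre-stored tone to play, or None if the action misses."""
    if not action_matches_prompt:
        return None
    return PRESTORED_TONES.get((target_instrument, t_seconds))

if __name__ == "__main__":
    print(first_sound_response("drum_kit", 80.0, True))  # drum_0120.wav
    print(first_sound_response("piano", 80.0, False))    # None
```

Because the stored tones come from a real recording, replaying the matched entry restores the actual instrument's sound rather than synthesizing one.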
Correspondingly, in step B2 the first terminal combines the first sound response data determined in step B13 with the second sound response data contained in the second response data (i.e., the sound response data generated by the second user's performance actions) to obtain the auditory feedback information, and outputs it to the earphones worn by the first user; it combines the first visual response data determined in step B12 with the second visual response data contained in the second response data to obtain the visual feedback information, and outputs it to the head-mounted display worn by the first user; and it outputs the first force response data determined in step B12 directly, as the force feedback information, to the interactive device operated, or data glove worn, by the first user, so that the first user receives all of this feedback.
In this embodiment of the present application, because the motion information of the first user operating the virtual instrument through the interactive device is used to accurately generate the corresponding first visual response data, the first force response data, and, in combination with the pre-stored sound source information, the first sound response data, and because the auditory, visual, and force feedback information are then accurately output in combination with the second response data received by the first terminal, the accuracy of the feedback output is improved and the first user's sense of realism during collaborative performance with the second user is further enhanced. Further, when the pre-stored sound source information is specifically rhythm or melody tones extracted in time sequence from the pre-stored audio file corresponding to the performance piece, the "tone extraction and restoration" method accurately reproduces the sound of the target instrument playing the current piece.
Optionally, after step S203, the method further includes:
acquiring and outputting performance evaluation data.
In this embodiment of the present application, after the first user and the second user have performed collaboratively, the first terminal statistically generates and outputs the first user's performance evaluation data according to the performance-action information recorded during the collaborative performance. Alternatively, the first terminal uploads the recorded performance-action information of the first user to the server, so that the server performs statistical analysis on the performance-action information of the first user recorded by the first terminal and the performance-action information of the second user recorded by the second terminal, obtaining overall performance evaluation data for the collaborative performance; the first terminal and the second terminal then each obtain this overall performance evaluation data from the server and output it. Specifically, when the performance actions include singing, the recorded performance-action information includes pitch information and rhythm information recorded in time sequence while the user sings; by comparing this time-sequenced pitch and rhythm information with the pre-stored, time-sequenced pitch and rhythm information of the performance piece, the user's singing score is obtained as the performance evaluation data.
Specifically, when the performance actions include operating a virtual musical instrument, the recorded performance-action information includes the action information and action frequency of the user operating the virtual musical instrument; an operation accuracy rate (the percentage of the user's actions on the virtual instrument that match the instrument operation prompt information) is computed and used as the performance evaluation data. Optionally, in this embodiment, after acquiring the performance evaluation data, the first terminal may convert it into image information and output it to the head-mounted display worn by the first user, or convert it into audio information and output it to the earphones worn by the first user, so that the first user can obtain the performance evaluation data.
In this embodiment of the present application, since the performance evaluation data can be acquired and output after the collaborative performance, the first user obtains evaluation feedback on the collaborative performance in time and can grasp each collaborative performance result promptly, so as to make corresponding improvements and carry out targeted practice, thereby improving the intelligence and user experience of collaborative performance.
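The two evaluation rules just described — comparing time-sequenced pitch/rhythm against the pre-stored reference, and computing an operation accuracy rate against the prompts — can be sketched as follows. The tolerances, data shapes, and 0–100 scoring scale are illustrative assumptions, not values given in the embodiment.

```python
def singing_score(sung, reference, pitch_tol=1.0, time_tol=0.1):
    """Score singing (0-100) by comparing time-sequenced (time, pitch)
    pairs against the pre-stored reference. A note counts as correct
    when both its timing and pitch fall within tolerance."""
    hits = sum(1 for (t, p), (rt, rp) in zip(sung, reference)
               if abs(t - rt) <= time_tol and abs(p - rp) <= pitch_tol)
    return round(100 * hits / len(reference)) if reference else 0

def operation_accuracy(actions, prompts):
    """Percentage of virtual-instrument actions matching the
    instrument operation prompt information."""
    hits = sum(1 for a, p in zip(actions, prompts) if a == p)
    return round(100 * hits / len(prompts)) if prompts else 0
```

Either score could then be rendered as image information for the head-mounted display or as audio for the earphones, as the text describes.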
In this embodiment of the present application, the first user sends a collaborative performance invitation through the first terminal and, after obtaining the invitation acceptance information returned by the second user through the second terminal, loads from the server, together with the second terminal, the virtual reality scene determined according to the collaborative performance invitation information (that is, the target virtual reality scene); the first terminal then displays the target virtual reality scene and instructs the first user to perform the performance piece in that scene in collaboration with the second user. Because the second user can be invited according to the collaborative performance invitation information and the target virtual reality scene can be loaded from the server, the first user and the second user can collaboratively perform the specified piece in that scene; remote music entertainment interaction is therefore realized conveniently and effectively, allowing people in different regions to engage in immersive remote musical exchange based on virtual reality technology.
Embodiment 2:
FIG. 9 shows a schematic flowchart of a second collaborative performance method provided by an embodiment of the present application. The method is applied to a second terminal and is detailed as follows:
In this embodiment of the present application, the definitions of the first user, the second user, the first terminal, and the second terminal are exactly the same as in the previous embodiment and are not repeated here.
In S901, collaborative performance invitation information is received, and invitation acceptance information is returned to the first terminal corresponding to the first user; the collaborative performance invitation information includes the performance piece and performance scene information.
The second terminal receives the collaborative performance invitation information directly from the first terminal, or indirectly through the server; after the second user inputs confirmation of accepting the invitation, the second terminal generates and sends the invitation acceptance information, which is delivered directly to the first terminal corresponding to the first user or returned to the first terminal indirectly through the server. The collaborative performance invitation information includes at least the performance piece and performance scene information, and may further include performance difficulty information, the account information of the invited second user, the number of users in this collaborative performance, the second user's permissions, the users' avatar setting information, performance form setting information, and so on. The specific meaning of the collaborative performance invitation information is the same as in Embodiment 1; see the relevant description there for details.
In S902, the target virtual reality scene corresponding to the performance scene information is loaded from the server together with the first terminal; the target virtual reality scene is a virtual reality scene determined by the server according to the collaborative performance invitation information.
After returning the invitation acceptance information, the second terminal, together with the first terminal, loads from the server the target virtual reality scene corresponding to the performance scene information in the collaborative performance invitation information. The specific meaning of the target virtual reality scene is the same as in Embodiment 1; see the relevant description there for details.
In S903, the target virtual reality scene is displayed, and the second user is instructed to perform the performance piece in the target virtual reality scene in collaboration with the first user.
After loading the target virtual reality scene, the second terminal outputs the scene data to the interactive devices worn by the second user and instructs the second user, through these devices, to perform performance actions. Specifically, according to the loaded scene data, the second terminal outputs the visual image information of the target virtual reality scene to the head-mounted display worn by the second user, showing the second user the target virtual reality scene containing the virtual environment, the virtual performance devices, and the virtual avatars of the first and second users; by adding, in this visual image information, instruction information directing the second user to perform, it instructs the second user to execute performance actions through interactive devices such as a handle, data gloves, or a microphone, so that the first user and the second user perform the performance piece collaboratively in the target virtual reality scene.
Optionally, the collaborative performance invitation information further includes performance form setting information for setting the performance form of the first user and the performance form of the second user, the performance forms including singing and musical instrument playing;
correspondingly, step S903 includes:
displaying the target virtual reality scene, and displaying performance prompt information according to the performance form setting information and the performance piece;
acquiring feedback information generated by the first user and the second user performing performance actions in the target virtual reality scene according to the performance prompt information, and outputting the feedback information, so as to realize the second user performing the performance piece in the target virtual reality scene in collaboration with the first user.
Optionally, if it is determined according to the performance form setting information that the performance form of the second user includes singing, the performance prompt information includes lyrics prompt information;
the acquiring of the feedback information generated by the first user and the second user performing performance actions in the target virtual reality scene according to the performance prompt information, and the outputting of the feedback information, include:
acquiring second response data generated by the second user singing in the target virtual reality scene according to the lyrics prompt information and transmitting it to the first terminal, and acquiring first response data, sent by the first terminal, generated by the first user performing performance actions;
generating feedback information according to the first response data and the second response data, and outputting the feedback information; the feedback information includes auditory feedback information and visual feedback information.
Optionally, if it is determined according to the performance form setting information that the performance form of the second user includes musical instrument playing, the target virtual reality scene includes a virtual musical instrument or a virtual simplified playing device corresponding to the target instrument, and the performance prompt information includes instrument operation prompt information marked on the virtual musical instrument or the virtual simplified playing device;
the acquiring of the feedback information generated by the first user and the second user performing performance actions in the target virtual reality scene according to the performance prompt information, and the outputting of the feedback information, include:
acquiring second response data generated by the second user operating the virtual musical instrument or the virtual simplified playing device in the target virtual reality scene according to the instrument operation prompt information and transmitting it to the first terminal, and acquiring first response data, sent by the first terminal, generated by the first user performing performance actions;
generating feedback information according to the first response data and the second response data, and outputting the feedback information, the feedback information including auditory feedback information, visual feedback information, and force feedback information.
Optionally, the second response data includes second sound response data, second visual response data, and second force response data, and the acquiring of the second response data generated by the second user operating the virtual musical instrument or the virtual simplified playing device in the target virtual reality scene according to the instrument operation prompt information, and the transmitting of it to the first terminal, include:
acquiring action information of the second user operating the virtual musical instrument or the virtual simplified playing device in the target virtual reality scene according to the instrument operation prompt information;
generating the second visual response data and the second force response data according to the action information;
acquiring corresponding pre-stored sound source information according to the action information, and generating the second sound response data; the pre-stored sound source information is a pre-stored sound effect of the target instrument, the sound effect including rhythm sounds or melody sounds;
transmitting the second visual response data and the second sound response data to the first terminal;
Correspondingly, the generating of the feedback information according to the first response data and the second response data, and the outputting of the feedback information, include:
outputting the auditory feedback information according to the second sound response data and the first sound response data in the first response data;
outputting the visual feedback information according to the second visual response data and the first visual response data in the first response data;
outputting the force feedback information according to the second force response data.
Optionally, after step S903, the method further includes:
acquiring and outputting performance evaluation data.
In this embodiment of the present application, the specific process by which the second terminal performs the above steps is similar or identical to the corresponding steps performed by the first terminal in Embodiment 1; see the relevant description in Embodiment 1 for details, which are not repeated here.
In this embodiment of the present application, the second terminal can obtain the collaborative performance invitation information sent by the first terminal, load and display the corresponding target virtual reality scene, and instruct the second user to perform the designated performance piece in that scene in collaboration with the first user; the interaction between the second terminal and the first terminal thus effectively realizes the collaborative performance of the second user and the first user, allowing both users to collaboratively perform the specified piece in the target virtual reality scene. Remote music entertainment interaction is therefore realized conveniently and effectively, so that music lovers in different regions can engage in immersive remote musical exchange based on virtual reality technology.
Embodiment 3:
FIG. 10 shows a schematic structural diagram of a collaborative performance system provided by an embodiment of the present application. For ease of description, only the parts related to this embodiment are shown:
The collaborative performance system includes a first terminal, at least one second terminal, and a server. The first terminal is configured to execute the collaborative performance method described in Embodiment 1, and the second terminal is configured to execute the collaborative performance method described in Embodiment 2; see the relevant descriptions of Embodiments 1 and 2 for details, which are not repeated here.
Specifically, the server is configured to receive the collaborative performance invitation information and determine the target virtual reality scene according to it, and to transmit the data of the target virtual reality scene to the first terminal and the second terminal.
In this embodiment of the present application, the server is a terminal device that constructs and stores virtual reality scenes. Using tools such as the Virtual Reality Modeling Language (VRML), Java 3D (a set of application programming interfaces extending the Java language into three-dimensional graphics), or the Open Graphics Library (OpenGL), the server can construct and store in advance various virtual reality scenes containing virtual environments such as a karaoke room, a grassland, or a seaside, as well as a pre-stored virtual performance device database and a virtual avatar database. After receiving the collaborative performance invitation information, the server determines the corresponding virtual reality scene according to the performance scene information it contains, selects the corresponding virtual performance devices from the virtual performance device database and the corresponding avatars from the avatar database according to the choices of the first user and/or the second user, and adds them to the scene, thereby obtaining the target virtual reality scene.
When the server then receives loading requests from the first terminal and the second terminal, it transmits the data of the target virtual reality scene (specifically, the scene's VRML file) to the first terminal and the second terminal, so that they display the target virtual reality scene to the corresponding first user and second user.
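The scene-assembly step just described — picking a base scene by the invitation's scene information, then attaching the selected performance devices and avatars from their databases — can be sketched as below. The dictionary field names and database layout are assumptions for illustration; the embodiment itself describes VRML/Java 3D/OpenGL assets rather than Python dictionaries.

```python
def build_target_scene(invite, scene_db, instrument_db, avatar_db):
    """Assemble the target virtual reality scene from the invitation.

    `invite` is assumed to carry the scene name, the chosen virtual
    performance devices, and each user's avatar selection.
    """
    scene = dict(scene_db[invite["scene"]])  # base environment, e.g. karaoke room
    scene["instruments"] = [instrument_db[i] for i in invite["instruments"]]
    scene["avatars"] = [avatar_db[u["avatar"]] for u in invite["users"]]
    return scene
```

The resulting structure is what the server would serialize (e.g. as a VRML file) and transmit when the terminals issue their loading requests.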
Optionally, in this embodiment of the present application, in addition to constructing, storing, and transmitting virtual reality scenes, the server may also serve as the data transmission medium between the first terminal and the second terminal, with all interaction between the two terminals carried out through data forwarding by the server.
Optionally, the server is further configured to monitor the performance-action information of the first user recorded by the first terminal and the performance-action information of the second user recorded by the second terminal, compute the corresponding feedback information from this performance-action information, and return the feedback information to the first terminal and the second terminal, so that the first terminal outputs it to the first user and the second terminal outputs it to the second user. In this way, the first user and the second user can promptly and accurately perceive the effect of their own and each other's performance actions, enhancing the first user's sense of realism and immersion when performing collaboratively with the second user.
Optionally, the server is further configured to compute performance evaluation data for the collaborative performance from the performance-action information of the first user and of the second user, and to output it to the first terminal and the second terminal for feedback to the corresponding first user and second user, so that the users can grasp the evaluation feedback for the current collaborative performance in time, improving the intelligence and user experience of collaborative performance.
As an example and not a limitation, the collaborative performance system of this embodiment of the present application can be implemented on a Web3D virtual reality network platform, with the first terminal, the second terminal (collectively, the user terminals), and the server establishing communication connections over a 5G network. Specifically, as shown in FIG. 11, the server may include a collaboration server and a Web server, where the Web server stores the virtual reality scene files in advance, specifically VRML files with the extension .wrl and Java (an object-oriented programming language) files with the extension .class. A user can log in to the corresponding user account through the browser on a user terminal, establish a communication connection with the server, and load the files of the corresponding target virtual reality scene: the terminal displays the target virtual reality scene according to its VRML file, and generates from the scene's Java files a Java applet (a small application written in Java that can be embedded directly in a web page) used to realize the interaction between the inside and the outside of the target virtual reality scene.
Thereafter, a monitoring thread in the collaboration server listens in real time for change information inside the target virtual reality scene recorded by the Java applet on each user terminal (for example, collision detection results produced by a user's performance actions in the scene), and a communication thread of the collaboration server passes the change information to the Java applets of the other user terminals, so that those applets apply the change to the target virtual reality scene each of them displays and generate corresponding feedback information for their users; in this way, every user perceives the changes that other users produce in the target virtual reality scene. By way of example, FIG. 11 shows the process of user terminal a loading the .wrl VRML file and the .class Java file from the Web server, and of the server monitoring the change information on user terminal a through the monitoring thread and feeding it back to the other user terminals through the communication thread.
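The monitor-and-forward behaviour of the collaboration server reduces to a simple relay: a change reported by one terminal is delivered to every other terminal, which applies it to its local copy of the scene. The sketch below is an illustrative Python analogue of that relay (the embodiment describes Java applets and VRML scenes, and real monitoring/communication threads would run concurrently; this single-threaded sketch only shows the routing logic).

```python
class CollaborationServer:
    """Minimal relay: forward each terminal's scene changes to all
    other registered terminals, which append them to their local
    change log (standing in for the applet applying them)."""

    def __init__(self):
        self.terminals = {}  # terminal id -> list of applied change events

    def register(self, terminal_id):
        self.terminals[terminal_id] = []

    def report_change(self, sender_id, change):
        # the "monitoring thread" picked up `change` from `sender_id`;
        # the "communication thread" forwards it to everyone else
        for tid, applied in self.terminals.items():
            if tid != sender_id:
                applied.append(change)
```

The sender's own terminal already rendered the change locally, which is why it is excluded from the broadcast.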
Through the interaction of the first terminal, the second terminal, and the server, the collaborative performance system of this embodiment of the present application realizes the transmission of collaborative performance invitation information, the determination, loading, and display of the target virtual reality scene, and the output of feedback information, so that different users can achieve immersive remote collaborative performance through the target virtual reality scene, realizing remote music entertainment interaction conveniently and effectively.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Embodiment 4:
FIG. 12 shows a schematic structural diagram of a first terminal provided by an embodiment of the present application. For ease of description, only the parts related to this embodiment are shown:
The first terminal includes a collaborative performance invitation information sending unit 121, an invitation acceptance information obtaining unit 122, and a first display unit 123. Specifically:
The collaborative performance invitation information sending unit 121 is configured to send collaborative performance invitation information, the collaborative performance invitation information including the performance piece and performance scene information.
The invitation acceptance information obtaining unit 122 is configured to, if the invitation acceptance information returned by the second user through the second terminal is obtained, load the target virtual reality scene corresponding to the performance scene information from the server together with the second terminal; the target virtual reality scene is a virtual reality scene determined by the server according to the collaborative performance invitation information.
The first display unit 123 is configured to display the target virtual reality scene and instruct the first user to perform the performance piece in the target virtual reality scene in collaboration with the second user.
FIG. 13 shows a schematic structural diagram of a second terminal provided by the present application. For ease of description, only the parts related to the embodiments of the present application are shown:
The second terminal includes a collaborative performance invitation information receiving unit 131, a loading unit 132, and a second display unit 133. Specifically:
The collaborative performance invitation information receiving unit 131 is configured to receive collaborative performance invitation information and return invitation acceptance information to the first terminal corresponding to the first user; the collaborative performance invitation information includes the performance piece and performance scene information.
The loading unit 132 is configured to load the target virtual reality scene corresponding to the performance scene information from the server together with the first terminal; the target virtual reality scene is a virtual reality scene determined by the server according to the collaborative performance invitation information.
The second display unit 133 is configured to display the target virtual reality scene and instruct the second user to perform the performance piece in the target virtual reality scene in collaboration with the first user.
It should be noted that, because the information exchange and execution processes between the above devices/units are based on the same concept as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiments section and are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the functional units and modules above is only an example; in practical applications, the above functions may be allocated to different functional units or modules as required, that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of mutual distinction and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Embodiment 5:
FIG. 14 is a schematic diagram of a terminal device provided by an embodiment of the present application. As shown in FIG. 14, the terminal device 14 of this embodiment includes: a processor 140, a memory 141, and a computer program 142 stored in the memory 141 and executable on the processor 140, such as a collaborative performance program. When the processor 140 executes the computer program 142, the steps in each of the above collaborative performance method embodiments are implemented, such as steps S201 to S203 shown in FIG. 2 or steps S901 to S903 shown in FIG. 3. Alternatively, when the processor 140 executes the computer program 142, the functions of the modules/units in the above device embodiments are implemented, such as the functions of units 121 to 123 shown in FIG. 12 or the functions of units 131 to 133 shown in FIG. 13.
Exemplarily, the computer program 142 may be divided into one or more modules/units, which are stored in the memory 141 and executed by the processor 140 to carry out the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 142 in the terminal device 14. For example, the computer program 142 may be divided into a collaborative performance invitation information sending unit, an acceptance information acquiring unit, and a first display unit; or the computer program 142 may be divided into a collaborative performance invitation information receiving unit, a loading unit, and a second display unit.
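As a hedged illustration of the division just described, a program can be expressed as an ordered series of instruction segments, each corresponding to one named unit. The function names below follow the units listed in the text, but the dispatch structure itself is an assumption made for illustration:

```python
# Illustrative only: one way the "computer program 142" could be split into
# the named modules/units. The dispatch table is an assumption, not the
# patent's implementation.

def send_invitation(state):       # collaborative performance invitation sending unit
    state["sent"] = True
    return state

def acquire_acceptance(state):    # acceptance information acquiring unit
    state["accepted"] = True
    return state

def display_scene(state):         # first display unit
    state["displayed"] = True
    return state

# The program as an ordered series of instruction segments:
program = [send_invitation, acquire_acceptance, display_scene]

def run(program):
    """Execute each instruction segment in turn, threading shared state."""
    state = {}
    for unit in program:
        state = unit(state)
    return state

print(run(program))
```

Each segment here only flags its step in a shared state dictionary; in a real terminal each unit would perform the corresponding network, loading, or rendering work.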
The terminal device 14 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 140 and the memory 141. Those skilled in the art can understand that FIG. 14 is merely an example of the terminal device 14 and does not constitute a limitation on the terminal device 14; it may include more or fewer components than shown, combine certain components, or have different components. For example, the terminal device may further include input/output devices, a network access device, a bus, and the like.
The processor 140 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 141 may be an internal storage unit of the terminal device 14, such as a hard disk or memory of the terminal device 14. The memory 141 may also be an external storage device of the terminal device 14, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device 14. Further, the memory 141 may include both an internal storage unit of the terminal device 14 and an external storage device. The memory 141 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the above functional units and modules is merely illustrative. In practical applications, the above functions may be allocated to different functional units and modules as required; that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for ease of distinguishing them from one another and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative; the division into modules or units is only a division by logical function, and in actual implementation there may be other division methods. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware or in the form of software functional units.
If the integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may also be implemented by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program may implement the steps of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like.
It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.
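The collaborative performance flow running through the foregoing embodiments (invitation, acceptance, joint scene loading, and joint performance) can be sketched as a minimal message sequence. All message shapes and names below are hypothetical assumptions for illustration, not the patent's protocol:

```python
# Minimal sketch of the invitation/acceptance/loading handshake between the
# first terminal (T1), the second terminal (T2), and the server, as described
# in the method embodiments. Message shapes are assumptions.

def collaborative_session():
    log = []
    invitation = {"piece": "folk duet", "scene": "riverside_stage"}
    log.append(("T1->T2", "invite", invitation))           # T1 sends invitation info
    log.append(("T2->T1", "accept", {}))                   # T2 returns acceptance info
    scene = f"vr:{invitation['scene']}"                    # server determines the scene
    log.append(("server->T1,T2", "load", scene))           # both terminals load it
    log.append(("T1,T2", "perform", invitation["piece"]))  # collaborative performance
    return log

for step in collaborative_session():
    print(step)
```

In a real deployment each log entry would correspond to a network exchange, and the "perform" step would stream the response and feedback data described in claims 3 to 5.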

Claims (15)

  1. A collaborative performance method, applied to a first terminal corresponding to a first user, the method comprising:
    sending collaborative performance invitation information, the collaborative performance invitation information comprising a performance piece and performance scene information;
    if acceptance information returned by a second user through a second terminal is acquired, loading, together with the second terminal, a target virtual reality scene corresponding to the performance scene information from a server, the target virtual reality scene being a virtual reality scene determined by the server according to the collaborative performance invitation information; and
    displaying the target virtual reality scene, and instructing the first user to perform the performance piece in collaboration with the second user in the target virtual reality scene.
  2. The collaborative performance method according to claim 1, wherein the collaborative performance invitation information further comprises performance form setting information for setting a performance form of the first user and a performance form of the second user, the performance forms comprising singing and musical instrument performance;
    correspondingly, the displaying the target virtual reality scene, and instructing the first user to perform the performance piece in collaboration with the second user in the target virtual reality scene, comprises:
    displaying the target virtual reality scene, and displaying performance prompt information according to the performance form setting information and the performance piece; and
    acquiring feedback information generated by the first user and the second user performing performance actions in the target virtual reality scene according to the performance prompt information, and outputting the feedback information, so that the first user performs the performance piece in collaboration with the second user in the target virtual reality scene.
  3. The collaborative performance method according to claim 2, wherein if it is determined according to the performance form setting information that the performance form of the first user comprises singing, the performance prompt information comprises lyrics prompt information;
    the acquiring feedback information generated by the first user and the second user performing performance actions in the target virtual reality scene according to the performance prompt information, and outputting the feedback information, comprises:
    acquiring first response data generated by the first user singing in the target virtual reality scene according to the lyrics prompt information and transmitting the first response data to the second terminal, and acquiring second response data, sent by the second terminal, generated by the second user performing performance actions; and
    generating feedback information according to the first response data and the second response data, and outputting the feedback information, the feedback information comprising auditory feedback information and visual feedback information.
  4. The collaborative performance method according to claim 2, wherein if it is determined according to the performance form setting information that the performance form of the first user comprises musical instrument performance, the target virtual reality scene comprises a virtual musical instrument or a virtual simple performance device corresponding to a target musical instrument, and the performance prompt information comprises instrument operation prompt information marked on the virtual musical instrument or the virtual simple performance device;
    the acquiring feedback information generated by the first user and the second user performing performance actions in the target virtual reality scene according to the performance prompt information, and outputting the feedback information, comprises:
    acquiring first response data generated by the first user operating the virtual musical instrument or the virtual simple performance device in the target virtual reality scene according to the instrument operation prompt information, and transmitting the first response data to the second terminal, and acquiring second response data, sent by the second terminal, generated by the second user performing performance actions; and
    generating feedback information according to the first response data and the second response data, and outputting the feedback information, the feedback information comprising auditory feedback information, visual feedback information, and force feedback information.
  5. The collaborative performance method according to claim 4, wherein the first response data comprises first sound response data, first visual response data, and first force response data, and the acquiring first response data generated by the first user operating the virtual musical instrument or the virtual simple performance device in the target virtual reality scene according to the instrument operation prompt information, and transmitting the first response data to the second terminal, comprises:
    acquiring action information of the first user operating the virtual musical instrument or the virtual simple performance device in the target virtual reality scene according to the instrument operation prompt information;
    generating the first visual response data and the first force response data according to the action information;
    acquiring corresponding pre-stored sound source information according to the action information, and generating the first sound response data, the pre-stored sound source information being pre-stored sound effects of the target musical instrument, the sound effects comprising rhythm sounds or melody sounds; and
    transmitting the first visual response data and the first sound response data to the second terminal;
    correspondingly, the generating feedback information according to the first response data and the second response data, and outputting the feedback information, comprises:
    outputting the auditory feedback information according to the first sound response data and second sound response data in the second response data;
    outputting the visual feedback information according to the first visual response data and second visual response data in the second response data; and
    outputting the force feedback information according to the first force response data.
  6. The collaborative performance method according to claim 1, further comprising, after the displaying the target virtual reality scene, and instructing the first user to perform the performance piece in collaboration with the second user in the target virtual reality scene:
    acquiring and outputting performance evaluation data.
  7. A collaborative performance method, applied to a second terminal corresponding to a second user, the method comprising:
    receiving collaborative performance invitation information, and returning acceptance information to a first terminal corresponding to a first user, the collaborative performance invitation information comprising a performance piece and performance scene information;
    loading, together with the first terminal, a target virtual reality scene corresponding to the performance scene information from a server, the target virtual reality scene being a virtual reality scene determined by the server according to the collaborative performance invitation information; and
    displaying the target virtual reality scene, and instructing the second user to perform the performance piece in collaboration with the first user in the target virtual reality scene.
  8. The collaborative performance method according to claim 7, wherein the collaborative performance invitation information further comprises performance form setting information for setting a performance form of the first user and a performance form of the second user, the performance forms comprising singing and musical instrument performance;
    correspondingly, the displaying the target virtual reality scene, and instructing the second user to perform the performance piece in collaboration with the first user in the target virtual reality scene, comprises:
    displaying the target virtual reality scene, and displaying performance prompt information according to the performance form setting information and the performance piece; and
    acquiring feedback information generated by the first user and the second user performing performance actions in the target virtual reality scene according to the performance prompt information, and outputting the feedback information, so that the first user performs the performance piece in collaboration with the second user in the target virtual reality scene.
  9. A collaborative performance system, comprising a first terminal corresponding to a first user, a second terminal corresponding to a second user, and a server, wherein:
    the first terminal is configured to execute the method according to any one of claims 1 to 6;
    the second terminal is configured to execute the method according to claim 7; and
    the server is configured to receive the collaborative performance invitation information, determine a target virtual reality scene according to the collaborative performance invitation information, and transmit data of the target virtual reality scene to the first terminal and the second terminal.
  10. The collaborative performance system according to claim 9, wherein the server is further configured to: monitor information of performance actions of the first user recorded by the first terminal and information of performance actions of the second user recorded by the second terminal; calculate and generate corresponding feedback information according to the information of the performance actions of the first user and the information of the performance actions of the second user; and return the feedback information to the first terminal and the second terminal.
  11. The collaborative performance system according to claim 10, wherein the server is further configured to calculate and generate performance evaluation data of the collaborative performance according to the information of the performance actions of the first user and the information of the performance actions of the second user, and output the performance evaluation data to the first terminal and the second terminal.
  12. A first terminal, the first terminal being a terminal corresponding to a first user, comprising:
    a collaborative performance invitation information sending unit, configured to send collaborative performance invitation information, the collaborative performance invitation information comprising a performance piece and performance scene information;
    an acceptance information acquiring unit, configured to, if acceptance information returned by a second user through a second terminal is acquired, load, together with the second terminal, a target virtual reality scene corresponding to the performance scene information from a server, the target virtual reality scene being a virtual reality scene determined by the server according to the collaborative performance invitation information; and
    a first display unit, configured to display the target virtual reality scene, and instruct the first user to perform the performance piece in collaboration with the second user in the target virtual reality scene.
  13. A second terminal, the second terminal being a terminal corresponding to a second user, comprising:
    a collaborative performance invitation information receiving unit, configured to receive collaborative performance invitation information and return acceptance information to a first terminal corresponding to a first user, the collaborative performance invitation information comprising a performance piece and performance scene information;
    a loading unit, configured to load, together with the first terminal, a target virtual reality scene corresponding to the performance scene information from a server, the target virtual reality scene being a virtual reality scene determined by the server according to the collaborative performance invitation information; and
    a second display unit, configured to display the target virtual reality scene, and instruct the second user to perform the performance piece in collaboration with the first user in the target virtual reality scene.
  14. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the terminal device implements the steps of the method according to any one of claims 1 to 8.
  15. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the computer program causes a terminal device to implement the steps of the method according to any one of claims 1 to 8.
PCT/CN2021/076155 2020-09-07 2021-02-09 Collaborative performance method and system, terminal device, and storage medium WO2022048113A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010927502.4A CN112203114B (en) 2020-09-07 2020-09-07 Collaborative playing method, system, terminal device and storage medium
CN202010927502.4 2020-09-07

Publications (1)

Publication Number Publication Date
WO2022048113A1 true WO2022048113A1 (en) 2022-03-10

Family

ID=74006364

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/076155 WO2022048113A1 (en) 2020-09-07 2021-02-09 Collaborative performance method and system, terminal device, and storage medium

Country Status (2)

Country Link
CN (1) CN112203114B (en)
WO (1) WO2022048113A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112203114B (en) * 2020-09-07 2022-07-12 佛山创视嘉科技有限公司 Collaborative playing method, system, terminal device and storage medium
CN113485559A (en) * 2021-07-23 2021-10-08 王皓 Virtual musical instrument playing method and system based on panoramic roaming platform
CN114927026A (en) * 2022-02-15 2022-08-19 湖北省民间工艺技师学院 Auxiliary method and device for playing Guqin, storage medium and Guqin
CN117298590A (en) * 2022-06-21 2023-12-29 腾讯科技(深圳)有限公司 Virtual reality interaction method, related device, equipment and storage medium
CN115713924B (en) * 2022-11-15 2023-06-27 广州珠江艾茉森数码乐器股份有限公司 Intelligent piano control method and system based on Internet of things

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130342564A1 (en) * 2012-06-25 2013-12-26 Peter Tobias Kinnebrew Configured virtual environments
EP2838020A1 (en) * 2013-08-16 2015-02-18 Disney Enterprises, Inc. Cross platform sharing of user-generated content
CN106716306A (en) * 2014-09-30 2017-05-24 索尼互动娱乐股份有限公司 Synchronizing multiple head-mounted displays to a unified space and correlating movement of objects in the unified space
CN109799903A (en) * 2018-12-21 2019-05-24 段新 Percussion music method, terminal device and system based on virtual reality
CN111402844A (en) * 2020-03-26 2020-07-10 广州酷狗计算机科技有限公司 Song chorusing method, device and system
CN112203114A (en) * 2020-09-07 2021-01-08 佛山创视嘉科技有限公司 Collaborative playing method, system, terminal device and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018005019A (en) * 2016-07-05 2018-01-11 株式会社エム・ティー・ケー Playing and staging device
KR102354274B1 (en) * 2017-11-17 2022-01-20 텐센트 테크놀로지(센젠) 컴퍼니 리미티드 Role play simulation method and terminal device in VR scenario
CN109166565A (en) * 2018-08-23 2019-01-08 百度在线网络技术(北京)有限公司 Virtual musical instrument processing method, device, virtual musical instrument equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130342564A1 (en) * 2012-06-25 2013-12-26 Peter Tobias Kinnebrew Configured virtual environments
EP2838020A1 (en) * 2013-08-16 2015-02-18 Disney Enterprises, Inc. Cross platform sharing of user-generated content
CN106716306A (en) * 2014-09-30 2017-05-24 索尼互动娱乐股份有限公司 Synchronizing multiple head-mounted displays to a unified space and correlating movement of objects in the unified space
CN109799903A (en) * 2018-12-21 2019-05-24 段新 Percussion music method, terminal device and system based on virtual reality
CN111402844A (en) * 2020-03-26 2020-07-10 广州酷狗计算机科技有限公司 Song chorusing method, device and system
CN112203114A (en) * 2020-09-07 2021-01-08 佛山创视嘉科技有限公司 Collaborative playing method, system, terminal device and storage medium

Also Published As

Publication number Publication date
CN112203114A (en) 2021-01-08
CN112203114B (en) 2022-07-12

Similar Documents

Publication Publication Date Title
CN112203114B (en) Collaborative playing method, system, terminal device and storage medium
US9558727B2 (en) Performance method of electronic musical instrument and music
CN102576524A (en) System and method of receiving, analyzing, and editing audio to create musical compositions
US20120144979A1 (en) Free-space gesture musical instrument digital interface (midi) controller
US20030014262A1 (en) Network based music playing/song accompanying service system and method
US10748515B2 (en) Enhanced real-time audio generation via cloud-based virtualized orchestra
Hayes et al. Imposing a networked vibrotactile communication system for improvisational suggestion
JP4131279B2 (en) Ensemble parameter display device
Lyons et al. Creating new interfaces for musical expression
WO2021176925A1 (en) Method, system and program for inferring audience evaluation of performance data
KR100757399B1 (en) Method for Idol Star Management Service using Network based music playing/song accompanying service system
Hashida et al. Rencon: Performance rendering contest for automated music systems
JP5847048B2 (en) Piano roll type score display apparatus, piano roll type score display program, and piano roll type score display method
JP6073618B2 (en) Karaoke equipment
JP2010169925A (en) Speech processing device, chat system, speech processing method and program
KR20210026656A (en) Musical ensemble performance platform system based on user link
JP2002006900A (en) Method and system for reducing and reproducing voice
CN112435644B (en) Audio signal output method and device, storage medium and computer equipment
JP5847049B2 (en) Instrument sound output device
JP7092537B2 (en) Fingering display device and fingering display program
JP3922207B2 (en) Net session performance device and program
JP3171818B2 (en) Music performance support device
Tomczak On the development of an interface framework in chipmusic: theoretical context, case studies and creative outcomes
Bryan-Kinns Computers in support of musical expression
Angell Combining Acoustic Percussion Performance with Gesture Control Electronics

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21863184

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18-07-2023)