CN116382466A - Virtual space interaction method, system, equipment and medium for off-line scenario killing - Google Patents

Virtual space interaction method, system, equipment and medium for off-line scenario killing

Info

Publication number
CN116382466A
Authority
CN
China
Prior art keywords
information
scenario
head-mounted display device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310132709.6A
Other languages
Chinese (zh)
Inventor
赵维奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Smart Boy Technology Co., Ltd.
Original Assignee
Sichuan Smart Boy Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Smart Boy Technology Co., Ltd.
Priority to CN202310132709.6A
Publication of CN116382466A
Legal status: Pending (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80Special adaptations for executing a specific game genre or game mode
    • A63F13/847Cooperative playing, e.g. requiring coordinated actions from several players to achieve a common goal

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present disclosure disclose a virtual space interaction method, system, device, and medium for offline scenario killing. The method is applied to a head-mounted display device, and one embodiment of the method comprises the following steps: in response to detecting that a player user is wearing the head-mounted display device, playing scenario introduction information; in response to detecting start information for the target scenario from a target hosting device, playing character script node information; updating the previously played character script node information according to the interactive operation information of the target hosting device and/or the interactive operation information of the head-mounted display device; in response to detecting ending information corresponding to the target scenario, storing the interactive operation information of the target hosting device and of the head-mounted display device into behavior data; and in response to determining that the behavior data includes ending information, generating review data. This embodiment improves the immersion of offline scenario killing, enables an automatic post-game review, and automatically records game progress.

Description

Virtual space interaction method, system, equipment and medium for off-line scenario killing
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a virtual space interaction method, system, device, and medium for offline scenario killing.
Background
Scenario killing is a game in which players take on roles in a scenario and solve the puzzles it contains. At present, offline scenario killing is generally conducted with paper scripts.
However, the inventor has found that this way of conducting offline scenario killing often suffers from the following technical problems: users must read lengthy script content by themselves and complete the game by means of physical props, so immersion is poor, an automatic post-game review is not realized, and game progress cannot be recorded automatically.
The above information disclosed in this background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not constitute prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
This summary section of the disclosure is intended to introduce concepts in a simplified form that are further described below in the detailed description. This section is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose virtual space interaction methods, systems, head-mounted display devices, and computer-readable media for offline scenario killing to address one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a virtual space interaction method for offline scenario killing, applied to a head-mounted display device, the method comprising: in response to detecting that a player user wears the head-mounted display device, playing scenario introduction information of a target scenario through a multimedia playing device of the head-mounted display device; in response to detecting start information for the target scenario from a target hosting device communicatively connected to the head-mounted display device, playing character script node information through the multimedia playing device according to the character script information corresponding to the player user in the target scenario; updating the character script node information previously played by the multimedia playing device according to the interactive operation information of the target hosting device and/or the interactive operation information of the head-mounted display device; in response to detecting ending information corresponding to the target scenario, storing the interactive operation information of the target hosting device and the interactive operation information of the head-mounted display device into behavior data; and in response to determining that the behavior data includes ending information, generating review data from the behavior data.
Optionally, the multimedia playing device includes a display screen and an audio playing device; and playing the scenario introduction information of the target scenario through the multimedia playing device of the head-mounted display device comprises: displaying visual information of the scenario introduction information on the display screen and playing audio of the scenario introduction information through the audio playing device; and in response to detecting that head pose information of the player user satisfies a preset walking state, moving the visual information of the scenario introduction information displayed on the display screen to a side position.
Optionally, after playing the scenario introduction information of the target scenario through the multimedia playing device of the head-mounted display device in response to detecting that the player user wears the head-mounted display device, the method further includes: performing binding processing on any character and the head-mounted display device according to a selection operation of the player user on that character among the characters displayed on the display screen of the head-mounted display device.
Optionally, the binding process for the arbitrary character and the head-mounted display device includes: binding the arbitrary roles with the player users; and binding the player user and the head-mounted display device.
Optionally, the binding process for the arbitrary character and the head-mounted display device includes: determining the character information of the head-mounted display device as the arbitrary character; acquiring a script node tree corresponding to any role as role script information; and storing the character script information.
Optionally, the character script information includes a script node tree corresponding to the character script information; and updating the character script node information previously played by the multimedia playing device according to the interactive operation information of the target hosting device and/or the interactive operation information of the head-mounted display device, wherein the method comprises the following steps: converting the input speech to speech text in response to detecting the input speech of the player user; determining the voice text as interactive operation information of the head-mounted display device; selecting a scenario node matched with the interactive operation information of the head-mounted display equipment from the scenario node tree as a target scenario node; and playing the character script node information corresponding to the target script node through the multimedia playing device so as to update the character script node information previously played by the multimedia playing device.
Optionally, before the playing of the scenario introduction information of the target scenario by the multimedia playing apparatus of the head-mounted display device in response to detecting that the player user wears the head-mounted display device, the method further includes: in response to receiving scenario confirmation information sent by target hosting equipment in communication connection with the head-mounted display equipment, determining a scenario corresponding to the scenario confirmation information as a target scenario; and acquiring scenario information of the target scenario, wherein the scenario information comprises a character scenario information set.
Optionally, the method further comprises: and in response to detecting that the position information of the head-mounted display device meets the position condition corresponding to any script node in the script node tree, playing role script node information corresponding to the any script node through the multimedia playing device.
Optionally, the method further comprises: identifying the acquired scene object image to obtain scene object information; and in response to determining that the character script node information corresponding to the scene object information exists in the character script information, playing the character script node information corresponding to the scene object information through the multimedia playing device.
Optionally, the method further comprises: responding to the chat information sent by the associated head-mounted display equipment corresponding to the target script, and displaying the chat information in the display screen; and sending reply information of the player user aiming at the chat information to the associated head-mounted display device.
In a second aspect, some embodiments of the present disclosure provide a head mounted display device comprising: one or more processors; and a storage device, on which one or more programs are stored, and a multimedia playing device for playing multimedia information, wherein the multimedia information includes scenario introduction information and character scenario node information, and when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation manner of the first aspect.
In a third aspect, some embodiments of the present disclosure provide a virtual space interaction system for offline scenario killing, comprising: a set of player clients, wherein each player client in the set of player clients comprises a head-mounted display device configured to perform the method described in any implementation manner of the first aspect, and each player client in the set of player clients is communicatively connected to each other; and the hosting user side is in communication connection with each player user side, and is configured to send starting information of the target script to each player user side.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following beneficial effects: with the virtual space interaction method for offline scenario killing of some embodiments of the present disclosure, the immersion of offline scenario killing is improved, an automatic post-game review can be realized, and game progress is recorded automatically. Specifically, the reasons why offline scenario killing is poorly immersive, cannot provide an automatic post-game review, and cannot automatically record game progress are as follows: users must read lengthy script content by themselves and complete the game by means of physical props, so immersion is poor, an automatic post-game review is not realized, and game progress cannot be recorded automatically. On this basis, the virtual space interaction method for offline scenario killing of some embodiments of the present disclosure is applied to a head-mounted display device. First, in response to detecting that a player user wears the head-mounted display device, scenario introduction information of a target scenario is played through a multimedia playing device of the head-mounted display device. In this way, the user can simply follow the automatic prompts and view the scenario introduction information through the head-mounted display device, without having to browse a paper script. Then, in response to detecting start information for the target scenario from a target hosting device communicatively connected to the head-mounted display device, character script node information is played through the multimedia playing device according to the character script information corresponding to the player user in the target scenario. Thus, the user can view the scenario flow information that matches the determined character. Next, the character script node information previously played by the multimedia playing device is updated according to the interactive operation information of the target hosting device and/or the interactive operation information of the head-mounted display device. In this way, the user can carry out the interactions of offline scenario killing through the head-mounted display device and can continue through the subsequent scenario-killing flow under the guidance of the hosting device's interactive operations. Then, in response to detecting ending information corresponding to the target scenario, the interactive operation information of the target hosting device and the interactive operation information of the head-mounted display device are stored into behavior data. Thus, when the scenario killing ends, the current interactive operation information of the hosting device and of the head-mounted display device can be recorded automatically. Finally, in response to determining that the behavior data includes ending information, review data is generated from the behavior data. Thus, the review data can be generated automatically when ending information appears in the behavior data. Moreover, because the user can view the script information directly through the head-mounted display device, reading a paper script by oneself can be avoided.
And because the user can view the script flow information of the chosen character in the head-mounted display device, the relevant script scenes can be displayed directly in virtual space without relying on physical props, which improves the immersion of offline scenario killing. Because the interactive operation information of the hosting device and of the head-mounted display device at the end of the scenario killing is stored in the behavior data, game progress can be recorded automatically. And because the review data can be generated directly from the behavior data, an automatic post-game review can be realized. Therefore, the immersion of offline scenario killing is improved, an automatic post-game review can be realized, and game progress can be recorded automatically.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of some embodiments of a virtual space interaction method applied to offline scenario killing according to the present disclosure;
FIG. 2 is a flow chart of further embodiments of a virtual space interaction method applied to offline scenario killing according to the present disclosure;
FIG. 3 is a schematic structural diagram of some embodiments of a virtual space interaction system applied to offline scenario killing according to the present disclosure;
FIG. 4 is a schematic structural diagram of a head-mounted display device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an", or "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The operations of collecting, storing, and using users' personal information (e.g., behavior data and review data) involved in the present disclosure comply with the requirements of the relevant laws and regulations; before the corresponding operations are performed, the relevant organization or individual fulfils the required obligations, including carrying out a personal information security impact assessment, fulfilling the duty of notification to the personal information subject, and obtaining the prior authorized consent of the personal information subject, among other measures.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of a virtual space interaction method applied to offline scenario killing according to the present disclosure. The virtual space interaction method applied to offline scenario killing is applied to a head-mounted display device and comprises the following steps:
Step 101, in response to detecting that the player user wears the head-mounted display device, scenario introduction information of the target scenario is played through a multimedia playing apparatus of the head-mounted display device.
In some embodiments, an execution subject (e.g., a head-mounted display device) of a virtual space interaction method applied to offline scenario-killing may play scenario introduction information of a target scenario through a multimedia playing apparatus of the head-mounted display device in response to detecting that a player user wears the head-mounted display device. The head-mounted display device may be, but is not limited to: AR glasses, MR glasses, VR glasses. The multimedia playing device may be a device for playing multimedia information. For example, the multimedia device may include a display unit and a speaker. The display unit may include a micro display and optical elements that enable the display to be imaged in front of the eye of the user. The target scenario may be a current offline scenario-taken scenario. The scenario introduction information may be introduction-related information of the target scenario. In practice, the execution subject may display the scenario presentation information in the display unit in response to detecting that the player user wears the head-mounted display device. It is understood that the executing body may detect whether the player user wears the head-mounted display device through the inertial measurement unit of the head-mounted display device, the elastic piece at the temple, or the laser sensor.
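For illustration only (this sketch is not part of the patented design), the wear-detection gate described above might look like the following minimal Python sketch; the sensor-reading helpers and the intro-playback callback are hypothetical placeholders for whatever IMU, temple-contact or laser-sensor interface the head-mounted display device actually exposes.

    import time

    WEAR_MOTION_THRESHOLD = 0.05  # assumed IMU jitter level indicating the device sits on a head

    def imu_motion_level() -> float:
        # Placeholder for reading the inertial measurement unit of the head-mounted display device.
        return 0.1

    def temple_or_laser_sensor_engaged() -> bool:
        # Placeholder for the elastic element at the temple or the laser sensor mentioned above.
        return True

    def is_worn() -> bool:
        """The device counts as worn if the contact sensor is engaged or the IMU shows head motion."""
        return temple_or_laser_sensor_engaged() or imu_motion_level() > WEAR_MOTION_THRESHOLD

    def wait_and_play_intro(play_intro) -> None:
        """Poll the wear state and play the scenario introduction once the player puts the device on."""
        while not is_worn():
            time.sleep(0.2)
        play_intro()

    # Example: wait_and_play_intro(lambda: print("playing scenario introduction"))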
Alternatively, the multimedia playing apparatus may include a display screen and an audio playing device.
In some optional implementations of some embodiments, first, the execution body may display visual information of the scenario introduction information on the display screen and play the audio of the scenario introduction information through the audio playing device. The visual information may be the displayable information included in the scenario introduction information. For example, the visual information may be text and/or images. The audio playing device may be a speaker. In practice, the execution body may display the visual information at a central position of the display screen. Then, in response to detecting that the head pose information of the player user satisfies a preset walking state, the visual information of the scenario introduction information displayed on the display screen may be moved to a side position. The head pose information may be head pose data of the wearing user collected by an inertial measurement unit of the head-mounted display device. The preset walking state may be a state in which the head pose information indicates that the player user is walking. The side position may be one side of the display screen. For example, the side position may be, but is not limited to, one of the following: the left edge, the right edge, the upper edge, or the lower edge. In this way, when the user is not walking, the scenario introduction information can be displayed directly in the central position for the user to view. When the user is walking, the scenario introduction information is automatically moved to the side of the display, so that it does not need to be anchored to a physical object and is prevented from being overlaid on the real objects of the real scene in front of the user, which improves the user's viewing experience.
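As a rough illustration (not the patent's implementation), the walking check and the repositioning of the introduction panel could be sketched as below; the variance threshold and the sample feed from the inertial measurement unit are assumptions made purely for the example.

    from dataclasses import dataclass
    from statistics import pvariance

    WALK_VARIANCE_THRESHOLD = 0.4  # assumed variance of vertical acceleration that indicates walking

    @dataclass
    class IntroPanel:
        position: str = "center"  # where the scenario introduction is rendered on the display screen

    def is_walking(vertical_accel_samples: list[float]) -> bool:
        """Treat rhythmic head bobbing (high variance of vertical acceleration) as the preset walking state."""
        return len(vertical_accel_samples) >= 10 and pvariance(vertical_accel_samples) > WALK_VARIANCE_THRESHOLD

    def update_intro_layout(panel: IntroPanel, vertical_accel_samples: list[float]) -> None:
        """Move the introduction text to a side position while the player walks; keep it centered otherwise."""
        panel.position = "right_edge" if is_walking(vertical_accel_samples) else "center"

    # Example:
    # panel = IntroPanel()
    # update_intro_layout(panel, [0.0, 0.9, -0.8, 1.1, -1.0, 0.8, -0.9, 1.0, -1.1, 0.9])  # walking-like feed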
Alternatively, after step 101, the executing body may perform binding processing on an arbitrary character from among the characters displayed on the display screen of the head-mounted display device according to a selection operation performed by the player user on the arbitrary character. Wherein, the character can be a game character in the target scenario. Therefore, the player user can select the game role by himself.
In some optional implementations of some embodiments, first, the executing entity may perform a binding process on the arbitrary character and the player user. The player user may then be bound to the head mounted display device. Thus, binding between the player user, the selected character, and the head-mounted display device to each other can be achieved.
It can be understood that the binding process may be a process of directly binding the roles with the device, or a process of switching the roles of the device to the roles to be bound.
In some optional implementations of some embodiments, first, the role information of the head-mounted display device is determined to be any of the roles. The character information may represent a character tag corresponding to the head-mounted display device. Then, the scenario node tree corresponding to the arbitrary character may be acquired as character scenario information. In practice, the execution body may acquire, from the server, scenario node trees corresponding to the arbitrary roles from the scenario node trees of the target scenario. Wherein the scenario node tree may be a decision tree of each game node characterizing the character. Each node of the script node tree may correspond to character script node information. Thereafter, the character scenario information may be stored. In practice, the executing body may store the character scenario information to a local cache. Thus, after the role of the player user is bound to the head-mounted display device, the role script information of the role can be automatically acquired.
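The character binding and the locally cached script node tree might be modeled as in the following sketch; the data layout and the fetch_script server call are hypothetical stand-ins for whatever structures and services the disclosed system actually uses.

    from dataclasses import dataclass, field

    @dataclass
    class ScriptNode:
        node_id: str
        content: dict                                                 # text / image / audio / video payload of a game node
        children: list["ScriptNode"] = field(default_factory=list)   # branches of the decision tree

    @dataclass
    class CharacterScript:
        character: str
        root: ScriptNode                                              # decision tree of the character's game nodes

    LOCAL_CACHE: dict[str, CharacterScript] = {}                      # stands in for the device's local cache

    def bind_character(device_id: str, character: str, fetch_script) -> CharacterScript:
        """Record the selected character as the device's role tag and cache its script node tree locally.
        fetch_script(character) represents the server request that returns the character's node tree."""
        script = fetch_script(character)
        LOCAL_CACHE[device_id] = script
        return script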
Alternatively, before step 101, first, the executing body may determine, as the target scenario, a scenario corresponding to scenario confirmation information sent by the target hosting device communicatively connected to the head-mounted display device in response to receiving the scenario confirmation information. The scenario confirmation information may be information characterizing a scenario selected by the hosting user. The scenario corresponding to the scenario confirmation information may be a scenario selected by the host user. Then, scenario information of the above-described target scenario may be acquired. Wherein the scenario information includes a set of character scenario information. Each character script information in the character script information set corresponds to a character. In practice, the execution subject may acquire scenario information of the target scenario from a server. Thus, the user can select the scenario of the offline scenario killing.
Step 102, in response to detecting the starting information of the target hosting device in communication connection with the head-mounted display device for the target scenario, according to the role scenario information of the corresponding player user in the target scenario, playing the role scenario node information through the multimedia playing device.
In some embodiments, the executing body may play the character script node information through the multimedia playing device according to the character script information corresponding to the player user in the target script in response to detecting the start information of the target hosting device communicatively connected to the head-mounted display device for the target script. The target hosting device may be a device for controlling the progress of scenario killing. For example, the above-described target hosting device may be, but is not limited to, one of the following: cell phone, tablet computer. The user corresponding to the target hosting device may be a hosting user of the target scenario. The connection mode of the target hosting device and the head-mounted display device may be a wireless connection mode. It should be noted that the wireless connection may include, but is not limited to, 3G/4G connections, wiFi connections, bluetooth connections, wiMAX connections, zigbee connections, UWB (ultra wideband) connections, local area network connections, wide area network connections, and other now known or later developed wireless connection means. The start information may be information characterizing a game progress of starting the target scenario. The scenario information of the characters corresponding to the player users may be scenario-related information of the characters bound by the player users. The character script information may include a character script node information sequence. Each character script node information may correspond to a game node. The character script node information may be information to be played at a corresponding game node. For example, the character script node information may include, but is not limited to, at least one of: text, images, video, audio. In practice, the executing body may play, by using the multimedia playing apparatus, the first character script node information in the character script node information sequence in response to detecting the start information of the target hosting device communicatively connected to the head-mounted display device for the target script.
And step 103, updating the role script node information previously played by the multimedia playing device according to the interactive operation information of the target hosting device and/or the interactive operation information of the head-mounted display device.
In some embodiments, the execution body may update the character script node information previously played by the multimedia playing device according to the interactive operation information of the target hosting device and/or the interactive operation information of the head-mounted display device. The interactive operation information of the target hosting device may be information characterizing an interactive operation between the hosting user and the target hosting device. The interactive operation information of the head-mounted display device may be information characterizing an interactive operation between the player user and the head-mounted display device. Here, the interaction of the hosting user with the target hosting device may include, but is not limited to, at least one of: clicking, sliding, dragging, hovering, and voice control. For example, the interactive operation information may be "click control A". The player user's interaction with the head-mounted display device may include, but is not limited to, at least one of: gesture control, head control, voice control, and touch control. Here, the specific representation of the interactive operation information is not limited. In practice, the execution body may first determine the game node corresponding to the interactive operation information. Here, the correspondence between interactive operation information and game nodes may be preset according to the scenario logic. Then, the character script node information corresponding to the determined game node in the character script node information sequence may be determined as the updated character script node information. Finally, the updated character script node information can be played through the multimedia playing device.
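A minimal sketch of the node update described above, assuming a preset table from interactive operation information to game nodes; the table contents and helper names are illustrative only.

    # Assumed preset correspondence between interactive operation information and game nodes,
    # configured according to the scenario logic.
    INTERACTION_TO_NODE = {
        "click control A": "node_03",
        "open the drawer": "node_07",
    }

    def update_played_node(interaction: str, node_sequence: list[dict], play) -> dict | None:
        """Find the game node preset for this interaction and play its character script node information,
        replacing whatever the multimedia playing device was showing before."""
        node_id = INTERACTION_TO_NODE.get(interaction)
        if node_id is None:
            return None                      # this interaction does not advance the scenario
        for node_info in node_sequence:
            if node_info["node_id"] == node_id:
                play(node_info)
                return node_info
        return None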
And step 104, storing the interactive operation information of the target hosting device and the interactive operation information of the head-mounted display device to the behavior data in response to detecting the ending information of the corresponding target scenario.
In some embodiments, the executing body may store the interactive operation information of the target hosting device and the interactive operation information of the head-mounted display device to the behavior data in response to detecting the end information corresponding to the target scenario. The ending information may be information indicating a game progress for ending the target scenario. The end information may be sent by the target hosting device or may be generated by the execution subject. The behavior data may be information related to interactive behavior involved in a game progress of the target scenario. The behavior data may be stored in the form of a file or may be stored directly in the form of data. In practice, the execution subject may store, to behavior data, respective pieces of interaction information of the target hosting device and respective pieces of interaction information of the head-mounted display device. Here, each piece of the interactive operation information may be each piece of the interactive operation information during the period from the start to the end of the current game progress of the target scenario.
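The storage of interactive operation information into behavior data might, for example, be sketched as below; storing to a JSON file and the record fields are assumptions, since the description allows either file-based or raw data storage.

    import json
    import time

    def store_behavior_data(host_ops: list[str], hmd_ops: list[str],
                            path: str = "behavior_data.json") -> list[dict]:
        """Persist every interaction of the hosting device and of the head-mounted display device,
        each stamped with its storage time, once ending information of the scenario is detected."""
        records = [{"source": "host", "operation": op, "stored_at": time.time()} for op in host_ops]
        records += [{"source": "hmd", "operation": op, "stored_at": time.time()} for op in hmd_ops]
        with open(path, "w", encoding="utf-8") as f:
            json.dump(records, f, ensure_ascii=False, indent=2)
        return records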
Step 105, in response to determining that the behavior data includes ending information, review data is generated from the behavior data.
In some embodiments, the execution body may generate review data from the behavior data in response to determining that the behavior data includes ending information. The ending information may indicate that the game progress of the target scenario has reached its final ending node, and may include scenario-ending information. The ending information may be stored into the behavior data by the execution body when the execution body detects that the current game node is an ending node. In practice, first, the execution body may sort the pieces of interactive operation information stored in the behavior data by storage time to obtain an interactive operation information sequence. Specifically, the ordering may be in ascending order of storage time. Then, for each piece of interactive operation information in the sequence, the interactive operation information and the game node information corresponding to it may be combined into a review entry. The game node information may characterize a game node and may include, but is not limited to, at least one of: node description information and character script node information. Next, the review entries corresponding to the current player user may be marked, so as to update the review entries. In practice, the execution body may add a user tag to the review entries corresponding to the player user. Finally, the updated review entries may be determined as the review data. Here, the review data may be used to reproduce the entire flow in which the individual player users participated in the scenario killing. The review data may be played back in graphic or video form. Marking the review entries may be used to highlight the current player user's operations during the scenario killing.
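A sketch of how the review data could be assembled from the behavior data, under the assumptions that each record carries a storage time and that a preset mapping from operations to game node information exists; none of the names below come from the patent.

    def generate_review_data(behavior_records: list[dict],
                             node_info_for: dict,
                             player_ops: set[str]) -> list[dict]:
        """Order the stored interactions by storage time, attach the matching game node information,
        and tag the entries contributed by the current player user so they can be highlighted."""
        ordered = sorted(behavior_records, key=lambda r: r["stored_at"])   # ascending storage time
        review = []
        for record in ordered:
            review.append({
                "operation": record["operation"],
                "node_info": node_info_for.get(record["operation"]),       # preset operation -> node mapping
                "is_current_player": record["operation"] in player_ops,    # user tag for highlighting
            })
        return review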
Optionally, the executing body may play, by using the multimedia playing device, the character script node information corresponding to any script node in the script node tree in response to detecting that the position information of the head-mounted display device meets the position condition corresponding to the any script node. Wherein the location information may characterize a real-time location of the head mounted display device. Here, the position information may be a position coordinate of the head-mounted display device in the three-dimensional space, a longitude and latitude coordinate in a geographic coordinate system, or a coordinate determined by a bluetooth positioning technology. The location condition may be a location related condition that needs to be met to trigger a scenario node. For example, the location condition may be that the location information is within a preset range corresponding to the arbitrary scenario node information. Here, the specific setting of the preset range is not limited. Therefore, when the player user wears the head-mounted display device and reaches a specified place, the role script node information corresponding to the specified place can be automatically played.
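The position condition could be checked along the lines of the sketch below, assuming each scenario node is configured with an anchor point and a trigger radius in the same coordinate system as the device's position; these specifics are illustrative, not taken from the disclosure.

    from math import dist

    def check_location_triggers(position: tuple[float, float, float],
                                node_locations: dict[str, tuple[tuple[float, float, float], float]],
                                play) -> None:
        """Play the character script node information of every node whose preset range contains the device."""
        for node_id, (anchor, radius) in node_locations.items():
            if dist(position, anchor) <= radius:
                play(node_id)

    # Example: a node triggered within 1.5 m of the study-room desk (coordinates are made up).
    # check_location_triggers((2.0, 0.0, 3.1), {"node_study_desk": ((2.5, 0.0, 3.0), 1.5)}, print)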
Alternatively, first, the execution body may identify an acquired scene object image to obtain scene object information. The scene object image may be an image of a scene object in the real scene captured by a camera of the head-mounted display device. Here, the scene object may be, but is not limited to: a non-player user, an object, or a previously set scene. In practice, the execution body may determine the recognition result of the recognized scene object as the scene object information. Then, in response to determining that character script node information corresponding to the scene object information exists in the character script information, the character script node information corresponding to the scene object information is played through the multimedia playing device. The character script node information corresponding to the scene object information may be the character script node information of the game node to be executed after the scene object is identified. Therefore, after a pre-configured scene object is identified, the corresponding character script node information can be played automatically.
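A sketch of the scene-object trigger, where recognize stands in for whatever image-recognition routine the device uses and the object-to-node table is a made-up example:

    def on_scene_object(image_bytes: bytes, recognize,
                        node_for_object: dict[str, dict], play) -> None:
        """Recognize a scene object in a camera frame and, if character script node information is
        configured for that object, play it through the multimedia playing device."""
        label = recognize(image_bytes)            # e.g. "portrait painting" or a non-player character
        node_info = node_for_object.get(label)
        if node_info is not None:
            play(node_info)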
Alternatively, first, the execution body may present chat information in the display screen in response to detecting chat information sent by an associated head-mounted display device corresponding to the target scenario. The associated head-mounted display device may be a head-mounted display device corresponding to the same scenario as the head-mounted display device. The associated head-mounted display device and the head-mounted display device may be communicatively coupled. Here, the connection may be a wireless connection. The chat information may be private chat information sent by the player user corresponding to the associated head-mounted display device. In practice, the execution body may display the chat information in a chat area of the display screen. The player user's reply to the chat information may then be sent to the associated head-mounted display device. In this way, the player user can chat privately with other player users through the head-mounted display device, which ensures the privacy of the communication.
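Purely as an illustration of the private chat between associated head-mounted display devices, the reply path might look like the following sketch over a plain TCP socket; the real devices would use whatever wireless link and message format the system already defines.

    import json
    import socket

    def send_chat_reply(peer_host: str, peer_port: int, reply_text: str) -> None:
        """Send the player user's reply to the associated head-mounted display device."""
        message = json.dumps({"type": "chat_reply", "text": reply_text}).encode("utf-8")
        with socket.create_connection((peer_host, peer_port), timeout=3) as conn:
            conn.sendall(message)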
The above embodiments of the present disclosure have the following beneficial effects: with the virtual space interaction method for offline scenario killing of some embodiments of the present disclosure, the immersion of offline scenario killing is improved, an automatic post-game review can be realized, and game progress is recorded automatically. Specifically, the reasons why offline scenario killing is poorly immersive, cannot provide an automatic post-game review, and cannot automatically record game progress are as follows: users must read lengthy script content by themselves and complete the game by means of physical props, so immersion is poor, an automatic post-game review is not realized, and game progress cannot be recorded automatically. On this basis, the virtual space interaction method for offline scenario killing of some embodiments of the present disclosure is applied to a head-mounted display device. First, in response to detecting that the player user wears the head-mounted display device, scenario introduction information of the target scenario is played through the multimedia playing device of the head-mounted display device. In this way, the user can simply follow the automatic prompts and view the scenario introduction information through the head-mounted display device, without having to browse a paper script. Then, in response to detecting start information for the target scenario from the target hosting device communicatively connected to the head-mounted display device, character script node information is played through the multimedia playing device according to the character script information corresponding to the player user in the target scenario. Thus, the user can view the scenario flow information that matches the determined character. Next, the character script node information previously played by the multimedia playing device is updated according to the interactive operation information of the target hosting device and/or the interactive operation information of the head-mounted display device. In this way, the user can carry out the interactions of offline scenario killing through the head-mounted display device and can continue through the subsequent scenario-killing flow under the guidance of the hosting device's interactive operations. Then, in response to detecting ending information corresponding to the target scenario, the interactive operation information of the target hosting device and the interactive operation information of the head-mounted display device are stored into behavior data. Thus, when the scenario killing ends, the current interactive operation information of the hosting device and of the head-mounted display device can be recorded automatically. Finally, in response to determining that the behavior data includes ending information, review data is generated from the behavior data. Thus, the review data can be generated automatically when ending information appears in the behavior data. Moreover, because the user can view the script information directly through the head-mounted display device, reading a paper script by oneself can be avoided.
And because the user can view the script flow information of the chosen character in the head-mounted display device, the relevant script scenes can be displayed directly in virtual space without relying on physical props, which improves the immersion of offline scenario killing. Because the interactive operation information of the hosting device and of the head-mounted display device at the end of the scenario killing is stored in the behavior data, game progress can be recorded automatically. And because the review data can be generated directly from the behavior data, an automatic post-game review can be realized. Therefore, the immersion of offline scenario killing is improved, an automatic post-game review can be realized, and game progress can be recorded automatically.
With further reference to fig. 2, a flow 200 of further embodiments of a virtual space interaction method applied to offline scenario killing is shown. The flow 200 of the virtual space interaction method applied to offline scenario killing comprises the following steps:
Step 201, in response to detecting that the player user wears the head-mounted display device, the scenario introduction information of the target scenario is played through the multimedia playing device of the head-mounted display device.
Step 202, in response to detecting the starting information of the target hosting device in communication connection with the head-mounted display device for the target scenario, playing the role scenario node information through the multimedia playing device according to the role scenario information of the corresponding player user in the target scenario.
In some embodiments, the specific implementation of steps 201-202 may refer to steps 101-102 in those embodiments corresponding to fig. 1, and will not be described herein. The character scenario information may include scenario node trees corresponding to the character scenario information. The scenario node tree corresponding to the above-described character scenario information may be a scenario node tree corresponding to a character of the above-described character scenario information.
Step 203, in response to detecting the input voice of the player user, the input voice is converted into voice text.
In some embodiments, the executing entity may convert the input voice to voice text in response to detecting the input voice of the player user.
Step 204, the voice text is determined as the interactive operation information of the head-mounted display device.
In some embodiments, the executing body may determine the voice text as the interactive operation information of the head-mounted display device.
And step 205, selecting a script node matched with the interactive operation information of the head-mounted display device from the script node tree as a target script node.
In some embodiments, the execution subject may select a scenario node matching the interactive operation information of the head-mounted display device from the scenario node tree as a target scenario node. In practice, the execution subject may determine a game node corresponding to the interactive operation information. Then, a scenario node corresponding to the determined game node may be determined as a target scenario node.
And step 206, playing the role script node information corresponding to the target script node by the multimedia playing device so as to update the role script node information previously played by the multimedia playing device.
In some embodiments, the executing body may play the character script node information corresponding to the target script node through the multimedia playing device, so as to update the character script node information previously played by the multimedia playing device. In practice, the executing body can play the character script node information through a display screen and/or a loudspeaker of the multimedia playing device.
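Steps 203 to 206 can be pictured together in the following sketch; the speech-to-text function, the keyword-based node matching, and the node-tree layout are all assumptions introduced for the example rather than details of the claimed method.

    def handle_player_voice(audio: bytes, speech_to_text,
                            node_tree: dict[str, dict], play) -> None:
        """Convert the player's speech to text (step 203), treat the text as the head-mounted display
        device's interactive operation information (step 204), pick the matching scenario node from the
        node tree (step 205), and play that node's character script node information (step 206)."""
        voice_text = speech_to_text(audio)
        interaction = voice_text
        target = None
        for node_id, node in node_tree.items():
            if any(keyword in interaction for keyword in node.get("keywords", [])):
                target = node
                break
        if target is not None:
            play(target["content"])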
In step 207, in response to detecting the end information of the corresponding target scenario, the interactive operation information of the target hosting device and the interactive operation information of the head-mounted display device are stored to the behavior data.
Step 208, in response to determining that the behavior data includes ending information, review data is generated from the behavior data.
In some embodiments, specific implementations of steps 207-208 may refer to steps 104-105 in those embodiments corresponding to fig. 1, and are not described herein.
As can be seen from fig. 2, compared with the description of some embodiments corresponding to fig. 1, the flow 200 of the virtual space interaction method applied to offline scenario killing in some embodiments corresponding to fig. 2 embodies the additional steps for voice interaction. The solutions described in these embodiments therefore allow the player user to interact directly through voice, which adds to the ways in which the player user can interact and improves the player user's interaction experience of offline scenario killing.
Referring now to fig. 3, a virtual space interaction system 300 for offline scenario killing suitable for use in implementing some embodiments of the present disclosure is shown, comprising: a player client collection 301 and a hosting client 302.
Each player user side of the set 301 of player user sides described above includes a head mounted display device configured to implement steps in those embodiments as correspond to fig. 1 or 2. Communication connection can be performed between each player user terminal in the player user terminal set 301. Here, the connection may be a wireless connection.
Optionally, each player client may further include, but is not limited to: a mobile phone and a tablet computer.
The hosting client 302 is communicatively coupled to each of the player clients. Here, the connection may be a wireless connection. The hosting client is configured to send start information of the target scenario to each player client. The hosting client 302 may be a device held by the hosting user. Thus, through the virtual space interaction system, each player user can play the offline scenario killing game through interactive operations in the virtual space.
Referring now to fig. 4, a schematic diagram of a head mounted display device 400 suitable for use in implementing some embodiments of the present disclosure is shown. The head mounted display device shown in fig. 4 is only one example and should not impose any limitation on the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the head mounted display device 400 may include a processing means 401 (e.g., a central processor, a graphics processor, etc.) that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage means 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data required for the operation of the head mounted display device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
In general, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; and a communication device 409. The communication means 409 may allow the head mounted display device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 shows a head mounted display apparatus 400 having various means, it should be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 4 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications device 409, or from storage 408, or from ROM 402. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing device 401.
It should be noted that, the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the head-mounted display device, or may exist separately without being assembled into the head-mounted display device. The computer readable medium carries one or more programs which, when executed by the head-mounted display device, cause the head-mounted display device to: in response to detecting that a player user wears the head-mounted display device, play scenario introduction information of a target scenario through a multimedia playing device of the head-mounted display device; in response to detecting start information for the target scenario from a target hosting device communicatively connected to the head-mounted display device, play character script node information through the multimedia playing device according to the character script information corresponding to the player user in the target scenario; update the character script node information previously played by the multimedia playing device according to the interactive operation information of the target hosting device and/or the interactive operation information of the head-mounted display device; in response to detecting ending information corresponding to the target scenario, store the interactive operation information of the target hosting device and the interactive operation information of the head-mounted display device into behavior data; and in response to determining that the behavior data includes ending information, generate review data from the behavior data.
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the invention, for example, technical solutions in which the above features are interchanged with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (13)

1. A virtual space interaction method for offline scenario killing, applied to a head-mounted display device, the method comprising:
in response to detecting that a player user wears the head-mounted display device, playing scenario introduction information of a target scenario through a multimedia playing device of the head-mounted display device;
in response to detecting start information, for the target scenario, from a target hosting device in communication connection with the head-mounted display device, playing character scenario node information through the multimedia playing device according to character scenario information of the player user in the target scenario;
updating the character scenario node information previously played by the multimedia playing device according to interactive operation information of the target hosting device and/or interactive operation information of the head-mounted display device;
storing the interactive operation information of the target hosting device and the interactive operation information of the head-mounted display device into behavior data in response to detecting ending information corresponding to the target scenario; and
in response to determining that the behavior data includes the ending information, generating review data from the behavior data.
2. The method of claim 1, wherein the multimedia playing device comprises a display screen and an audio playing device; and
the playing scenario introduction information of the target scenario through the multimedia playing device of the head-mounted display device comprises:
displaying visual information of the scenario introduction information on the display screen, and playing audio of the scenario introduction information through the audio playing device; and
in response to detecting that head pose information of the player user meets a preset walking state, moving the visual information of the scenario introduction information displayed on the display screen to a side position.
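As a purely illustrative sketch of the behaviour recited in claim 2 (not the patented implementation), the anchoring decision for the introduction visuals can be reduced to a single function; the walking heuristic and its threshold are assumptions made only for this example.

```python
def intro_panel_anchor(head_linear_speed_mps: float, walking_threshold: float = 0.3) -> str:
    """Decide where the scenario-introduction visuals should be anchored.

    The claim only requires detecting that head pose information meets a preset
    walking state; estimating that state from linear speed is an assumption here.
    """
    return "side" if head_linear_speed_mps > walking_threshold else "center"


# Example: a head-pose sample implying 0.8 m/s of forward motion docks the panel aside.
assert intro_panel_anchor(0.8) == "side"
assert intro_panel_anchor(0.0) == "center"
```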
3. The method of claim 1, wherein, after the playing scenario introduction information of the target scenario through the multimedia playing device of the head-mounted display device in response to detecting that the player user wears the head-mounted display device, the method further comprises:
binding, according to a selection operation of the player user on an arbitrary character among a plurality of characters displayed on a display screen of the head-mounted display device, the arbitrary character with the head-mounted display device.
4. The method of claim 3, wherein the binding the arbitrary character with the head-mounted display device comprises:
binding the arbitrary character and the player user;
and binding the player user and the head-mounted display device.
5. The method of claim 3, wherein the binding the arbitrary character with the head-mounted display device comprises:
determining character information of the head-mounted display device as the arbitrary character;
acquiring a scenario node tree corresponding to the arbitrary character as character scenario information;
and storing the character scenario information.
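To make the chain of bindings in claims 3-5 concrete, the following sketch keeps three lookup tables: character-to-player, player-to-headset, and the per-headset character scenario information built from the character's scenario node tree. The names and layout are hypothetical and not prescribed by the claims.

```python
# Illustrative data structures only; the patent does not prescribe this layout.
bindings = {
    "character_to_player": {},      # character id -> player user id               (claim 4)
    "player_to_headset": {},        # player user id -> head-mounted display id    (claim 4)
    "headset_character_info": {},   # headset id -> stored character scenario info (claim 5)
}


def bind_character(character_id, player_id, headset_id, scenario_node_tree):
    bindings["character_to_player"][character_id] = player_id
    bindings["player_to_headset"][player_id] = headset_id
    # Claim 5: the node tree for the selected character is stored as the
    # character scenario information of this head-mounted display device.
    bindings["headset_character_info"][headset_id] = {
        "character": character_id,
        "scenario_node_tree": scenario_node_tree,
    }


bind_character("detective", "player_01", "headset_A", {"intro": {"media": "intro.mp4"}})
```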
6. The method of claim 1, wherein the character scenario information comprises a scenario node tree corresponding to the character scenario information; and
the updating the character scenario node information previously played by the multimedia playing device according to the interactive operation information of the target hosting device and/or the interactive operation information of the head-mounted display device comprises:
in response to detecting an input voice of the player user, converting the input voice into voice text;
determining the voice text as interactive operation information of the head-mounted display device;
selecting, from the scenario node tree, a scenario node matching the interactive operation information of the head-mounted display device as a target scenario node;
and playing character scenario node information corresponding to the target scenario node through the multimedia playing device, so as to update the character scenario node information previously played by the multimedia playing device.
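One possible realisation of the claim-6 matching step, offered only as an assumption-laden sketch, is a keyword-overlap search over the scenario node tree; the speech-to-text step is stubbed out, and the scoring rule is invented for this example.

```python
def select_target_node(voice_text: str, scenario_node_tree: dict):
    """Pick the scenario node whose keywords best overlap the recognised voice text."""
    words = set(voice_text.lower().split())
    best_node, best_score = None, 0
    for node_id, node in scenario_node_tree.items():
        score = len(words & set(node.get("keywords", [])))
        if score > best_score:
            best_node, best_score = node_id, score
    return best_node  # None means no match; the previously played node stays active


tree = {
    "clue_study": {"keywords": ["study", "desk", "letter"], "media": "study_clue.mp4"},
    "clue_garden": {"keywords": ["garden", "shovel"], "media": "garden_clue.mp4"},
}
assert select_target_node("i want to search the study desk", tree) == "clue_study"
```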
7. The method of claim 1, wherein, prior to the playing scenario introduction information of the target scenario through the multimedia playing device of the head-mounted display device in response to detecting that the player user wears the head-mounted display device, the method further comprises:
in response to receiving scenario confirmation information sent by the target hosting device in communication connection with the head-mounted display device, determining a scenario corresponding to the scenario confirmation information as the target scenario;
and acquiring scenario information of the target scenario, wherein the scenario information comprises a character scenario information set.
8. The method of claim 6, wherein the method further comprises:
in response to detecting that position information of the head-mounted display device meets a position condition corresponding to any scenario node in the scenario node tree, playing character scenario node information corresponding to that scenario node through the multimedia playing device.
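A minimal sketch of the claim-8 position trigger, assuming each scenario node may carry a rectangular trigger region in floor coordinates (a format invented for this example):

```python
def check_position_triggers(position, scenario_node_tree, play):
    """Play the node whose (assumed) rectangular region contains the headset position."""
    x, y = position
    for node_id, node in scenario_node_tree.items():
        region = node.get("region")          # assumed format: (x_min, y_min, x_max, y_max)
        if region and region[0] <= x <= region[2] and region[1] <= y <= region[3]:
            play(node_id)
            return node_id
    return None


nodes = {"clue_cellar": {"region": (0.0, 0.0, 2.0, 2.0), "media": "cellar.mp3"}}
assert check_position_triggers((1.0, 1.5), nodes, lambda node_id: None) == "clue_cellar"
```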
9. The method of claim 1, wherein the method further comprises:
identifying an acquired scene object image to obtain scene object information;
and in response to determining that character scenario node information corresponding to the scene object information exists in the character scenario information, playing the character scenario node information corresponding to the scene object information through the multimedia playing device.
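Claim 9 can be pictured as a lookup keyed by a recogniser's label; the recogniser itself is stubbed below, and the dictionary layout is an assumption made only for this sketch.

```python
def handle_scene_object(image, recognise, character_scenario_info, play):
    """If the recognised scene object has a matching node, play that node's media."""
    label = recognise(image)                                   # e.g. "candlestick"
    node = character_scenario_info.get("object_nodes", {}).get(label)
    if node is not None:
        play(node)
    return node


# Usage with a stubbed recogniser and a stubbed player:
info = {"object_nodes": {"candlestick": {"media": "candlestick_clue.mp3"}}}
assert handle_scene_object(b"raw-image-bytes", lambda img: "candlestick", info, print) is not None
```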
10. The method of claim 1, wherein the method further comprises:
in response to receiving chat information sent by an associated head-mounted display device corresponding to the target scenario, displaying the chat information on the display screen;
and sending reply information of the player user for the chat information to the associated head-mounted display device.
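The claim-10 exchange is a plain request/response between headsets; the sketch below stubs out both transport and input capture, and every name in it is hypothetical.

```python
def on_chat_message(message, display, capture_reply, send):
    """Show an incoming chat message, then forward the player's reply to its sender."""
    display(message["text"])                       # render in the headset's display screen
    reply_text = capture_reply()                   # e.g. obtained via voice input
    send({"to": message["from"], "text": reply_text})


on_chat_message(
    {"from": "headset_B", "text": "Did you find the letter?"},
    display=print,
    capture_reply=lambda: "Yes, it was in the study.",
    send=print,
)
```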
11. A head mounted display device comprising:
one or more processors;
a storage device having one or more programs stored thereon,
a multimedia playing device for playing multimedia information, wherein the multimedia information comprises scenario introduction information and character scenario node information,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-10.
12. A virtual space interaction system for offline scenario killing, comprising:
a set of player clients, wherein each player client in the set of player clients comprises a head-mounted display device configured to perform the virtual space interaction method for offline scenario killing of one of claims 1-9, and communication connections are established between individual player clients in the set of player clients; and
a hosting client, in communication connection with each player client, configured to send start information of the target scenario to each player client.
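Read as software, claim 12 describes a star topology: the hosting client fans the start information out to every connected player client. The sketch below is a toy model of that topology under assumed names, not the patented system.

```python
class PlayerClient:
    def __init__(self, name):
        self.name = name

    def on_start_info(self, start_info):
        print(f"{self.name}: starting scenario {start_info['scenario']}")


class HostingClient:
    def __init__(self):
        self.player_clients = []

    def connect(self, player_client):
        self.player_clients.append(player_client)

    def start_scenario(self, scenario_id):
        # Send the start information of the target scenario to each player client.
        start_info = {"event": "start", "scenario": scenario_id}
        for client in self.player_clients:
            client.on_start_info(start_info)


host = HostingClient()
for name in ("alice", "bob"):
    host.connect(PlayerClient(name))
host.start_scenario("manor_mystery")
```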
13. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-10.
CN202310132709.6A 2023-02-17 2023-02-17 Virtual space interaction method, system, equipment and medium for off-line scenario killing Pending CN116382466A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310132709.6A CN116382466A (en) 2023-02-17 2023-02-17 Virtual space interaction method, system, equipment and medium for off-line scenario killing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310132709.6A CN116382466A (en) 2023-02-17 2023-02-17 Virtual space interaction method, system, equipment and medium for off-line scenario killing

Publications (1)

Publication Number Publication Date
CN116382466A true CN116382466A (en) 2023-07-04

Family

ID=86960509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310132709.6A Pending CN116382466A (en) 2023-02-17 2023-02-17 Virtual space interaction method, system, equipment and medium for off-line scenario killing

Country Status (1)

Country Link
CN (1) CN116382466A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination