WO2021102615A1 - Virtual reality scene and interaction method therefor, and terminal device - Google Patents

Virtual reality scene and interaction method therefor, and terminal device

Info

Publication number
WO2021102615A1
WO2021102615A1 · PCT/CN2019/120515 · CN2019120515W
Authority
WO
WIPO (PCT)
Prior art keywords
node
information
semantic
environment
path
Prior art date
Application number
PCT/CN2019/120515
Other languages
English (en)
French (fr)
Inventor
徐守祥
Original Assignee
深圳信息职业技术学院
Priority date
Filing date
Publication date
Application filed by 深圳信息职业技术学院
Priority to CN201980003474.2A (granted as CN111095170B)
Priority to PCT/CN2019/120515 (WO2021102615A1)
Priority to US17/311,602 (granted as US11842446B2)
Publication of WO2021102615A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • This application belongs to the field of virtual reality technology, and in particular relates to a virtual reality scene, an interaction method therefor, and a terminal device.
  • a virtual character refers to a three-dimensional model of a human image, an animal image, or an artificially designed illusory image (such as a flying dragon) that is simulated by virtual reality technology. It can simulate the perception and behavior of humans or animals. Just as humans and animals need their own living environment in real life, virtual characters also need their living environment, and the virtual environment is the living environment of virtual characters. Through the co-presentation of virtual characters and virtual environment, a virtual reality scene with a sense of reality and immersion is formed together.
  • the embodiments of the present application provide a virtual reality scene and its interaction method, and terminal equipment to solve the problem of how to conveniently and effectively realize the interaction between the virtual character and the virtual environment in the virtual reality scene in the prior art.
  • the first aspect of the present application provides a virtual reality scene, which includes a virtual environment agent, a virtual character agent, and a semantic path processing unit;
  • the semantic path processing unit is configured to construct a semantic path, where the semantic path is a trajectory drawn on a geometric figure of the virtual environment agent and composed of nodes and directed connections between the nodes; the node information includes at least node location information, node behavior semantic information, and node environment semantic information;
  • the virtual character agent is used to obtain the semantic path and move according to its own task and the semantic path; when the position information of the virtual character agent is consistent with the node position information of a node of the semantic path, it executes the target action according to the node behavior semantic information and node environment semantic information of that node;
  • the virtual environment agent is used to obtain the target action information of the virtual character agent at a node of the semantic path, obtain action result information according to the target action information and the node environment semantic information of that node, and instruct the semantic path processing unit to update the node environment semantic information of the nodes of the semantic path according to the action result information.
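The node and path structure described above can be sketched in code. This is only an illustrative data-structure sketch; the class and field names (`PathNode`, `SemanticPath`, `behavior_semantics`, etc.) are assumptions, not taken from the application.

```python
from dataclasses import dataclass, field

@dataclass
class PathNode:
    position: tuple              # node position information (coordinates on the environment)
    behavior_semantics: str      # behavior the character agent should perform at this node
    environment_semantics: str   # current environment state recorded at this node

@dataclass
class SemanticPath:
    nodes: list = field(default_factory=list)        # ordered node records
    connections: list = field(default_factory=list)  # directed edges as (from_index, to_index)

    def add_node(self, node):
        """Append a node and return its index in the path."""
        self.nodes.append(node)
        return len(self.nodes) - 1

    def connect(self, a, b):
        """Add a directed connection from node a to node b."""
        self.connections.append((a, b))

# Build a two-node path: a gate node whose behavior is to open the gate,
# connected to a yard node to be inspected.
path = SemanticPath()
gate = path.add_node(PathNode((0.0, 0.0), "open_gate", "gate_closed"))
yard = path.add_node(PathNode((5.0, 2.0), "inspect", "clear"))
path.connect(gate, yard)
```

A character agent would traverse `connections` in order and read each node's semantics on arrival.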
  • the second aspect of the present application provides a virtual reality scene interaction method.
  • the method is applied to a semantic path processing unit and includes:
  • the semantic path is a trajectory drawn on the geometric figure of the virtual environment agent and composed of nodes and directed connections between the nodes, and the node information includes at least node position information, node behavior semantic information, and node environment semantic information;
  • the third aspect of the present application provides a virtual reality scene interaction method, which is applied to a virtual character agent, and includes:
  • Obtain a semantic path, which is a trajectory drawn on a geometric figure of a virtual environment agent and composed of nodes and directed connections between the nodes, where the node information includes at least node position information, node behavior semantic information, and node environment semantic information;
  • the target action is executed according to the node behavior semantic information of the node and the node environment semantic information.
  • the fourth aspect of the present application provides a virtual reality scene interaction method.
  • the method is applied to a virtual environment agent and includes:
  • the node information includes at least node location information, node behavior semantic information, and node environment semantic information;
  • a fifth aspect of the present application provides a semantic path processing unit, including:
  • the instruction receiving module is used to receive user instructions and construct a semantic path;
  • the semantic path is a trajectory drawn on a geometric figure of a virtual environment agent and composed of nodes and directed connections between the nodes;
  • the node information includes at least node location information, node behavior semantic information, and node environment semantic information;
  • the node environment semantic information update module is used to obtain the action result information of the virtual environment agent, and update the node environment semantic information of the node of the semantic path according to the action result information.
  • the sixth aspect of the present application provides a virtual character agent, including:
  • the semantic path acquisition module is used to acquire a semantic path, where the semantic path is a trajectory drawn on a geometric figure of a virtual environment agent and composed of nodes and directed connections between the nodes, and the node information includes at least node location information, node behavior semantic information, and node environment semantic information;
  • the movement module is used to move according to its own task and the semantic path;
  • the target action execution module is configured to execute the target action according to the node behavior semantic information of the node and the node environment semantic information when the position information of the virtual character agent is consistent with the node position information of the node of the semantic path.
  • the seventh aspect of the present application provides a virtual environment agent, including:
  • the target action information acquisition module is used to acquire the target action information of the virtual character agent at a node of the semantic path, where the semantic path is a trajectory drawn on the geometric figure of the virtual environment agent and composed of nodes and directed connections between the nodes, and the node information includes at least node position information, node behavior semantic information, and node environment semantic information;
  • An action result information acquisition module, configured to obtain action result information according to the target action information and the node environment semantic information of the node;
  • the instruction update module is used to instruct the semantic path processing unit to update the node environment semantic information of the nodes of the semantic path according to the action result information.
  • An eighth aspect of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, any one of the virtual reality scene interaction methods of the second to fourth aspects is implemented.
  • A ninth aspect of the present application provides a computer-readable storage medium having a computer program stored thereon; when the computer program is executed by a processor, any one of the virtual reality scene interaction methods of the above-mentioned second to fourth aspects is implemented.
  • A tenth aspect of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to execute any one of the virtual reality scene interaction methods of the second to fourth aspects.
  • The virtual reality scene includes a virtual environment agent, a virtual character agent, and a semantic path processing unit, and a semantic path composed of nodes and directed connections drawn on the geometry of the virtual environment agent is constructed.
  • The nodes of the semantic path carry node location information, node behavior semantic information, and node environment semantic information. This semantic path can not only guide the movement of the virtual character agent, but also convey the node behavior semantic information and node environment semantic information to the virtual character agent to guide it in performing target actions. At the same time, the action result information generated by the target action acting on the virtual environment agent can be obtained in time, and the node environment semantic information updated accordingly, so that the virtual character agent can make its next target-action decision.
  • In other words, the semantic path constructed by the semantic path processing unit can serve as the medium between the virtual character agent and the virtual environment agent: it can both guide the behavior decisions of the virtual character agent and record the changes of the virtual environment agent. Therefore, through this semantic path, the interaction between the virtual character and the virtual environment in a virtual reality scene can be conveniently and effectively realized.
  • Figure 1 is a schematic structural diagram of a virtual reality scene provided by this application.
  • FIG. 2 is a schematic diagram of a screen of a virtual reality scene provided by this application.
  • FIG. 3 is a schematic diagram of the implementation process of the first virtual reality scene method provided by this application.
  • FIGS. 4-6 are schematic diagrams of three semantic paths provided by this application.
  • FIG. 7 is a schematic diagram of the implementation process of the second virtual reality scene method provided by this application.
  • FIG. 8 is a schematic diagram of the implementation process of the third virtual reality scene method provided by this application.
  • FIG. 9 is a schematic diagram of the composition of a semantic path processing unit provided by this application.
  • FIG. 10 is a schematic diagram of the composition of a virtual character agent provided by this application.
  • FIG. 11 is a schematic diagram of the composition of a virtual environment agent provided by this application.
  • FIG. 12 is a schematic structural diagram of a terminal device provided by this application.
  • Depending on the context, the term "if" can be construed as "when", "once", "in response to determining", or "in response to detecting".
  • Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" can be construed, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
  • FIG. 1 is a schematic structural diagram of a virtual reality scene provided by an embodiment of the application, where the virtual reality scene is composed of a virtual environment agent 11, a virtual character agent 12, and a semantic path processing unit 13.
  • The semantic path processing unit 13 is used to construct a semantic path 131, which is a trajectory drawn on the geometric figure of the virtual environment agent and composed of nodes and directed connections between the nodes; the node information includes at least node position information, node behavior semantic information, and node environment semantic information.
  • Both the virtual character and the virtual environment are agents that have three-dimensional geometric figures and can recognize and transform semantic information; they are called the virtual character agent and the virtual environment agent, respectively. The node position information, node behavior semantic information, and node environment semantic information of the semantic path drawn on the geometric figure of the virtual environment agent guide the virtual character agent to walk on the virtual environment agent and perform target actions, and record the influence of the virtual character agent's actions on the virtual environment agent. That is, the semantic path serves as the medium between the virtual character agent and the virtual environment agent, which conveniently and effectively realizes the interaction between the virtual character and the virtual environment.
  • The semantic path in the embodiment of the present application has the following three functions: first, the function of a path itself, which solves the problem of route planning for the virtual character agent; second, the function of serializing the behavior of the virtual character agent, which solves the problem of planning the behaviors needed to perform a given task; third, the function of environment semantics, which provides, through the path nodes, the status and change information of related entities near each node location, and solves the problem of the virtual character agent's perception of the virtual environment agent.
  • The semantic path processing unit is configured to construct a semantic path, where the semantic path is a trajectory drawn on a geometric figure of the virtual environment agent and composed of nodes and directed connections between the nodes, and the node information includes at least node location information, node behavior semantic information, and node environment semantic information.
  • Figure 2 shows a schematic diagram of a virtual reality scene.
  • The virtual reality scene is composed of a virtual environment agent, a virtual character agent active on the virtual environment agent, and a semantic path drawn by the semantic path processing unit on the geometric figure of the virtual environment agent. Specifically, the semantic path is composed of nodes and directed connections between the nodes, where the node information includes at least the node position information, node behavior semantic information, and node environment semantic information of each node.
  • The directed connections provide, for each node, its entry and exit direction and distance information.
  • the position information of the node records the coordinate information of the node on the virtual environment agent;
  • The virtual character agent is used to obtain the semantic path and move according to its own task and the semantic path; when the position information of the virtual character agent is consistent with the node position information of a node of the semantic path, the target action is executed according to the node behavior semantic information and node environment semantic information of that node.
  • a virtual character agent is an agent that has character shape geometry information (such as simulated human figures, animal figures, etc.) and can autonomously make path planning and behavior decisions based on the information of the semantic path and its own tasks.
  • The virtual character agent obtains the semantic path drawn on the virtual environment agent, determines the target location to be reached according to its own task, performs path planning according to the semantic path, and determines the target trajectory along the semantic path needed to reach the target location.
  • the trajectory can be a complete trajectory of the semantic path, or a part of the trajectory in the semantic path.
  • the virtual character moves on the semantic path according to the planned target trajectory.
  • When the virtual character agent detects that its position information is consistent with the node position information of a node of the semantic path, it obtains the node behavior semantic information and node environment semantic information of that node, and executes the target action according to them.
  • The virtual environment agent is used to obtain the target action information of the virtual character agent at a node of the semantic path, obtain action result information according to the target action information and the node environment semantic information of that node, and instruct the semantic path processing unit to update the node environment semantic information of the nodes of the semantic path according to the action result information.
  • the virtual environment agent is an agent that has three-dimensional simulated environment geometry information and can perceive the surrounding things and act on the surrounding things through actuators.
  • The virtual environment agent obtains the target action information conveyed by the virtual character agent and the node environment semantic information of the node where the virtual character agent is currently located. Using the state transition axioms of the virtual environment agent itself (that is, a preset state transition mapping relationship, which can be a state transition mapping table), it derives the environment state change information of the location corresponding to the node after the target action acts on the virtual environment agent; this information is called the action result information.
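A state transition mapping table of the kind just described can be sketched as a simple lookup from (target action, current node environment state) to the resulting state. The action and state names below are hypothetical examples, not taken from the application.

```python
# Preset state-transition mapping ("state transition axioms"):
# (target action, current environment state) -> new environment state.
STATE_TRANSITIONS = {
    ("open_gate", "gate_closed"): "gate_open",
    ("close_gate", "gate_open"): "gate_closed",
    ("extinguish", "on_fire"): "burnt_out",
}

def apply_action(action, env_state):
    """Return the action result information: the environment state at the
    node after the target action, or the unchanged state when the action
    has no effect in the current state."""
    return STATE_TRANSITIONS.get((action, env_state), env_state)

result = apply_action("open_gate", "gate_closed")   # the gate opens
unchanged = apply_action("open_gate", "gate_open")  # no matching rule: state unchanged
```

The semantic path processing unit would then write `result` back into the node's environment semantic information.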
  • a virtual reality scene is specifically a guard patrol scene:
  • the virtual environment agent of the guard patrol scene is specifically the designated patrol area environment
  • the virtual character agent is specifically the guard role
  • the semantic path constructed by the semantic path processing unit is specifically the patrol semantic path.
  • The patrol semantic path is drawn in advance on the designated patrol area environment and consists of patrol nodes and directed connections between them; each patrol node is preset with node position information, node behavior semantic information, and node environment semantic information according to its specific location in the designated patrol area environment.
  • the guard role performs path planning according to the semantic path and moves on the semantic path.
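The guard-patrol scene can be instantiated as a small circular route. The node positions, behaviors, and route below are illustrative assumptions for the sketch, not values from the application.

```python
# Hypothetical patrol nodes, each carrying a behavior to perform on arrival
# and the environment state currently recorded at that location.
patrol_nodes = [
    {"pos": (0, 0),   "behavior": "check_door",   "env": "door_locked"},
    {"pos": (10, 0),  "behavior": "scan_area",    "env": "clear"},
    {"pos": (10, 10), "behavior": "check_window", "env": "window_shut"},
]

# Directed connections forming a circular patrol route: 0 -> 1 -> 2 -> 0.
route = [(0, 1), (1, 2), (2, 0)]

def next_node(current):
    """Follow the directed connection out of the current node."""
    for a, b in route:
        if a == current:
            return b
    return current  # dead end: stay in place

# The guard starts at node 0 and follows the route for three steps,
# completing one full circuit of the patrol path.
visited = [0]
for _ in range(3):
    visited.append(next_node(visited[-1]))
```

After three moves `visited` traces one full loop of the circular path, matching the circular semantic path of FIG. 4.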
  • The semantic path in the embodiment of the present application is not limited to paths drawn on the land graphics of the virtual environment agent.
  • The semantic path can also be drawn on water graphics or on sky graphics.
  • For example, the virtual reality scene of the embodiment of the application may be a surreal scene of flying-dragon activity. The virtual character agent of this scene is an unreal image, a flying dragon; the geometry of the virtual environment agent of this scene includes water graphics such as rivers and seas, as well as sky graphics. The semantic path processing unit draws a trajectory composed of nodes and directed connections between nodes on these water and sky graphics as the semantic path and sets the node information at each node; the flying dragon makes behavior decisions according to the semantic path on the water and sky graphics, realizing interaction with the virtual environment.
  • The virtual reality scene includes a virtual environment agent, a virtual character agent, and a semantic path processing unit, and a semantic path composed of nodes and directed connections between nodes drawn on the virtual environment agent is constructed.
  • The nodes of the semantic path include node location information, node behavior semantic information, and node environment semantic information. This semantic path can not only guide the movement of the virtual character agent, but also convey the node behavior semantic information and node environment semantic information to the virtual character agent to guide it in performing target actions. At the same time, the action result information generated by the target action acting on the virtual environment agent can be obtained in time and the node environment semantic information updated accordingly, so that the virtual character agent can make its next target-action decision.
  • In other words, the semantic path constructed by the semantic path processing unit can serve as the medium between the virtual character agent and the virtual environment agent: it can both guide the behavior decisions of the virtual character agent and record the changes of the virtual environment agent. Therefore, through this semantic path, the interaction between the virtual character and the virtual environment in a virtual reality scene can be conveniently and effectively realized.
  • FIG. 3 shows a schematic flowchart of the first virtual reality scene interaction method provided by an embodiment of the present application.
  • the execution subject of the embodiment of the present application is a semantic path processing unit, which is described in detail as follows:
  • A user instruction is received, and a semantic path is constructed. The semantic path is a trajectory drawn on a geometric figure of a virtual environment agent and composed of nodes and directed connections between the nodes, and the node information includes at least node location information, node behavior semantic information, and node environment semantic information.
  • The semantic path is composed of nodes and directed connections between the nodes, where the node information includes at least the node position information, node behavior semantic information, and node environment semantic information of each node.
  • The directed connections provide the entry and exit direction and distance information for each node.
  • the position information of the node records the coordinate information of the node on the virtual environment agent;
  • the semantic information of the node behavior records the behavior that the virtual character agent needs to complete when the virtual character agent walks to the node, which can be represented by the corresponding behavior identification number;
  • The node environment semantic information records, in real time, the state information of the virtual environment agent at the location of the node, which can be represented by the corresponding state identification number.
  • step S301 includes:
  • S30101 Receive a drawing instruction, and draw the nodes of a semantic path and the directed connections between the nodes on the geometric figure of the virtual environment agent.
  • S30102 Receive a behavior semantic information selection instruction, and add node behavior semantic information to the nodes of the semantic path.
  • S30103 Receive an environment semantic selection instruction, and add initial node environment semantic information for the nodes of the semantic path.
  • Receiving a drawing instruction specifically means receiving the user's node selection instructions on the geometry of the virtual environment agent (that is, clicking on the virtual environment agent to select several designated locations as the nodes of the semantic path) and receiving the user's sliding instructions connecting the selected nodes, thereby completing the drawing of the nodes and the directed connections between them on the geometry of the virtual environment agent, that is, completing the trajectory drawing of the semantic path.
  • the directional connection may be a two-way connection.
  • The directed connection between node A and node B of the semantic path may have both directions A→B and B→A for the virtual character agent to make movement decisions.
  • the semantic path may be a circular semantic path as shown in FIG. 4 or a non-circular semantic path as shown in FIG. 5.
  • the semantic path may also be a compound path composed of multiple circular semantic sub-paths and/or non-circular semantic sub-paths, as shown in FIG. 6.
  • In S30102, the user's behavior semantic information selection instructions on the nodes of the drawn semantic path are received, and node behavior semantic information is added to each node of the semantic path respectively.
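The instruction-driven construction of steps S30101-S30103 can be sketched as a builder that consumes a stream of user instructions. The instruction names (`select_node`, `connect`, `set_behavior`, `set_env`) are illustrative assumptions standing in for the click and slide interactions described above.

```python
def build_path(instructions):
    """Construct a semantic path (nodes, directed edges) purely from
    received user instructions, with no scene-specific programming."""
    nodes, edges = [], []
    for kind, payload in instructions:
        if kind == "select_node":      # S30101: click a location on the environment geometry
            nodes.append({"pos": payload, "behavior": None, "env": None})
        elif kind == "connect":        # S30101: slide between two selected nodes
            edges.append(payload)      # payload is (from_index, to_index)
        elif kind == "set_behavior":   # S30102: behavior semantic selection
            idx, behavior = payload
            nodes[idx]["behavior"] = behavior
        elif kind == "set_env":        # S30103: initial environment semantic selection
            idx, env = payload
            nodes[idx]["env"] = env
    return nodes, edges

nodes, edges = build_path([
    ("select_node", (0, 0)),
    ("select_node", (4, 0)),
    ("connect", (0, 1)),
    ("set_behavior", (0, "salute")),
    ("set_env", (0, "clear")),
])
```

The point of the sketch is that the path is assembled entirely from received instructions, mirroring the "no complicated programming" claim below.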
  • The semantic path processing unit can complete the construction of the semantic path on the virtual environment agent simply by receiving the user's instructions, without complicated programming, so the semantic path can be constructed more conveniently and flexibly, reducing the development difficulty of virtual reality scene construction.
  • the node information further includes node traffic status information.
  • the step S301 further includes:
  • S30104 Receive a node traffic state selection instruction, and add node traffic state information to the nodes of the semantic path, where the node traffic state information includes first identification information for identifying a node as a path transition node, and second identification information for identifying a node as a suspended node.
  • The information of each node on the semantic path also includes node traffic state information, which is used to identify whether the node can be passed and whether the node is a path transition node.
  • The node traffic state information includes first identification information used to identify a node as a path transition node, and second identification information used to identify a node as a suspended node.
  • The node traffic state information can be stored in a variable "nodeState". For example, as shown in FIG. 6, node Ij is a path transition node, so first identification information is added to node Ij to identify it as a path transition node.
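The "nodeState" flag can be sketched as an enumeration with the two identifications named above plus a normal default. The member names and the `passable` helper are illustrative assumptions.

```python
from enum import Enum

class NodeState(Enum):
    NORMAL = 0
    PATH_TRANSITION = 1  # first identification: node where sub-paths join/switch
    SUSPENDED = 2        # second identification: node temporarily out of use

# Node Ij from the FIG. 6 example, flagged as a path transition node.
node_Ij = {"pos": (3, 7), "nodeState": NodeState.PATH_TRANSITION}

def passable(node):
    """A suspended node cannot be passed; all other states can."""
    return node["nodeState"] is not NodeState.SUSPENDED
```

A character agent's planner would consult `passable` before routing through a node, and treat `PATH_TRANSITION` nodes as points where it may switch between sub-paths of a compound path.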
  • the action result information of the virtual environment agent is obtained, and the node environment semantic information of the node of the semantic path is updated according to the action result information.
  • The semantic path processing unit obtains the action result information in time and updates the node environment semantic information of the target node according to the action result information.
  • the semantic path processing unit constructs a semantic path by receiving instructions from a user.
  • The semantic path is composed of nodes and directed connections between the nodes.
  • Each node carries information such as node position information, node behavior semantic information, and node environment semantic information, and the node environment semantic information of a node can be updated in time according to the action result information generated by the virtual environment agent.
  • Since the semantic path includes node location information and directed connections, it can guide the virtual character agent in planning and moving; since it includes node behavior semantic information and node environment semantic information, it can guide the virtual character agent in executing target actions at the nodes; and since the node environment semantic information of the semantic path can be updated in time based on the action result information generated by the virtual environment agent, it can record and update that information in time so that the virtual character agent can make its next target-action decision. Thus, the semantic path can effectively guide the behavior of the virtual character agent on the virtual environment agent, and conveniently and effectively realize the interaction between the virtual character and the virtual environment in the virtual reality scene.
  • FIG. 7 shows a schematic flowchart of a second virtual reality scene interaction method provided by an embodiment of the present application.
  • The execution subject of the embodiment of the present application is a virtual character agent, which is an agent that has character-shaped geometric figure information (such as simulated human figures, animal figures, etc.) and can autonomously carry out path planning and behavior decision-making based on the information of the semantic path and its own tasks.
  • the details are as follows:
  • A semantic path is acquired; the semantic path is a trajectory drawn on a geometric figure of a virtual environment agent and composed of nodes and directed connections between the nodes, and the node information includes at least node position information, node behavior semantic information, and node environment semantic information.
  • The virtual character agent obtains a semantic path, which is a trajectory composed of nodes and directed connections between nodes drawn in advance on the geometry of the virtual environment agent; the node information includes at least node position information, node behavior semantic information, and node environment semantic information.
  • The node position information records the coordinate information of the node on the virtual environment agent;
  • the node behavior semantic information records the behavior that the virtual character agent needs to complete when it walks to the node, which can be represented by a corresponding behavior identification number;
  • the node environment semantic information records, in real time, the state information of the virtual environment agent at the location of the node, which can be represented by a corresponding state identification number.
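The node record described above can be sketched as a small data structure. This is a minimal illustrative sketch, not code from the patent; the class and field names (`PathNode`, `behavior_state`, `environment_state`) are assumptions mirroring the `behaviorState`/`environmentState` identification numbers used later in the document.

```python
from dataclasses import dataclass

@dataclass
class PathNode:
    # Coordinate information of the node on the virtual environment agent's geometry.
    position: tuple
    # Behavior identification number, e.g. 28 for "open or close the door here".
    behavior_state: int
    # Environment state identification number, e.g. 29 for "the nearby door is closed".
    environment_state: int

# A hypothetical node of the semantic path.
node_a = PathNode(position=(3.0, 0.0, 7.5), behavior_state=28, environment_state=29)
```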
  • The virtual character agent obtains an ordered node sequence group of the semantic path from the semantic path processing unit; each element in the node sequence group includes the sequence number of the node, the node position information, the node behavior semantic information, the node environment semantic information, and so on.
  • the virtual character agent has a visual recognition function, and obtains the drawn semantic path trajectory from the virtual reality scene through visual recognition.
  • The virtual character agent performs path planning and movement according to its own tasks and the acquired semantic path. For example, suppose the task of the virtual character agent is to patrol a designated area; the virtual character then moves along the semantic path trajectory contained in the designated area.
  • step S702 specifically includes:
  • the nodes of the semantic path also include node passage status information.
  • When the virtual character agent detects that its own position information is consistent with the node position information of a node in the semantic path, it means that the virtual character agent has currently moved to that node. At this time, the virtual character agent executes the target action according to the node behavior semantic information and node environment semantic information of that node.
  • step S703 includes:
  • The target behavior that the virtual character agent needs to complete at this node is the operation behavior on the door.
  • S70302: Determine and execute a target action according to the execution bases corresponding to the target behavior and the node environment semantic information, where an execution base is a two-tuple that correspondingly stores environment semantic information and a target action.
  • the execution base of each action is pre-stored in the virtual character agent.
  • An execution base is a two-tuple (b, z) that correspondingly stores environment semantic information and a target action, meaning that the target action b is executed when the environment semantic information is z. The multiple execution bases corresponding to the target behavior are obtained, and the target action b corresponding to the current environment semantic information is found according to the node environment semantic information.
  • For example, the target behavior is the operation behavior on the door.
  • The virtual character agent pre-stores two execution bases corresponding to this target behavior: (door-opening action, the door is in the closed state) and (door-closing action, the door is in the open state).
  • According to the node environment semantic information, the virtual character agent queries the two execution bases of the door operation behavior and determines that the target action currently to be executed is the door-opening action.
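The execution-base lookup above can be sketched as follows. This is a minimal illustrative sketch under the assumption that environment states are encoded with the `environmentState` numbers used elsewhere in the document (29 = door closed, 30 = door open); the dictionary key `"operate_door"` and the action identifier `"Action_CloseDoor"` are hypothetical names (only `Action_OpenDoor` appears in the source).

```python
# Each execution base is a two-tuple (target action, environment semantic state
# under which that action is executed), grouped here by target behavior.
EXECUTION_BASES = {
    "operate_door": [
        ("Action_OpenDoor", 29),   # execute the door-opening action when the door is closed (29)
        ("Action_CloseDoor", 30),  # execute the door-closing action when the door is open (30)
    ],
}

def decide_target_action(target_behavior, node_environment_state):
    """Find the target action whose stored environment state matches the node's."""
    for action, env_state in EXECUTION_BASES[target_behavior]:
        if env_state == node_environment_state:
            return action
    return None  # no execution base matches the current environment state
```

With a closed door (`environmentState=29`), the lookup yields the door-opening action; with an open door, the door-closing action.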
  • the execution target action includes:
  • The virtual character agent executes the door-opening action by running a target animation, that is, the geometric figure information of the virtual character agent undergoes a specified dynamic change to show that the target action has been performed. For example, if the target action is a door-opening action, the target animation stored in correspondence with the door-opening action is played, and the virtual character agent performs the door-opening action on the virtual scene screen through the animation. At the same time as, or after, executing the target animation, the virtual character agent conveys the target action information to the virtual environment agent; it can convey that the virtual character has currently performed the target action by conveying the action identification number corresponding to the target action.
  • The virtual character agent obtains the semantic path, moves according to the node position information of the semantic path, and, upon reaching each node, executes the target action according to the node behavior semantic information and node environment semantic information on the semantic path.
  • The semantic path thus conveniently and effectively guides the virtual character's behavior decisions on the virtual environment agent, and effectively realizes the interaction between the virtual character and the virtual environment in the virtual reality scene.
  • FIG. 8 shows a schematic flowchart of a third virtual reality scene interaction method provided by an embodiment of the present application.
  • The execution subject of the embodiment of the present application is a virtual environment agent. The virtual environment agent is an agent that has three-dimensional simulated environment geometric figure information and can perceive surrounding things and act on them through actuators.
  • the details are as follows:
  • the node information includes at least node position information, node behavior semantic information, and node environment semantic information.
  • The semantic path in the embodiment of the present application is a trajectory composed of nodes and directed connections between the nodes, drawn in advance on the geometric figure of the virtual environment agent, and the node information includes at least node position information, node behavior semantic information, and node environment semantic information.
  • The node position information records the coordinate information of the node on the virtual environment agent;
  • the node behavior semantic information records the behavior that the virtual character agent needs to complete when it walks to the node, which can be represented by a corresponding behavior identification number;
  • the node environment semantic information records, in real time, the state information of the virtual environment agent at the location of the node, which can be represented by a corresponding state identification number.
  • the virtual environment agent obtains information about the target action performed by the virtual character agent on a node of the semantic path, and the target action information may be an action identification number corresponding to the target action.
  • the virtual environment agent obtains information about the door opening action performed by the virtual character agent on node A, and the action identification number may be "Action_OpenDoor".
  • the action result information is obtained according to the information of the target action and the semantic information of the node environment of the node.
  • According to the information of the target action and the node environment semantic information of the node, after the target action acts on the virtual environment agent, the environment state change information at the position in the virtual environment agent corresponding to the node is obtained; this information is called action result information.
  • step S802 includes:
  • The environment state transition mapping relationship of the virtual environment agent is queried according to the target action information and the node environment semantic information of the node, to obtain action result information.
  • the virtual environment agent prestores the environment state transition mapping relationship, which can also be called the state transition axiom of the virtual environment agent.
  • the environment state transition mapping relationship can be realized through a state transition mapping table.
  • Each item in the state transition mapping table stores the information of a target action, the node environment semantic information of the node before the action is executed, and the node environment semantic information after the target action is executed.
  • According to the acquired information of the target action and the node environment semantic information of the current node, the state transition mapping table is queried to obtain the corresponding action result information after the current node executes the target action.
  • the information corresponding to one item in the state transition mapping table pre-stored in the virtual environment agent is as follows:
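The state transition mapping table can be sketched as a simple lookup keyed by (target action, environment state before). This is a minimal illustrative sketch, not the patent's actual table; the entries use the `Action_OpenDoor` identifier and the `environmentState` numbers from the document's door example, while `Action_CloseDoor` is a hypothetical companion entry.

```python
# One row per table item: (action id, environment state before) -> state after.
STATE_TRANSITION_TABLE = {
    ("Action_OpenDoor", 29): 30,   # closed door (29) + open action -> door open (30)
    ("Action_CloseDoor", 30): 29,  # open door (30) + close action -> door closed (29)
}

def query_action_result(action_id, env_state_before):
    """Return the action result information (the new environment state), if any."""
    return STATE_TRANSITION_TABLE.get((action_id, env_state_before))
```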
  • the semantic path processing unit is instructed to update the node environment semantic information of the node of the semantic path according to the action result information.
  • Since the virtual environment agent obtains the target action information at a node and obtains the corresponding action result information, it can instruct the semantic path processing unit to promptly update the node environment semantic information of that node of the semantic path according to the action result information, so that the virtual character agent can make its next target-action decision.
  • Therefore, through the virtual environment agent's updating of the records of the semantic path, the interaction between the virtual character and the virtual environment in the virtual reality scene can be conveniently and effectively realized.
  • the embodiment of the present application also provides a semantic path processing unit. As shown in FIG. 9, for ease of description, only the parts related to the embodiment of the present application are shown:
  • The semantic path processing unit includes: an instruction receiving module 91 and a node environment semantic information update module 92. Among them:
  • The instruction receiving module 91 is configured to receive user instructions and construct a semantic path, which is a trajectory drawn on the geometric figure of a virtual environment agent and composed of nodes and directed connections between the nodes, and the node information includes at least node position information, node behavior semantic information, and node environment semantic information.
  • the instruction receiving module 91 specifically includes a drawing module, a behavior semantic information selection module, and an environment semantic selection module:
  • The drawing module is used to receive a drawing instruction, and draw the nodes of a semantic path and the directed connections between the nodes on the geometric figure of the virtual environment agent;
  • the behavior semantic information selection module is used to receive behavior semantic information selection instructions, and add node behavior semantic information to the nodes of the semantic path;
  • the environment semantic selection module is used to receive an environment semantic selection instruction, and add initial node environment semantic information to the nodes of the semantic path.
  • Optionally, the instruction receiving module further includes: a node traffic state selection module, configured to receive a node traffic state selection instruction and add node traffic state information to the nodes of the semantic path, wherein the node traffic state information includes first identification information used to identify the node as a path conversion node and second identification information used to identify the node as a suspended-use node.
  • the node environment semantic information update module 92 is configured to obtain the action result information of the virtual environment agent, and update the node environment semantic information of the node of the semantic path according to the action result information.
  • The embodiment of the present application also provides a virtual character agent, as shown in FIG. 10. For ease of description, only the parts related to the embodiment of the present application are shown:
  • The virtual character agent includes: a semantic path acquisition module 101, a movement module 102, and a target action execution module 103. Among them:
  • The semantic path acquisition module 101 is configured to acquire a semantic path, which is a trajectory drawn on the geometric figure of a virtual environment agent and composed of nodes and directed connections between the nodes, and the node information includes at least node position information, node behavior semantic information, and node environment semantic information.
  • the moving module 102 is used to move according to its own task and the semantic path.
  • the movement module 102 is specifically configured to move according to its own task, node location information of the semantic path, and node traffic status information.
  • the target action execution module 103 is configured to execute the target action according to the node behavior semantic information of the node and the node environment semantic information when the position information of the virtual character agent is consistent with the node position information of the node of the semantic path .
  • Optionally, the target action execution module 103 is specifically configured to: when the position information of the virtual character agent is consistent with the node position information of a node of the semantic path, determine the target behavior according to the node behavior semantic information, and determine and execute a target action according to the execution bases corresponding to the target behavior and the node environment semantic information, wherein an execution base is a two-tuple that correspondingly stores environment semantic information and a target action.
  • the execution of the target action specifically includes: executing the target animation and conveying the information of the target action to the virtual environment agent.
  • The embodiment of the present application also provides a virtual environment agent, as shown in FIG. 11. For ease of description, only the parts related to the embodiment of the present application are shown:
  • The virtual environment agent includes: a target action information acquisition module 111, an action result information acquisition module 112, and an instruction update module 113. Among them:
  • The target action information acquisition module 111 is used to acquire the information of the target action of the virtual character agent at a node of the semantic path, the semantic path being a trajectory drawn on the geometric figure of the virtual environment agent and composed of nodes and directed connections between the nodes, and the node information includes at least node position information, node behavior semantic information, and node environment semantic information.
  • the action result information obtaining module 112 is configured to obtain action result information according to the information of the target action and the semantic information of the node environment of the node.
  • the action result information obtaining module 112 is specifically configured to query the environment state transition mapping relationship of the virtual environment agent according to the target action information and the node environment semantic information of the node to obtain the action result information.
  • the instruction updating module 113 is used to instruct the semantic path processing unit to update the node environment semantic information of the nodes of the semantic path according to the action result information.
  • Fig. 12 is a schematic diagram of a terminal device provided by an embodiment of the present invention.
  • the terminal device 12 of this embodiment includes: a processor 120, a memory 121, and a computer program 122 stored in the memory 121 and running on the processor 120, such as a virtual reality scene interaction program .
  • When the processor 120 executes the computer program 122, the steps in the above embodiments of the virtual reality scene interaction method are implemented, such as steps S301 to S302 shown in FIG. 3, steps S701 to S703 shown in FIG. 7, or steps S801 to S803 shown in FIG. 8.
  • Alternatively, when the processor 120 executes the computer program 122, the functions of the modules/units in the foregoing device embodiments are implemented, such as the functions of modules 91 to 92 shown in FIG. 9, the functions of modules 101 to 102 shown in FIG. 10, or the functions of modules 111 to 112 shown in FIG. 11.
  • the computer program 122 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 121 and executed by the processor 120 to complete this invention.
  • the one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 122 in the terminal device 12.
  • the computer program 122 may be divided into an instruction receiving module and a node environment semantic information update module; or the computer program 122 may be divided into a semantic path acquisition module, a movement module, and a target action execution module; or the computer program 122 can be divided into a target action information acquisition module, an action result information acquisition module, and an instruction update module.
  • the terminal device 12 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, a processor 120 and a memory 121.
  • FIG. 12 is only an example of the terminal device 12 and does not constitute a limitation on the terminal device 12; it may include more or fewer components than those shown in the figure, or combine certain components, or have different components.
  • the terminal device may also include input and output devices, network access devices, buses, and so on.
  • The so-called processor 120 may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 121 may be an internal storage unit of the terminal device 12, such as a hard disk or a memory of the terminal device 12.
  • The memory 121 may also be an external storage device of the terminal device 12, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the terminal device 12.
  • the memory 121 may also include both an internal storage unit of the terminal device 12 and an external storage device.
  • the memory 121 is used to store the computer program and other programs and data required by the terminal device.
  • the memory 121 may also be used to temporarily store data that has been output or will be output.
  • the disclosed device/terminal device and method may be implemented in other ways.
  • the device/terminal device embodiments described above are only illustrative.
  • The division of the modules or units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • When the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • All or part of the processes in the methods of the above embodiments of the present invention can also be completed by instructing relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium. When the program is executed by the processor, it can implement the steps of the foregoing method embodiments.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, U disk, mobile hard disk, magnetic disk, optical disk, computer memory, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), electrical carrier signals, telecommunications signals, and software distribution media.


Abstract

A virtual reality scene, an interaction method therefor, and a terminal device, wherein the virtual reality scene includes a virtual environment agent, a virtual character agent, and a semantic path processing unit. The semantic path processing unit is configured to construct a semantic path, the semantic path being a trajectory drawn on the geometric figure of the virtual environment agent and composed of nodes and directed connections between the nodes, the node information including at least node position information, node behavior semantic information, and node environment semantic information. The virtual character agent is configured to acquire the semantic path, and to move and execute target actions according to its own tasks and the semantic path. The virtual environment agent is configured to obtain action result information according to the information of the target action of the virtual character agent at a node of the semantic path, and to instruct the semantic path processing unit to update the node environment semantic information of the semantic path, so that the interaction in the virtual reality scene can be effectively realized.

Description

Virtual reality scene, interaction method therefor, and terminal device. TECHNICAL FIELD
The present application belongs to the technical field of virtual reality, and in particular relates to a virtual reality scene, an interaction method therefor, and a terminal device.
BACKGROUND
An existing virtual reality scene usually consists of a virtual character and a virtual environment. A virtual character refers to a three-dimensional model, generated through virtual reality technology, of a human figure, an animal figure, or an artificially designed fantasy figure (for example, a flying dragon), which can simulate the perception and behavior of humans or animals. Just as humans and animals need a living environment in real life, a virtual character also needs its living environment, and the virtual environment is the living environment of the virtual character. The coordinated presentation of the virtual character and the virtual environment together constitutes a realistic and immersive virtual reality scene.
TECHNICAL PROBLEM
In view of this, the embodiments of the present application provide a virtual reality scene, an interaction method therefor, and a terminal device, to solve the problem in the prior art of how to conveniently and effectively realize the interaction between a virtual character and a virtual environment in a virtual reality scene.
TECHNICAL SOLUTION
A first aspect of the present application provides a virtual reality scene, the virtual reality scene including a virtual environment agent, a virtual character agent, and a semantic path processing unit.
The semantic path processing unit is configured to construct a semantic path, the semantic path being a trajectory drawn on the geometric figure of the virtual environment agent and composed of nodes and directed connections between the nodes, the node information including at least node position information, node behavior semantic information, and node environment semantic information.
The virtual character agent is configured to acquire the semantic path and move according to its own tasks and the semantic path, and, when the position information of the virtual character agent is consistent with the node position information of a node of the semantic path, execute a target action according to the node behavior semantic information and node environment semantic information of the node.
The virtual environment agent is configured to acquire the information of the target action of the virtual character agent at the node of the semantic path, obtain action result information according to the information of the target action and the node environment semantic information of the node, and instruct the semantic path processing unit to update the node environment semantic information of the node of the semantic path according to the action result information.
A second aspect of the present application provides a virtual reality scene interaction method, applied to a semantic path processing unit, including:
receiving user instructions and constructing a semantic path, the semantic path being a trajectory drawn on the geometric figure of a virtual environment agent and composed of nodes and directed connections between the nodes, the node information including at least node position information, node behavior semantic information, and node environment semantic information;
acquiring action result information of the virtual environment agent, and updating the node environment semantic information of the nodes of the semantic path according to the action result information.
A third aspect of the present application provides a virtual reality scene interaction method, applied to a virtual character agent, including:
acquiring a semantic path, the semantic path being a trajectory drawn on the geometric figure of a virtual environment agent and composed of nodes and directed connections between the nodes, the node information including at least node position information, node behavior semantic information, and node environment semantic information;
moving according to the agent's own tasks and the semantic path;
when the position information of the virtual character agent is consistent with the node position information of a node of the semantic path, executing a target action according to the node behavior semantic information and node environment semantic information of the node.
A fourth aspect of the present application provides a virtual reality scene interaction method, applied to a virtual environment agent, including:
acquiring the information of a target action of a virtual character agent at a node of a semantic path, the semantic path being a trajectory drawn on the geometric figure of the virtual environment agent and composed of nodes and directed connections between the nodes, the node information including at least node position information, node behavior semantic information, and node environment semantic information;
obtaining action result information according to the information of the target action and the node environment semantic information of the node;
instructing a semantic path processing unit to update the node environment semantic information of the node of the semantic path according to the action result information.
A fifth aspect of the present application provides a semantic path processing unit, including:
an instruction receiving module, configured to receive user instructions and construct a semantic path, the semantic path being a trajectory drawn on the geometric figure of a virtual environment agent and composed of nodes and directed connections between the nodes, the node information including at least node position information, node behavior semantic information, and node environment semantic information;
a node environment semantic information update module, configured to acquire action result information of the virtual environment agent, and update the node environment semantic information of the nodes of the semantic path according to the action result information.
A sixth aspect of the present application provides a virtual character agent, including:
a semantic path acquisition module, configured to acquire a semantic path, the semantic path being a trajectory drawn on the geometric figure of a virtual environment agent and composed of nodes and directed connections between the nodes, the node information including at least node position information, node behavior semantic information, and node environment semantic information;
a movement module, configured to move according to the agent's own tasks and the semantic path;
a target action execution module, configured to, when the position information of the virtual character agent is consistent with the node position information of a node of the semantic path, execute a target action according to the node behavior semantic information and node environment semantic information of the node.
A seventh aspect of the present application provides a virtual environment agent, including:
a target action information acquisition module, configured to acquire the information of a target action of a virtual character agent at a node of a semantic path, the semantic path being a trajectory drawn on the geometric figure of the virtual environment agent and composed of nodes and directed connections between the nodes, the node information including at least node position information, node behavior semantic information, and node environment semantic information;
an action result information acquisition module, configured to obtain action result information according to the information of the target action and the node environment semantic information of the node;
an instruction update module, configured to instruct the semantic path processing unit to update the node environment semantic information of the node of the semantic path according to the action result information.
An eighth aspect of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, any one of the virtual reality scene interaction methods of the second to fourth aspects is implemented.
A ninth aspect of the present application provides a computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, any one of the virtual reality scene interaction methods of the second to fourth aspects is implemented.
A tenth aspect of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute any one of the virtual reality scene interaction methods of the second to fourth aspects.
BENEFICIAL EFFECTS
In the embodiments of the present application, the virtual reality scene includes a virtual environment agent, a virtual character agent, and a semantic path processing unit, and a semantic path composed of nodes and directed connections is constructed and drawn on the geometric figure of the virtual environment agent, the nodes of the semantic path including node position information, node behavior semantic information, and node environment semantic information. The semantic path can guide the movement of the virtual character agent, and can convey node behavior semantic information and node environment semantic information to the virtual character agent to guide it in executing target actions; at the same time, it can promptly acquire the action result information produced by the target action acting on the virtual environment agent and update the node environment semantic information so that the virtual character agent can make its next target-action execution decision. In other words, the semantic path constructed by the semantic path processing unit can serve as a medium between the virtual character agent and the virtual environment agent, both guiding the behavior decisions of the virtual character agent and recording the changes of the virtual environment agent. Therefore, through the semantic path, the interaction between the virtual character and the virtual environment in the virtual reality scene can be conveniently and effectively realized.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained based on these drawings without creative labor.
FIG. 1 is a schematic structural diagram of a virtual reality scene provided by the present application;
FIG. 2 is a schematic diagram of a picture of a virtual reality scene provided by the present application;
FIG. 3 is a schematic flowchart of the implementation of a first virtual reality scene interaction method provided by the present application;
FIGS. 4-6 are schematic diagrams of three semantic paths provided by the present application, respectively;
FIG. 7 is a schematic flowchart of the implementation of a second virtual reality scene interaction method provided by the present application;
FIG. 8 is a schematic flowchart of the implementation of a third virtual reality scene interaction method provided by the present application;
FIG. 9 is a schematic diagram of the composition of a semantic path processing unit provided by the present application;
FIG. 10 is a schematic diagram of the composition of a virtual character agent provided by the present application;
FIG. 11 is a schematic diagram of the composition of a virtual environment agent provided by the present application;
FIG. 12 is a schematic structural diagram of a terminal device provided by the present application.
EMBODIMENTS OF THE PRESENT INVENTION
In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and technologies are set forth in order to provide a thorough understanding of the embodiments of the present invention. However, it should be clear to those skilled in the art that the present invention can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary details do not obscure the description of the present invention.
In order to illustrate the technical solutions of the present invention, specific embodiments are described below.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components, and/or collections thereof.
It should also be understood that the terms used in this specification are only for the purpose of describing specific embodiments and are not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when" or "once" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as meaning "once it is determined" or "in response to determining" or "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application, the terms "first", "second", etc. are used only to distinguish the descriptions and cannot be understood as indicating or implying relative importance.
Embodiment 1
FIG. 1 is a schematic structural diagram of a virtual reality scene provided by an embodiment of the present application, wherein the virtual reality scene consists of a virtual environment agent 11, a virtual character agent 12, and a semantic path processing unit 13. The semantic path processing unit 13 is configured to construct a semantic path 131, which is a trajectory drawn on the geometric figure of the virtual environment agent and composed of nodes and directed connections between the nodes, and the node information includes at least node position information, node behavior semantic information, and node environment semantic information.
In previous virtual reality scenes, the virtual character and the virtual environment were two separate, independent units. Although graphically the virtual character and the virtual environment could be seen to share the same scene picture, the behavior of the virtual character did not actually interact with the virtual environment in which it was located; complex programs had to be constructed and combined separately for the two units before seemingly interactive cooperation could be achieved. That is, in existing virtual reality scenes, it is difficult for the virtual character and the virtual environment to interact conveniently and effectively, and it is difficult for the virtual character to make behavior decisions according to the specific state information of the virtual environment. In the embodiments of the present application, by contrast, both the virtual character and the virtual environment are agents that have three-dimensional geometric figures and can recognize and convert semantic information, called the virtual character agent and the virtual environment agent, respectively. Through the node position information, node behavior semantic information, and node environment semantic information of the semantic path drawn on the geometric figure of the virtual environment agent, the virtual character agent is guided to walk on the virtual environment agent and execute target actions, and the influence of the virtual character agent's actions on the virtual environment agent is recorded. That is, with the semantic path serving as a medium between the virtual character agent and the virtual environment agent, the interaction between the virtual character and the virtual environment is conveniently and effectively realized.
Specifically, the semantic path of the embodiments of the present application has the following three functions: first, the function of the path itself, which solves the route-planning problem of the virtual character agent; second, the function of serializing the behavior of the virtual character agent, which solves the behavior-planning problem of the virtual character in executing a certain task; third, the function of environment semantics, which, through the path nodes, gives the states of relevant entities near each position node and their change information, solving the virtual character agent's problem of perceiving the virtual environment agent in which it is located.
Specifically, the semantic path processing unit is configured to construct a semantic path, the semantic path being a trajectory drawn on the geometric figure of the virtual environment agent and composed of nodes and directed connections between the nodes, the node information including at least node position information, node behavior semantic information, and node environment semantic information.
FIG. 2 is a schematic diagram of a picture of a virtual reality scene. The picture of the virtual reality scene consists of a virtual environment agent, a virtual character agent moving on the virtual environment agent, and a semantic path drawn on the geometric figure of the virtual environment agent; the semantic path is constructed by the semantic path processing unit. Specifically, the semantic path consists of nodes and directed connections between the nodes, where the node information includes at least the node position information, node behavior semantic information, and node environment semantic information of the node, and the directed connections constitute the information of the directions and distances from which each node can be departed from and entered. Specifically, the node position information records the coordinate information of the node on the virtual environment agent; the node behavior semantic information records the behavior the virtual character agent needs to complete when it walks to the node, which can be represented by a corresponding behavior identification number, for example, behaviorState=28 indicates that the behavior of opening or closing a door is required here; the node environment semantic information records, in real time, the state information of the virtual environment agent at the location of the node, which can be represented by a corresponding state identification number, for example, environmentState=29 indicates that a door near the node is in the closed state.
Specifically, the virtual character agent is configured to acquire the semantic path and move according to its own tasks and the semantic path, and, when the position information of the virtual character agent is consistent with the node position information of a node of the semantic path, execute a target action according to the node behavior semantic information and node environment semantic information of the node.
The virtual character agent is an agent that has character-shaped geometric figure information (for example, a simulated human figure or animal figure) and can autonomously perform path planning and behavior decision-making according to the information of the semantic path and its own tasks. The virtual character agent acquires the semantic path drawn on the virtual environment agent, determines the target position to be reached according to its own tasks, performs path planning according to the semantic path, and determines the target trajectory of the semantic path that must be traversed to reach the target position; the target trajectory may be one whole complete trajectory of the semantic path, or a partial trajectory of the semantic path. The virtual character moves along the semantic path according to the planned target trajectory. When the virtual character agent detects that its own position information is consistent with the node position information of a node of the semantic path, it acquires the node behavior semantic information and node environment semantic information of the node, and executes a target action according to them. For example, when the virtual character agent detects that its own position information is consistent with the node position information of node A of the semantic path, it acquires the node behavior semantic information "behaviorState=28" of node A (indicating that the behavior of opening or closing a door is required here) and the node environment semantic information "environmentState=29" (indicating that a door near the node is in the closed state), and determines according to them that the target action to be executed at node A is the door-opening action. Specifically, after executing the target action, the virtual character agent sends information identifying the target action to the virtual environment agent; the information of the target action may be an identification number identifying the target action.
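The path planning step described above can be sketched as a search over the directed connections of the semantic path. This is a minimal illustrative sketch, not from the patent: the node names and connection table are hypothetical, and breadth-first search stands in for whatever planning method an implementation might actually use.

```python
from collections import deque

# Directed connections of an illustrative semantic path: node -> reachable next nodes.
CONNECTIONS = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C"],
}

def plan_target_trajectory(start, goal):
    """Breadth-first search over the directed connections; returns a node sequence
    (the target trajectory) from the start node to the goal node, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in CONNECTIONS.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None
```

The resulting node sequence is then walked node by node, triggering the behavior decision at each node whose position the character reaches.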
Specifically, the virtual environment agent is configured to acquire the information of the target action of the virtual character agent at the node of the semantic path, obtain action result information according to the information of the target action and the node environment semantic information of the node, and instruct the semantic path processing unit to update the node environment semantic information of the node of the semantic path according to the action result information.
The virtual environment agent is an agent that has three-dimensional simulated environment geometric figure information and can perceive surrounding things and act on them through actuators. The virtual environment agent acquires the information of the target action conveyed by the virtual character agent and the node environment semantic information of the node where the virtual character agent is currently located, and obtains, through the virtual environment agent's own state transition axioms (that is, a preset state transition mapping relationship, which may be a state transition mapping table), the environment state change information of the position in the virtual environment agent corresponding to the node after the target action acts on the virtual environment agent; this information is called action result information. For example, the virtual character agent executes the door-opening action (the action identification number may be Action_OpenDoor) at node A, whose node behavior semantic information is "behaviorState=28" (indicating that the behavior of opening or closing a door is required here) and whose node environment semantic information is "environmentState=29" (indicating that a door near the node is in the closed state). The virtual environment agent acquires the node environment semantic information "environmentState=29" of the node and the action identification number "Action_OpenDoor", queries the preset state transition mapping relationship, and obtains the corresponding action result information after the door-opening action, namely that the door has been opened; according to the action result information, the node environment semantic information of the node on the semantic path is updated, that is, the original "environmentState=29" is changed to "environmentState=30" (indicating that a door near the node is in the open state).
For ease of understanding, as an example rather than a limitation, an application example in which the virtual reality scene is specifically a guard patrol scene is provided below:
In the guard patrol scene, the virtual environment agent is specifically a designated patrol area environment, the virtual character agent is specifically a guard character, and the semantic path constructed by the semantic path processing unit is specifically a patrol semantic path. The patrol semantic path is drawn in advance on the designated patrol area environment and consists of patrol nodes and directed connections between the patrol nodes, and each patrol node is set in advance with node position information, node behavior semantic information, and node environment semantic information according to its specific position in the designated patrol area environment.
The node behavior semantic information can be stored in a variable behaviorState: behaviorState=28 indicates that the behavior of opening or closing a door is required here; behaviorState=29 indicates that the behavior of checking a window is required here; behaviorState=30 indicates that the emergency handling behavior for a fire is required here according to the fire situation on site; and so on.
The node environment semantic information can be stored in a variable environmentState: environmentState=29 indicates that a door near the node is in the closed state, and environmentState=30 indicates that a door near the node is in the open state; environmentState=31 indicates that a window near the node is in the broken state, and environmentState=32 indicates that the broken window near the node has been repaired; environmentState=33 indicates that there is a fire near the node, and environmentState=34 indicates that the fire near the node has been dealt with; and so on.
The guard character performs path planning according to the semantic path and moves along the semantic path. When the guard character reaches node A, whose node behavior semantic information is "behaviorState=28" and whose node environment semantic information is "environmentState=29", the guard character executes the door-opening action; the designated patrol area environment receives the information of this action, obtains the action result information that the door has been opened, and instructs the patrol semantic path to change the node environment semantic information of node A from "environmentState=29" to "environmentState=30" accordingly. When the guard character reaches node B, whose node behavior semantic information is "behaviorState=29" and whose node environment semantic information is "environmentState=31", the guard character executes the window-repair action; the designated patrol area environment receives the information of this action, obtains the action result information that the window has been repaired, and instructs the patrol semantic path to change the node environment semantic information of node B from "environmentState=31" to "environmentState=32" accordingly. When the guard character reaches node C, whose node behavior semantic information is "behaviorState=30" and whose node environment semantic information is "environmentState=33", the guard character executes the fire-handling action; the designated patrol area environment receives the information of this action, obtains the action result information that the fire has been dealt with, and instructs the patrol semantic path to change the node environment semantic information of node C from "environmentState=33" to "environmentState=34" accordingly.
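The guard patrol walkthrough above can be sketched end to end as a small simulation. This is an illustrative sketch only: the rule table encodes the (behaviorState, environmentState) pairs and resulting states given in the example, while the action names and dictionary layout are assumptions.

```python
# (behaviorState, environmentState before) -> (action performed, environmentState after),
# per the patrol example: 28/29 door, 29/31 window, 30/33 fire.
PATROL_RULES = {
    (28, 29): ("open door", 30),
    (29, 31): ("repair window", 32),
    (30, 33): ("handle fire", 34),
}

def patrol(nodes):
    """Walk the patrol nodes in order, updating each node's environmentState in place
    and logging the action executed at each node (if any)."""
    log = []
    for node in nodes:
        key = (node["behaviorState"], node["environmentState"])
        if key in PATROL_RULES:
            action, new_state = PATROL_RULES[key]
            node["environmentState"] = new_state
            log.append((node["name"], action))
    return log

# The three nodes of the walkthrough: door at A, broken window at B, fire at C.
route = [
    {"name": "A", "behaviorState": 28, "environmentState": 29},
    {"name": "B", "behaviorState": 29, "environmentState": 31},
    {"name": "C", "behaviorState": 30, "environmentState": 33},
]
```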
It can be understood that the semantic path in the embodiments of the present application is not limited to a path drawn on the land graphics of the virtual environment agent; when the virtual environment agent contains water-area graphics or sky graphics, the semantic path may also be a trajectory drawn on the water-area graphics or a trajectory drawn on the sky graphics. For example, the virtual reality scene of the embodiments of the present application may be a surreal scene of a flying dragon's activities, in which the virtual character agent of the scene is a fantasy figure, a flying dragon; the geometric figures of the virtual environment agent of the scene specifically include water-area graphics such as rivers and seas as well as sky graphics. The semantic path processing unit draws trajectories composed of nodes and directed connections between the nodes on the water-area graphics such as rivers and seas and on the sky graphics as semantic paths, and sets, on each node, the node behavior semantic information of the behavior the flying dragon needs to execute there, as well as the node environment semantic information corresponding to each node. The flying dragon can make behavior decisions according to the semantic paths drawn on the water-area graphics such as rivers and seas of the virtual environment agent or on the sky graphics of the virtual environment agent, realizing surreal scenes of the flying dragon churning the rivers and seas or flying and shuttling through the air.
In the embodiments of the present application, the virtual reality scene includes a virtual environment agent, a virtual character agent, and a semantic path processing unit, and a semantic path composed of nodes and directed connections between the nodes is constructed and drawn on the virtual environment agent, the nodes of the semantic path including node position information, node behavior semantic information, and node environment semantic information. The semantic path can guide the movement of the virtual character agent, and can convey node behavior semantic information and node environment semantic information to the virtual character agent to guide it in executing target actions; at the same time, it can promptly acquire the action result information produced by the target action acting on the virtual environment agent and update the node environment semantic information so that the virtual character agent can make its next target-action execution decision. In other words, the semantic path constructed by the semantic path processing unit can serve as a medium between the virtual character agent and the virtual environment agent, both guiding the behavior decisions of the virtual character agent and recording the changes of the virtual environment agent. Therefore, through the semantic path, the interaction between the virtual character and the virtual environment in the virtual reality scene can be conveniently and effectively realized.
Embodiment 2
Fig. 3 is a schematic flowchart of the first virtual reality scene interaction method provided by an embodiment of the present application; the execution subject of this embodiment is the semantic path processing unit. Details are as follows:
In S301, a user instruction is received and a semantic path is constructed, the semantic path being a trajectory drawn on the geometric graphics of a virtual environment agent and consisting of nodes and directed connections between the nodes, the information of a node including at least node position information, node behavior semantic information and node environment semantic information.
A click instruction or a touch instruction issued by the user on the picture of the virtual reality scene is received, and a semantic path is constructed. The semantic path consists of nodes and directed connections between the nodes; the information of a node includes at least the node's node position information, node behavior semantic information and node environment semantic information, and the directed connections constitute, for each node, the information of the directions and distances in which it can be left and entered. Specifically, the node position information records the coordinates of the node on the virtual environment agent; the node behavior semantic information records the behavior the virtual character agent needs to complete when it walks to the node, and may be represented by a corresponding behavior identifier; the node environment semantic information records, in real time, the state information of the virtual environment agent at the node's position, and may be represented by a corresponding state identifier.
Optionally, step S301 includes:
S30101: receiving a drawing instruction, and drawing the nodes of the semantic path and the directed connections between the nodes on the geometric graphics of the virtual environment agent.
S30102: receiving a behavior semantic information selection instruction, and adding node behavior semantic information to the nodes of the semantic path.
S30103: receiving an environment semantic selection instruction, and adding initial node environment semantic information to the nodes of the semantic path.
In S30101, a drawing instruction is received; specifically, a node selection instruction issued by the user on the geometric graphics of the virtual environment agent is received (that is, the user clicks on the virtual environment agent to select several designated positions as the nodes of the semantic path), together with a sliding instruction by which the user connects the selected nodes, so that the drawing of the nodes and of the directed connections between them on the geometric graphics of the virtual environment agent, that is, the drawing of the trajectory of the semantic path, is completed. In particular, a directed connection may be bidirectional; for example, the directed connection between nodes A and B of the semantic path may offer both the direction A→B and the direction B→A for the movement decisions of the virtual character agent. Optionally, the semantic path may be a ring-shaped semantic path as shown in Fig. 4, or a non-ring semantic path as shown in Fig. 5. Optionally, the semantic path may also be a composite path made up of several ring-shaped and/or non-ring semantic sub-paths, as shown in Fig. 6.
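A drawn semantic path of this kind might be represented as a node table plus a set of directed edges, with a two-way segment stored as two opposite edges. This is a hedged sketch: the field names behaviorState and environmentState reuse the variables from the text, while the container layout, the pos field and the function name next_nodes are assumptions.

```python
# Minimal representation of a drawn semantic path: nodes keyed by name,
# plus directed connections as (source, destination) pairs.
nodes = {
    "A": {"pos": (0.0, 0.0), "behaviorState": 28, "environmentState": 29},
    "B": {"pos": (5.0, 0.0), "behaviorState": 29, "environmentState": 31},
}
# The A-B segment is bidirectional, so it is stored as two directed edges.
edges = {("A", "B"), ("B", "A")}

def next_nodes(node, edges):
    """Nodes reachable in one step from `node` along the directed connections."""
    return sorted(dst for src, dst in edges if src == node)
```

From node A the character agent's movement decision sees the single successor B, and from B it sees A, reflecting the two-way connection.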
In S30102, a behavior semantic information selection instruction issued by the user on the nodes of the drawn semantic path is received, and node behavior semantic information is added to each node of the semantic path.
In S30103, an environment semantic information selection instruction issued by the user on the nodes of the drawn semantic path is received, and initial node environment semantic information is added to each node of the semantic path.
In this embodiment of the present application, since the semantic path processing unit can complete the construction of a semantic path on the virtual environment agent simply by receiving the user's instructions, without complex programming, the construction of semantic paths becomes more convenient and flexible, lowering the development difficulty of building virtual reality scenes.
Optionally, the information of a node further includes node passage state information; correspondingly, step S301 further includes:
S30104: receiving a node passage state selection instruction, and adding node passage state information to the nodes of the semantic path, the node passage state information including first identification information identifying a node as a path transition node and second identification information identifying a node as a suspended node.
In this embodiment of the present application, the information of each node on the semantic path further includes node passage state information, which identifies whether the node can be passed through and whether the node is a transition node of a composite path. Specifically, the node passage state information includes first identification information identifying the node as a path transition node and second identification information identifying the node as a suspended node. The node passage state information may be stored in a variable "nodeState". For example, as shown in Fig. 6, node Ij is a path transition node, so first identification information, which may be "nodeState=3", is added to node Ij to identify it as a path transition node, and the information of node Ij further includes the position information of all selectable next nodes (for example, the position information of I5 and J5). Suppose K1 in Fig. 6 is a suspended node that the virtual character agent is not allowed to pass through; second identification information is then added to node K1, for example "nodeState=4", indicating that the node is suspended and cannot be used as a passage point. If a node is neither a path transition node nor a suspended node, its node passage state information may simply default to "nodeState=0". In this embodiment of the present application, since the node information further includes the node passage state information, the virtual character agent can be guided more accurately in its path planning.
In S302, the action result information of the virtual environment agent is obtained, and the node environment semantic information of the nodes of the semantic path is updated according to the action result information.
When the virtual character agent performs a target action on a target node so that the state of the virtual environment agent changes, that is, when the virtual environment agent generates action result information, the semantic path processing unit promptly obtains the action result information and updates the node environment semantic information of the target node accordingly. For example, the original node environment semantic information of node A is "environmentState=29" (a door near node A is closed); after the virtual character agent performs a door-opening action on the target node, the action result information is that the door is open, and the environment semantic information of this node on the semantic path is updated accordingly, that is, the original "environmentState=29" is changed to "environmentState=30" (a door near the node is open).
In this embodiment of the present application, the semantic path processing unit constructs a semantic path by receiving the user's instructions; the semantic path consists of nodes and directed connections between the nodes, each node carrying node position information, node behavior semantic information, node environment semantic information and other information, and the node environment semantic information can be updated promptly according to the action result information generated by the virtual environment agent. Since the semantic path includes node position information and directed connections, it can guide the virtual character agent in path planning and movement; since it includes node behavior semantic information and node environment semantic information, it can guide the virtual character agent in performing target actions at the nodes; and since its node environment semantic information can be updated promptly according to the action result information generated by the virtual environment agent, it can record the updated node environment semantic information in time so that the virtual character agent can decide on its next target action. The semantic path can thus effectively guide the behavior of the virtual character agent on the virtual environment agent, and conveniently and effectively realize the interaction between the virtual character and the virtual environment in the virtual reality scene.
Embodiment 3
Fig. 7 is a schematic flowchart of the second virtual reality scene interaction method provided by an embodiment of the present application; the execution subject of this embodiment is the virtual character agent, which is an agent that has geometric information with a character appearance (for example, a simulated human figure or animal figure) and that can autonomously perform path planning and behavior decisions according to the information of the semantic path and its own task. Details are as follows:
In S701, a semantic path is obtained, the semantic path being a trajectory drawn on the geometric graphics of a virtual environment agent and consisting of nodes and directed connections between the nodes, the information of a node including at least node position information, node behavior semantic information and node environment semantic information.
The virtual character agent obtains the semantic path, which is a trajectory drawn in advance on the geometric graphics of the virtual environment agent and consisting of nodes and directed connections between the nodes; the information of a node includes at least node position information, node behavior semantic information and node environment semantic information. The node position information records the coordinates of the node on the virtual environment agent; the node behavior semantic information records the behavior the virtual character agent needs to complete when it walks to the node, and may be represented by a corresponding behavior identifier; the node environment semantic information records, in real time, the state information of the virtual environment agent at the node's position, and may be represented by a corresponding state identifier.
Optionally, the virtual character agent obtains from the semantic path processing unit an ordered node sequence group of the semantic path, each element of which includes the node's index, node position information, node behavior semantic information, node environment semantic information and so on. Optionally, the virtual character agent has a visual recognition function and visually recognizes the drawn semantic path trajectory from the virtual reality scene.
In S702, the virtual character agent moves according to its own task and the semantic path.
The virtual character agent performs path planning and moves according to its own task and the obtained semantic path. For example, if the task of the virtual character agent is to patrol a designated area, the virtual character moves according to the semantic path trajectory contained in the designated area.
Optionally, step S702 specifically includes:
moving according to the agent's own task, the node position information of the semantic path and the node passage state information.
Optionally, the nodes of the semantic path further include node passage state information; when planning its route, the virtual character agent considers not only its own task and the node position information of the semantic path but also the node passage state information. For example, if it detects that the node passage state information of node K1 in the semantic path is "nodeState=4", the node is a suspended node, and the virtual character agent must avoid it when planning its route. The virtual character agent then moves along the planned route.
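Route planning that avoids suspended nodes can be sketched as a breadth-first search that simply refuses to expand any node whose nodeState is 4. Only the nodeState=4 convention comes from the text; the graph, the node names other than K1 and the function name plan_route are illustrative assumptions.

```python
from collections import deque

def plan_route(edges, node_state, start, goal):
    """Shortest-hop route from start to goal over the directed connections,
    never entering a node whose passage state marks it as suspended (4)."""
    queue, seen = deque([[start]]), {start}
    while queue:
        route = queue.popleft()
        if route[-1] == goal:
            return route
        for src, dst in edges:
            if src == route[-1] and dst not in seen and node_state.get(dst, 0) != 4:
                seen.add(dst)
                queue.append(route + [dst])
    return None  # goal unreachable once suspended nodes are excluded
```

With edges A→K1→B and A→C→B and K1 suspended, the planner detours through C instead of the shorter leg through K1.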
In S703, when the position information of the virtual character agent coincides with the node position information of a node of the semantic path, a target action is performed according to the node behavior semantic information and node environment semantic information of the node.
When the virtual character agent detects that its own position information coincides with the node position information of one of the nodes of the semantic path, the virtual character agent has currently moved to that node, and it then performs a target action according to the node behavior semantic information and node environment semantic information of the node.
Specifically, step S703 includes:
S70301: when the position information of the virtual character agent coincides with the node position information of a node of the semantic path, determining a target behavior according to the node behavior semantic information;
When the position information of the virtual character agent coincides with the node position information of a node of the semantic path, the node behavior semantic information of the node is obtained and the target behavior the virtual character agent needs to complete at the node is determined. For example, when the position information of the virtual character agent coincides with the node position information of node A, the node behavior semantic information of node A, "behaviorState=28" (a door needs to be opened or closed here), is obtained, and the target behavior the virtual character agent needs to complete at this node is determined to be a door operation behavior.
S70302: determining and performing a target action according to the execution bases corresponding to the target behavior and the node environment semantic information, an execution base being a two-tuple that stores environment semantic information and a target action in correspondence.
The execution bases of each behavior are stored in the virtual character agent in advance; an execution base is a two-tuple (b, z) that stores environment semantic information and a target action in correspondence, meaning that the target action b is performed when the environment semantic information is z. The several execution bases corresponding to the target behavior are obtained, and the target action b corresponding to the current environment semantic information is looked up according to the node environment semantic information. For example, suppose the target behavior is a door operation behavior and two execution bases for it are stored in the virtual character agent in advance: (door-opening action, door closed) and (door-closing action, door open); if the current node environment semantic information is environmentState=29 (a door near the node is closed), the virtual character agent queries the two execution bases of the door operation behavior according to the node environment semantic information and determines that the target action to be performed now is the door-opening action.
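The execution-base lookup can be sketched directly: each behavior maps to a list of (target action, required environment state) two-tuples, and the tuple whose stored state matches the current node environment semantics selects the action. The dictionary layout, the behavior key "door_operation" and the function name pick_action are assumptions for illustration; the door codes follow the example in the text.

```python
# Execution bases stored per behavior: (target action, environment state
# in which that action applies).
EXECUTION_BASES = {
    "door_operation": [
        ("Action_OpenDoor", 29),   # open the door when it is closed
        ("Action_CloseDoor", 30),  # close the door when it is open
    ],
}

def pick_action(behavior, env_state):
    """Pick the target action whose stored environment semantic information
    matches the current node environment semantic information."""
    for action, required_state in EXECUTION_BASES[behavior]:
        if required_state == env_state:
            return action
    return None  # no execution base applies in this environment state
```

At a node whose environment state is 29, the door operation behavior resolves to the door-opening action; at 30 it resolves to the door-closing action.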
Specifically, performing the target action includes:
playing a target animation, and conveying the information of the target action to the virtual environment agent.
The virtual character agent performs the door-opening action specifically by running the target animation itself, that is, by making its own geometric information undergo a specified dynamic change to show that it has performed the target action. For example, if the target action is a door-opening action, the target animation stored in correspondence with the door-opening action is played, and the animation shows on the virtual scene picture that the virtual character agent has performed the door-opening action. While or after playing the target animation, the virtual character agent conveys the information of the target action to the virtual environment agent; the action identifier corresponding to the target action may be conveyed to tell the virtual environment agent that the virtual character has just performed the target action.
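One hedged way to model "play the target animation, then convey the action's identifier to the environment agent" is an animation callback plus a simple message hand-off. Every name here (EnvironmentAgent, inbox, execute_target_action, play) is an assumption made for illustration, not API from the application.

```python
class EnvironmentAgent:
    """Stand-in for the virtual environment agent: it collects the
    (node, action identifier) messages conveyed by the character agent."""
    def __init__(self):
        self.inbox = []

    def receive(self, node, action_id):
        self.inbox.append((node, action_id))

def execute_target_action(env_agent, node, action_id, play=lambda clip: None):
    # Run the animation clip associated with the action (no-op by default),
    # then convey the action's identifier to the environment agent.
    play(action_id)
    env_agent.receive(node, action_id)
```

After execute_target_action(env, "A", "Action_OpenDoor") the environment agent holds the message it needs for the state-transition query of the next step.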
In this embodiment of the present application, the virtual character agent obtains the semantic path, moves according to the node position information of the semantic path, and, on reaching each node, performs a target action according to the node behavior semantic information and node environment semantic information of that node on the semantic path; that is, the semantic path conveniently and effectively guides the behavior decisions of the virtual character on the virtual environment agent, effectively realizing the interaction between the virtual character and the virtual environment in the virtual reality scene.
Embodiment 4
Fig. 8 is a schematic flowchart of the third virtual reality scene interaction method provided by an embodiment of the present application; the execution subject of this embodiment is the virtual environment agent, which is an agent that has the geometric information of a three-dimensional simulated environment and that can perceive its surroundings and act on them through actuators. Details are as follows:
In S801, the information of a target action performed by a virtual character agent at a node of a semantic path is obtained, the semantic path being a trajectory drawn on the geometric graphics of the virtual environment agent and consisting of nodes and directed connections between the nodes, the information of a node including at least node position information, node behavior semantic information and node environment semantic information.
The semantic path in this embodiment of the present application is a trajectory drawn in advance on the geometric graphics of the virtual environment agent and consisting of nodes and directed connections between the nodes; the information of a node includes at least node position information, node behavior semantic information and node environment semantic information. The node position information records the coordinates of the node on the virtual environment agent; the node behavior semantic information records the behavior the virtual character agent needs to complete when it walks to the node, and may be represented by a corresponding behavior identifier; the node environment semantic information records, in real time, the state information of the virtual environment agent at the node's position, and may be represented by a corresponding state identifier.
The virtual environment agent obtains the information of the target action performed by the virtual character agent at a node of the semantic path; the information of the target action may be the action identifier corresponding to the target action. For example, the virtual environment agent obtains the information of the door-opening action performed by the virtual character agent at node A, whose action identifier may be "Action_OpenDoor".
In S802, action result information is obtained according to the information of the target action and the node environment semantic information of the node.
According to the information of the target action and the node environment semantic information of the node, the change of the environment state at the position of the virtual environment agent corresponding to the node after the target action acts on the virtual environment agent is obtained; this information is called the action result information.
Specifically, step S802 includes:
querying the environment state transition mapping relationship of the virtual environment agent according to the information of the target action and the node environment semantic information of the node, to obtain the action result information.
An environment state transition mapping relationship, which may also be called the state-transition axiom of the virtual environment agent, is prestored in the virtual environment agent. Optionally, the environment state transition mapping relationship may be implemented as a state-transition mapping table, each entry of which stores the information of a target action, the node environment semantic information of the node before the action is performed, and the action result information corresponding to performing that target action. By querying the state-transition mapping table with the obtained information of the target action and the node environment semantic information of the current node, the action result information corresponding to performing the target action at the current node can be obtained. For example, one entry of the state-transition mapping table prestored in the virtual environment agent stores the following:

Information of the target action | Node environment semantic information before the action | Action result information
Action_OpenDoor | environmentState=29 | environmentState=30

When the virtual environment agent obtains "Action_OpenDoor" (a door-opening action) as the information of the target action of the virtual character at node A, and the node environment semantic information before the action is "environmentState=29" (a door near the node is closed), the corresponding action result information obtained is "environmentState=30" (the door at the node has changed to the open state).
In S803, the semantic path processing unit is instructed to update the node environment semantic information of the node of the semantic path according to the action result information.
According to the action result information, the semantic path processing unit is instructed to update the node environment semantic information of the node. For example, after the action result information "environmentState=30" is obtained at node A, the semantic path processing unit is instructed to change the node environment semantic information of node A from the original "environmentState=29" (a door near the node is closed) to "environmentState=30" (a door near the node is open).
In this embodiment of the present application, after the virtual environment agent obtains the information of a target action at a node and derives the corresponding action result information, it can instruct the semantic path processing unit to promptly update the node environment semantic information of that node of the semantic path according to the action result information, so that the virtual character agent can decide on its next target action; through the virtual environment agent's update records on the semantic path, the interaction between the virtual character and the virtual environment in the virtual reality scene can therefore be realized conveniently and effectively.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
Embodiment 5
An embodiment of the present application further provides a semantic path processing unit; as shown in Fig. 9, for ease of description, only the parts relevant to this embodiment are shown:
The semantic path processing unit includes an instruction receiving module 91 and a node environment semantic information updating module 92, wherein:
the instruction receiving module 91 is configured to receive a user instruction and construct a semantic path, the semantic path being a trajectory drawn on the geometric graphics of a virtual environment agent and consisting of nodes and directed connections between the nodes, the information of a node including at least node position information, node behavior semantic information and node environment semantic information.
Optionally, the instruction receiving module 91 specifically includes a drawing module, a behavior semantic information selection module and an environment semantic selection module:
the drawing module is configured to receive a drawing instruction and draw the nodes of the semantic path and the directed connections between the nodes on the geometric graphics of the virtual environment agent;
the behavior semantic information selection module is configured to receive a behavior semantic information selection instruction and add node behavior semantic information to the nodes of the semantic path;
the environment semantic selection module is configured to receive an environment semantic selection instruction and add initial node environment semantic information to the nodes of the semantic path.
Optionally, the instruction receiving module further includes a node passage state selection module, configured to receive a node passage state selection instruction and add node passage state information to the nodes of the semantic path, the node passage state information including first identification information identifying a node as a path transition node and second identification information identifying a node as a suspended node.
The node environment semantic information updating module 92 is configured to obtain the action result information of the virtual environment agent and update the node environment semantic information of the nodes of the semantic path according to the action result information.
Embodiment 6
An embodiment of the present application further provides a virtual character agent; as shown in Fig. 10, for ease of description, only the parts relevant to this embodiment are shown:
The virtual character agent includes a semantic path obtaining module 101, a movement module 102 and a target action execution module 103, wherein:
the semantic path obtaining module 101 is configured to obtain a semantic path, the semantic path being a trajectory drawn on the geometric graphics of a virtual environment agent and consisting of nodes and directed connections between the nodes, the information of a node including at least node position information, node behavior semantic information and node environment semantic information;
the movement module 102 is configured to move according to the agent's own task and the semantic path.
Optionally, the movement module 102 is specifically configured to move according to the agent's own task, the node position information of the semantic path and the node passage state information.
The target action execution module 103 is configured to, when the position information of the virtual character agent coincides with the node position information of a node of the semantic path, perform a target action according to the node behavior semantic information and node environment semantic information of the node.
Optionally, the target action execution module 103 is specifically configured to: when the position information of the virtual character agent coincides with the node position information of a node of the semantic path, determine a target behavior according to the node behavior semantic information; and determine and perform a target action according to the execution bases corresponding to the target behavior and the node environment semantic information, an execution base being a two-tuple that stores environment semantic information and a target action in correspondence.
Optionally, performing the target action specifically includes: playing a target animation and conveying the information of the target action to the virtual environment agent.
Embodiment 7
An embodiment of the present application further provides a virtual environment agent; as shown in Fig. 11, for ease of description, only the parts relevant to this embodiment are shown:
The virtual environment agent includes a target action information obtaining module 111, an action result information obtaining module 112 and an update instructing module 113, wherein:
the target action information obtaining module 111 is configured to obtain the information of a target action performed by a virtual character agent at a node of a semantic path, the semantic path being a trajectory drawn on the geometric graphics of the virtual environment agent and consisting of nodes and directed connections between the nodes, the information of a node including at least node position information, node behavior semantic information and node environment semantic information;
the action result information obtaining module 112 is configured to obtain action result information according to the information of the target action and the node environment semantic information of the node.
Optionally, the action result information obtaining module 112 is specifically configured to query the environment state transition mapping relationship of the virtual environment agent according to the information of the target action and the node environment semantic information of the node, to obtain the action result information.
The update instructing module 113 is configured to instruct the semantic path processing unit to update the node environment semantic information of the node of the semantic path according to the action result information.
It should be noted that, since the information exchange between and execution processes of the above units/modules are based on the same conception as the method embodiments of the present application, their specific functions and the technical effects they bring can be found in the method embodiment section and are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the above functional units and modules is used only as an example; in practical applications, the above functions may be assigned to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from one another and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Embodiment 8
Fig. 12 is a schematic diagram of a terminal device provided by an embodiment of the present invention. As shown in Fig. 12, the terminal device 12 of this embodiment includes a processor 120, a memory 121 and a computer program 122, such as a virtual reality scene interaction program, stored in the memory 121 and runnable on the processor 120. When executing the computer program 122, the processor 120 implements the steps in each of the above virtual reality scene interaction method embodiments, for example steps S301 to S302 shown in Fig. 3, steps S701 to S703 shown in Fig. 7, or steps S801 to S803 shown in Fig. 8. Alternatively, when executing the computer program 122, the processor 120 implements the functions of the modules/units in each of the above apparatus embodiments, for example the functions of modules 91 to 92 shown in Fig. 9, modules 101 to 103 shown in Fig. 10, or modules 111 to 113 shown in Fig. 11.
Exemplarily, the computer program 122 may be divided into one or more modules/units, which are stored in the memory 121 and executed by the processor 120 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 122 in the terminal device 12. For example, the computer program 122 may be divided into an instruction receiving module and a node environment semantic information updating module; or it may be divided into a semantic path obtaining module, a movement module and a target action execution module; or it may be divided into a target action information obtaining module, an action result information obtaining module and an update instructing module.
The terminal device 12 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, the processor 120 and the memory 121. Those skilled in the art can understand that Fig. 12 is only an example of the terminal device 12 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, or combine certain components, or have different components; for example, the terminal device may further include input/output devices, network access devices, buses and so on.
The so-called processor 120 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 121 may be an internal storage unit of the terminal device 12, such as a hard disk or internal memory of the terminal device 12. The memory 121 may also be an external storage device of the terminal device 12, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card provided on the terminal device 12. Further, the memory 121 may include both an internal storage unit and an external storage device of the terminal device 12. The memory 121 is used to store the computer program and the other programs and data required by the terminal device; the memory 121 may also be used to temporarily store data that has been or will be output.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or recorded in a given embodiment, reference may be made to the relevant descriptions of the other embodiments.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative; for example, the division of the modules or units is only a division of logical functions, and there may be other divisions in actual implementation, for example several units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses or units, and may be electrical, mechanical or of other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over several network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in the form of hardware or in the form of software functional units.
If the integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the processes of the methods of the above embodiments through a computer program instructing relevant hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, certain intermediate forms, and so on. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them; although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be included within the protection scope of the present invention.

Claims (15)

  1. A virtual reality scene, characterized in that the virtual reality scene comprises a virtual environment agent, a virtual character agent and a semantic path processing unit;
    the semantic path processing unit is configured to construct a semantic path, the semantic path being a trajectory drawn on the geometric graphics of the virtual environment agent and consisting of nodes and directed connections between the nodes, the information of a node comprising at least node position information, node behavior semantic information and node environment semantic information;
    the virtual character agent is configured to obtain the semantic path, move according to its own task and the semantic path, and, when the position information of the virtual character agent coincides with the node position information of a node of the semantic path, perform a target action according to the node behavior semantic information and node environment semantic information of the node;
    the virtual environment agent is configured to obtain the information of the target action performed by the virtual character agent at the node of the semantic path, obtain action result information according to the information of the target action and the node environment semantic information of the node, and instruct the semantic path processing unit to update the node environment semantic information of the node of the semantic path according to the action result information.
  2. A virtual reality scene interaction method, characterized in that the method is applied to a semantic path processing unit and comprises:
    receiving a user instruction and constructing a semantic path, the semantic path being a trajectory drawn on the geometric graphics of a virtual environment agent and consisting of nodes and directed connections between the nodes, the information of a node comprising at least node position information, node behavior semantic information and node environment semantic information;
    obtaining action result information of the virtual environment agent, and updating the node environment semantic information of the nodes of the semantic path according to the action result information.
  3. The virtual reality scene interaction method according to claim 2, characterized in that the receiving a user instruction and constructing a semantic path comprises:
    receiving a drawing instruction, and drawing the nodes of the semantic path and the directed connections between the nodes on the geometric graphics of the virtual environment agent;
    receiving a behavior semantic information selection instruction, and adding node behavior semantic information to the nodes of the semantic path;
    receiving an environment semantic selection instruction, and adding initial node environment semantic information to the nodes of the semantic path.
  4. The virtual reality scene interaction method according to claim 3, characterized in that the information of a node further comprises node passage state information, and the receiving a user instruction and constructing a semantic path further comprises:
    receiving a node passage state selection instruction, and adding node passage state information to the nodes of the semantic path, the node passage state information comprising first identification information identifying a node as a path transition node and second identification information identifying a node as a suspended node.
  5. A virtual reality scene interaction method, characterized in that the method is applied to a virtual character agent and comprises:
    obtaining a semantic path, the semantic path being a trajectory drawn on the geometric graphics of a virtual environment agent and consisting of nodes and directed connections between the nodes, the information of a node comprising at least node position information, node behavior semantic information and node environment semantic information;
    moving according to the agent's own task and the semantic path;
    when the position information of the virtual character agent coincides with the node position information of a node of the semantic path, performing a target action according to the node behavior semantic information and node environment semantic information of the node.
  6. The virtual reality scene interaction method according to claim 5, characterized in that the information of the nodes of the semantic path further comprises node passage state information, and the moving according to the agent's own task and the semantic path comprises:
    moving according to the agent's own task, the node position information of the semantic path and the node passage state information.
  7. The virtual reality scene interaction method according to claim 5, characterized in that the performing a target action according to the node behavior semantic information and node environment semantic information of the node when the position information of the virtual character agent coincides with the node position information of the node of the semantic path comprises:
    when the position information of the virtual character agent coincides with the node position information of a node of the semantic path, determining a target behavior according to the node behavior semantic information;
    determining and performing a target action according to the execution bases corresponding to the target behavior and the node environment semantic information, an execution base being a two-tuple that stores environment semantic information and a target action in correspondence.
  8. The virtual reality scene interaction method according to claim 5, characterized in that the performing a target action comprises:
    playing a target animation, and conveying the information of the target action to the virtual environment agent.
  9. A virtual reality scene interaction method, characterized in that the method is applied to a virtual environment agent and comprises:
    obtaining the information of a target action performed by a virtual character agent at a node of a semantic path, the semantic path being a trajectory drawn on the geometric graphics of the virtual environment agent and consisting of nodes and directed connections between the nodes, the information of a node comprising at least node position information, node behavior semantic information and node environment semantic information;
    obtaining action result information according to the information of the target action and the node environment semantic information of the node;
    instructing a semantic path processing unit to update the node environment semantic information of the node of the semantic path according to the action result information.
  10. The virtual reality scene interaction method according to claim 9, characterized in that the obtaining action result information according to the information of the target action and the node environment semantic information of the node comprises:
    querying the environment state transition mapping relationship of the virtual environment agent according to the information of the target action and the node environment semantic information of the node, to obtain the action result information.
  11. A semantic path processing unit, characterized by comprising:
    an instruction receiving module, configured to receive a user instruction and construct a semantic path, the semantic path being a trajectory drawn on the geometric graphics of a virtual environment agent and consisting of nodes and directed connections between the nodes, the information of a node comprising at least node position information, node behavior semantic information and node environment semantic information;
    a node environment semantic information updating module, configured to obtain action result information of the virtual environment agent and update the node environment semantic information of the nodes of the semantic path according to the action result information.
  12. A virtual character agent, characterized by comprising:
    a semantic path obtaining module, configured to obtain a semantic path, the semantic path being a trajectory drawn on the geometric graphics of a virtual environment agent and consisting of nodes and directed connections between the nodes, the information of a node comprising at least node position information, node behavior semantic information and node environment semantic information;
    a movement module, configured to move according to the agent's own task and the semantic path;
    a target action execution module, configured to, when the position information of the virtual character agent coincides with the node position information of a node of the semantic path, perform a target action according to the node behavior semantic information and node environment semantic information of the node.
  13. A virtual environment agent, characterized by comprising:
    a target action information obtaining module, configured to obtain the information of a target action performed by a virtual character agent at a node of a semantic path, the semantic path being a trajectory drawn on the geometric graphics of the virtual environment agent and consisting of nodes and directed connections between the nodes, the information of a node comprising at least node position information, node behavior semantic information and node environment semantic information;
    an action result information obtaining module, configured to obtain action result information according to the information of the target action and the node environment semantic information of the node;
    an update instructing module, configured to instruct the semantic path processing unit to update the node environment semantic information of the node of the semantic path according to the action result information.
  14. A terminal device, comprising a memory, a processor and a computer program stored in the memory and runnable on the processor, characterized in that, when the processor executes the computer program, the terminal device is caused to implement the steps of the method according to any one of claims 2 to 10.
  15. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, a terminal device is caused to implement the steps of the method according to any one of claims 2 to 10.
PCT/CN2019/120515 2019-11-25 2019-11-25 VR scene and interaction method thereof, and terminal device WO2021102615A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201980003474.2A CN111095170B (zh) 2019-11-25 2019-11-25 虚拟现实场景及其交互方法、终端设备
PCT/CN2019/120515 WO2021102615A1 (zh) 2019-11-25 2019-11-25 虚拟现实场景及其交互方法、终端设备
US17/311,602 US11842446B2 (en) 2019-11-25 2019-11-25 VR scene and interaction method thereof, and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/120515 WO2021102615A1 (zh) 2019-11-25 2019-11-25 虚拟现实场景及其交互方法、终端设备

Publications (1)

Publication Number Publication Date
WO2021102615A1 true WO2021102615A1 (zh) 2021-06-03

Family

ID=70400271

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/120515 WO2021102615A1 (zh) 2019-11-25 2019-11-25 虚拟现实场景及其交互方法、终端设备

Country Status (3)

Country Link
US (1) US11842446B2 (zh)
CN (1) CN111095170B (zh)
WO (1) WO2021102615A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111773736B * 2020-07-03 2024-02-23 珠海金山数字网络科技有限公司 Behavior generation method and apparatus for a virtual character
CN112088349A * 2020-07-31 2020-12-15 深圳信息职业技术学院 Target tracking method and apparatus, terminal device and storage medium
CN112989324B * 2021-03-10 2024-07-19 中国民航信息网络股份有限公司 Data interaction method and apparatus, electronic device and storage medium
CN113012300A * 2021-04-02 2021-06-22 北京隐虚等贤科技有限公司 Method, apparatus and storage medium for creating immersive interactive content
CN114706381A * 2022-03-04 2022-07-05 达闼机器人股份有限公司 Agent training method and apparatus, storage medium and electronic device
CN115222926A * 2022-07-22 2022-10-21 领悦数字信息技术有限公司 Method, apparatus and medium for planning a route in a virtual environment
CN116483198B * 2023-03-23 2024-08-30 广州卓远虚拟现实科技股份有限公司 Interactive control method, system and device for a virtual motion scene

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110173565A1 (en) * 2010-01-12 2011-07-14 Microsoft Corporation Viewing media in the context of street-level images
CN107103644A * 2017-04-21 2017-08-29 腾讯科技(深圳)有限公司 Method and apparatus for controlling objects in a virtual scene
CN109960545A * 2019-03-29 2019-07-02 网易(杭州)网络有限公司 Virtual object control method, system, apparatus, medium and electronic device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7664313B1 (en) * 2000-10-23 2010-02-16 At&T Intellectual Property Ii, L.P. Text-to scene conversion
CN101075349A * 2007-06-22 2007-11-21 珠海金山软件股份有限公司 Method for expressing presentation animation effects in SVG
US10977662B2 (en) * 2014-04-28 2021-04-13 RetailNext, Inc. Methods and systems for simulating agent behavior in a virtual environment
US10406437B1 (en) * 2015-09-30 2019-09-10 Electronic Arts Inc. Route navigation system within a game application environment
US10402690B2 (en) * 2016-11-07 2019-09-03 Nec Corporation System and method for learning random-walk label propagation for weakly-supervised semantic segmentation
CN106940594B * 2017-02-28 2019-11-22 深圳信息职业技术学院 Virtual human and operation method thereof
CN110297697B * 2018-03-21 2022-02-18 北京猎户星空科技有限公司 Robot action sequence generation method and apparatus
CN109582140A 2018-11-23 2019-04-05 哈尔滨工业大学 Visual saliency evaluation system and method for indoor building wayfinding elements based on virtual reality and eye tracking
CN109806584A 2019-01-24 2019-05-28 网易(杭州)网络有限公司 Game scene generation method and apparatus, electronic device, and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU, XIANMEI, BAI XIAN, ZHANG LAN-LAN: "Research on Semantic Environment Model for Virtual Human", COMPUTER ENGINEERING AND DESIGN, vol. 32, no. 11, 31 December 2011 (2011-12-31), pages 3904 - 3907, XP055816153, DOI: 10.16208/j.issn1000-7024.2011.11.071 *
XU, SHOUXIANG, XU RENFENG, YU CHENGLONG, MA CHAO: "Construction of 3D Semantic Entities Based on Fluent Calculus Agent and Their Interaction Perception", JOURNAL OF SHENZHEN INSTITUTE OF INFORMATION TECHNOLOGY, vol. 16, no. 2, 15 June 2018 (2018-06-15), pages 41 - 49, XP055816162, ISSN: 1672-6332 *

Also Published As

Publication number Publication date
US11842446B2 (en) 2023-12-12
US20220277523A1 (en) 2022-09-01
CN111095170A (zh) 2020-05-01
CN111095170B (zh) 2021-01-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19954340; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 19954340; Country of ref document: EP; Kind code of ref document: A1