WO2023174097A1 - Interaction method, apparatus, device, and computer-readable storage medium - Google Patents
Interaction method, apparatus, device, and computer-readable storage medium
- Publication number
- WO2023174097A1 (PCT/CN2023/080020)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- information
- control instruction
- target control
- pose
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
- A63F13/525—Changing parameters of virtual cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Definitions
- the present disclosure belongs to the field of virtual reality technology, and in particular relates to an interaction method, apparatus, device, and computer-readable storage medium.
- in the related technology, the point at which the ray associated with the handle's pointing intersects the interface in the virtual scene is generally determined based only on the orientation of the handle, and further operations on the handle are responded to directly. In this way, false responses easily occur when determining the user's operation on the interface, and the anti-interference ability is poor.
- the embodiments of the present disclosure provide an implementation solution that is different from the related technology to solve the technical problem in the related technology of poor anti-interference ability in the way the user interacts with the interface in the virtual reality scene.
- the present disclosure provides an interaction method, including:
- the target control area displays at least one element
- an interactive device including:
- the acquisition module is used to obtain the target control instruction corresponding to the target object;
- a first determination module configured to use the target control instruction to determine a target control area corresponding to the target control instruction in the virtual scene, where at least one element is displayed in the target control area;
- a second determination module configured to determine a target element according to the target control instruction and the target control area, and the target element is included in the at least one element
- a response module configured to respond to the target control instruction based on the target element.
- an electronic device including:
- the processor is configured to perform the first aspect or any method in each possible implementation manner of the first aspect by executing the executable instructions.
- embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored.
- the computer program, when executed by a processor, implements the method of the first aspect or any possible implementation of the first aspect.
- embodiments of the present disclosure provide a computer program product, including a computer program that, when executed by a processor, implements the first aspect or any method in each possible implementation manner of the first aspect.
- the present disclosure can obtain the target control instruction corresponding to the target object; use the target control instruction to determine the target control area corresponding to the target control instruction in the virtual scene, where the target control area displays at least one element; determine a target element according to the target control instruction and the target control area, the target element being included in the at least one element; and respond to the target control instruction based on the target element. This solution determines the target element in combination with identification of the area where the element is located, and also has strong anti-interference ability, which improves the user experience.
- Figure 1 is a schematic structural diagram of an interactive system provided by an embodiment of the present disclosure
- Figure 2a is a schematic flowchart of an interaction method provided by an embodiment of the present disclosure
- Figure 2b is a schematic diagram of a target control area provided by an embodiment of the present disclosure.
- Figure 2c is a schematic scene diagram of a display method of target ray elements provided by an embodiment of the present disclosure
- Figure 2d is a schematic diagram of a display method of collision special effects provided by an embodiment of the present disclosure
- Figure 2e is a schematic diagram of the relationship between the operation area and the target control area provided by an embodiment of the present disclosure
- Figure 3 is a schematic structural diagram of an interactive device provided by an embodiment of the present disclosure.
- FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- VR (Virtual Reality) is a practical technology that draws on computing, electronic information, and simulation. Its basic implementation is to use a computer to simulate a virtual environment, giving people a sense of immersion in that environment.
- FIG. 1 is a schematic structural diagram of an interactive system provided by an exemplary embodiment of the present disclosure.
- the system includes: a control device 11 and a display device 10.
- the control device 11 is connected to the display device 10.
- the connection may be made via Bluetooth, or via a network.
- control device 11 is used for the user to trigger a target control instruction and send the target control instruction to the display device 10;
- the display device 10 is used to obtain the target control instruction corresponding to the target object; use the target control instruction to determine the target control area corresponding to the target control instruction in the virtual scene, where at least one element is displayed in the target control area; determine a target element according to the target control instruction and the target control area, the target element being included in the at least one element; and respond to the target control instruction based on the target element.
- control device 11 may be a handle 112 or a pen-shaped holding device 111.
- the display device can also directly recognize the gestures of the user's hand, so that target control instructions are given through the gesture actions of the user's hand, thereby controlling the display device.
- the display device 10 may be a head-mounted device.
- the display device 10 may include a data processing device and a head-mounted device, wherein the data processing device is used to obtain a target control instruction corresponding to the target object; use the target control instruction to determine a target control area in the virtual scene corresponding to the target control instruction, where the target control area displays at least one element; determine a target element according to the target control instruction and the target control area, the target element being included in the at least one element; and, based on the target element, respond to the target control instruction through the head-mounted device; the head-mounted device is used to display the corresponding screen.
- the data processing device may be a user terminal device or a device with data processing functions such as a PC.
- Figure 2a is a schematic flowchart of an interaction method provided by an exemplary embodiment of the present disclosure.
- the method can be applied to an extended reality (XR) device, for example the display device in the embodiment corresponding to Figure 1.
- the method includes at least the following steps:
- the aforementioned target object may be the control device in the embodiment corresponding to FIG. 1 , that is, a handle or a pen-like holding device.
- the aforementioned target object may also be the user's hand.
- the target control instruction may be a control instruction received from the control device.
- the target object is the user's hand, regarding the determination method of the target control instruction, the above method further includes:
- S02. Determine the user's action information based on the image information
- if the action information is included in the preset gesture information set, the step of obtaining the target control instruction corresponding to the target object is triggered.
- the target control instruction is the control instruction corresponding to the action information.
- the aforementioned image information may be captured by a camera set installed on the display device, or may be captured by a shooting device installed outside the display device and then sent to the display device.
- determining the user's action information based on the image information includes:
- the image information is input into a preset action recognition model, and the action recognition model is executed to obtain the user's action information.
- the action recognition model is a machine learning model obtained by training multiple sets of sample data.
- the aforementioned action information includes one or more of the following: movement trajectory information of the hand, movement information of at least one finger joint of the hand.
- the movement trajectory information of the hand includes the movement trajectory information of the palm center, and the motion information of the finger joints may be the motion information of the finger joints relative to the palm center.
- the motion information of the finger joints may be the movement information of the three-dimensional positions of the joint points corresponding to the finger joints.
- the gesture information set includes multiple sets of hand action information and gesture types corresponding to each set of hand action information; the above method also includes:
- if the similarity value corresponding to the target result (that is, the highest similarity value in the matching result) is greater than the preset threshold, it is determined that the user's action information is included in the preset gesture information set.
- the gesture type corresponding to the target result is the target gesture type corresponding to the user's action information.
- the control instruction corresponding to the target gesture type is the control instruction corresponding to the action information (that is, the user's action information).
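The gesture-matching flow above (compare the user's action information against a preset gesture information set, pick the highest-similarity result, and accept it only above a preset threshold) could be sketched as follows. The similarity measure, the feature vectors, and the gesture and instruction names are illustrative assumptions, not taken from the disclosure:

```python
import math

def cosine_similarity(a, b):
    # A simple similarity measure between two action feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Preset gesture information set: gesture type -> reference action features.
GESTURE_SET = {
    "fist": [1.0, 0.0, 0.0],
    "index_press": [0.0, 1.0, 0.0],
}
# Gesture type -> control instruction (hypothetical instruction names).
GESTURE_TO_INSTRUCTION = {"fist": "confirm", "index_press": "click"}

def match_gesture(action_info, threshold=0.8):
    """Return the control instruction for the best-matching gesture, or
    None when the highest similarity does not exceed the preset threshold."""
    scores = {g: cosine_similarity(action_info, ref)
              for g, ref in GESTURE_SET.items()}
    target_gesture = max(scores, key=scores.get)  # highest similarity = target result
    if scores[target_gesture] > threshold:
        return GESTURE_TO_INSTRUCTION[target_gesture]
    return None
```

A gesture matching a reference exactly yields its instruction, while an unrelated action falls below the threshold and is rejected, which is the anti-false-response behavior the disclosure describes.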
- the aforementioned pose information may include position information and attitude angle information, where the position information is three-dimensional coordinate information, and the attitude angle information is rotation information around a coordinate axis corresponding to the three-dimensional coordinate information.
- the attitude angle information includes: angle information of rotation around the X-axis, angle information of rotation around the Y-axis, and angle information of rotation around the Z-axis.
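The pose information described above (three-dimensional position plus attitude angles about the X, Y, and Z axes) could be represented minimally as below; the field names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Three-dimensional coordinate information (position).
    x: float
    y: float
    z: float
    # Attitude angle information: rotation about each coordinate axis, in degrees.
    roll: float   # rotation about the X-axis
    pitch: float  # rotation about the Y-axis
    yaw: float    # rotation about the Z-axis
```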
- using the target control instruction to determine the target control area in the virtual scene corresponding to the target control instruction may include:
- determining the target control area according to the area corresponding to the pose information may specifically include: using the area corresponding to the pose information as the target control area.
- obtaining the pose information corresponding to the target object includes:
- the pose information received from the target object is used as the pose information corresponding to the target object, where the pose information may or may not be included in the target control instruction.
- the pose information received from the target object is the target object's own pose information, determined by the target object using its own measurement module.
- image information corresponding to the target object is obtained; pose information corresponding to the target object is determined based on the image information.
- target object in this disclosure can also be other parts of the user, such as arms, eyes, face, etc.
- the posture information corresponding to the hand may include the three-dimensional position information of the center of the palm and the pointing information of the hand, where the pointing information of the hand is the direction of the line from the wrist joint of the hand toward the root joint of the middle finger.
- the pointing information may include the attitude angle information about the three coordinate axes corresponding to the three-dimensional position information of the center of the palm.
- using the pose information to determine the area in the virtual scene corresponding to the pose information includes:
- the first correspondence information includes: multiple pose intervals and a region corresponding to each pose interval in the multiple pose intervals.
- determining the area corresponding to the pose information based on the target area includes: using the target area as the area corresponding to the pose information.
- the size of the area corresponding to the pose information may be related to the sensitivity of the control device.
- correspondingly, the size of the target control area corresponding to the target control instruction is also related to the sensitivity of the control device.
- for example, the target control area corresponding to the target control instruction corresponding to the pose information abc may be area A in Figure 2b.
- Figure 2b shows three sizes of the target control area; the larger the target control area, the stronger the anti-interference ability. E in Figure 2b is the target element.
- the above method also includes:
- the size parameter of the area corresponding to each pose interval in the first correspondence information is determined according to the sensitivity level; wherein, the higher the sensitivity level, the larger the area corresponding to the same pose interval.
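The first correspondence information (pose intervals mapped to areas, with the area size parameter scaled up as the sensitivity level rises) could be sketched as follows. The interval boundaries, area names, and scaling factor are illustrative assumptions:

```python
def build_first_correspondence(sensitivity_level):
    """Build first correspondence information: a list of (pose interval, area).
    Higher sensitivity level -> larger area for the same pose interval."""
    base_size = 1.0
    size = base_size * (1.0 + 0.5 * sensitivity_level)  # assumed scaling rule
    # Each yaw interval (degrees) maps to a named area with a size parameter.
    return [
        ((-45.0, -15.0), {"area": "A", "size": size}),
        ((-15.0, 15.0),  {"area": "B", "size": size}),
        ((15.0, 45.0),   {"area": "C", "size": size}),
    ]

def area_for_pose(yaw, correspondence):
    """Return the area whose pose interval contains the given attitude angle."""
    for (low, high), area in correspondence:
        if low <= yaw < high:
            return area
    return None
```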
- determining the target element according to the target control instruction and the target control area includes:
- the control instruction set includes a variety of first control instructions and elements corresponding to various first control instructions;
- its corresponding target control instruction may be an instruction to click a button, an instruction to double-click a button, an instruction to long-press a button, etc.
- its corresponding target control instruction may be: making a fist, pressing down with the index finger, etc.
- determining the target element corresponding to the target control instruction based on the target control instruction and the control instruction set includes: using, as the target element, the element corresponding to the first control instruction in the control instruction set that is the same as the target control instruction.
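That lookup (take the element whose first control instruction equals the target control instruction) amounts to a simple mapping; the instruction and element names below are illustrative assumptions:

```python
def determine_target_element(target_instruction, control_instruction_set):
    """control_instruction_set maps each first control instruction to its
    element; the element of the matching instruction is the target element."""
    return control_instruction_set.get(target_instruction)

# Example control instruction set for one target control area (hypothetical).
instruction_set = {
    "click_button": "heart_logo",
    "double_click_button": "star_logo",
    "long_press_button": "menu_panel",
}
```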
- when the target object is a control device and the target control instruction is an instruction to click a preset button, the target element may be a heart-shaped logo.
- responding to the target control instruction based on the target element includes:
- the second correspondence information includes a plurality of second control instructions for the target element, and the task information corresponding to each of the plurality of second control instructions.
- the aforementioned target task information may be control information on the target element itself, such as a like; it may also be task information for controlling the screen content of the target control area by operating the target element, such as deletion, enlargement, or pause.
- responding to the target control instruction according to the target task information includes: controlling execution of the target task information.
- the target control instruction is an instruction to click a preset button
- the target element is a heart-shaped logo.
- responding to the target control instruction according to the target task information includes: controlling the heart-shaped logo to change color.
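The response step above (look up the target task information in the second correspondence information for the target element and the second control instruction, then control its execution) could be sketched as below. The correspondence entries and the task handlers are illustrative assumptions:

```python
# Hypothetical second correspondence information:
# (target element, second control instruction) -> target task information.
SECOND_CORRESPONDENCE = {
    ("heart_logo", "click_preset_button"): "change_color",
    ("heart_logo", "double_click_button"): "like",
}

def respond(target_element, target_instruction, handlers):
    """Determine the target task information from the second correspondence
    information and control its execution via the matching handler."""
    task = SECOND_CORRESPONDENCE.get((target_element, target_instruction))
    if task is None:
        return None
    return handlers[task]()  # controlling execution of the target task

# Hypothetical task handlers.
handlers = {
    "change_color": lambda: "heart_logo color changed",
    "like": lambda: "liked",
}
```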
- the method further includes the following steps:
- the aforementioned update information may include any one or more of addition, deletion, or modification of the second control instruction in the second correspondence information, or of the task information corresponding to the second control instruction.
- the above method also includes:
- displaying the target ray element based on the pose information includes: determining the characteristic information of the target ray element based on the pose information; and displaying the target ray element according to the characteristic information.
- the characteristic information includes at least one or more of the following: starting position information, center position information, color information, orientation information, length information, and thickness parameter information.
- the center position information is the three-dimensional coordinate information of the center of the target ray element
- the orientation information is the attitude angle information of the target ray element.
- the attitude angle information is the rotation information around the coordinate axes corresponding to the three-dimensional coordinate information of the center of the target ray element.
- the above method also includes:
- a special effect corresponding to the target ray element is displayed based on the receiving time.
- displaying special effects corresponding to the target ray element based on the receiving moment includes:
- the special effect element is controlled to move along the direction of the target ray element at a preset rate.
- displaying special effects corresponding to the target ray element based on the receiving moment includes:
- the special effect element is controlled to move along the target ray element at a preset rate.
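Moving the special effect element along the target ray element at a preset rate, measured from the receiving moment, could be sketched as a simple parametric motion; the rate and positions are illustrative assumptions:

```python
def effect_position(start, direction, rate, elapsed):
    """Position of the special effect element `elapsed` seconds after the
    receiving moment, moving along the (unit-length) ray direction at `rate`."""
    return tuple(s + d * rate * elapsed for s, d in zip(start, direction))
```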
- the model corresponding to the target object can also be displayed.
- the display mode of the target ray elements can be seen as shown in Figure 2c.
- the method further includes: if it is detected that the distance between the position information of the special effect element and the position information of the target element is less than a first preset distance, displaying the corresponding collision special effect.
- the method further includes: if it is detected that the distance between the position information of the special effect element and the position information of the center of the target control area is less than a second preset distance, displaying the corresponding collision special effect.
- the logo of the special effect element can be set by the user, such as a heart-shaped logo, a cartoon logo, etc.
- the position information of the special effect element and the position information of the target element may respectively refer to the position information of the special effect element in the virtual scene and the position information of the target element in the virtual scene.
- the position information of the center of the target control area is also the position information of the center of the target control area in the virtual scene.
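The two collision checks above (a collision special effect when the special effect element is within a first preset distance of the target element, or within a second preset distance of the center of the target control area) could be sketched as below; thresholds, positions, and effect names are illustrative assumptions:

```python
import math

def distance(p, q):
    # Euclidean distance between two positions in the virtual scene.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def collision_effects(effect_pos, element_pos, area_center,
                      first_threshold=0.1, second_threshold=0.3):
    """Return the collision special effects to display for the current
    position of the special effect element."""
    effects = []
    if distance(effect_pos, element_pos) < first_threshold:
        effects.append("element_collision_effect")
    if distance(effect_pos, area_center) < second_threshold:
        effects.append("area_center_collision_effect")
    return effects
```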
- collision special effects include any one or more of the following:
- the target element changes color
- the target element is enlarged according to a preset ratio
- the text corresponding to the target control area is changed to preset text
- the display content of the target control area changes color
- the display method of the collision special effect can be seen as shown in Figure 2d.
- the aforementioned collision special effects can be three-dimensional special effects.
- This solution can realize multi-directional display of collision special effects in the virtual reality space, thereby making the scene more realistic, further improving the user's immersion and improving the user experience.
- the display of the target ray element may also be stopped.
- the areas in the virtual scene involved in this disclosure are display areas in the virtual scene, and each display area corresponds to a coordinate range relative to the virtual scene.
- the display area in the virtual scene of the present disclosure may include a picture area and an operation area.
- the operation area may be at least part of the picture area, or cover the picture area, or partially overlap with the picture area, or be independent of the picture area.
- the angle between the display surface corresponding to the operation area and the display surface corresponding to the picture area may be a preset value.
- the number of the aforementioned operation areas may be one, and the target control area determined according to the solution of the present disclosure may be the operation area, or may correspond to the operation area.
- the number of operation areas can also be multiple, the display surfaces corresponding to the multiple operation areas can be the same surface, and the target control area determined according to the solution of the present disclosure can be one of the aforementioned multiple operation areas.
- the relationship between the target control area and its corresponding operation area meets the following preset conditions: the center of the target control area coincides with the center of the operation area, and the area of the target control area is a preset multiple of the area of the operation area, where the operation area is within the target control area, as shown in Figure 2e.
- the target control area can also be determined by detecting the target ray element displayed in the virtual scene, for example: obtaining the intersection point of the target ray element and the display surface in the virtual scene, and determining the target control area based on the position information of the intersection point.
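Obtaining the intersection of the ray with a display surface in the virtual scene is a standard ray-plane intersection; the plane representation (a point plus a normal) and all numeric values below are illustrative assumptions:

```python
def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the intersection point of a ray with a display surface
    (modelled as a plane), or None if the ray is parallel to the plane
    or the intersection lies behind the ray origin."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the display surface
    t = dot([p - o for p, o in zip(plane_point, origin)], plane_normal) / denom
    if t < 0:
        return None  # surface is behind the ray origin
    return tuple(o + d * t for o, d in zip(origin, direction))
```

The position information of the returned intersection point could then be looked up against the coordinate ranges of the display areas to select the target control area.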
- the user wears the all-in-one headset and enters the VR scene, and can open the corresponding application.
- the display area of the application can include a screen area and an operation area.
- the operation area can be located on one side of the screen area, and the user can control the operation area through the operation handle to realize the user's interaction with the virtual reality scene.
- the user wears the all-in-one headset and enters the VR scene.
- the virtual scene displays a picture area and an operation area, and displays a ray determined based on the user's operation of the handle.
- as the handle moves, the ray moves accordingly.
- the all-in-one headset responds to the control instruction and performs the tasks corresponding to the control instruction, such as liking, forwarding, etc.
- the present disclosure can obtain the target control instruction corresponding to the target object; use the target control instruction to determine the target control area corresponding to the target control instruction in the virtual scene, where the target control area displays at least one element; determine a target element according to the target control instruction and the target control area, the target element being included in the at least one element; and respond to the target control instruction based on the target element.
- this scheme determines the target element in combination with identification of the area where the element is located, and has strong anti-interference ability, which improves the user experience.
- Figure 3 is a schematic structural diagram of an interactive device provided by an exemplary embodiment of the present disclosure.
- the interactive device can be applied to extended reality XR equipment, including:
- the acquisition module 31 is used to acquire the target control instruction corresponding to the target object
- the first determination module 32 is configured to use the target control instruction to determine a target control area corresponding to the target control instruction in the virtual scene, where the target control area displays at least one element;
- the second determination module 33 is configured to determine a target element according to the target control instruction and the target control area, and the target element is included in the at least one element;
- the response module 34 is configured to respond to the target control instruction based on the target element.
- when the device is used to determine the target control area corresponding to the target control instruction in the virtual scene using the target control instruction, it is specifically configured to:
- the target control area is determined according to the area corresponding to the pose information.
- the device is further used for:
- the step of obtaining the target control instruction corresponding to the target object is triggered.
- when the device is used to obtain the pose information corresponding to the target object, it is specifically configured to:
- when the device is used to determine a region in the virtual scene corresponding to the pose information using the pose information, it is specifically configured to:
- the area corresponding to the pose information is determined based on the target area.
- when the device is used to determine a target element according to the target control instruction and the target control area, it is specifically configured to:
- control instruction set corresponding to the target control area, where the control instruction set includes a variety of first control instructions and elements corresponding to various first control instructions;
- the target element corresponding to the target control instruction is determined based on the target control instruction and the control instruction set.
- when the device is used to respond to the target control instruction based on the target element, it is specifically configured to:
- the second correspondence information includes a plurality of second control instructions for the target element, and the task information corresponding to each of the plurality of second control instructions.
- the device is further used for:
- the target ray element is displayed based on the pose information.
- the device is further used for:
- a special effect corresponding to the target ray element is displayed based on the receiving time.
- the device embodiments and the method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments; to avoid repetition, they will not be repeated here.
- the device can execute the above method embodiment, and the foregoing and other operations and/or functions of each module in the device correspond to the respective processes of each method in the above method embodiment; for the sake of brevity, they will not be repeated here.
- the software module may be located in a mature storage medium in the field such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, register, etc.
- the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps in the above method embodiment in combination with its hardware.
- FIG. 4 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure.
- the electronic device may include:
- Memory 401 and processor 402. The memory 401 is used to store computer programs and transmit the program code to the processor 402.
- the processor 402 can call and run the computer program from the memory 401 to implement the method in the embodiment of the present disclosure.
- the processor 402 may be configured to execute the above method embodiments according to instructions in the computer program.
- the processor 402 may include, but is not limited to:
- DSP Digital Signal Processor
- ASIC Application Specific Integrated Circuit
- FPGA Field Programmable Gate Array
- the memory 401 includes, but is not limited to:
- Non-volatile memory may be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or flash memory. Volatile memory may be Random Access Memory (RAM), which is used as an external cache.
- RAM Random Access Memory
- SRAM static random access memory
- DRAM dynamic random access memory
- SDRAM synchronous dynamic random access memory
- DDR SDRAM double data rate synchronous dynamic random access memory
- ESDRAM enhanced synchronous dynamic random access memory
- SLDRAM synchlink dynamic random access memory
- DR RAM direct Rambus random access memory
- the computer program can be divided into one or more modules, and the one or more modules are stored in the memory 401 and executed by the processor 402 to complete the tasks provided by the present disclosure.
- the one or more modules may be a series of computer program instruction segments capable of completing specific functions. The instruction segments are used to describe the execution process of the computer program in the electronic device.
- the electronic device may also include:
- Transceiver 403, the transceiver 403 can be connected to the processor 402 or the memory 401.
- the processor 402 can control the transceiver 403 to communicate with other devices. Specifically, it can send information or data to other devices, or receive information or data sent by other devices.
- the transceiver 403 may include a transmitter and a receiver.
- the transceiver 403 may further include an antenna, and the number of antennas may be one or more.
- the components of the electronic device are connected through a bus system, where in addition to the data bus, the bus system also includes a power bus, a control bus, and a status signal bus.
- the present disclosure also provides a computer storage medium on which a computer program is stored.
- when the computer program is executed by a computer, the computer can perform the method of the above method embodiments.
- embodiments of the present disclosure also provide a computer program product containing instructions, which when executed by a computer causes the computer to perform the method of the above method embodiments.
- the computer program product includes one or more computer instructions.
- the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
- the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center over a wired connection (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless connection (such as infrared, radio, or microwave).
- the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
- the available media may be magnetic media (such as floppy disks, hard disks, magnetic tapes), optical media (such as digital video discs (DVD)), or semiconductor media (such as solid state disks (SSD)), etc.
- an interaction method including:
- at least one element is displayed in the target control area
- using the target control instruction to determine the target control area corresponding to the target control instruction in the virtual scene includes:
- the target control area is determined according to the area corresponding to the pose information.
- the method further includes:
- the step of obtaining the target control instruction corresponding to the target object is triggered.
- obtaining the pose information corresponding to the target object includes:
- using the pose information to determine a region in the virtual scene corresponding to the pose information includes:
- the area corresponding to the pose information is determined based on the target area.
- determining the target element according to the target control instruction and the target control area includes:
- a control instruction set corresponding to the target control area is obtained, where the control instruction set includes a plurality of first control instructions and elements corresponding to the respective first control instructions;
- the target element corresponding to the target control instruction is determined based on the target control instruction and the control instruction set.
- responding to the target control instruction based on the target element includes:
- the second correspondence information includes a plurality of second control instructions for the target element, and task information corresponding to each of the plurality of second control instructions.
- the method further includes:
- the target ray element is displayed based on the pose information.
- the method further includes:
- a special effect corresponding to the target ray element is displayed based on the receiving time.
- an interactive device including:
- an acquisition module, configured to obtain the target control instruction corresponding to the target object;
- a first determination module configured to use the target control instruction to determine a target control area corresponding to the target control instruction in the virtual scene, where at least one element is displayed in the target control area;
- a second determination module configured to determine a target element according to the target control instruction and the target control area, and the target element is included in the at least one element
- a response module configured to respond to the target control instruction based on the target element.
- when the device is configured to use the target control instruction to determine a target control area corresponding to the target control instruction in the virtual scene, it is specifically configured to:
- the target control area is determined according to the area corresponding to the pose information.
- the device is further used for:
- the step of obtaining the target control instruction corresponding to the target object is triggered.
- when the device is configured to obtain the pose information corresponding to the target object, it is specifically configured to:
- when the device is configured to use the pose information to determine a region in the virtual scene corresponding to the pose information, it is specifically configured to:
- the area corresponding to the pose information is determined based on the target area.
- when the device is configured to determine a target element according to the target control instruction and the target control area, it is specifically configured to:
- a control instruction set corresponding to the target control area is obtained, where the control instruction set includes a plurality of first control instructions and elements corresponding to the respective first control instructions;
- the target element corresponding to the target control instruction is determined based on the target control instruction and the control instruction set.
- when the device is configured to respond to the target control instruction based on the target element, it is specifically configured to:
- the second correspondence information includes a plurality of second control instructions for the target element, and task information corresponding to each of the plurality of second control instructions.
- the device is further used for:
- the target ray element is displayed based on the pose information.
- the device is further used for:
- a special effect corresponding to the target ray element is displayed based on the receiving time.
- an electronic device including:
- the processor is configured to perform the above method via executing the executable instructions.
- a computer-readable storage medium is provided, a computer program is stored thereon, and when the computer program is executed by a processor, the above method is implemented.
- modules and algorithm steps of each example described in conjunction with the embodiments disclosed herein can be implemented with electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functionality for each specific application, but such implementation should not be considered beyond the scope of this disclosure.
- the disclosed systems, devices and methods can be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of the modules is only a logical function division; in actual implementation, there may be other division methods.
- multiple modules or components may be combined or integrated into another system, or some features can be ignored or not implemented.
- the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces; the indirect coupling or communication connection of devices or modules may be in electrical, mechanical, or other forms.
- modules described as separate components may or may not be physically separated, and components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed to multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment. For example, each functional module in various embodiments of the present disclosure may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Computer Graphics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An interaction method, apparatus, device, and computer-readable storage medium are provided. The method includes: obtaining a target control instruction corresponding to a target object; using the target control instruction to determine, in a virtual scene, a target control area corresponding to the target control instruction, where at least one element is displayed in the target control area; determining a target element according to the target control instruction and the target control area, the target element being included in the at least one element; and responding to the target control instruction based on the target element.
Description
Cross-Reference to Related Applications
The present disclosure claims priority to Chinese Patent Application No. 202210253972.6, filed on March 15, 2022 and entitled "Interaction Method, Apparatus, Device, and Computer-Readable Storage Medium", the entire contents of which are incorporated herein by reference.
The present disclosure belongs to the field of virtual reality technology, and in particular relates to an interaction method, apparatus, device, and computer-readable storage medium.
In the related art, interaction between a user and an interface in a virtual reality scene is generally implemented by determining, according to the orientation of a handheld controller, the point at which a ray associated with the controller's pointing direction is aligned with the interface, and directly responding to the user's further operation of the controller. With this approach, erroneous responses easily occur when determining the user's operation on the interface, and the anti-interference capability is poor.
Embodiments of the present disclosure provide an implementation different from the related art, to solve the technical problem that the related-art way of implementing interaction between a user and an interface in a virtual reality scene has poor anti-interference capability.
In a first aspect, the present disclosure provides an interaction method, including:
obtaining a target control instruction corresponding to a target object;
using the target control instruction to determine, in a virtual scene, a target control area corresponding to the target control instruction, where at least one element is displayed in the target control area;
determining a target element according to the target control instruction and the target control area, the target element being included in the at least one element;
responding to the target control instruction based on the target element.
In a second aspect, the present disclosure provides an interaction apparatus, including:
an acquisition module, configured to obtain a target control instruction corresponding to a target object;
a first determination module, configured to use the target control instruction to determine, in a virtual scene, a target control area corresponding to the target control instruction, where at least one element is displayed in the target control area;
a second determination module, configured to determine a target element according to the target control instruction and the target control area, the target element being included in the at least one element;
a response module, configured to respond to the target control instruction based on the target element.
In a third aspect, the present disclosure provides an electronic device, including:
a processor; and
a memory, configured to store executable instructions of the processor;
where the processor is configured to perform, by executing the executable instructions, any method of the first aspect or of the possible implementations of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements any method of the first aspect or of the possible implementations of the first aspect.
In a fifth aspect, embodiments of the present disclosure provide a computer program product, including a computer program, where the computer program, when executed by a processor, implements any method of the first aspect or of the possible implementations of the first aspect.
The present disclosure obtains a target control instruction corresponding to a target object; uses the target control instruction to determine, in a virtual scene, a target control area corresponding to the target control instruction, where at least one element is displayed in the target control area; determines a target element according to the target control instruction and the target control area, the target element being included in the at least one element; and responds to the target control instruction based on the target element. With this solution, the target element can be determined in combination with recognition of the region where the element is located, so the anti-interference capability is strong and the user experience is improved.
To describe the technical solutions in the embodiments of the present disclosure or the related art more clearly, the following briefly introduces the drawings required for describing the embodiments or the related art. Obviously, the drawings in the following description show some embodiments of the present disclosure, and a person of ordinary skill in the art may still derive other drawings from these drawings without creative effort. In the drawings:
Fig. 1 is a schematic structural diagram of an interaction system according to an embodiment of the present disclosure;
Fig. 2a is a schematic flowchart of an interaction method according to an embodiment of the present disclosure;
Fig. 2b is a schematic diagram of a target control area according to an embodiment of the present disclosure;
Fig. 2c is a schematic scene diagram of a display manner of a target ray element according to an embodiment of the present disclosure;
Fig. 2d is a schematic diagram of a display manner of a collision effect according to an embodiment of the present disclosure;
Fig. 2e is a schematic diagram of the relationship between an operation area and a target control area according to an embodiment of the present disclosure;
Fig. 3 is a schematic structural diagram of an interaction apparatus according to an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Embodiments of the present disclosure are described in detail below, and examples of the embodiments are shown in the drawings. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present disclosure, and should not be construed as limiting the present disclosure.
The terms "first", "second", and the like in the specification, claims, and drawings of the embodiments of the present disclosure are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present disclosure described herein can be implemented, for example, in orders other than those illustrated or described herein. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.
First, some terms in the embodiments of the present disclosure are explained below to facilitate understanding by those skilled in the art.
VR: Virtual Reality, a practical technology that encompasses computing, electronic information, and simulation; its basic implementation is a computer-simulated virtual environment that gives people a sense of environmental immersion.
The technical solutions of the present disclosure, and how they solve the above technical problems, are described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present disclosure are described below with reference to the drawings.
Fig. 1 is a schematic structural diagram of an interaction system according to an exemplary embodiment of the present disclosure. The system includes a control device 11 and a display device 10; the control device 11 is connected to the display device 10, specifically via Bluetooth or via a network.
Specifically, the control device 11 is used for a user to trigger a target control instruction and to send the target control instruction to the display device 10.
The display device 10 is configured to obtain a target control instruction corresponding to a target object; use the target control instruction to determine, in a virtual scene, a target control area corresponding to the target control instruction, where at least one element is displayed in the target control area; determine a target element according to the target control instruction and the target control area, the target element being included in the at least one element; and respond to the target control instruction based on the target element.
In some optional embodiments of the present disclosure, the aforementioned control device 11 may be a handle 112 or a pen-shaped handheld device 111.
In some embodiments, the display device may also directly recognize gestures of the user's hand, so that the target control instruction is given through the gesture actions of the user's hand to control the display device.
Optionally, the display device 10 may be a head-mounted device.
Optionally, the display device 10 may include a data processing device and a head-mounted device, where the data processing device is configured to obtain a target control instruction corresponding to a target object; use the target control instruction to determine, in a virtual scene, a target control area corresponding to the target control instruction, where at least one element is displayed in the target control area; determine a target element according to the target control instruction and the target control area, the target element being included in the at least one element; and, based on the target element, respond to the target control instruction through the head-mounted device. The head-mounted device is configured to display the corresponding picture. The data processing device may be a user terminal device, a PC, or another device with data processing capability.
For the specific working principles of the components of this system embodiment, such as the control device 11 and the display device 10, and the specific interaction process, refer to the descriptions of the following method embodiments.
Fig. 2a is a schematic flowchart of an interaction method according to an exemplary embodiment of the present disclosure. The method may be applied to an extended reality (XR) device, which may optionally be the display device in the embodiment corresponding to Fig. 1. The method includes at least the following steps:
S201: obtain a target control instruction corresponding to a target object;
S202: use the target control instruction to determine, in a virtual scene, a target control area corresponding to the target control instruction, where at least one element is displayed in the target control area;
S203: determine a target element according to the target control instruction and the target control area, the target element being included in the at least one element;
S204: respond to the target control instruction based on the target element.
Optionally, the aforementioned target object may be the control device in the embodiment corresponding to Fig. 1, that is, the handle or the pen-shaped handheld device.
Optionally, the aforementioned target object may also be the user's hand.
When the target object is the control device, the target control instruction may be a control instruction received from the control device. When the target object is the user's hand, regarding the manner of determining the target control instruction, the above method further includes:
S01: obtain image information corresponding to the target object within a first preset duration;
S02: determine the user's action information according to the image information;
S03: when the action information is included in a preset gesture information set, trigger the step of obtaining the target control instruction corresponding to the target object, where the target control instruction is the control instruction corresponding to the action information.
Specifically, the aforementioned image information may be captured by a camera group provided on the display device, or may be captured by a capture apparatus provided outside the display device and then sent to the display device.
Optionally, in the aforementioned step S02, determining the user's action information according to the image information includes:
inputting the image information into a preset action recognition model and executing the action recognition model to obtain the user's action information, where the action recognition model is a machine learning model obtained by training on multiple groups of sample data.
Optionally, the aforementioned action information includes one or more of the following: movement trajectory information of the hand, and motion information of at least one knuckle of the hand. The movement trajectory information of the hand includes movement trajectory information of the palm center, and the motion information of a knuckle may be motion information of the knuckle relative to the palm center, for example, movement information of the three-dimensional position of the joint point corresponding to the knuckle.
Optionally, regarding the gesture information set in the aforementioned step S03, the gesture information set includes multiple groups of hand action information and the gesture type corresponding to each group of hand action information; the above method further includes:
matching the user's action information against at least part of the hand action information in the gesture information set to obtain a matching result between the user's action information and the at least part of the hand action information, where the matching result includes similarity values between the user's action information and each group of hand action information in the at least part of the hand action information;
obtaining the target result with the highest corresponding similarity value in the matching result;
if the similarity value corresponding to the target result, that is, the highest similarity value in the matching result, is greater than a preset threshold, determining that the user's action information is included in the preset gesture information set.
The gesture type corresponding to the target result is the target gesture type corresponding to the user's action information.
The control instruction corresponding to the target gesture type is then the control instruction corresponding to the action information (that is, the user's action information).
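The matching flow described above can be sketched as follows. This is only an illustrative sketch: the disclosure does not specify the similarity function or the threshold, so the cosine-similarity metric, the feature-vector representation of action information, the 0.9 threshold, and all names here are assumptions.

```python
# Hypothetical sketch of matching the user's action information against a
# preset gesture set. The similarity metric (cosine) and the threshold
# (0.9) are illustrative assumptions, not specified by the disclosure.

def cosine_similarity(a, b):
    """Similarity between two equal-length action-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def match_gesture(action_info, gesture_set, threshold=0.9):
    """Return (gesture_type, instruction) of the best match, or None.

    gesture_set maps gesture_type -> (reference_action, instruction).
    """
    best = max(
        ((cosine_similarity(action_info, ref), gtype, instr)
         for gtype, (ref, instr) in gesture_set.items()),
        default=(0.0, None, None),
    )
    score, gtype, instr = best
    return (gtype, instr) if score > threshold else None

gestures = {
    "fist": ([1.0, 0.0, 0.0], "click"),
    "point": ([0.0, 1.0, 0.0], "select"),
}
print(match_gesture([0.98, 0.05, 0.0], gestures))  # ('fist', 'click')
```

When no similarity clears the threshold, the action is treated as not belonging to the gesture set and no control instruction is triggered, which is the anti-interference behavior of step S03.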
In some optional embodiments of the present disclosure, the aforementioned pose information may include position information and attitude angle information, where the position information is three-dimensional coordinate information, and the attitude angle information is rotation information around the coordinate axes corresponding to the three-dimensional coordinate information; for example, the attitude angle information includes angle information of rotation around the X axis, around the Y axis, and around the Z axis.
Optionally, in the aforementioned step S202, using the target control instruction to determine, in the virtual scene, the target control area corresponding to the target control instruction may include:
S221: when the target control instruction is obtained, obtain pose information corresponding to the target object;
S222: use the pose information to determine a region in the virtual scene corresponding to the pose information;
S223: determine the target control area according to the region corresponding to the pose information, which may specifically include: using the region corresponding to the pose information as the target control area.
Optionally, in the aforementioned step S221, obtaining the pose information corresponding to the target object includes:
using pose information received from the target object as the pose information corresponding to the target object; or
obtaining image information corresponding to the target object, and determining the pose information corresponding to the target object according to the image information.
Optionally, when the target object is the control device, the pose information received from the target object is used as the pose information corresponding to the target object, where the pose information may or may not be included in the target control instruction.
Optionally, the pose information received from the target object is the target object's own pose information determined by its own measurement module.
Optionally, when the target object is the user's hand, the image information corresponding to the target object is obtained, and the pose information corresponding to the target object is determined according to the image information.
It should be noted that the target object in the present disclosure may also be another part of the user, such as an arm, the eyes, or the face.
Optionally, the pose information corresponding to the hand may include three-dimensional position information of the palm center and pointing information of the hand, where the pointing information of the hand is the direction information of the line connecting the wrist joint and the root joint of the middle finger, oriented toward the root joint of the middle finger; this direction information may include attitude angle information about the three coordinate axes corresponding to the three-dimensional position information of the palm center.
Optionally, in the aforementioned step S222, using the pose information to determine the region in the virtual scene corresponding to the pose information includes:
S2021: determine, according to the pose information and preset pose interval information, the target pose interval to which the pose information belongs, where the pose interval information includes multiple pose intervals;
S2022: use the target pose interval and preset first correspondence information to determine the target region corresponding to the target pose interval;
S2023: determine the region corresponding to the pose information based on the target region.
Specifically, the first correspondence information includes the multiple pose intervals and the region corresponding to each of the multiple pose intervals.
Optionally, in S2023, determining the region corresponding to the pose information based on the target region includes: using the target region as the region corresponding to the pose information.
Specifically, the size of the region corresponding to the pose information may be related to the sensitivity of the control device; correspondingly, the size of the target control area corresponding to the target control instruction is also related to the sensitivity of the control device. As shown in Fig. 2b, the target control area corresponding to the target control instruction associated with pose information abc may be A in Fig. 2b. Fig. 2b shows three sizes of the target control area; the larger the target control area, the stronger the anti-interference capability. In Fig. 2b, e is the target element.
Optionally, the above method further includes:
obtaining a sensitivity level selected by the user via a related sensitivity selection control;
determining, according to the sensitivity level, the size parameter of the region corresponding to each pose interval in the first correspondence information, where the higher the sensitivity level, the larger the region corresponding to the same pose interval.
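The interval lookup of steps S2021-S2023 and the sensitivity scaling above can be sketched as follows. The interval boundaries, the base region sizes, and the scale factors per sensitivity level are all invented for illustration; only a yaw angle is used here for brevity, whereas the disclosure's pose information is a full position plus attitude angles.

```python
# Hypothetical lookup from a pose (yaw angle only, for brevity) to a
# display region; intervals, base sizes, and sensitivity scale factors
# are illustrative assumptions.

POSE_INTERVALS = [                 # pose interval information
    ((-45.0, -15.0), "region_left"),
    ((-15.0, 15.0), "region_center"),
    ((15.0, 45.0), "region_right"),
]

# First-correspondence info: interval name -> base region (center, half-size).
REGIONS = {
    "region_left": {"center": (-1.0, 0.0), "half_size": 0.5},
    "region_center": {"center": (0.0, 0.0), "half_size": 0.5},
    "region_right": {"center": (1.0, 0.0), "half_size": 0.5},
}

SENSITIVITY_SCALE = {1: 1.0, 2: 1.5, 3: 2.0}  # higher level -> larger region

def region_for_pose(yaw, sensitivity_level=1):
    """Return the (scaled) region the yaw angle falls into, or None."""
    for (lo, hi), name in POSE_INTERVALS:
        if lo <= yaw < hi:
            base = REGIONS[name]
            scale = SENSITIVITY_SCALE[sensitivity_level]
            return {"name": name,
                    "center": base["center"],
                    "half_size": base["half_size"] * scale}
    return None  # pose falls in no interval: no target control area

print(region_for_pose(20.0, sensitivity_level=3))
# {'name': 'region_right', 'center': (1.0, 0.0), 'half_size': 1.0}
```

Enlarging the region per sensitivity level matches the text: the same pose interval maps to a larger area, which tolerates more pointing jitter.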
Optionally, in the aforementioned S203, determining the target element according to the target control instruction and the target control area includes:
S2031: obtain a control instruction set corresponding to the target control area, where the control instruction set includes multiple first control instructions and the elements corresponding to the respective first control instructions;
S2032: determine the target element corresponding to the target control instruction based on the target control instruction and the control instruction set.
Optionally, when the target object is the control device, its corresponding target control instruction may be a single-click instruction, a double-click instruction, a long-press instruction, or the like.
Optionally, when the target object is the hand, its corresponding target control instruction may be, for example, making a fist or pressing down the index finger.
Optionally, in the aforementioned S2032, determining the target element corresponding to the target control instruction based on the target control instruction and the control instruction set includes: using the element corresponding to the first control instruction in the control instruction set that is the same as the target control instruction as the target element.
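Step S2032 reduces to a dictionary lookup when the control instruction set is stored as a mapping from first control instructions to elements. The instruction names and element names below are illustrative assumptions:

```python
# Hypothetical control-instruction set for one target control area; the
# instruction and element names are illustrative, not from the disclosure.

def target_element_for(instruction, instruction_set):
    """Return the element whose first control instruction equals the
    received target control instruction (step S2032), or None."""
    return instruction_set.get(instruction)

instruction_set = {            # first control instruction -> element
    "single_click": "heart_icon",
    "double_click": "share_icon",
    "long_press": "menu_panel",
}
assert target_element_for("single_click", instruction_set) == "heart_icon"
assert target_element_for("unknown", instruction_set) is None
```

An unrecognized instruction yields no target element, so no response is produced, consistent with the anti-interference goal.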
In some optional embodiments of the present disclosure, when the target object is the control device and the target control instruction is an instruction of single-clicking a preset button, the target element may be a heart-shaped icon.
Further, in the aforementioned step S204, responding to the target control instruction based on the target element includes:
S2041: obtain second correspondence information corresponding to the target element, where the second correspondence information includes multiple second control instructions for the target element and task information corresponding to each of the multiple second control instructions;
S2042: determine, based on the target control instruction and the second correspondence information, target task information corresponding to the target control instruction;
S2043: respond to the target control instruction according to the target task information.
Optionally, the aforementioned target task information may be control information for the target element, such as liking; it may also be task information for controlling the picture content of the target control area through the operation on the target element, such as deleting, zooming in, or pausing.
Optionally, in S2043, responding to the target control instruction according to the target task information includes: controlling execution of the target task information.
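Steps S2041-S2043 amount to a two-level dispatch: element to second correspondence info, then instruction to task. The element names, instruction names, and task strings below are illustrative assumptions:

```python
# Hypothetical second-correspondence info for a target element: second
# control instructions mapped to task info. All names are assumptions.

SECOND_CORRESPONDENCE = {
    "heart_icon": {
        "single_click": "like",          # e.g. change the icon's colour
        "double_click": "cancel_like",
    },
}

def respond(element, instruction):
    """Determine and 'execute' the target task info (steps S2041-S2043)."""
    tasks = SECOND_CORRESPONDENCE.get(element, {})   # S2041
    task = tasks.get(instruction)                    # S2042
    if task is None:
        return "no_response"
    return f"executed:{task}"                        # S2043

assert respond("heart_icon", "single_click") == "executed:like"
```

Keeping the second correspondence info as data also makes the update steps S21-S22 below straightforward: adding, deleting, or modifying an entry updates the behavior without code changes.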
In some optional embodiments of the present disclosure, when the target object is the control device, the target control instruction is an instruction of single-clicking a preset button, the target element is a heart-shaped icon, and the target task information is liking, responding to the target control instruction according to the target task information includes: controlling the heart-shaped icon to change color.
Optionally, the method further includes the following steps:
S21: obtain update information for the second correspondence information input by relevant personnel;
S22: update the second correspondence information according to the update information.
Specifically, the aforementioned update information may include any one or more of addition, deletion, and modification, for the second control instructions in the second correspondence information or for the task information corresponding to the second control instructions.
Optionally, to further improve the user experience, the above method further includes:
S1: obtain a target ray element corresponding to the pose information;
S2: display the target ray element based on the pose information.
Specifically, displaying the target ray element based on the pose information includes: determining feature information of the target ray element based on the pose information, and displaying the target ray element according to the feature information. The feature information includes at least one or more of the following: start position information, center position information, color information, orientation information, length information, and thickness parameter information. The center position information is the three-dimensional coordinate information of the center of the target ray element, and the orientation information is the attitude angle information of the target ray element, that is, rotation information around the coordinate axes corresponding to the three-dimensional coordinate information of the center of the target ray element.
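Deriving the ray's feature information from the pose might look like the sketch below. The use of yaw/pitch angles, the fixed length, and the default colour are illustrative assumptions; the disclosure only lists the feature fields, not how they are computed.

```python
import math

# Hypothetical computation of a target ray element's features from pose
# info (position plus yaw/pitch attitude angles, in degrees). The angle
# convention, default length, and colour are illustrative assumptions.

def ray_features(position, yaw_deg, pitch_deg, length=2.0):
    """Return start position, unit direction, length, and colour."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    direction = (math.cos(pitch) * math.sin(yaw),   # x
                 math.sin(pitch),                   # y
                 math.cos(pitch) * math.cos(yaw))   # z
    return {"start": position, "direction": direction,
            "length": length, "color": "white"}

f = ray_features((0.0, 1.5, 0.0), yaw_deg=0.0, pitch_deg=0.0)
# with zero yaw and pitch the ray points straight ahead along +Z
```

The renderer would then draw the ray from `start` along `direction` for `length` units, updating every frame as new pose information arrives.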
Optionally, the above method further includes:
obtaining the receiving time corresponding to the target control instruction;
displaying a corresponding special effect based on the receiving time and the target ray element.
Optionally, displaying the corresponding special effect based on the receiving time and the target ray element includes:
obtaining a special-effect element corresponding to the target element;
at the receiving time, controlling the special-effect element to move along the orientation of the target ray element at a preset rate.
Optionally, displaying the corresponding special effect based on the receiving time and the target ray element includes:
obtaining a special-effect element corresponding to the target element;
in a preset time period after the receiving time, controlling the special-effect element to move along the target ray element at a preset rate.
Optionally, a model corresponding to the target object may also be displayed. In some optional embodiments of the present disclosure, the display manner of the target ray element may be as shown in Fig. 2c.
Optionally, the method further includes: if it is detected that the distance between the position information of the special-effect element and the position information of the target element is less than a first preset distance, displaying a corresponding collision effect.
Optionally, the method further includes: if it is detected that the distance between the position information of the special-effect element and the position information of the center of the target control area is less than a second preset distance, displaying a corresponding collision effect.
Optionally, the icon of the special-effect element may be set by the user, for example, a heart-shaped icon or a cartoon icon.
The position information of the special-effect element and the position information of the target element may respectively refer to their position information in the virtual scene. Correspondingly, the position information of the center of the target control area is also its position information in the virtual scene.
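The two distance tests above can be sketched as a straightforward threshold check on 3-D positions in the virtual scene; the preset distance value is an illustrative assumption:

```python
import math

# Hypothetical distance test deciding whether to show a collision effect.
# Positions are 3-D points in the virtual scene; the 0.1 preset distance
# is an illustrative assumption.

def should_show_collision(effect_pos, target_pos, first_preset=0.1):
    """True when the special-effect element is within the first preset
    distance of the target element (or of an area center, analogously)."""
    return math.dist(effect_pos, target_pos) < first_preset

assert should_show_collision((0.0, 0.0, 0.05), (0.0, 0.0, 0.0))
assert not should_show_collision((1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```

The same predicate, with the second preset distance and the target control area's center, covers the second branch described above.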
The collision effect includes any one or more of the following:
the target element changing color;
the target element being enlarged at a preset ratio;
the text corresponding to the target control area changing to preset text;
the display content of the target control area changing color;
a preset collision animation.
In some optional embodiments of the present disclosure, the display manner of the collision effect may be as shown in Fig. 2d.
Optionally, the aforementioned collision effect may be a three-dimensional effect. This solution can display the collision effect in multiple directions in the virtual reality space, thereby making the scene more realistic, further improving the user's sense of immersion, and improving the user experience.
Optionally, after a second preset duration following the control of the special-effect element to move along the target ray element at the preset rate, if it is not detected that the distance between the position information of the special-effect element and the position information of the target element is less than the first preset distance, no response is made, and the display of the target ray element may also be stopped.
Optionally, after a second preset duration following the control of the special-effect element to move along the target ray element at the preset rate, if it is not detected that the distance between the position information of the special-effect element and the position information of the center of the target control area is less than the second preset distance, no response is made, and the display of the target ray element may also be stopped.
Specifically, the regions in the virtual scene involved in the present disclosure are display regions in the virtual scene, and each display region corresponds to a coordinate range relative to the virtual scene.
Optionally, the display regions in the virtual scene of the present disclosure may include a picture area and an operation area. The operation area may be at least part of the picture area, or cover the picture area, or partially overlap with the picture area, or be independent of the picture area.
Optionally, the angle between the display surface corresponding to the operation area and the display surface corresponding to the picture area may be a preset value.
Optionally, the number of the aforementioned operation areas may be one, and the target control area determined according to the solution of the present disclosure may be that operation area or correspond to that operation area.
Optionally, the number of operation areas may also be multiple, and the display surfaces corresponding to the multiple operation areas may be the same surface. The target control area determined according to the solution of the present disclosure may be one of the aforementioned multiple operation areas, or correspond to one of them.
Optionally, the relationship between the target control area and its corresponding operation area satisfies the following preset conditions: the center of the target control area coincides with the center of the operation area, and the area of the target control area is a preset multiple of the area of the operation area, where the operation area lies within the target control area, as shown in Fig. 2e.
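Constructing a target control area that satisfies the preset relation described above (concentric with the operation area, area a preset multiple of it) can be sketched as follows; the factor 2.0 and the rectangle representation are illustrative assumptions:

```python
# Hypothetical construction of a target control area from an operation
# area: same center, total area = `multiple` times the operation area's
# area (so each side scales by sqrt(multiple)). The 2.0 factor and the
# axis-aligned rectangle representation are illustrative assumptions.

def target_area_for(operation_area, multiple=2.0):
    """Build a target control area concentric with the operation area."""
    cx, cy = operation_area["center"]
    w, h = operation_area["size"]
    scale = multiple ** 0.5
    return {"center": (cx, cy), "size": (w * scale, h * scale)}

op = {"center": (0.0, 0.0), "size": (1.0, 1.0)}
tgt = target_area_for(op)
# the area doubles (1.0 -> 2.0) while the centers coincide, so the
# operation area lies strictly inside the target control area
```

Because the target control area extends beyond the visible operation area, slightly imprecise pointing still lands inside it, which is the anti-interference margin shown in Fig. 2e.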
Optionally, the target control area may also be determined by detecting the target ray element displayed in the virtual scene, for example: obtaining the intersection point of the target ray element and a display surface in the virtual scene, and determining the target control area according to the position information of the intersection point.
The solution is further described below with reference to application scenarios of the present disclosure.
Scenario 1:
A user wears an all-in-one head-mounted device and, after entering the VR scene, can open a corresponding application. The display region of the application may include a picture area and an operation area, and the operation area may be located on one side of the picture area. The user can control the operation area by operating the handle to interact with the virtual reality scene.
Scenario 2:
A user wears an all-in-one head-mounted device. After the user enters the VR scene, a picture area and an operation area are displayed in the virtual scene, together with a ray determined according to the user's operation of the handle. When the user moves the handle, the ray moves accordingly. When the ray intersects the target control area corresponding to an operation area in the virtual scene and the user triggers a control instruction through the handle, the all-in-one head-mounted device responds to the control instruction and executes the task corresponding to the control instruction, such as liking or forwarding.
The present disclosure obtains a target control instruction corresponding to a target object; uses the target control instruction to determine, in a virtual scene, a target control area corresponding to the target control instruction, where at least one element is displayed in the target control area; determines a target element according to the target control instruction and the target control area, the target element being included in the at least one element; and responds to the target control instruction based on the target element. With this solution, the target element can be determined in combination with recognition of the region where the element is located, the anti-interference capability is strong, and the user experience is improved.
Fig. 3 is a schematic structural diagram of an interaction apparatus according to an exemplary embodiment of the present disclosure. The interaction apparatus may be applied to an extended reality (XR) device and includes:
an acquisition module 31, configured to obtain a target control instruction corresponding to a target object;
a first determination module 32, configured to use the target control instruction to determine, in a virtual scene, a target control area corresponding to the target control instruction, where at least one element is displayed in the target control area;
a second determination module 33, configured to determine a target element according to the target control instruction and the target control area, the target element being included in the at least one element;
a response module 34, configured to respond to the target control instruction based on the target element.
According to one or more embodiments of the present disclosure, when the apparatus is used to determine, in the virtual scene, the target control area corresponding to the target control instruction by using the target control instruction, it is specifically configured to:
when the target control instruction is obtained, obtain pose information corresponding to the target object;
use the pose information to determine a region in the virtual scene corresponding to the pose information;
determine the target control area according to the region corresponding to the pose information.
According to one or more embodiments of the present disclosure, the apparatus is further configured to:
obtain image information corresponding to the target object within a first preset duration;
determine the user's action information according to the image information;
when the action information is included in a preset gesture information set, trigger the step of obtaining the target control instruction corresponding to the target object.
According to one or more embodiments of the present disclosure, when the apparatus is used to obtain the pose information corresponding to the target object, it is specifically configured to:
use pose information received from the target object as the pose information corresponding to the target object; or
obtain image information corresponding to the target object, and determine the pose information corresponding to the target object according to the image information.
According to one or more embodiments of the present disclosure, when the apparatus is used to determine, by using the pose information, the region in the virtual scene corresponding to the pose information, it is specifically configured to:
determine, according to the pose information and preset pose interval information, the target pose interval to which the pose information belongs, where the pose interval information includes multiple pose intervals;
use the target pose interval and preset first correspondence information to determine the target region corresponding to the target pose interval;
determine the region corresponding to the pose information based on the target region.
According to one or more embodiments of the present disclosure, when the apparatus is used to determine the target element according to the target control instruction and the target control area, it is specifically configured to:
obtain a control instruction set corresponding to the target control area, where the control instruction set includes multiple first control instructions and the elements corresponding to the respective first control instructions;
determine the target element corresponding to the target control instruction based on the target control instruction and the control instruction set.
According to one or more embodiments of the present disclosure, when the apparatus is used to respond to the target control instruction based on the target element, it is specifically configured to:
obtain second correspondence information corresponding to the target element, where the second correspondence information includes multiple second control instructions for the target element and task information corresponding to each of the multiple second control instructions;
determine, based on the target control instruction and the second correspondence information, target task information corresponding to the target control instruction;
respond to the target control instruction according to the target task information.
According to one or more embodiments of the present disclosure, the apparatus is further configured to:
obtain a target ray element corresponding to the pose information;
display the target ray element based on the pose information.
According to one or more embodiments of the present disclosure, the apparatus is further configured to:
obtain the receiving time corresponding to the target control instruction;
display a corresponding special effect based on the receiving time and the target ray element.
It should be understood that the apparatus embodiments and the method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments; to avoid repetition, details are not repeated here. Specifically, the apparatus may perform the above method embodiments, and the foregoing and other operations and/or functions of the modules in the apparatus respectively implement the corresponding flows in the methods of the above method embodiments; for brevity, details are not repeated here.
The apparatus of the embodiments of the present disclosure is described above from the perspective of functional modules with reference to the drawings. It should be understood that the functional modules may be implemented in hardware, by instructions in software form, or by a combination of hardware and software modules. Specifically, the steps of the method embodiments in the embodiments of the present disclosure may be completed by integrated logic circuits of hardware in the processor and/or instructions in software form, and the steps of the methods disclosed in the embodiments of the present disclosure may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. Optionally, the software module may be located in a mature storage medium in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method embodiments in combination with its hardware.
Fig. 4 is a schematic block diagram of an electronic device according to an embodiment of the present disclosure. The electronic device may include:
a memory 401 and a processor 402, where the memory 401 is configured to store a computer program and transmit the program code to the processor 402. In other words, the processor 402 may call and run the computer program from the memory 401 to implement the methods in the embodiments of the present disclosure.
For example, the processor 402 may be configured to perform the above method embodiments according to instructions in the computer program.
In some embodiments of the present disclosure, the processor 402 may include, but is not limited to:
a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like.
In some embodiments of the present disclosure, the memory 401 includes, but is not limited to:
a volatile memory and/or a non-volatile memory. The non-volatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (Double Data Rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (Enhanced SDRAM, ESDRAM), synch-link dynamic random access memory (synch link DRAM, SLDRAM), and direct Rambus random access memory (Direct Rambus RAM, DR RAM).
In some embodiments of the present disclosure, the computer program may be divided into one or more modules, and the one or more modules are stored in the memory 401 and executed by the processor 402 to complete the methods provided by the present disclosure. The one or more modules may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program in the electronic device.
As shown in Fig. 4, the electronic device may further include:
a transceiver 403, where the transceiver 403 may be connected to the processor 402 or the memory 401.
The processor 402 may control the transceiver 403 to communicate with other devices; specifically, it may send information or data to other devices, or receive information or data sent by other devices. The transceiver 403 may include a transmitter and a receiver, and may further include one or more antennas.
It should be understood that the components of the electronic device are connected by a bus system, where in addition to the data bus, the bus system also includes a power bus, a control bus, and a status signal bus.
The present disclosure also provides a computer storage medium on which a computer program is stored; when the computer program is executed by a computer, the computer can perform the methods of the above method embodiments. In other words, embodiments of the present disclosure also provide a computer program product containing instructions; when the instructions are executed by a computer, the computer performs the methods of the above method embodiments.
When implemented in software, the above may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present disclosure are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (such as coaxial cable, optical fiber, or digital subscriber line (digital subscriber line, DSL)) or a wireless manner (such as infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (digital video disc, DVD)), or a semiconductor medium (for example, a solid state disk (solid state disk, SSD)), and the like.
According to one or more embodiments of the present disclosure, an interaction method is provided, including:
obtaining a target control instruction corresponding to a target object;
using the target control instruction to determine, in a virtual scene, a target control area corresponding to the target control instruction, where at least one element is displayed in the target control area;
determining a target element according to the target control instruction and the target control area, the target element being included in the at least one element;
responding to the target control instruction based on the target element.
According to one or more embodiments of the present disclosure, using the target control instruction to determine, in the virtual scene, the target control area corresponding to the target control instruction includes:
when the target control instruction is obtained, obtaining pose information corresponding to the target object;
using the pose information to determine a region in the virtual scene corresponding to the pose information;
determining the target control area according to the region corresponding to the pose information.
According to one or more embodiments of the present disclosure, the method further includes:
obtaining image information corresponding to the target object within a first preset duration;
determining the user's action information according to the image information;
when the action information is included in a preset gesture information set, triggering the step of obtaining the target control instruction corresponding to the target object.
According to one or more embodiments of the present disclosure, obtaining the pose information corresponding to the target object includes:
using pose information received from the target object as the pose information corresponding to the target object; or
obtaining image information corresponding to the target object, and determining the pose information corresponding to the target object according to the image information.
According to one or more embodiments of the present disclosure, using the pose information to determine the region in the virtual scene corresponding to the pose information includes:
determining, according to the pose information and preset pose interval information, the target pose interval to which the pose information belongs, where the pose interval information includes multiple pose intervals;
using the target pose interval and preset first correspondence information to determine the target region corresponding to the target pose interval;
determining the region corresponding to the pose information based on the target region.
According to one or more embodiments of the present disclosure, determining the target element according to the target control instruction and the target control area includes:
obtaining a control instruction set corresponding to the target control area, where the control instruction set includes multiple first control instructions and the elements corresponding to the respective first control instructions;
determining the target element corresponding to the target control instruction based on the target control instruction and the control instruction set.
According to one or more embodiments of the present disclosure, responding to the target control instruction based on the target element includes:
obtaining second correspondence information corresponding to the target element, where the second correspondence information includes multiple second control instructions for the target element and task information corresponding to each of the multiple second control instructions;
determining, based on the target control instruction and the second correspondence information, target task information corresponding to the target control instruction;
responding to the target control instruction according to the target task information.
According to one or more embodiments of the present disclosure, the method further includes:
obtaining a target ray element corresponding to the pose information;
displaying the target ray element based on the pose information.
According to one or more embodiments of the present disclosure, the method further includes:
obtaining the receiving time corresponding to the target control instruction;
displaying a corresponding special effect based on the receiving time and the target ray element.
According to one or more embodiments of the present disclosure, an interaction apparatus is provided, including:
an acquisition module, configured to obtain a target control instruction corresponding to a target object;
a first determination module, configured to use the target control instruction to determine, in a virtual scene, a target control area corresponding to the target control instruction, where at least one element is displayed in the target control area;
a second determination module, configured to determine a target element according to the target control instruction and the target control area, the target element being included in the at least one element;
a response module, configured to respond to the target control instruction based on the target element.
According to one or more embodiments of the present disclosure, when the apparatus is used to determine, in the virtual scene, the target control area corresponding to the target control instruction by using the target control instruction, it is specifically configured to:
when the target control instruction is obtained, obtain pose information corresponding to the target object;
use the pose information to determine a region in the virtual scene corresponding to the pose information;
determine the target control area according to the region corresponding to the pose information.
According to one or more embodiments of the present disclosure, the apparatus is further configured to:
obtain image information corresponding to the target object within a first preset duration;
determine the user's action information according to the image information;
when the action information is included in a preset gesture information set, trigger the step of obtaining the target control instruction corresponding to the target object.
According to one or more embodiments of the present disclosure, when the apparatus is used to obtain the pose information corresponding to the target object, it is specifically configured to:
use pose information received from the target object as the pose information corresponding to the target object; or
obtain image information corresponding to the target object, and determine the pose information corresponding to the target object according to the image information.
According to one or more embodiments of the present disclosure, when the apparatus is used to determine, by using the pose information, the region in the virtual scene corresponding to the pose information, it is specifically configured to:
determine, according to the pose information and preset pose interval information, the target pose interval to which the pose information belongs, where the pose interval information includes multiple pose intervals;
use the target pose interval and preset first correspondence information to determine the target region corresponding to the target pose interval;
determine the region corresponding to the pose information based on the target region.
According to one or more embodiments of the present disclosure, when the apparatus is used to determine the target element according to the target control instruction and the target control area, it is specifically configured to:
obtain a control instruction set corresponding to the target control area, where the control instruction set includes multiple first control instructions and the elements corresponding to the respective first control instructions;
determine the target element corresponding to the target control instruction based on the target control instruction and the control instruction set.
According to one or more embodiments of the present disclosure, when the apparatus is used to respond to the target control instruction based on the target element, it is specifically configured to:
obtain second correspondence information corresponding to the target element, where the second correspondence information includes multiple second control instructions for the target element and task information corresponding to each of the multiple second control instructions;
determine, based on the target control instruction and the second correspondence information, target task information corresponding to the target control instruction;
respond to the target control instruction according to the target task information.
According to one or more embodiments of the present disclosure, the apparatus is further configured to:
obtain a target ray element corresponding to the pose information;
display the target ray element based on the pose information.
According to one or more embodiments of the present disclosure, the apparatus is further configured to:
obtain the receiving time corresponding to the target control instruction;
display a corresponding special effect based on the receiving time and the target ray element.
According to one or more embodiments of the present disclosure, an electronic device is provided, including:
a processor; and
a memory, configured to store executable instructions of the processor;
where the processor is configured to perform the above method via executing the executable instructions.
According to one or more embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, where the computer program, when executed by a processor, implements the above method.
A person of ordinary skill in the art may be aware that the modules and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functionality for each specific application, but such implementation should not be considered beyond the scope of the present disclosure.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are only illustrative; the division of the modules is only a logical function division, and in actual implementation there may be other division methods; for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces; the indirect coupling or communication connection of apparatuses or modules may be in electrical, mechanical, or other forms.
Modules described as separate components may or may not be physically separated, and components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed to multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. For example, each functional module in the embodiments of the present disclosure may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module.
The above are only specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can readily think of changes or substitutions within the technical scope disclosed in the present disclosure, which shall all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (12)
- An interaction method, including: obtaining a target control instruction corresponding to a target object; using the target control instruction to determine, in a virtual scene, a target control area corresponding to the target control instruction, where at least one element is displayed in the target control area; determining a target element according to the target control instruction and the target control area, the target element being included in the at least one element; and responding to the target control instruction based on the target element.
- The method according to claim 1, wherein using the target control instruction to determine, in the virtual scene, the target control area corresponding to the target control instruction includes: when the target control instruction is obtained, obtaining pose information corresponding to the target object; using the pose information to determine a region in the virtual scene corresponding to the pose information; and determining the target control area according to the region corresponding to the pose information.
- The method according to claim 1, wherein the method further includes: obtaining image information corresponding to the target object within a first preset duration; determining the user's action information according to the image information; and, when the action information is included in a preset gesture information set, triggering the step of obtaining the target control instruction corresponding to the target object.
- The method according to claim 2, wherein obtaining the pose information corresponding to the target object includes: using pose information received from the target object as the pose information corresponding to the target object; or obtaining image information corresponding to the target object and determining the pose information corresponding to the target object according to the image information.
- The method according to claim 2, wherein using the pose information to determine the region in the virtual scene corresponding to the pose information includes: determining, according to the pose information and preset pose interval information, the target pose interval to which the pose information belongs, where the pose interval information includes multiple pose intervals; using the target pose interval and preset first correspondence information to determine the target region corresponding to the target pose interval; and determining the region corresponding to the pose information based on the target region.
- The method according to claim 1, wherein determining the target element according to the target control instruction and the target control area includes: obtaining a control instruction set corresponding to the target control area, where the control instruction set includes multiple first control instructions and the elements corresponding to the respective first control instructions; and determining the target element corresponding to the target control instruction based on the target control instruction and the control instruction set.
- The method according to claim 1, wherein responding to the target control instruction based on the target element includes: obtaining second correspondence information corresponding to the target element, where the second correspondence information includes multiple second control instructions for the target element and task information corresponding to each of the multiple second control instructions; determining, based on the target control instruction and the second correspondence information, target task information corresponding to the target control instruction; and responding to the target control instruction according to the target task information.
- The method according to claim 2, wherein the method further includes: obtaining a target ray element corresponding to the pose information; and displaying the target ray element based on the pose information.
- The method according to claim 8, wherein the method further includes: obtaining the receiving time corresponding to the target control instruction; and displaying a corresponding special effect based on the receiving time and the target ray element.
- An interaction apparatus, including: an acquisition module, configured to obtain a target control instruction corresponding to a target object; a first determination module, configured to use the target control instruction to determine, in a virtual scene, a target control area corresponding to the target control instruction, where at least one element is displayed in the target control area; a second determination module, configured to determine a target element according to the target control instruction and the target control area, the target element being included in the at least one element; and a response module, configured to respond to the target control instruction based on the target element.
- An electronic device, including: a processor; and a memory, configured to store executable instructions of the processor; where the processor is configured to perform, by executing the executable instructions, the method according to any one of claims 1-9.
- A computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method according to any one of claims 1-9.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210253972.6 | 2022-03-15 | ||
CN202210253972.6A CN116795263A (zh) | 2022-03-15 | 2022-03-15 | 交互方法、装置、设备及计算机可读存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023174097A1 true WO2023174097A1 (zh) | 2023-09-21 |
Family
ID=88022377
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/080020 WO2023174097A1 (zh) | 2022-03-15 | 2023-03-07 | 交互方法、装置、设备及计算机可读存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116795263A (zh) |
WO (1) | WO2023174097A1 (zh) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190285896A1 (en) * | 2018-03-19 | 2019-09-19 | Seiko Epson Corporation | Transmission-type head mounted display apparatus, method of controlling transmission-type head mounted display apparatus, and computer program for controlling transmission-type head mounted display apparatus |
CN111249719A (zh) * | 2020-01-20 | 2020-06-09 | 腾讯科技(深圳)有限公司 | 轨迹提示方法和装置、存储介质及电子装置 |
CN111450527A (zh) * | 2020-04-30 | 2020-07-28 | 网易(杭州)网络有限公司 | 一种信息处理方法及装置 |
CN111813214A (zh) * | 2019-04-11 | 2020-10-23 | 广东虚拟现实科技有限公司 | 虚拟内容的处理方法、装置、终端设备及存储介质 |
CN112148189A (zh) * | 2020-09-23 | 2020-12-29 | 北京市商汤科技开发有限公司 | 一种ar场景下的交互方法、装置、电子设备及存储介质 |
CN112486031A (zh) * | 2020-12-18 | 2021-03-12 | 深圳康佳电子科技有限公司 | 一种智能家居设备的控制方法、存储介质及智能终端 |
CN113457151A (zh) * | 2021-07-16 | 2021-10-01 | 腾讯科技(深圳)有限公司 | 虚拟道具的控制方法、装置、设备及计算机可读存储介质 |
-
2022
- 2022-03-15 CN CN202210253972.6A patent/CN116795263A/zh active Pending
-
2023
- 2023-03-07 WO PCT/CN2023/080020 patent/WO2023174097A1/zh unknown
Also Published As
Publication number | Publication date |
---|---|
CN116795263A (zh) | 2023-09-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11269481B2 (en) | Dynamic user interactions for display control and measuring degree of completeness of user gestures | |
KR102021851B1 (ko) | 가상현실 환경에서의 사용자와 객체 간 상호 작용 처리 방법 | |
CN108885521A (zh) | 跨环境共享 | |
CN108671539A (zh) | 目标对象交互方法及装置、电子设备、存储介质 | |
US10627911B2 (en) | Remote interaction with content of a transparent display | |
WO2023016174A1 (zh) | 手势操作方法、装置、设备和介质 | |
Matulic et al. | Phonetroller: Visual representations of fingers for precise touch input with mobile phones in vr | |
US20170131785A1 (en) | Method and apparatus for providing interface interacting with user by means of nui device | |
CN107272892A (zh) | 一种虚拟触控系统、方法及装置 | |
CN106201284B (zh) | 用户界面同步系统、方法 | |
CN117472189B (zh) | 具有实物感的打字或触控的实现方法 | |
CN110717993A (zh) | 一种分体式ar眼镜系统的交互方法、系统及介质 | |
CN111913674A (zh) | 虚拟内容的显示方法、装置、系统、终端设备及存储介质 | |
WO2023174097A1 (zh) | 交互方法、装置、设备及计算机可读存储介质 | |
CN109547696B (zh) | 一种拍摄方法及终端设备 | |
TW202405621A (zh) | 通過虛擬滑鼠遠程控制擴展實境的系統及方法 | |
KR20230170485A (ko) | 손 동작에 관한 이미지 데이터를 획득하는 전자 장치 및 그 동작 방법 | |
CN116339501A (zh) | 数据处理方法、装置、设备及计算机可读存储介质 | |
KR102612430B1 (ko) | 전이학습을 이용한 딥러닝 기반 사용자 손 동작 인식 및 가상 현실 콘텐츠 제공 시스템 | |
US20240281071A1 (en) | Simultaneous Controller and Touch Interactions | |
US20240248544A1 (en) | Method of calling applications, device, and medium | |
US20240103625A1 (en) | Interaction method and apparatus, electronic device, storage medium, and computer program product | |
CN114968053B (zh) | 操作处理方法及装置、计算机可读存储介质和电子设备 | |
US20240320924A1 (en) | Dynamic augmented reality and screen-based rebinding based on detected behavior | |
WO2024131405A1 (zh) | 对象移动控制方法、装置、设备及介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23769603 Country of ref document: EP Kind code of ref document: A1 |