WO2020087642A1 - Method for virtual interaction, physical robot, display terminal and system - Google Patents

Method for virtual interaction, physical robot, display terminal and system

Info

Publication number
WO2020087642A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
robot
physical
scene
physical robot
Prior art date
Application number
PCT/CN2018/118934
Other languages
English (en)
French (fr)
Inventor
LIU, Lijian (刘利剑)
Original Assignee
SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority to CN201880038721.8A (published as CN110869881A)
Publication of WO2020087642A1
Priority to US17/242,249 (published as US20210245368A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0005Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/006Controls for manipulators by means of a wireless system for controlling one or several manipulators
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/081Touching devices, e.g. pressure-sensitive
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/088Controls for manipulators by means of sensing devices, e.g. viewing or touching devices with position, velocity or acceleration sensors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • Embodiments of the present application relate to the field of robots, and in particular, to a virtual interaction method, physical robot, display terminal, and system.
  • a common scenario is that the user and the physical robot are in the same real scene and the distance between the two is relatively short.
  • the user uses the remote control to remotely control the physical robot.
  • however, this type of human-computer interaction requires that the distance between the user and the physical robot not exceed the coverage of the remote-control signal; if it does, this type of human-computer interaction cannot be used.
  • Another common scenario is to simulate user interaction with a virtual robot in a virtual scene.
  • the virtual scene in this mode of human-computer interaction is designed artificially in advance and is unrelated to the real scene; therefore, the experience it gives the user is not realistic enough.
  • This application provides a virtual interaction method, physical robot, display terminal and system to optimize the human-computer interaction experience.
  • a first aspect of an embodiment of the present application provides a method for virtual interaction.
  • the method includes: obtaining data measured by at least one sensor on a first physical robot measuring the real scene in the current measurement range, where the current measurement range changes as the first physical robot moves in the real scene;
  • and, based on the data measured by the at least one sensor, drawing a virtual scene corresponding to the real scene in the current measurement range, so as to display the virtual scene through a display terminal.
  • a second aspect of an embodiment of the present application provides a physical robot, including:
  • at least one sensor for measuring the real scene in the current measurement range;
  • a processor connected to the at least one sensor and configured to obtain the data measured by the at least one sensor measuring the real scene in the current measurement range, and to execute the method described in the first aspect of the present application.
  • a third aspect of an embodiment of the present application provides a display terminal, including:
  • a communication component configured to communicate with the first physical robot to obtain data measured by at least one sensor on the first physical robot measuring the real scene in the current measurement range;
  • a processor configured to execute the method described in the first aspect of the present application;
  • a display component connected to the processor and used to display a virtual scene corresponding to the real scene in the current measurement range.
  • a fourth aspect of the embodiments of the present application provides a system for virtual interaction.
  • the system includes:
  • a first physical robot having at least one sensor for measuring the real scene in the current measurement range;
  • a data processing server connected to the first physical robot and configured to perform the method described in the first aspect of the present application.
  • a fifth aspect of the embodiments of the present application provides a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the steps in the method as described in the first aspect of the present application.
  • with the above technical solution, a virtual scene is drawn from the data measured by the sensors on the physical robot measuring the real scene in the current measurement range and is displayed through the display terminal, so the user can genuinely experience the real scene currently surrounding the physical robot by watching that virtual scene; the solution neither requires the distance between the user and the physical robot to stay within the coverage of the remote-control signal, nor requires the user and the physical robot to be in the same real scene.
  • moreover, as the physical robot moves in the real scene, the data measured by the sensors on the physical robot measuring the real scene in the current measurement range changes synchronously.
  • the virtual scene drawn therefore also changes synchronously and is displayed through the display terminal, so the user can experience the real scene currently around the physical robot in real time by watching the real-time changing virtual scene displayed on the display terminal.
  • FIG. 2 is a flowchart of a method for virtual interaction proposed by another embodiment of the present application.
  • FIG. 3 is a flowchart of a method for virtual interaction proposed by another embodiment of the present application.
  • FIG. 5 is a flowchart of a method for virtual interaction proposed by another embodiment of the present application.
  • FIG. 6 is a schematic diagram of a physical robot provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a display terminal provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a virtual interaction system provided by an embodiment of the present application.
  • a method for virtual interaction proposed in an embodiment of the present application may be executed by a processor with an information processing function.
  • the processor may be disposed inside a physical robot (for example, the first physical robot in any of the following embodiments of the present application, or any physical robot other than the first physical robot), or inside a display terminal (for example, a terminal with both a display function and an information processing function), or inside a data processing server (for example, a server with a data processing function).
  • FIG. 1 is a flowchart of a virtual interaction method proposed by an embodiment of the present application. As shown in Figure 1, the method includes the following steps:
  • Step S11: obtain data measured by at least one sensor on the first physical robot measuring the real scene in the current measurement range, where the current measurement range changes with the movement of the first physical robot in the real scene;
  • Step S12: based on the data measured by the at least one sensor, draw a virtual scene corresponding to the real scene in the current measurement range, so as to display the virtual scene through the display terminal.
  • At least one sensor is provided on the first physical robot, and the at least one sensor may be a real scene measurement sensor, that is, a sensor for measuring a real scene around the first physical robot.
  • the at least one sensor includes but is not limited to: an image sensor, a camera, an angular velocity sensor, an infrared sensor, a laser radar, and the like.
  • the data obtained from the at least one sensor on the first physical robot includes, but is not limited to, depth data, orientation data, color data, and the like.
  • it can be understood that, as the first physical robot moves in the real scene, the current measurement range of the at least one sensor changes accordingly. For example, assuming the first physical robot is walking in a house in the real world, as the first physical robot moves from the southeast corner of the house to the northwest corner, the current measurement range of the at least one sensor also shifts from the southeast corner of the house to the northwest corner, and correspondingly the data measured by the at least one sensor on the first physical robot changes as well. That is to say, the data measured by the at least one sensor changes in real time, is synchronized with the real scene currently surrounding the first physical robot, and represents that real scene.
  • after the data measured by the at least one sensor on the first physical robot is obtained, step S12 is executed to draw a virtual scene corresponding to the real scene in the current measurement range of the at least one sensor; for the specific drawing method, reference may be made to the related art.
  • it can be understood that, as the data measured by the at least one sensor in step S11 changes in real time, the correspondingly drawn virtual scene also changes in real time and is synchronized with the real scene currently around the first physical robot.
  • the drawn virtual scene is then displayed through the display terminal, and the user can watch the virtual scene displayed on the display terminal.
  • moreover, as the physical robot moves in the real scene, the data measured by the sensors on the physical robot measuring the real scene in the current measurement range changes synchronously.
  • the virtual scene drawn therefore also changes synchronously and is displayed through the display terminal, so the user can experience the real scene currently around the physical robot in real time by watching the real-time changing virtual scene displayed on the display terminal.
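  • The data-to-scene pipeline described above can be pictured with a minimal sketch. The sketch below is not from the patent; SensorFrame, read_sensors and show are hypothetical stand-ins for whatever sensor and rendering APIs an implementation actually uses, and the "drawing" step is reduced to copying measured points into a scene object.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class SensorFrame:
    """One measurement of the real scene within the current measurement range (step S11)."""
    depth_points: List[Tuple[float, float, float]]  # 3-D points from depth / lidar measurements
    colors: List[Tuple[int, int, int]]               # per-point color samples
    pose: Tuple[float, float, float]                 # (x, y, heading) of the first physical robot


@dataclass
class VirtualScene:
    """Virtual counterpart of the measured real scene (step S12)."""
    points: List[Tuple[float, float, float]] = field(default_factory=list)
    colors: List[Tuple[int, int, int]] = field(default_factory=list)


def draw_virtual_scene(frame: SensorFrame) -> VirtualScene:
    """Map the latest sensor data onto a virtual scene; a real renderer would do far more."""
    return VirtualScene(points=list(frame.depth_points), colors=list(frame.colors))


def interaction_loop(robot, display) -> None:
    """Steps S11-S12 repeated: as the robot moves, the measurement range and the scene follow."""
    while display.is_open():                 # hypothetical display-terminal API
        frame = robot.read_sensors()         # S11: data measured within the current range
        scene = draw_virtual_scene(frame)    # S12: draw the corresponding virtual scene
        display.show(scene)                  # the display terminal renders the virtual scene
```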
  • the at least one sensor includes a position sensor; referring to FIG. 2, FIG. 2 is a flowchart of a method for virtual interaction provided by another embodiment of the present application. As shown in FIG. 2, the method includes the following steps in addition to steps S11-S12:
  • Step S13: according to the data measured by the position sensor, draw a first virtual robot corresponding to the first physical robot in the virtual scene, so as to display the virtual scene containing the first virtual robot through the display terminal.
  • the movement of the first virtual robot in the virtual scene is synchronized with the movement of the first physical robot in the real scene.
  • At least one sensor further includes a position sensor.
  • the first virtual robot corresponding to the first physical robot may be continuously drawn in the drawn virtual scene.
  • the correspondence between the first physical robot and the first virtual robot means that the movement of the first physical robot in the real scene is synchronized with the movement of the first virtual robot in the drawn virtual scene; that is, the first virtual robot is the image obtained by mapping the first physical robot into the drawn virtual scene.
  • it can be understood that, as the first physical robot moves in the real scene, the data obtained from the position sensor on the first physical robot also changes accordingly.
  • as the data measured by the position sensor changes in real time, the first virtual robot drawn in step S13 also changes in real time and stays synchronized with the movement of the first physical robot.
  • with the above technical solution, the virtual robot corresponding to the physical robot is superimposed on the drawn virtual scene and displayed through the display terminal.
  • by watching the virtual scene containing the virtual robot displayed on the display terminal, the user can, on the one hand, genuinely experience the real scene currently around the physical robot and learn the position of the physical robot within it, and, on the other hand, enjoy greater visual interest because the virtual scene contains the virtual robot.
  • moreover, as the physical robot moves in the real scene, the data measured by the position sensor on the physical robot changes synchronously, and the drawn virtual robot also moves synchronously and is displayed through the display terminal.
  • by watching the virtual robot that moves synchronously with the physical robot on the display terminal, the user can visually perceive the movement of the physical robot in the real scene in real time.
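  • As a concrete illustration of step S13, the following sketch mirrors a position-sensor reading onto a virtual robot each frame. It is a simplified sketch, not the patent's implementation; Pose, VirtualRobot and the position_sensor.read() call are assumed names introduced only for illustration.

```python
from dataclasses import dataclass


@dataclass
class Pose:
    x: float
    y: float
    heading: float  # radians


class VirtualRobot:
    """Virtual counterpart whose motion mirrors the first physical robot (step S13)."""

    def __init__(self, model_id: str):
        self.model_id = model_id
        self.pose = Pose(0.0, 0.0, 0.0)

    def sync_with(self, measured_pose: Pose) -> None:
        """Copy the latest position-sensor reading so the avatar moves in lockstep."""
        self.pose = measured_pose


def update_first_virtual_robot(virtual: VirtualRobot, position_sensor) -> VirtualRobot:
    """Called every frame: read the position sensor and redraw the virtual robot at that pose."""
    virtual.sync_with(position_sensor.read())  # hypothetical read() returning a Pose
    return virtual
```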
  • referring to FIG. 3, FIG. 3 is a flowchart of a method for virtual interaction provided by another embodiment of the present application. As shown in FIG. 3, the method includes the following steps in addition to steps S11-S13:
  • Step S14: based on the data measured by the at least one sensor, draw a virtual component in the virtual scene, so as to display the virtual scene containing the virtual component through the display terminal;
  • Step S15: obtain a first control instruction for the first physical robot, where the first control instruction is used to control the first physical robot and the first virtual robot to move synchronously, so that the first virtual robot interacts with the virtual component in the virtual scene;
  • Step S16: in response to the first control instruction, control the first virtual robot to interact with the virtual component in the virtual scene.
  • in this embodiment, the virtual component is a virtual component with an interactive function; specifically, it is drawn based on the data measured by the at least one sensor on the first physical robot.
  • in one implementation, after step S12 is executed, the virtual component can be further drawn in the drawn virtual scene; in this way, the virtual component is superimposed on the drawn virtual scene and displayed through the display terminal.
  • by watching the virtual scene containing the virtual component on the display terminal, the user can genuinely experience the real scene currently around the physical robot on the one hand, and, on the other hand, enjoy greater visual interest because the virtual scene contains the virtual component.
  • in another implementation, after step S12 is executed, the virtual component can also be drawn in the real scene currently around the user.
  • in this way, the user can genuinely experience the real scene around the physical robot by watching the virtual scene displayed on the display terminal, and can also see the virtual component with his or her own eyes in the surrounding real scene, which makes it easy for the user to combine the virtual scene and the virtual component being seen and enhances the visual richness and fun.
  • in one implementation, the virtual scene containing the virtual component is displayed through the display terminal and the user watches it there; if the user wants to experience the interactive function of the virtual component, a control operation can be made for the first physical robot, so that the processor executes step S15 and obtains the first control instruction.
  • in another implementation, there are other physical robots in the real scene where the first physical robot is located, that is, multiple physical robots are located in the same real scene as the first physical robot; if the user wants to experience the interaction of these multiple physical robots in the drawn virtual scene, a control operation can likewise be made for the first physical robot, so that the processor performs a step similar to step S15 and obtains a second control instruction.
  • specifically, the processor may obtain the first control instruction in, but not limited to, the following implementation manners:
  • first implementation manner: obtaining a first remote control instruction from a remote controller, the first physical robot being adapted to the remote controller;
  • second implementation manner: obtaining a touch operation collected by a touch device, and processing the touch operation to obtain the first control instruction;
  • third implementation manner: obtaining a gesture image collected by an image collection device, and processing the gesture image to obtain the first control instruction;
  • fourth implementation manner: obtaining audio data collected by an audio collection device, and processing the audio data to obtain the first control instruction.
  • the following describes how, under the above four implementation manners, the processor controls the virtual robot to interact with the virtual component.
  • (1) when the user holds a remote controller adapted to the first physical robot and the distance to the first physical robot is within the coverage of the remote-control signal: the user can press a button on the remote controller, so that the remote controller generates the first remote control instruction and transmits it to the processor; after receiving it, the processor controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the virtual component.
  • (2) when the user has no remote controller adapted to the first physical robot at hand, or the distance between the user and the first physical robot exceeds the coverage of the remote-control signal:
  • a) if the processor is connected to a touch device, the user can make a touch operation; the touch device collects the user's touch operation and transmits it to the processor, and the processor processes the touch operation to determine the first control instruction, then controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the virtual component.
  • b) if the processor is connected to an image collection device, the user can make a gesture; the image collection device collects the user's gesture image and transmits it to the processor, and the processor processes the gesture image to determine the first control instruction, then controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the virtual component.
  • c) if the processor is connected to an audio collection device, the user can speak the audio corresponding to the first control command; the audio collection device transmits the collected audio data to the processor, and the processor processes the audio data to determine the first control instruction, then controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the virtual component.
  • with the above technical solution, the user controls the physical robot to move in the real scene by pressing the remote controller, making a touch operation, making a gesture, or speaking audio, so that the virtual robot corresponding to the physical robot moves synchronously in the drawn virtual scene; in this way, the user controls the physical robot so that the corresponding virtual robot interacts with the virtual component in the drawn virtual scene, which makes human-computer interaction more engaging.
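  • The four input paths above all converge on the same first control instruction. The sketch below shows one hedged way a processor might normalize them; the Source enum, the command strings and the mapping tables are illustrative assumptions, and it presumes gesture recognition and speech recognition have already happened upstream.

```python
from enum import Enum, auto
from typing import Callable, Dict


class Source(Enum):
    REMOTE = auto()   # first remote-control instruction from the adapted remote controller
    TOUCH = auto()    # touch operation collected by a touch device
    GESTURE = auto()  # gesture image collected by an image-collection device
    AUDIO = auto()    # audio data collected by an audio-collection device


def instruction_from_touch(touch_event: dict) -> str:
    """Illustrative mapping only: a swipe direction becomes a motion command."""
    swipes = {"up": "forward", "down": "backward", "left": "turn_left", "right": "turn_right"}
    return swipes.get(touch_event.get("swipe", ""), "stop")


def instruction_from_gesture(gesture_label: str) -> str:
    """Assumes an upstream recognizer has already labelled the gesture image."""
    gestures = {"palm_forward": "stop", "point_left": "turn_left", "point_right": "turn_right"}
    return gestures.get(gesture_label, "stop")


def instruction_from_audio(transcript: str) -> str:
    """Assumes speech recognition has already produced a transcript of the spoken command."""
    return transcript.strip().lower().replace(" ", "_") or "stop"


def first_control_instruction(source: Source, payload) -> str:
    """Normalize raw input from any of the four sources into one control instruction."""
    handlers: Dict[Source, Callable] = {
        Source.REMOTE: lambda p: p,            # the remote already delivers an instruction
        Source.TOUCH: instruction_from_touch,
        Source.GESTURE: instruction_from_gesture,
        Source.AUDIO: instruction_from_audio,
    }
    return handlers[source](payload)


# The processor would then drive the first physical robot with this instruction, and the
# first virtual robot follows synchronously in the drawn virtual scene.
print(first_control_instruction(Source.AUDIO, "turn left"))  # -> "turn_left"
```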
  • referring to FIG. 4, FIG. 4 is a flowchart of a method for virtual interaction provided by another embodiment of the present application. As shown in FIG. 4, the method includes the following steps in addition to steps S11-S12:
  • Step S13': obtain the respective position data of a plurality of other physical robots located in the same real scene as the first physical robot;
  • Step S14': based on the respective position data of the plurality of other physical robots, draw other virtual robots corresponding to each of the plurality of other physical robots in the virtual scene, the plurality of other physical robots being physical robots different from the first physical robot, so as to display the virtual scene containing the other virtual robots through the display terminal.
  • in this embodiment, there are multiple other physical robots in the real scene where the first physical robot is located, that is, multiple physical robots are located in the same real scene as the first physical robot; in order to let the user see the respective positions of these other physical robots in that real scene, the processor can obtain the respective position data of the multiple other physical robots located in the same real scene as the first physical robot.
  • specifically, the multiple other physical robots located in the same real scene as the first physical robot each have a position sensor and are all connected to the processor, and their respective position sensors transmit the measured position data to the processor.
  • after the processor obtains the respective position data of the plurality of other physical robots and executes step S12, it may continue to draw, in the drawn virtual scene, the other virtual robots corresponding to each of the plurality of other physical robots. Drawing the other virtual robots corresponding to the other physical robots is similar to drawing the first virtual robot corresponding to the first physical robot, and will not be repeated here.
  • with the above technical solution, the other virtual robots corresponding to the other physical robots in the same real scene are superimposed on the drawn virtual scene and displayed through the display terminal, so the user can learn the positions of the other physical robots in the real scene by watching the displayed virtual scene, which improves the visual interest.
  • in another implementation, steps S13'-S14' and step S13 may both be implemented; in this case, the virtual robots corresponding to all the physical robots are drawn in the drawn virtual scene and displayed through the display terminal.
  • the user can then learn the relative positions of all the physical robots in the real scene by watching the virtual scene containing the virtual robots corresponding to all the physical robots on the display terminal, which improves the visual interest.
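  • A minimal sketch of steps S13'-S14' might look as follows, assuming each other physical robot reports its position under a robot identifier; the data structures and identifiers are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, Tuple


@dataclass
class OtherVirtualRobot:
    robot_id: str
    position: Tuple[float, float]


def draw_other_virtual_robots(
    position_reports: Dict[str, Tuple[float, float]]
) -> Dict[str, OtherVirtualRobot]:
    """Steps S13'-S14': one virtual robot per reporting physical robot in the same real scene."""
    return {rid: OtherVirtualRobot(rid, pos) for rid, pos in position_reports.items()}


# Example: position data received from two other physical robots sharing the scene.
others = draw_other_virtual_robots({"robot_b": (1.2, 0.5), "robot_c": (-0.8, 2.1)})
for robot in others.values():
    print(robot.robot_id, robot.position)  # relative positions shown on the display terminal
```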
  • referring to FIG. 5, FIG. 5 is a flowchart of a method for virtual interaction provided by another embodiment of the present application. As shown in FIG. 5, the method includes the following steps in addition to steps S11-S12 and steps S13'-S14':
  • Step S15': obtain a second control instruction for the first physical robot, where the second control instruction is used to control the first physical robot and the first virtual robot corresponding to the first physical robot to move synchronously, so that the first virtual robot interacts with the other virtual robots in the virtual scene;
  • Step S16': in response to the second control instruction, control the first virtual robot to interact with the other virtual robots in the virtual scene.
  • in this embodiment, the virtual robots corresponding to all the physical robots are drawn in the drawn virtual scene and displayed through the display terminal, so that the user knows the relative positions of all the physical robots in the real scene.
  • if the user then wants to experience the interaction of the multiple physical robots located in the same real scene within the drawn virtual scene, a control operation can be made for the first physical robot so that the processor performs a step similar to step S15 and obtains the second control instruction.
  • the following describes how the processor controls the interaction between the first virtual robot and other virtual robots.
  • (1) when the user holds a remote controller adapted to the first physical robot and the distance to the first physical robot is within the coverage of the remote-control signal: the user can press a button on the remote controller, so that the remote controller generates the first remote control instruction and transmits it to the processor; after receiving it, the processor controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the other virtual robots.
  • (2) when the user has no remote controller adapted to the first physical robot at hand, or the distance between the user and the first physical robot exceeds the coverage of the remote-control signal:
  • a) if the processor is connected to a touch device, the user can make a touch operation; the touch device collects the touch operation and transmits it to the processor, which processes it to determine the control instruction, then controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the other virtual robots.
  • b) if the processor is connected to an image collection device, the user can make a gesture; the image collection device collects the gesture image and transmits it to the processor, which processes it to determine the control instruction, then controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the other virtual robots.
  • c) if the processor is connected to an audio collection device, the user can speak the audio corresponding to the control command; the audio collection device transmits the collected audio data to the processor, which processes it to determine the control instruction, then controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the other virtual robots.
  • with the above technical solution, the user controls the physical robot to move in the real scene by pressing the remote controller, making a touch operation, making a gesture, or speaking audio, so that the virtual robot corresponding to the physical robot moves synchronously in the drawn virtual scene; in this way, the user controls the physical robot so that the corresponding virtual robot interacts with the other virtual robots in the drawn virtual scene, which makes human-computer interaction more engaging.
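  • How the drawn entities are judged to "interact" is left open by the text; one simple assumption is a proximity test in the virtual scene, sketched below with illustrative positions and an arbitrary interaction radius.

```python
import math
from typing import Tuple


def within_reach(a: Tuple[float, float], b: Tuple[float, float], radius: float = 0.5) -> bool:
    """Assume interaction triggers when two drawn entities come within `radius` of each other."""
    return math.dist(a, b) <= radius


def step_interaction(first_pos: Tuple[float, float], other_pos: Tuple[float, float]) -> str:
    """After the first virtual robot has moved in sync with the physical robot, decide whether
    it interacts with another virtual robot (or with a virtual component) in the virtual scene."""
    return "interact" if within_reach(first_pos, other_pos) else "idle"


print(step_interaction((0.0, 0.0), (0.3, 0.2)))  # -> "interact"
print(step_interaction((0.0, 0.0), (3.0, 2.0)))  # -> "idle"
```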
  • referring to FIG. 6, FIG. 6 is a schematic diagram of a physical robot provided by an embodiment of the present application.
  • as shown in FIG. 6, the physical robot includes:
  • at least one sensor 601, used to measure the real scene in the current measurement range;
  • a processor 602, connected to the at least one sensor, used to obtain the data measured by the at least one sensor measuring the real scene in the current measurement range and to execute the virtual interaction method described in the above embodiments of the present application.
  • FIG. 7 is a schematic diagram of a display terminal provided by an embodiment of the present application. As shown in FIG. 7, the display terminal includes:
  • the communication component 701 is configured to communicate with the first physical robot to obtain data measured by at least one sensor on the first physical robot to measure the real scene in the current measurement range;
  • the processor 702 is configured to perform the virtual interaction method described in the foregoing embodiments of the present application;
  • the display component 703 is connected to the processor and is used to display a virtual scene corresponding to the real scene in the current measurement range.
  • the display component is a touch screen for collecting touch operations; or an integrated touch panel in the display terminal is connected to the processor for collecting touch operations.
  • an image acquisition component is integrated in the display terminal, connected to the processor, and used for acquiring gesture images.
  • an audio collection component is integrated in the display terminal, connected to the processor, and used for collecting audio data.
  • the display terminal is smart glasses, a smart phone or a tablet computer.
  • FIG. 8 is a schematic diagram of a virtual interaction system provided by an embodiment of the present application.
  • the virtual interaction system includes:
  • the first physical robot 801 has at least one sensor for measuring the real scene in the current measurement range;
  • the data processing server 802 is connected to the first physical robot and is used to perform the method described in the first aspect of the present application.
  • optionally, the system further includes:
  • the display terminal 803, connected to the data processing server, used to display a virtual scene corresponding to the real scene in the current measurement range.
  • optionally, the system further includes:
  • the remote controller 804, adapted to the first physical robot, used to generate a first remote control instruction.
  • optionally, the system further includes:
  • the touch control device 805, connected to the data processing server, used to collect touch operations.
  • optionally, the system further includes:
  • the image acquisition device 806, connected to the data processing server, used to collect gesture images.
  • optionally, the system further includes:
  • the audio collection device 807, connected to the data processing server, used to collect audio data.
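  • The composition of the system can be pictured with the sketch below; the dataclass fields simply mirror the components listed above (robot, data processing server, display terminal and optional input devices), and the string identifiers are placeholders rather than anything defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DataProcessingServer:
    """Runs the virtual-interaction method on data streamed from the first physical robot."""
    connected_robot: str
    input_devices: List[str] = field(default_factory=list)

    def render_for(self, display_terminal: str) -> str:
        return f"virtual scene from {self.connected_robot} shown on {display_terminal}"


@dataclass
class VirtualInteractionSystem:
    first_physical_robot: str
    server: DataProcessingServer
    display_terminal: Optional[str] = None    # e.g. smart glasses, smartphone or tablet
    remote_controller: Optional[str] = None   # adapted to the first physical robot
    touch_device: Optional[str] = None
    image_acquisition_device: Optional[str] = None
    audio_collection_device: Optional[str] = None


system = VirtualInteractionSystem(
    first_physical_robot="robot_a",
    server=DataProcessingServer(connected_robot="robot_a", input_devices=["touch", "audio"]),
    display_terminal="smart_glasses",
    remote_controller="rc_01",
)
print(system.server.render_for(system.display_terminal))
```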
  • another embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps in the method described in any of the above embodiments of the present application.
  • another embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the steps in the method described in any of the above embodiments of the present application.
  • as for the device embodiments, since they are basically similar to the method embodiments, the description is relatively brief, and for relevant parts reference may be made to the description of the method embodiments.
  • those skilled in the art should understand that the embodiments of the present invention may be provided as methods, devices, or computer program products. Therefore, the embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
  • these computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processing machine, or other programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions may also be loaded onto a computer or other programmable data processing terminal device, so that a series of operation steps are performed on the computer or other programmable terminal device to produce computer-implemented processing, and the instructions executed on the computer or other programmable terminal device thereby provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Abstract

A method for virtual interaction, a physical robot, a display terminal and a system, for optimizing the human-computer interaction experience. The method for virtual interaction includes: obtaining data measured by at least one sensor on a first physical robot measuring a real scene within a current measurement range, the current measurement range changing as the first physical robot moves in the real scene (S11); and drawing, according to the data measured by the at least one sensor, a virtual scene corresponding to the real scene within the current measurement range, so as to display the virtual scene through a display terminal (S12).

Description

Method for virtual interaction, physical robot, display terminal and system
This application claims priority to Chinese patent application No. 201811291700.5, filed with the Chinese Patent Office on October 31, 2018 and entitled "Method for virtual interaction, physical robot, display terminal and system", the entire content of which is incorporated herein by reference.
Technical Field
Embodiments of the present application relate to the field of robots, and in particular to a method for virtual interaction, a physical robot, a display terminal and a system.
Background
As robots become increasingly widespread, there are more and more situations in daily work and life in which people need to interact with robots.
One common scenario is that the user and the physical robot are in the same real scene and the distance between them is short, and the user uses a remote controller to remotely control the physical robot. However, this mode of human-computer interaction requires that the distance between the user and the physical robot not exceed the coverage of the remote-control signal; if the distance between the user and the physical robot exceeds the coverage of the remote-control signal, this mode of human-computer interaction cannot be used.
Another common scenario is to simulate the user's interaction with a virtual robot in a virtual scene. However, the virtual scene in this mode of human-computer interaction is designed artificially in advance and has nothing to do with the real scene; therefore, the experience it brings to the user is not realistic enough.
Summary
The present application provides a method for virtual interaction, a physical robot, a display terminal and a system, so as to optimize the human-computer interaction experience.
A first aspect of the embodiments of the present application provides a method for virtual interaction, the method including:
obtaining data measured by at least one sensor on a first physical robot measuring a real scene within a current measurement range, the current measurement range changing as the first physical robot moves in the real scene;
drawing, according to the data measured by the at least one sensor, a virtual scene corresponding to the real scene within the current measurement range, so as to display the virtual scene through a display terminal.
A second aspect of the embodiments of the present application provides a physical robot, including:
at least one sensor, configured to measure the real scene within the current measurement range;
a processor, connected to the at least one sensor and configured to obtain the data measured by the at least one sensor measuring the real scene within the current measurement range, and to perform the method described in the first aspect of the present application.
A third aspect of the embodiments of the present application provides a display terminal, including:
a communication component, configured to communicate with a first physical robot to obtain data measured by at least one sensor on the first physical robot measuring the real scene within the current measurement range;
a processor, configured to perform the method described in the first aspect of the present application;
a display component, connected to the processor and configured to display a virtual scene corresponding to the real scene within the current measurement range.
A fourth aspect of the embodiments of the present application provides a system for virtual interaction, the system including:
a first physical robot, having at least one sensor configured to measure the real scene within the current measurement range;
a data processing server, connected to the first physical robot and configured to perform the method described in the first aspect of the present application.
A fifth aspect of the embodiments of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the method described in the first aspect of the present application.
With the above technical solution, a corresponding virtual scene is drawn according to the data measured by the sensors on the physical robot measuring the real scene within the current measurement range and is displayed through the display terminal, so that by watching the virtual scene displayed on the display terminal the user can genuinely experience the real scene currently surrounding the physical robot and is effectively brought into that real scene; this technical solution neither requires the distance between the user and the physical robot to be within the coverage of the remote-control signal, nor requires the user and the physical robot to be in the same real scene. Moreover, as the physical robot moves in the real scene, the data measured by the sensors on the physical robot measuring the real scene within the current measurement range changes synchronously, the drawn virtual scene also changes synchronously and is displayed through the display terminal, and the user can experience, in real time, the real scene currently surrounding the physical robot by watching the real-time changing virtual scene displayed on the display terminal.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for virtual interaction proposed by an embodiment of the present application;
FIG. 2 is a flowchart of a method for virtual interaction proposed by another embodiment of the present application;
FIG. 3 is a flowchart of a method for virtual interaction proposed by another embodiment of the present application;
FIG. 4 is a flowchart of a method for virtual interaction proposed by another embodiment of the present application;
FIG. 5 is a flowchart of a method for virtual interaction proposed by another embodiment of the present application;
FIG. 6 is a schematic diagram of a physical robot provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a display terminal provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a system for virtual interaction provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
First, an embodiment of the present application proposes a method for virtual interaction, which may be executed by a processor with an information processing function. The processor may be disposed inside a physical robot (for example, the first physical robot in any of the following embodiments of the present application, or any physical robot other than the first physical robot), inside a display terminal (for example, a terminal with both a display function and an information processing function), or inside a data processing server (for example, a server with a data processing function).
Referring to FIG. 1, FIG. 1 is a flowchart of a method for virtual interaction proposed by an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
Step S11: obtaining data measured by at least one sensor on a first physical robot measuring a real scene within a current measurement range, the current measurement range changing as the first physical robot moves in the real scene;
Step S12: drawing, according to the data measured by the at least one sensor, a virtual scene corresponding to the real scene within the current measurement range, so as to display the virtual scene through a display terminal.
In this embodiment, at least one sensor is provided on the first physical robot, and the at least one sensor may be a real-scene measurement sensor, that is, a sensor for measuring the real scene around the first physical robot. For example, the at least one sensor includes, but is not limited to, an image sensor, a camera, an angular velocity sensor, an infrared sensor, a lidar, and the like. Correspondingly, the data measured by the at least one sensor on the first physical robot includes, but is not limited to, depth data, orientation data, color data, and the like.
It can be understood that as the first physical robot moves in the real scene, the current measurement range of the at least one sensor changes accordingly. For example, assuming the first physical robot is walking in a house in the real world, as the first physical robot moves from the southeast corner of the house to the northwest corner, the current measurement range of the at least one sensor also shifts from the southeast corner of the house to the northwest corner, and correspondingly the data measured by the at least one sensor on the first physical robot changes as well. That is to say, the data measured by the at least one sensor changes in real time, is synchronized with the real scene currently surrounding the first physical robot, and represents that real scene.
After the data measured by the at least one sensor on the first physical robot is obtained, step S12 is executed to draw a virtual scene corresponding to the real scene within the current measurement range of the at least one sensor. For the specific method of drawing the virtual scene, reference may be made to the related art. It can be understood that as the data measured by the at least one sensor in step S11 changes in real time, the correspondingly drawn virtual scene also changes in real time and is synchronized with the real scene currently surrounding the first physical robot. The drawn virtual scene is displayed through the display terminal.
With the above technical solution, a corresponding virtual scene is drawn according to the data measured by the sensors on the physical robot measuring the real scene within the current measurement range and is displayed through the display terminal, so that by watching the virtual scene displayed on the display terminal the user can genuinely experience the real scene currently surrounding the physical robot and is effectively brought into it; this technical solution neither requires the distance between the user and the physical robot to be within the coverage of the remote-control signal, nor requires the user and the physical robot to be in the same real scene. Moreover, as the physical robot moves in the real scene, the data measured by the sensors on the physical robot changes synchronously, the drawn virtual scene also changes synchronously and is displayed through the display terminal, and the user can experience, in real time, the real scene currently surrounding the physical robot by watching the real-time changing virtual scene displayed on the display terminal.
In combination with the above embodiments, in another embodiment of the present application, the at least one sensor includes a position sensor. Referring to FIG. 2, FIG. 2 is a flowchart of a method for virtual interaction provided by another embodiment of the present application. As shown in FIG. 2, in addition to steps S11-S12 the method further includes the following step:
Step S13: drawing, according to the data measured by the position sensor, a first virtual robot corresponding to the first physical robot in the virtual scene, so as to display the virtual scene containing the first virtual robot through the display terminal.
The movement of the first virtual robot in the virtual scene is synchronized with the movement of the first physical robot in the real scene.
In this embodiment, the at least one sensor further includes a position sensor. Thus, according to the data measured by the position sensor, after step S12 is executed, the first virtual robot corresponding to the first physical robot may be further drawn in the drawn virtual scene. The correspondence between the first physical robot and the first virtual robot means that the movement of the first physical robot in the real scene is synchronized with the movement of the first virtual robot in the drawn virtual scene; that is, the first virtual robot is the image obtained by mapping the first physical robot into the drawn virtual scene.
It can be understood that as the first physical robot moves in the real scene, the data measured by the position sensor on the first physical robot also changes accordingly. As that data changes in real time, the first virtual robot drawn in step S13 also changes in real time and is synchronized with the movement of the first physical robot.
With the above technical solution, the virtual robot corresponding to the physical robot is superimposed on the drawn virtual scene and displayed through the display terminal. By watching the virtual scene containing the virtual robot displayed on the display terminal, the user can, on the one hand, genuinely experience the real scene currently surrounding the physical robot and learn the position of the physical robot within it, and, on the other hand, enjoy greater visual interest because the virtual scene contains the virtual robot.
Moreover, as the physical robot moves in the real scene, the data measured by the position sensor on the physical robot changes synchronously, and the drawn virtual robot also moves synchronously and is displayed through the display terminal; by watching the virtual robot that moves synchronously with the physical robot, the user can visually perceive the movement of the physical robot in the real scene in real time.
In combination with the above embodiments, in another embodiment of the present application, referring to FIG. 3, FIG. 3 is a flowchart of a method for virtual interaction provided by another embodiment of the present application. As shown in FIG. 3, in addition to steps S11-S13 the method further includes the following steps:
Step S14: drawing, according to the data measured by the at least one sensor, a virtual component in the virtual scene, so as to display the virtual scene containing the virtual component through the display terminal;
Step S15: obtaining a first control instruction for the first physical robot, the first control instruction being used to control the first physical robot and the first virtual robot to move synchronously, so that the first virtual robot interacts with the virtual component in the virtual scene;
Step S16: in response to the first control instruction, controlling the first virtual robot to interact with the virtual component in the virtual scene.
In this embodiment, the virtual component is a virtual component with an interactive function. Specifically, the virtual component is a virtual, interactive component drawn according to the data measured by the at least one sensor on the first physical robot.
In one implementation, after step S12 is executed, the virtual component may be further drawn in the drawn virtual scene; in this way, the virtual component is superimposed on the drawn virtual scene and displayed through the display terminal. By watching the virtual scene containing the virtual component displayed on the display terminal, the user can genuinely experience the real scene currently surrounding the physical robot on the one hand and, on the other hand, enjoy greater visual interest because the virtual scene contains the virtual component.
In another implementation, after step S12 is executed, the virtual component may also be drawn in the real scene currently surrounding the user. In this way, on the one hand the user can genuinely experience the real scene surrounding the physical robot by watching the virtual scene displayed on the display terminal, and on the other hand the user can also see the virtual component with his or her own eyes in the surrounding real scene, which makes it easy for the user to combine the virtual scene and the virtual component being seen and enhances the visual richness and fun.
In one implementation, the virtual scene containing the virtual component is displayed through the display terminal and the user watches it there; if the user wants to experience the interactive function of the virtual component, a control operation may be made for the first physical robot, so that the processor executes step S15 and obtains the first control instruction.
In another implementation, there are other physical robots in the real scene where the first physical robot is located, that is, there are multiple physical robots located in the same real scene as the first physical robot; if the user wants to experience the interaction of these multiple physical robots in the drawn virtual scene, a control operation may likewise be made for the first physical robot, so that the processor performs a step similar to step S15 and obtains a second control instruction.
Specifically, the processor may obtain the first control instruction in, but not limited to, the following implementation manners:
First implementation manner: obtaining a first remote control instruction from a remote controller, the first physical robot being adapted to the remote controller.
Second implementation manner: obtaining a touch operation collected by a touch device, and processing the touch operation to obtain the first control instruction.
Third implementation manner: obtaining a gesture image collected by an image collection device, and processing the gesture image to obtain the first control instruction.
Fourth implementation manner: obtaining audio data collected by an audio collection device, and processing the audio data to obtain the first control instruction.
The following describes how, under the above four implementation manners, the processor controls the virtual robot to interact with the virtual component.
(1) When the user holds a remote controller adapted to the first physical robot and the distance to the first physical robot is within the coverage of the remote-control signal:
The user can press a button on the remote controller, so that the remote controller generates a first remote control instruction and transmits it to the processor. After receiving the first remote control instruction, the processor controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the virtual component.
(2) When the user has no remote controller adapted to the first physical robot at hand, or the distance between the user and the first physical robot exceeds the coverage of the remote-control signal:
a) If the processor is connected to a touch device, the user can make a touch operation; the touch device collects the user's touch operation and transmits it to the processor, and the processor processes the touch operation to determine the first control instruction, then controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the virtual component.
b) If the processor is connected to an image collection device, the user can make a gesture; the image collection device collects the user's gesture image and transmits it to the processor, and the processor processes the gesture image to determine the first control instruction, then controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the virtual component.
c) If the processor is connected to an audio collection device, the user can speak the audio corresponding to the first control command; the audio collection device transmits the collected audio data to the processor, and the processor processes the audio data to determine the first control instruction, then controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the virtual component.
With the above technical solution, the user controls the physical robot to move in the real scene by pressing the remote controller, making a touch operation, making a gesture, or speaking audio, so that the virtual robot corresponding to the physical robot moves synchronously in the drawn virtual scene; in this way the user controls the physical robot so that the corresponding virtual robot interacts with the virtual component in the drawn virtual scene, which makes human-computer interaction more engaging.
In combination with the above embodiments, in another embodiment of the present application, referring to FIG. 4, FIG. 4 is a flowchart of a method for virtual interaction provided by another embodiment of the present application. As shown in FIG. 4, in addition to steps S11-S12 the method further includes the following steps:
S13': obtaining the respective position data of a plurality of other physical robots located in the same real scene as the first physical robot;
S14': drawing, based on the respective position data of the plurality of other physical robots, other virtual robots corresponding to each of the plurality of other physical robots in the virtual scene, the plurality of other physical robots being physical robots different from the first physical robot, so as to display the virtual scene containing the other virtual robots through the display terminal.
In this embodiment, there are multiple other physical robots in the real scene where the first physical robot is located, that is, there are multiple physical robots located in the same real scene as the first physical robot. In order to let the user see the respective positions of the multiple other physical robots in that real scene, the processor can obtain the respective position data of the multiple other physical robots located in the same real scene as the first physical robot. Specifically, the multiple other physical robots located in the same real scene as the first physical robot each have a position sensor and are all connected to the processor, and their respective position sensors transmit the measured position data to the processor.
After the processor obtains the respective position data of the multiple other physical robots and executes step S12, it may further draw, in the drawn virtual scene, the other virtual robots corresponding to each of the multiple other physical robots. Drawing the other virtual robots corresponding to the other physical robots is similar to drawing the first virtual robot corresponding to the first physical robot, and will not be repeated here.
With the above technical solution, the other virtual robots corresponding to the other physical robots located in the same real scene as the physical robot are superimposed on the drawn virtual scene and displayed through the display terminal. By watching the virtual scene containing the other virtual robots corresponding to the other physical robots, the user can learn the positions of the other physical robots in the real scene, which improves the visual interest.
In another implementation, steps S13'-S14' and step S13 may both be implemented; in this way, the virtual robots corresponding to all the physical robots are drawn in the drawn virtual scene and displayed through the display terminal. By watching the virtual scene containing the virtual robots corresponding to all the physical robots, the user can learn the relative positions of all the physical robots in the real scene, which improves the visual interest.
In combination with the above embodiments, in another embodiment of the present application, referring to FIG. 5, FIG. 5 is a flowchart of a method for virtual interaction provided by another embodiment of the present application. As shown in FIG. 5, in addition to steps S11-S12 and steps S13'-S14' the method further includes the following steps:
Step S15': obtaining a second control instruction for the first physical robot, the second control instruction being used to control the first physical robot and the first virtual robot corresponding to the first physical robot to move synchronously, so that the first virtual robot interacts with the other virtual robots in the virtual scene;
Step S16': in response to the second control instruction, controlling the first virtual robot to interact with the other virtual robots in the virtual scene.
In this embodiment, the virtual robots corresponding to all the physical robots are drawn in the drawn virtual scene and displayed through the display terminal, so that the user learns the relative positions of all the physical robots in the real scene; if the user then wants to experience the interaction of the multiple physical robots located in the same real scene within the drawn virtual scene, a control operation may be made for the first physical robot so that the processor performs a step similar to step S15 and obtains the second control instruction.
The following describes how the processor controls the first virtual robot to interact with the other virtual robots.
(1) When the user holds a remote controller adapted to the first physical robot and the distance to the first physical robot is within the coverage of the remote-control signal:
The user can press a button on the remote controller, so that the remote controller generates a first remote control instruction and transmits it to the processor. After receiving the first remote control instruction, the processor controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the other virtual robots.
(2) When the user has no remote controller adapted to the first physical robot at hand, or the distance between the user and the first physical robot exceeds the coverage of the remote-control signal:
a) If the processor is connected to a touch device, the user can make a touch operation; the touch device collects the user's touch operation and transmits it to the processor, and the processor processes the touch operation to determine the first control instruction, then controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the other virtual robots.
b) If the processor is connected to an image collection device, the user can make a gesture; the image collection device collects the user's gesture image and transmits it to the processor, and the processor processes the gesture image to determine the first control instruction, then controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the other virtual robots.
c) If the processor is connected to an audio collection device, the user can speak the audio corresponding to the first control command; the audio collection device transmits the collected audio data to the processor, and the processor processes the audio data to determine the first control instruction, then controls the movement of the first physical robot, indirectly controls the synchronous movement of the first virtual robot, and thereby controls the first virtual robot to interact with the other virtual robots.
With the above technical solution, the user controls the physical robot to move in the real scene by pressing the remote controller, making a touch operation, making a gesture, or speaking audio, so that the virtual robot corresponding to the physical robot moves synchronously in the drawn virtual scene; in this way the user controls the physical robot so that the corresponding virtual robot interacts with the other virtual robots in the drawn virtual scene, which makes human-computer interaction more engaging.
Based on the same inventive concept, an embodiment of the present application provides a physical robot, which may be the first physical robot in the above embodiments or any physical robot other than the first physical robot. Referring to FIG. 6, FIG. 6 is a schematic diagram of a physical robot provided by an embodiment of the present application. As shown in FIG. 6, the physical robot includes:
at least one sensor 601, configured to measure the real scene within the current measurement range;
a processor 602, connected to the at least one sensor and configured to obtain the data measured by the at least one sensor measuring the real scene within the current measurement range, and to perform the method for virtual interaction described in the above embodiments of the present application.
Based on the same inventive concept, an embodiment of the present application provides a display terminal. Referring to FIG. 7, FIG. 7 is a schematic diagram of a display terminal provided by an embodiment of the present application. As shown in FIG. 7, the display terminal includes:
a communication component 701, configured to communicate with the first physical robot to obtain data measured by at least one sensor on the first physical robot measuring the real scene within the current measurement range;
a processor 702, configured to perform the method for virtual interaction described in the above embodiments of the present application;
a display component 703, connected to the processor and configured to display a virtual scene corresponding to the real scene within the current measurement range.
Optionally, the display component is a touch screen used to collect touch operations; or a touch panel is integrated in the display terminal, connected to the processor, and used to collect touch operations.
Optionally, an image acquisition component is integrated in the display terminal, connected to the processor, and used to acquire gesture images.
Optionally, an audio collection component is integrated in the display terminal, connected to the processor, and used to collect audio data.
Optionally, the display terminal is smart glasses, a smartphone or a tablet computer.
Based on the same inventive concept, an embodiment of the present application provides a system for virtual interaction. Referring to FIG. 8, FIG. 8 is a schematic diagram of a system for virtual interaction provided by an embodiment of the present application. As shown in FIG. 8, the system for virtual interaction includes:
a first physical robot 801, having at least one sensor configured to measure the real scene within the current measurement range;
a data processing server 802, connected to the first physical robot and configured to perform the method described in the first aspect of the present application.
Optionally, as shown in FIG. 8, the system further includes:
a display terminal 803, connected to the data processing server and configured to display a virtual scene corresponding to the real scene within the current measurement range.
Optionally, the system further includes:
a remote controller 804, adapted to the first physical robot and configured to generate a first remote control instruction.
Optionally, the system further includes:
a touch device 805, connected to the data processing server and configured to collect touch operations.
Optionally, the system further includes:
an image acquisition device 806, connected to the data processing server and configured to acquire gesture images.
Optionally, the system further includes:
an audio collection device 807, connected to the data processing server and configured to collect audio data.
Based on the same inventive concept, another embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the method described in any of the above embodiments of the present application.
Based on the same inventive concept, another embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor, when executing the program, implementing the steps of the method described in any of the above embodiments of the present application.
As for the device embodiments, since they are basically similar to the method embodiments, the description is relatively brief, and for relevant parts reference may be made to the description of the method embodiments.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another.
Those skilled in the art should understand that the embodiments of the present invention may be provided as methods, devices, or computer program products. Therefore, the embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the methods, terminal devices (systems), and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal device, so that a series of operation steps are performed on the computer or other programmable terminal device to produce computer-implemented processing, and the instructions executed on the computer or other programmable terminal device thereby provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should also be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or terminal device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or terminal device that includes the element.
The method for virtual interaction, physical robot, display terminal and system provided by the present application have been introduced in detail above. Specific examples are used herein to explain the principles and implementation of the application, and the description of the above embodiments is only intended to help understand the method of the application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application based on the ideas of the application. In summary, the content of this specification should not be construed as limiting the application.

Claims (21)

  1. A method for virtual interaction, characterized in that the method comprises:
    obtaining data measured by at least one sensor on a first physical robot measuring a real scene within a current measurement range, the current measurement range changing as the first physical robot moves in the real scene;
    drawing, according to the data measured by the at least one sensor, a virtual scene corresponding to the real scene within the current measurement range, so as to display the virtual scene through a display terminal.
  2. The method according to claim 1, characterized in that the at least one sensor comprises a position sensor; the method further comprises:
    drawing, according to the data measured by the position sensor, a first virtual robot corresponding to the first physical robot in the virtual scene, so as to display the virtual scene containing the first virtual robot through the display terminal;
    wherein the movement of the first virtual robot in the virtual scene is synchronized with the movement of the first physical robot in the real scene.
  3. The method according to claim 2, characterized in that the method further comprises:
    drawing, according to the data measured by the at least one sensor, a virtual component in the virtual scene, so as to display the virtual scene containing the virtual component through the display terminal;
    obtaining a first control instruction for the first physical robot, the first control instruction being used to control the first physical robot and the first virtual robot to move synchronously, so that the first virtual robot interacts with the virtual component in the virtual scene;
    in response to the first control instruction, controlling the first virtual robot to interact with the virtual component in the virtual scene.
  4. The method according to claim 1, characterized in that the method further comprises:
    obtaining respective position data of a plurality of other physical robots located in the same real scene as the first physical robot;
    drawing, based on the respective position data of the plurality of other physical robots, other virtual robots corresponding to each of the plurality of other physical robots in the virtual scene, the plurality of other physical robots being physical robots different from the first physical robot, so as to display the virtual scene containing the other virtual robots through the display terminal.
  5. The method according to claim 4, characterized in that the method further comprises:
    obtaining a second control instruction for the first physical robot, the second control instruction being used to control the first physical robot and the first virtual robot corresponding to the first physical robot to move synchronously, so that the first virtual robot interacts with the other virtual robots in the virtual scene;
    in response to the second control instruction, controlling the first virtual robot to interact with the other virtual robots in the virtual scene.
  6. The method according to claim 3, characterized in that the first physical robot is adapted to a remote controller; obtaining the first control instruction for the first physical robot comprises:
    obtaining a first remote control instruction from the remote controller.
  7. The method according to claim 3, characterized in that obtaining the first control instruction for the first physical robot comprises:
    obtaining a touch operation collected by a touch device;
    processing the touch operation to obtain the first control instruction.
  8. The method according to claim 3, characterized in that obtaining the first control instruction for the first physical robot comprises:
    obtaining a gesture image collected by an image collection device;
    processing the gesture image to obtain the first control instruction.
  9. The method according to claim 3, characterized in that obtaining the first control instruction for the first physical robot comprises:
    obtaining audio data collected by an audio collection device;
    processing the audio data to obtain the first control instruction.
  10. A physical robot, characterized by comprising:
    at least one sensor, configured to measure a real scene within a current measurement range;
    a processor, connected to the at least one sensor and configured to obtain the data measured by the at least one sensor measuring the real scene within the current measurement range, and to perform the method according to any one of claims 1 to 10.
  11. A display terminal, characterized by comprising:
    a communication component, configured to communicate with a first physical robot to obtain data measured by at least one sensor on the first physical robot measuring a real scene within a current measurement range;
    a processor, configured to perform the method according to any one of claims 1 to 9;
    a display component, connected to the processor and configured to display a virtual scene corresponding to the real scene within the current measurement range.
  12. The display terminal according to claim 11, characterized in that the display component is a touch screen used to collect touch operations; or a touch panel is integrated in the display terminal, connected to the processor, and used to collect touch operations.
  13. The display terminal according to claim 11, characterized in that an image acquisition component is integrated in the display terminal, connected to the processor, and used to acquire gesture images.
  14. The display terminal according to claim 11, characterized in that an audio collection component is integrated in the display terminal, connected to the processor, and used to collect audio data.
  15. The display terminal according to claim 11, characterized in that the display terminal is smart glasses, a smartphone or a tablet computer.
  16. A system for virtual interaction, characterized in that the system comprises:
    a first physical robot, having at least one sensor configured to measure a real scene within a current measurement range;
    a data processing server, connected to the first physical robot and configured to perform the method according to any one of claims 1 to 10.
  17. The system according to claim 16, characterized in that the system further comprises:
    a display terminal, connected to the data processing server and configured to display a virtual scene corresponding to the real scene within the current measurement range.
  18. The system according to claim 16, characterized in that the system further comprises:
    a remote controller, adapted to the first physical robot and configured to generate a first remote control instruction.
  19. The system according to claim 16, characterized in that the system further comprises:
    a touch device, connected to the data processing server and configured to collect touch operations.
  20. The system according to claim 16, characterized in that the system further comprises:
    an image acquisition device, connected to the data processing server and configured to acquire gesture images.
  21. The system according to claim 16, characterized in that the system further comprises:
    an audio collection device, connected to the data processing server and configured to collect audio data.
PCT/CN2018/118934 2018-10-31 2018-12-03 Method for virtual interaction, physical robot, display terminal and system WO2020087642A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880038721.8A 2018-10-31 2018-12-03 Method for virtual interaction, physical robot, display terminal and system
US17/242,249 US20210245368A1 (en) 2018-10-31 2021-04-27 Method for virtual interaction, physical robot, display terminal and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811291700.5 2018-10-31
CN201811291700 2018-10-31

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/242,249 Continuation US20210245368A1 (en) 2018-10-31 2021-04-27 Method for virtual interaction, physical robot, display terminal and system

Publications (1)

Publication Number Publication Date
WO2020087642A1 true WO2020087642A1 (zh) 2020-05-07

Family

ID=70462498

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/118934 WO2020087642A1 (zh) 2018-10-31 2018-12-03 虚拟交互的方法、实体机器人、显示终端及系统

Country Status (2)

Country Link
US (1) US20210245368A1 (zh)
WO (1) WO2020087642A1 (zh)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103237166A (zh) * 2013-03-28 2013-08-07 北京东方艾迪普科技发展有限公司 一种基于机器人云台的摄像机控制方法及系统
US20170181383A1 (en) * 2014-05-26 2017-06-29 Institute Of Automation Chinese Academy Of Sciences Pruning Robot System
CN104484522A (zh) * 2014-12-11 2015-04-01 西南科技大学 一种基于现实场景的机器人模拟演练系统的构建方法
CN108090966A (zh) * 2017-12-13 2018-05-29 广州市和声信息技术有限公司 一种适用于虚拟场景的虚拟物体重构方法和系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112732075A (zh) * 2020-12-30 2021-04-30 佛山科学技术学院 一种面向教学实验的虚实融合机器教师教学方法及系统
CN113434044A (zh) * 2021-07-01 2021-09-24 宁波未知数字信息技术有限公司 一种从混合现实到物理实体一体化交互系统

Also Published As

Publication number Publication date
US20210245368A1 (en) 2021-08-12

Similar Documents

Publication Publication Date Title
US11238666B2 (en) Display of an occluded object in a hybrid-reality system
US11699271B2 (en) Beacons for localization and content delivery to wearable devices
US11887246B2 (en) Generating ground truth datasets for virtual reality experiences
US9268410B2 (en) Image processing device, image processing method, and program
US20140317576A1 (en) Method and system for responding to user's selection gesture of object displayed in three dimensions
EP3286601B1 (en) A method and apparatus for displaying a virtual object in three-dimensional (3d) space
JP2013165366A (ja) 画像処理装置、画像処理方法及びプログラム
US10249084B2 (en) Tap event location with a selection apparatus
WO2020151432A1 (zh) 一种智能看房的数据处理方法及系统
WO2017030193A1 (ja) 情報処理装置、情報処理方法、及びプログラム
WO2021067090A1 (en) Automated eyewear device sharing system
US20210263168A1 (en) System and method to determine positioning in a virtual coordinate system
CN104866103B (zh) 一种相对位置确定方法、穿戴式电子设备及终端设备
US20210245368A1 (en) Method for virtual interaction, physical robot, display terminal and system
US10582190B2 (en) Virtual training system
CN110717993B (zh) 一种分体式ar眼镜系统的交互方法、系统及介质
US10632362B2 (en) Pre-visualization device
US10459533B2 (en) Information processing method and electronic device
CN110869881A (zh) 虚拟交互的方法、实体机器人、显示终端及系统
Marques et al. An augmented reality framework for supporting technicians during maintenance procedures
WO2023179369A1 (zh) 控制装置的定位方法、装置、设备、存储介质及计算机程序产品
CN117130465A (zh) 基于xr设备的参数设置方法、装置、设备及存储介质
CN117742555A (zh) 控件交互方法、装置、设备和介质
WO2024049589A1 (en) Authoring tools for creating interactive ar experiences
TW201626159A (zh) 相對位置判斷方法、顯示器控制方法、以及使用這些方法的系統

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18938276; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2018938276; Country of ref document: EP; Effective date: 20210531)
122 Ep: PCT application non-entry in European phase (Ref document number: 18938276; Country of ref document: EP; Kind code of ref document: A1)