WO2018072760A1 - Information processing method, electronic device, and storage medium - Google Patents

Information processing method, electronic device, and storage medium

Info

Publication number
WO2018072760A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
data
image
unit
controlling
Prior art date
Application number
PCT/CN2017/111074
Other languages
English (en)
French (fr)
Inventor
仇稳钧
杨高峰
安宁
Original Assignee
纳恩博(北京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 纳恩博(北京)科技有限公司
Publication of WO2018072760A1

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 - Programme-control systems
    • G05B19/02 - Programme-control systems electric
    • G05B19/04 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers

Definitions

  • the present invention relates to the field of control, and in particular, to an information processing method, an electronic device, and a storage medium.
  • control of the movement of electronic devices in the prior art does not provide an immersive experience to the user.
  • the embodiments of the present invention provide an information processing method, an electronic device, and a storage medium, to at least solve the technical problem that, in the prior art, control of the motion of an electronic device cannot provide the user with an immersive experience.
  • an information processing method is provided, including: obtaining an image acquired by the image acquisition unit, and generating, based on the image, first data to be sent to a second electronic device, where the second electronic device is a control-end device of the first electronic device; the second electronic device can receive the first data sent by the first electronic device and can send, to the first electronic device, second data for controlling the first electronic device, the second data being data obtained by the second electronic device based on detection of its own pose; and receiving the second data sent by the second electronic device, obtaining, based on the second data, a control instruction for controlling an operating state of the first electronic device, and executing the control instruction, where the control instruction is used at least to control a driving unit of the first electronic device to drive the first electronic device to generate a displacement.
  • obtaining an image acquired by the image acquisition unit and generating, based on the image, the first data to be sent to the second electronic device includes: obtaining two-dimensional images respectively acquired by a plurality of image acquisition units, generating a corresponding stereoscopic image by executing a preset stereoscopic image construction algorithm, and sending the stereoscopic image to the second electronic device as the first data.
  • the first electronic device further has an audio collection unit configured to collect audio data of the environment in which the first electronic device is located; after the stereoscopic image is generated, the method further includes: synchronously combining the stereoscopic image with the audio data and sending the synchronously combined stereoscopic image and audio data to the second electronic device as the first data.
  • the second data is a pitch and/or a yaw angle measured by the second electronic device
  • obtaining, based on the second data, a control instruction for controlling the operating state of the first electronic device includes: querying a preset first mapping relationship according to the pitch and/or yaw angle measured by the second electronic device to obtain a corresponding pitch and/or yaw angle of the image acquisition unit of the first electronic device, and generating, based on the obtained pitch and/or yaw angle of the image acquisition unit, a control instruction for controlling the image acquisition unit to adjust its pitch and/or yaw angle.
  • the second data is a measured moving linear velocity and/or angular velocity of the second electronic device
  • obtaining, based on the second data, a control instruction for controlling the operating state of the first electronic device includes: querying a preset second mapping relationship according to the measured moving linear velocity and/or angular velocity of the second electronic device to obtain a corresponding moving linear velocity and/or angular velocity of the first electronic device, and generating, based on the obtained moving linear velocity and/or angular velocity of the first electronic device, a control instruction for controlling the first electronic device to adjust its moving linear velocity and/or angular velocity.
  • an information processing method is provided, including: receiving first data sent by the first electronic device, where the first data is generated by the first electronic device based on an image acquired by its image acquisition unit; and sending, to the first electronic device, second data for controlling the first electronic device, where the second data is data obtained by the second electronic device based on detection of its own pose. After receiving the second data sent by the second electronic device, the first electronic device obtains, based on the second data, a control instruction for controlling the operating state of the first electronic device and executes the control instruction, where the control instruction is used at least to control a driving unit of the first electronic device to drive the first electronic device to generate a displacement.
  • the second data is a pitch and/or yaw angle measured by the second electronic device; before the second data for controlling the first electronic device is sent to the first electronic device, the method further includes measuring the pitch and/or yaw angle of the second electronic device; and sending, to the first electronic device, the second data for controlling the first electronic device includes: sending the pitch and/or yaw angle of the second electronic device to the first electronic device.
  • the second data is the measured moving linear velocity and/or angular velocity of the second electronic device; before the second data for controlling the first electronic device is sent to the first electronic device, the method further includes measuring the moving linear velocity and/or angular velocity of the second electronic device; and sending, to the first electronic device, the second data for controlling the first electronic device includes: sending the moving linear velocity and/or angular velocity of the second electronic device to the first electronic device.
  • a first electronic device is provided, including: an image acquisition unit configured to acquire an image of the environment in which the first electronic device is located; a generating unit configured to generate, based on the image, first data to be sent to a second electronic device, where the second electronic device is a control-end device of the first electronic device, can receive the first data sent by the first electronic device, and can send, to the first electronic device, second data for controlling the first electronic device, the second data being data obtained by the second electronic device based on detection of its own pose; a first sending unit configured to send the first data to the second electronic device; a first receiving unit configured to receive the second data sent by the second electronic device; an obtaining unit configured to obtain, based on the second data, a control instruction for controlling an operating state of the first electronic device, where the control instruction is used at least to control a driving unit of the first electronic device to drive the first electronic device to generate a displacement; and the driving unit, configured to execute the control instruction and provide a driving force to the first electronic device so that the first electronic device generates a displacement.
  • the first electronic device includes a plurality of image acquisition units configured to acquire two-dimensional images of the environment in which the first electronic device is located; the generating unit includes a generating subunit configured to generate a corresponding stereoscopic image by executing a preset stereoscopic image construction algorithm based on the two-dimensional images respectively acquired by the plurality of image acquisition units; and the first sending unit is further configured to send the stereoscopic image to the second electronic device as the first data.
  • the first electronic device further includes: an audio collection unit configured to collect audio data of the environment in which the first electronic device is located; and a merging unit configured to synchronously combine the stereoscopic image with the audio data; the first sending unit is further configured to send the synchronously combined stereoscopic image and audio data to the second electronic device as the first data.
  • the second data is a pitch and/or yaw angle measured by the second electronic device; the obtaining unit includes: a first query subunit configured to query a preset first mapping relationship according to the pitch and/or yaw angle measured by the second electronic device to obtain a corresponding pitch and/or yaw angle of the image acquisition unit of the first electronic device; and a first generation subunit configured to generate, based on the obtained pitch and/or yaw angle of the image acquisition unit, a control instruction for controlling the image acquisition unit to adjust its pitch and/or yaw angle.
  • the second data is the measured moving linear velocity and/or angular velocity of the second electronic device; the obtaining unit includes: a second query subunit configured to query a preset second mapping relationship according to the measured moving linear velocity and/or angular velocity of the second electronic device to obtain a corresponding moving linear velocity and/or angular velocity of the first electronic device; and a second generation subunit configured to generate, based on the obtained moving linear velocity and/or angular velocity of the first electronic device, a control instruction for controlling the first electronic device to adjust its moving linear velocity and/or angular velocity.
  • a second electronic device is provided, including: a second receiving unit configured to receive first data sent by the first electronic device, where the first data is generated by the first electronic device based on an image acquired by its image acquisition unit; and a second sending unit configured to send, to the first electronic device, second data for controlling the first electronic device, where the second data is data obtained by the second electronic device based on detection of its own pose. After the first electronic device receives the second data sent by the second electronic device, it obtains, based on the second data, a control instruction for controlling the operating state of the first electronic device and executes the control instruction, where the control instruction is used at least to control a driving unit of the first electronic device to drive the first electronic device to generate a displacement.
  • the second data is a pitch and/or yaw angle measured by the second electronic device; the second electronic device further includes a first detecting unit configured to measure the pitch and/or yaw angle of the second electronic device before the second data for controlling the first electronic device is sent to the first electronic device; the second sending unit is further configured to send the pitch and/or yaw angle of the second electronic device to the first electronic device.
  • the second data is the measured moving linear velocity and/or angular velocity of the second electronic device; the second electronic device further includes a second detecting unit configured to measure the moving linear velocity and/or angular velocity of the second electronic device before the second data for controlling the first electronic device is sent to the first electronic device; the second sending unit is further configured to send the moving linear velocity and/or angular velocity of the second electronic device to the first electronic device.
  • a first electronic device is provided, comprising a processor and a memory for storing a first computer program executable on the processor, wherein the processor is configured to perform the steps of the above method applied to the first electronic device when running the first computer program.
  • a first computer-readable storage medium is provided, having a first computer program stored thereon, wherein the first computer program, when executed by a processor, implements the steps of the above method applied to the first electronic device.
  • a second electronic device is provided, comprising a processor and a memory for storing a second computer program executable on the processor, wherein the processor is configured to perform the steps of the above method applied to the second electronic device when running the second computer program.
  • a second computer-readable storage medium is provided, having a second computer program stored thereon, wherein the second computer program, when executed by a processor, implements the steps of the above method applied to the second electronic device.
  • in the embodiments of the present invention, the image acquisition unit of the first electronic device collects an image of the surrounding environment; first data is generated based on the collected image and sent to the second electronic device; the second electronic device receives the first data, detects its own pose to obtain second data, and sends the second data to the first electronic device; the first electronic device then generates a control instruction according to the second data and generates a displacement according to the control instruction. Because the user can view the image of the surrounding environment from the perspective of the first electronic device and adjust the pose of the second electronic device according to the image to control the movement of the first electronic device, the technical effect of providing the user with an immersive experience while controlling the movement of the first electronic device is achieved, thereby solving the technical problem that, in the prior art, control of the movement of an electronic device cannot provide the user with an immersive experience.
  • FIG. 1 is a flowchart of an information processing method performed by a first electronic device according to an embodiment of the present invention;
  • FIG. 2 is a flowchart of an information processing method performed by a second electronic device according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a first electronic device according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a second electronic device according to an embodiment of the present invention.
  • FIG. 5 is an interaction diagram of modules of a first electronic device and a second electronic device in accordance with an embodiment of the present invention.
  • an embodiment of an information processing method is provided. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one described herein.
  • FIG. 1 is a flowchart of an information processing method performed by a first electronic device according to an embodiment of the present invention; the method is applied to the first electronic device.
  • the first electronic device includes a driving unit configured to provide a driving force for the first electronic device so that the first electronic device can generate a displacement; the first electronic device further includes at least one image acquisition unit configured to acquire an image of the environment in which the first electronic device is located.
  • the method includes the following steps:
  • Step S102: obtain an image acquired by the image acquisition unit, and generate, based on the image, first data to be sent to the second electronic device, where the second electronic device is a control-end device of the first electronic device; the second electronic device can receive the first data sent by the first electronic device and can send, to the first electronic device, second data for controlling the first electronic device, the second data being data obtained based on detection of the pose (position and/or posture) of the second electronic device.
  • Step S104: receive the second data sent by the second electronic device, obtain, based on the second data, a control instruction for controlling the operating state of the first electronic device, and execute the control instruction, where the control instruction is used at least to control the driving unit of the first electronic device to drive the first electronic device to generate a displacement.
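  • To make the two steps concrete, the following is a minimal, self-contained Python sketch of steps S102 and S104; the Camera, Link, and DriveUnit classes and the pose field names are hypothetical stand-ins for the image acquisition unit, the wireless link, and the driving unit, not elements named in this disclosure.

      class Camera:
          def capture(self):
              return b"jpeg-bytes"  # placeholder image payload

      class Link:
          def send(self, data):
              pass  # stub for wireless transmission
          def receive(self):
              # sample second data: the controller's measured pose
              return {"pitch": 5.0, "yaw": -10.0, "linear_v": 0.5}

      class DriveUnit:
          def execute(self, instruction):
              print("driving with", instruction)  # produces the displacement

      def first_device_step(camera, link, drive):
          # Step S102: collect an image and send it as the first data.
          link.send(camera.capture())
          # Step S104: receive the second data and turn it into a control
          # instruction via a preset mapping (identity mapping shown here).
          pose = link.receive()
          drive.execute({"linear_v": pose["linear_v"], "yaw": pose["yaw"]})

      first_device_step(Camera(), Link(), DriveUnit())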
  • the first electronic device can be a robot, an unmanned aerial vehicle, or the like.
  • the first electronic device includes a drive unit and at least one image acquisition unit.
  • the image acquisition unit can be a camera.
  • the image acquisition unit is configured to collect an image of the environment in which the first electronic device is located.
  • the image acquisition unit of the first electronic device collects an image of the environment in which the first electronic device is located, and generates first data based on the acquired image.
  • the first electronic device transmits the first data to the second electronic device.
  • the second electronic device is a control end device of the first electronic device.
  • the second electronic device may be a virtual reality (VR) device or the like; the user can wear the second electronic device on the body or on the head.
  • the second electronic device receives the first data sent by the first electronic device, so that the user can obtain the environment information of the first electronic device, for example, an image of the environment in which the first electronic device is located.
  • the user can adjust his or her position and/or posture based on the image of the environment in which the first electronic device is located. Since the user wears the second electronic device on the body or on the head, when the user's pose changes, the pose of the second electronic device changes as well.
  • the second electronic device obtains the second data based on the detection of its own pose.
  • the second electronic device transmits the second data to the first electronic device.
  • the control instruction is an instruction for controlling an operating state of the first electronic device; for example, the control instruction can control the driving unit of the first electronic device to drive the first electronic device to generate a displacement, or control the driving unit to drive the first electronic device to adjust its moving direction or moving speed.
  • the driving unit is configured to provide a driving force to the first electronic device to enable the first electronic device to generate a displacement.
  • the image acquisition unit of the first electronic device collects an image of the surrounding environment; first data is generated based on the collected image and sent to the second electronic device; the second electronic device receives the first data, detects its own pose to obtain second data, and sends the second data to the first electronic device; the first electronic device then generates a control instruction according to the second data and generates a displacement according to the control instruction. Because the user can view the image of the surrounding environment from the perspective of the first electronic device and adjust the pose of the second electronic device according to the image to control the movement of the first electronic device, the technical effect of providing the user with an immersive experience while controlling the movement of the first electronic device is achieved, thereby solving the technical problem that, in the prior art, control of the movement of an electronic device cannot provide the user with an immersive experience.
  • the first electronic device is a robot and the second electronic device is a VR device.
  • the robot collects the environmental image and generates the first data based on the image and sends it to the VR device.
  • the VR device receives the first data and generates an image based on it; the user wearing the VR device sees the image generated by the VR device and thus learns about the environment in which the robot is located. The user adjusts his or her pose according to the robot's environment information, and the pose of the VR device changes accordingly.
  • the VR device obtains the second data based on the detection of its own pose.
  • the VR device sends the second data to the robot.
  • the control command is an instruction for controlling an operating state of the robot, for example, the control command can control the driving unit of the robot to drive the robot to generate a displacement. Or, the control command can control the driving unit of the robot to drive the robot to adjust the moving direction or the moving speed.
  • obtaining an image acquired by the image acquisition unit and generating, based on the image, first data to be sent to the second electronic device includes: obtaining two-dimensional images respectively acquired by the multiple image acquisition units, generating a corresponding stereoscopic image by executing a preset stereoscopic image construction algorithm, and sending the stereoscopic image to the second electronic device as the first data.
  • the first electronic device has a plurality of image acquisition units, which may be located at different positions of the first electronic device, and images of the environment surrounding the first electronic device may be acquired from different directions and/or heights.
  • each image acquisition unit of the first electronic device acquires at least one image of the environment surrounding the first electronic device; the acquired images may be two-dimensional images, and the first electronic device generates a corresponding stereoscopic image from the multiple two-dimensional images according to the preset stereoscopic image construction algorithm and sends the stereoscopic image to the second electronic device as the first data.
  • after the second electronic device receives the stereoscopic image sent by the first electronic device, the user (wearing the second electronic device) can see a stereoscopic image of the environment in which the first electronic device is located. When the user changes his or her pose, the pose of the second electronic device changes accordingly; the second electronic device detects its own pose to obtain the second data and sends the second data to the first electronic device, and the first electronic device generates a control instruction based on the second data and generates a displacement according to the control instruction. Because the first electronic device sends the stereoscopic image to the second electronic device, the user of the second electronic device can view a stereoscopic image of the first electronic device's surroundings from the perspective of the first electronic device and can adjust the pose of the second electronic device according to the stereoscopic image to control the motion of the first electronic device, achieving the technical effect of providing the user with an immersive experience while controlling the movement of the first electronic device.
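  • The disclosure does not fix a particular stereoscopic image construction algorithm; as one illustrative assumption, the sketch below assembles a side-by-side stereo frame from two same-resolution two-dimensional camera images with NumPy, a common input layout for VR players.

      import numpy as np

      def build_stereo_frame(left, right):
          # Concatenate the left and right camera images horizontally into
          # one side-by-side stereoscopic frame.
          if left.shape != right.shape:
              raise ValueError("left/right images must share a resolution")
          return np.hstack((left, right))

      # Example with dummy 480x640 RGB frames:
      left = np.zeros((480, 640, 3), dtype=np.uint8)
      right = np.zeros((480, 640, 3), dtype=np.uint8)
      frame = build_stereo_frame(left, right)  # shape (480, 1280, 3)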
  • the first electronic device further has an audio collection unit configured to collect audio data of the environment in which the first electronic device is located; after the stereoscopic image is generated, the method further includes: synchronously combining the stereoscopic image with the audio data and sending the synchronously combined stereoscopic image and audio data to the second electronic device as the first data.
  • the first electronic device has an audio collection unit.
  • the audio collection unit is capable of collecting audio data of an environment in which the first electronic device is located.
  • each of the plurality of image acquisition units of the first electronic device acquires at least one image of the environment surrounding the first electronic device; the acquired images may be two-dimensional images, and the first electronic device generates a corresponding stereoscopic image from the multiple two-dimensional images according to the preset stereoscopic image construction algorithm.
  • the audio collection unit of the first electronic device collects audio data of an environment in which the first electronic device is located.
  • the first electronic device synchronously combines the stereoscopic image and the audio data, and transmits the synchronously combined stereoscopic image and the audio data as the first data to the second electronic device.
  • because the first data sent by the first electronic device to the second electronic device includes the audio data of the environment in which the first electronic device is located, the user of the second electronic device can perceive that environment more immersively; control of the first electronic device is therefore more accurate, ensuring the safety of the first electronic device.
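  • "Synchronously combining" the stereoscopic image with the audio data can be read as timestamp pairing; the sketch below is one assumed realization that attaches to each frame the most recent audio chunk captured at or before the frame's timestamp.

      from bisect import bisect_right

      def synchronize(frames, audio_chunks):
          # frames and audio_chunks are lists of (timestamp, payload),
          # sorted by timestamp.
          audio_ts = [t for t, _ in audio_chunks]
          merged = []
          for t, frame in frames:
              i = bisect_right(audio_ts, t) - 1
              audio = audio_chunks[i][1] if i >= 0 else None
              merged.append({"t": t, "frame": frame, "audio": audio})
          return merged

      frames = [(0.00, "F0"), (0.04, "F1"), (0.08, "F2")]
      audio = [(0.00, "A0"), (0.05, "A1")]
      print(synchronize(frames, audio))  # F2 pairs with A1, the rest with A0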
  • in the embodiments of the present invention, an audio collection unit can be integrated on both the first electronic device and the second electronic device, and the two devices can transmit data by wireless communication, thus enabling a voice call function. In this way, when the user of the second electronic device speaks, the audio collection unit of the second electronic device collects the audio information produced by the user and transmits it to the first electronic device through wireless communication, and the first electronic device receives and plays the audio information sent by the second electronic device.
  • the first electronic device is a robot
  • the second electronic device is a VR device
  • the user A is a user of the VR device.
  • the user A controls the robot to visit the shopping mall, and the user A sees the situation in the shopping mall through the stereoscopic image of the surrounding environment sent by the robot.
  • through the audio data of the surrounding environment sent by the robot, user A hears the sounds of the environment around the robot, as if actually present in the mall.
  • user A sees an item and wants to know its price, so user A asks aloud: "How much is this thing?"
  • the audio collection unit of the VR device collects user A's voice and sends user A's audio data to the robot through wireless communication.
  • the robot receives and plays the audio data of the user A, so the salesperson in the mall hears the voice of the robot: "How much is this thing?"
  • the robot continues to collect the image and audio data of the surrounding environment, synchronizes them, and sends them to the VR device, so that user A can hear the salesperson's answer and can continue to speak.
  • the VR device then sends the voice information to the robot, and the robot receives and plays user A's audio information, so that user A can converse with the salesperson.
  • the image acquisition unit of the first electronic device collects an image of the surrounding environment, first data is generated based on the collected image and sent to the second electronic device, and the second electronic device receives the first data, detects its own pose to obtain second data, and sends the second data to the first electronic device.
  • the second data is a pitch and/or yaw angle measured by the second electronic device, or the second data is a measured moving linear velocity and/or angular velocity of the second electronic device.
  • the first electronic device queries a preset first mapping relationship according to the pitch and/or yaw angle measured by the second electronic device, obtains a corresponding pitch and/or yaw angle of the image acquisition unit of the first electronic device, and generates, based on the obtained pitch and/or yaw angle of the image acquisition unit, a control instruction for controlling the image acquisition unit to adjust its pitch and/or yaw angle.
  • alternatively, the first electronic device queries a preset second mapping relationship according to the measured moving linear velocity and/or angular velocity of the second electronic device, obtains a corresponding moving linear velocity and/or angular velocity of the first electronic device, and generates, based on the obtained moving linear velocity and/or angular velocity of the first electronic device, a control instruction for controlling the first electronic device to adjust its moving linear velocity and/or angular velocity.
  • alternatively, the first electronic device queries a preset third mapping relationship according to the pitch and/or yaw angle measured by the second electronic device, obtains a corresponding moving linear velocity and/or angular velocity of the first electronic device, and generates, based on the obtained moving linear velocity and/or angular velocity of the first electronic device, a control instruction for controlling the first electronic device to adjust its moving linear velocity and/or angular velocity.
  • alternatively, the first electronic device queries a preset fourth mapping relationship according to the measured moving linear velocity and/or angular velocity of the second electronic device, obtains a corresponding pitch and/or yaw angle of the image acquisition unit of the first electronic device, and generates, based on the obtained pitch and/or yaw angle of the image acquisition unit, a control instruction for controlling the image acquisition unit to adjust its pitch and/or yaw angle.
  • the first mapping relationship, the second mapping relationship, the third mapping relationship, and the fourth mapping relationship are all pre-stored in the first electronic device.
  • the first mapping relationship is a relationship between the pitch and/or yaw angle of the second electronic device and the pitch and/or yaw angle of the image acquisition unit of the first electronic device.
  • the second mapping relationship is a relationship between the moving linear velocity and/or angular velocity of the second electronic device and the moving linear velocity and/or angular velocity of the first electronic device.
  • the third mapping relationship is a relationship between the pitch and/or yaw angle of the second electronic device and the moving linear velocity and/or angular velocity of the first electronic device.
  • the fourth mapping relationship is a relationship between the moving linear velocity and/or angular velocity of the second electronic device and the pitch and/or yaw angle of the image acquisition unit of the first electronic device.
  • the first mapping relationship, the second mapping relationship, the third mapping relationship, and the fourth mapping relationship may each be a functional relationship.
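  • Since each mapping relationship may be a functional relationship, the first mapping can be pictured as a simple function; in the hypothetical sketch below, the gain and the pitch limits are illustrative assumptions rather than values from this disclosure.

      def map_pitch_yaw(device_pitch, device_yaw, gain=1.0,
                        pitch_limits=(-30.0, 30.0)):
          # The image acquisition unit mirrors the second device's pitch and
          # yaw, scaled by a gain and clamped to the camera mount's range.
          lo, hi = pitch_limits
          pitch = max(lo, min(hi, gain * device_pitch))
          yaw = gain * device_yaw
          return pitch, yaw

      print(map_pitch_yaw(45.0, 90.0))  # -> (30.0, 90.0): pitch clamped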
  • the second mapping relationship is as shown in Table 1.
  • Table 1:

    Moving linear velocity v0 of the second electronic device | Moving linear velocity of the first electronic device
    v0 < 0.1 m/s                                              | 0 m/s
    0.1 m/s ≤ v0 < 0.2 m/s                                    | 0.1 m/s
    0.2 m/s ≤ v0 < 1 m/s                                      | 0.2 m/s
    1 m/s ≤ v0 < 3 m/s                                        | 1 m/s
    3 m/s ≤ v0                                                | 1.5 m/s
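  • Table 1 is a range lookup from the measured second-device speed v0 to a first-device speed; a direct Python transcription follows (the function name is ours):

      TABLE_1 = [  # (exclusive upper bound of v0 in m/s, first-device speed in m/s)
          (0.1, 0.0),
          (0.2, 0.1),
          (1.0, 0.2),
          (3.0, 1.0),
      ]

      def map_linear_velocity(v0):
          for upper, speed in TABLE_1:
              if v0 < upper:
                  return speed
          return 1.5  # 3 m/s <= v0

      print(map_linear_velocity(4.0))  # -> 1.5, matching the user-B example below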
  • the user B wears the VR device and turns on the robot.
  • the image acquisition unit of the robot collects an image of the surrounding environment, generates first data based on the acquired image, and transmits the first data to the VR device, and the VR device receives the first data, and provides the environment image around the robot to the user B.
  • User B finds that there are very few people around the robot and the road is wide, so user B walks at a higher speed (greater than or equal to 3 m/s); assume user B walks at a speed of 4 m/s.
  • since the VR device is worn by user B, the VR device detects its own moving linear velocity and obtains 4 m/s; the VR device then sends its moving linear velocity to the robot. After receiving the moving linear velocity of the VR device, the robot looks up in Table 1 the robot speed corresponding to 4 m/s; since 4 m/s ≥ 3 m/s, the corresponding moving linear velocity is 1.5 m/s.
  • the robot generates a control command for controlling the robot to travel at a speed of 1.5 m/s, the robot executes a control command, and the drive unit drives the robot to travel at a speed of 1.5 m/s.
  • FIG. 2 is a flowchart of an information processing method performed by a second electronic device according to an embodiment of the present invention; the method is applied to the second electronic device.
  • the second electronic device is a control end device of the first electronic device.
  • the method includes the following steps:
  • Step S202 Receive first data sent by the first electronic device, where the first data is generated by the first electronic device based on the image acquired by the image capturing unit.
  • Step S204: send, to the first electronic device, second data for controlling the first electronic device, where the second data is data obtained by the second electronic device by detecting its own pose (position and/or posture). After receiving the second data sent by the second electronic device, the first electronic device obtains, based on the second data, a control instruction for controlling the operating state of the first electronic device and executes the control instruction, where the control instruction is used at least to control the driving unit of the first electronic device to drive the first electronic device to generate a displacement.
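  • Mirroring the sketch given for the first device, steps S202 and S204 on the control end reduce to a receive-display-measure-send loop; Display, PoseSensor, and Link below are hypothetical stand-ins for the headset's screen, pose sensor, and wireless link.

      class Display:
          def show(self, first_data):
              pass  # render the received stereoscopic frame

      class PoseSensor:
          def read(self):
              # sample measured pose of the second electronic device
              return {"pitch": 2.0, "yaw": -7.5, "linear_v": 0.8}

      class Link:
          def receive(self):
              return b"stereo-frame"  # first data from the first device
          def send(self, second_data):
              pass  # transmit the second data

      def second_device_step(display, sensor, link):
          # Step S202: receive and present the first data.
          display.show(link.receive())
          # Step S204: measure this device's own pose, send it as second data.
          link.send(sensor.read())

      second_device_step(Display(), PoseSensor(), Link())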
  • the first electronic device can be a robot, an unmanned aerial vehicle, or the like.
  • the first electronic device includes a driving unit configured to provide a driving force for the first electronic device so that the first electronic device can generate a displacement; the first electronic device further includes at least one image acquisition unit configured to acquire an image of the environment in which the first electronic device is located.
  • the image acquisition unit can be a camera.
  • the image acquisition unit of the first electronic device collects an image of the environment in which the first electronic device is located, and generates first data based on the acquired image.
  • the first electronic device transmits the first data to the second electronic device.
  • the second electronic device is a control end device of the first electronic device.
  • the second electronic device may be a virtual reality device or the like; the user can wear the second electronic device on the body or on the head.
  • the second electronic device receives the first data sent by the first electronic device, so that the user can obtain the environment information of the first electronic device, for example, an image of the environment in which the first electronic device is located.
  • the user can adjust his or her position and/or posture based on the image of the environment in which the first electronic device is located. Since the user wears the second electronic device on the body or on the head, when the user's pose changes, the pose of the second electronic device changes as well.
  • the second electronic device obtains the second data based on the detection of its own pose.
  • the second electronic device sends the second data to the first electronic device.
  • the control instruction is an instruction for controlling an operating state of the first electronic device; for example, the control instruction can control the driving unit of the first electronic device to drive the first electronic device to generate a displacement, or control the driving unit to drive the first electronic device to adjust its moving direction or moving speed.
  • the driving unit is configured to provide a driving force to the first electronic device to enable the first electronic device to generate a displacement.
  • the image acquisition unit of the first electronic device collects an image of the surrounding environment; first data is generated based on the collected image and sent to the second electronic device; the second electronic device receives the first data, detects its own pose to obtain second data, and sends the second data to the first electronic device; the first electronic device then generates a control instruction according to the second data and generates a displacement according to the control instruction. Because the user can view the image of the surrounding environment from the perspective of the first electronic device and adjust the pose of the second electronic device according to the image to control the movement of the first electronic device, the technical effect of providing the user with an immersive experience while controlling the movement of the first electronic device is achieved, thereby solving the technical problem that, in the prior art, control of the movement of an electronic device cannot provide the user with an immersive experience.
  • the first electronic device is a robot and the second electronic device is a VR device.
  • the robot collects the environmental image and generates the first data based on the image and sends it to the VR device.
  • the VR device receives the first data and generates an image based on it; the user wearing the VR device sees the image generated by the VR device and thus learns about the environment in which the robot is located. The user adjusts his or her pose according to the robot's environment information, and the pose of the VR device changes accordingly.
  • the VR device obtains the second data based on the detection of its own pose.
  • the VR device sends the second data to the robot.
  • the control command is an instruction for controlling an operating state of the robot, for example, the control command can control the driving unit of the robot to drive the robot to generate a displacement. Or, the control command can control the driving unit of the robot to drive the robot to adjust the moving direction or the moving speed.
  • the embodiment of the present invention can integrate an audio collection unit on both the first electronic device and the second electronic device, and the two can transmit data through wireless communication, so that the voice call function can be implemented.
  • the audio collection unit of the second electronic device collects the audio information sent by the user and transmits the information to the first electronic device through wireless communication.
  • the first electronic device receives and plays the audio information sent by the second electronic device.
  • the first electronic device is a robot
  • the second electronic device is a VR device
  • the user A is a user of the VR device.
  • the user A controls the robot to visit the shopping mall, and the user A sees the situation in the shopping mall through the stereoscopic image of the surrounding environment sent by the robot.
  • through the audio data of the surrounding environment sent by the robot, user A hears the sounds of the environment around the robot, as if actually present in the mall.
  • user A sees an item and wants to know its price, so user A asks aloud: "How much is this thing?"
  • the audio collection unit of the VR device collects user A's voice and sends user A's audio data to the robot through wireless communication.
  • the robot receives and plays the audio data of the user A, so the salesperson in the mall hears the voice of the robot: "How much is this thing?"
  • the robot continues to collect the image and audio data of the surrounding environment, synchronizes them, and sends them to the VR device, so that user A can hear the salesperson's answer and can continue to speak.
  • the VR device then sends the voice information to the robot, and the robot receives and plays user A's audio information, so that user A can converse with the salesperson.
  • the image acquisition unit of the first electronic device collects an image of the surrounding environment, first data is generated based on the collected image and sent to the second electronic device, and the second electronic device receives the first data, detects its own pose to obtain second data, and sends the second data to the first electronic device.
  • the second data is a pitch and/or yaw angle measured by the second electronic device, or the second data is a measured moving linear velocity and/or angular velocity of the second electronic device.
  • the first electronic device queries a preset first mapping relationship according to the pitch and/or yaw angle measured by the second electronic device, obtains a corresponding pitch and/or yaw angle of the image acquisition unit of the first electronic device, and generates, based on the obtained pitch and/or yaw angle of the image acquisition unit, a control instruction for controlling the image acquisition unit to adjust its pitch and/or yaw angle.
  • alternatively, the first electronic device queries a preset second mapping relationship according to the measured moving linear velocity and/or angular velocity of the second electronic device, obtains a corresponding moving linear velocity and/or angular velocity of the first electronic device, and generates, based on the obtained moving linear velocity and/or angular velocity of the first electronic device, a control instruction for controlling the first electronic device to adjust its moving linear velocity and/or angular velocity.
  • alternatively, the first electronic device queries a preset third mapping relationship according to the pitch and/or yaw angle measured by the second electronic device, obtains a corresponding moving linear velocity and/or angular velocity of the first electronic device, and generates, based on the obtained moving linear velocity and/or angular velocity of the first electronic device, a control instruction for controlling the first electronic device to adjust its moving linear velocity and/or angular velocity.
  • alternatively, the first electronic device queries a preset fourth mapping relationship according to the measured moving linear velocity and/or angular velocity of the second electronic device, obtains a corresponding pitch and/or yaw angle of the image acquisition unit of the first electronic device, and generates, based on the obtained pitch and/or yaw angle of the image acquisition unit, a control instruction for controlling the image acquisition unit to adjust its pitch and/or yaw angle.
  • the first mapping relationship, the second mapping relationship, the third mapping relationship, and the fourth mapping relationship are all pre-stored in the first electronic device.
  • the first mapping relationship is a relationship between the pitch and/or yaw angle of the second electronic device and the pitch and/or yaw angle of the image acquisition unit of the first electronic device.
  • the second mapping relationship is a relationship between the moving linear velocity and/or angular velocity of the second electronic device and the moving linear velocity and/or angular velocity of the first electronic device.
  • the third mapping relationship is a relationship between the pitch and/or yaw angle of the second electronic device and the moving linear velocity and/or angular velocity of the first electronic device.
  • the fourth mapping relationship is a relationship between the moving linear velocity and/or angular velocity of the second electronic device and the pitch and/or yaw angle of the image acquisition unit of the first electronic device.
  • the first mapping relationship, the second mapping relationship, the third mapping relationship, and the fourth mapping relationship may each be a functional relationship.
  • the second mapping relationship is as shown in Table 1.
  • the user B wears the VR device and turns on the robot.
  • the image acquisition unit of the robot collects an image of the surrounding environment, generates first data based on the acquired image, and transmits the first data to the VR device, and the VR device receives the first data, and provides the environment image around the robot to the user B.
  • User B finds that there are very few people around the robot and the road is wide, so user B walks at a higher speed (greater than or equal to 3 m/s); assume user B walks at a speed of 4 m/s.
  • since the VR device is worn by user B, the VR device detects its own moving linear velocity and obtains 4 m/s; the VR device then sends its moving linear velocity to the robot. After receiving the moving linear velocity of the VR device, the robot looks up in Table 1 the robot speed corresponding to 4 m/s; since 4 m/s ≥ 3 m/s, the corresponding moving linear velocity is 1.5 m/s.
  • the robot generates a control command for controlling the robot to travel at a speed of 1.5 m/s, the robot executes a control command, and the drive unit drives the robot to travel at a speed of 1.5 m/s.
  • FIG. 3 is a schematic diagram of a first electronic device according to an embodiment of the present invention.
  • the first electronic device includes: an image acquisition unit 30, a generating unit 32, a first sending unit 34, a first receiving unit 35, an obtaining unit 37, and a driving unit 39.
  • the image acquisition unit 30 is configured to collect an image of an environment in which the first electronic device is located.
  • the image acquisition unit 30 can be a camera.
  • the generating unit 32 is configured to generate, based on the image, first data to be sent to the second electronic device, where the second electronic device is a control-end device of the first electronic device; the second electronic device can receive the first data sent by the first electronic device and can send, to the first electronic device, second data for controlling the first electronic device, the second data being data obtained by the second electronic device based on detection of its own pose (position and/or posture).
  • the first sending unit 34 is configured to send the first data to the second electronic device.
  • the first receiving unit 35 is configured to receive the second data that is sent by the second electronic device.
  • the obtaining unit 37 is configured to obtain a control instruction for controlling an operating state of the first electronic device based on the second data, wherein the control instruction is at least used to control the driving unit 39 of the first electronic device to drive the first electronic device to generate a displacement.
  • the driving unit 39 is configured to execute a control instruction to provide a driving force to the first electronic device to enable the first electronic device to generate a displacement.
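  • Read structurally, the units of FIG. 3 compose as in the hypothetical sketch below; the unit interfaces are assumed, with this embodiment's reference numerals noted in comments.

      class FirstElectronicDevice:
          def __init__(self, image_unit, generating_unit, sender, receiver,
                       obtaining_unit, drive_unit):
              self.image_unit = image_unit            # 30: acquires images
              self.generating_unit = generating_unit  # 32: image -> first data
              self.sender = sender                    # 34: sends first data
              self.receiver = receiver                # 35: receives second data
              self.obtaining_unit = obtaining_unit    # 37: second data -> instruction
              self.drive_unit = drive_unit            # 39: executes instruction

          def tick(self):
              # One control cycle: capture, send, receive, obtain, drive.
              image = self.image_unit.capture()
              self.sender.send(self.generating_unit.generate(image))
              second_data = self.receiver.receive()
              self.drive_unit.execute(self.obtaining_unit.obtain(second_data))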
  • the first electronic device can be a robot, an unmanned aerial vehicle, or the like.
  • the second electronic device is a control end device of the first electronic device.
  • the second electronic device may be a virtual reality device or the like; the user can wear the second electronic device on the body or on the head.
  • the second electronic device receives the first data sent by the first electronic device, so that the user can obtain the environment information of the first electronic device, for example, an image of the environment in which the first electronic device is located.
  • the user can adjust his or her position and/or posture based on the image of the environment in which the first electronic device is located. Since the user wears the second electronic device on the body or on the head, when the user's pose changes, the pose of the second electronic device changes as well.
  • the second electronic device obtains the second data based on the detection of its own pose.
  • the second electronic device transmits the second data to the first electronic device.
  • the control instruction is an instruction for controlling an operating state of the first electronic device; for example, the control instruction can control the driving unit of the first electronic device to drive the first electronic device to generate a displacement, or control the driving unit to drive the first electronic device to adjust its moving direction or moving speed.
  • the driving unit is configured to provide a driving force to the first electronic device to enable the first electronic device to generate a displacement.
  • the image acquisition unit of the first electronic device collects an image of the surrounding environment; first data is generated based on the acquired image and sent to the second electronic device; the second electronic device receives the first data, detects its own pose to obtain second data, and sends the second data to the first electronic device; the first electronic device then generates a control instruction according to the second data and generates a displacement according to the control instruction. Because the user can view the image of the surrounding environment from the perspective of the first electronic device and adjust the pose of the second electronic device according to the image to control the movement of the first electronic device, the technical effect of providing the user with an immersive experience while controlling the movement of the first electronic device is achieved.
  • the first electronic device is a robot and the second electronic device is a VR device.
  • the robot collects the environmental image and generates the first data based on the image and sends it to the VR device.
  • the VR device receives the first data and generates an image based on it; the user wearing the VR device sees the image generated by the VR device and thus learns about the environment in which the robot is located. The user adjusts his or her pose according to the robot's environment information, and the pose of the VR device changes accordingly.
  • the VR device obtains the second data based on the detection of its own pose.
  • the VR device sends the second data to the robot.
  • the control command is an instruction for controlling an operating state of the robot, for example, the control command can control the driving unit of the robot to drive the robot to generate a displacement. Or, the control command can control the driving unit of the robot to drive the robot to adjust the moving direction or the moving speed.
  • the first electronic device includes a plurality of image acquisition units 30, each configured to acquire a two-dimensional image of the environment in which the first electronic device is located.
  • the generating unit 32 includes a generating subunit.
  • the generating subunit is configured to generate a corresponding stereoscopic image by executing a preset stereoscopic image construction algorithm based on the two-dimensional images respectively acquired by the plurality of image acquisition units.
  • the first sending unit 34 is further configured to send the stereoscopic image to the second electronic device as the first data.
  • the first electronic device has a plurality of image acquisition units 30.
  • the plurality of image acquisition units may be located at different positions on the first electronic device, and images of the environment surrounding the first electronic device may be acquired from different directions and/or heights.
  • each image acquisition unit 30 of the first electronic device acquires at least one image of the environment surrounding the first electronic device; the acquired images may be two-dimensional images, and the first electronic device generates a corresponding stereoscopic image from the multiple two-dimensional images according to the preset stereoscopic image construction algorithm and sends the stereoscopic image to the second electronic device as the first data. After the second electronic device receives the stereoscopic image sent by the first electronic device, the user (wearing the second electronic device) can see a stereoscopic image of the environment in which the first electronic device is located.
  • when the user changes his or her pose, the pose of the second electronic device changes accordingly; the second electronic device detects its own pose to obtain the second data and sends the second data to the first electronic device, and the first electronic device generates a control instruction based on the second data and generates a displacement according to the control instruction. Because the first electronic device sends the stereoscopic image to the second electronic device, the user of the second electronic device can view a stereoscopic image of the first electronic device's surroundings from the perspective of the first electronic device and can adjust the pose of the second electronic device according to the stereoscopic image to control the motion of the first electronic device, achieving the technical effect of providing the user with an immersive experience while controlling the movement of the first electronic device.
  • the first electronic device further includes an audio collection unit and a merging unit.
  • the audio collection unit is configured to collect audio data of an environment in which the first electronic device is located.
  • the merging unit is configured to synchronously combine the stereoscopic image with the audio data.
  • the first sending unit is further configured to send the synchronously combined stereoscopic image and audio data to the second electronic device as the first data.
  • the first electronic device has an audio collection unit.
  • the audio collection unit is capable of collecting audio data of an environment in which the first electronic device is located.
  • each of the plurality of image acquisition units of the first electronic device acquires at least one image of the environment surrounding the first electronic device; the acquired images may be two-dimensional images, and the first electronic device generates a corresponding stereoscopic image from the multiple two-dimensional images according to the preset stereoscopic image construction algorithm.
  • the audio collection unit of the first electronic device collects audio data of an environment in which the first electronic device is located.
  • the merging unit of the first electronic device synchronously combines the stereoscopic image and the audio data, and transmits the synchronously combined stereoscopic image and audio data to the second electronic device as the first data.
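How the merging unit synchronizes the two streams is not fixed by the patent; one common approach is to pair each stereoscopic frame with the audio chunk whose capture timestamp is nearest. The sketch below assumes timestamped, time-sorted streams and a non-empty audio list; all names are illustrative.

```python
from bisect import bisect_left

def sync_merge(frames, audio_chunks):
    # frames:       [(timestamp_s, frame), ...] sorted by time
    # audio_chunks: [(timestamp_s, pcm_bytes), ...] sorted by time, non-empty
    times = [t for t, _ in audio_chunks]
    merged = []
    for t, frame in frames:
        i = bisect_left(times, t)
        # step back when the left neighbour is at least as close in time
        if i > 0 and (i == len(times) or abs(times[i - 1] - t) <= abs(times[i] - t)):
            i -= 1
        merged.append({"t": t, "frame": frame, "audio": audio_chunks[i][1]})
    return merged

frames = [(0.00, "F0"), (0.04, "F1")]
audio = [(0.00, b"A0"), (0.05, b"A1")]
print(sync_merge(frames, audio))  # F0 pairs with A0, F1 with A1
```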
  • Because the first data sent by the first electronic device to the second electronic device includes audio data of the environment in which the first electronic device is located, the user of the second electronic device can experience that environment more immersively; the control of the first electronic device is therefore more accurate, which ensures the safety of the first electronic device.
  • In the embodiment of the present invention, an audio collection unit can be integrated on both the first electronic device and the second electronic device, and the two can transmit data through wireless communication, so that a voice call function can be implemented.
  • When the user of the second electronic device speaks, the audio collection unit of the second electronic device collects the audio information uttered by the user and transmits it to the first electronic device through wireless communication.
  • the first electronic device receives and plays the audio information sent by the second electronic device.
  • For example, the first electronic device is a robot, the second electronic device is a VR device, and user A is the user of the VR device.
  • User A controls the robot to tour a shopping mall: user A sees the situation inside the mall through the stereoscopic images of the surroundings sent by the robot, hears the sounds around the robot through the audio data of the surroundings sent by the robot, and thus has the experience of being in the mall.
  • Suppose user A sees an item and wants to know its price, so user A asks, "How much is this thing?"
  • the audio collection unit of the VR device collects the sound from user A and sends user A's audio data to the robot through wireless communication.
  • the robot receives and plays the audio data of the user A, so the salesperson in the mall hears the voice of the robot: "How much is this thing?"
  • When the salesperson answers with the item's price, the robot continues to collect image and audio data of the surrounding environment, synchronously combines the data, and sends it to the VR device, so that user A can hear the salesperson's answer. User A can also continue to speak;
  • the VR device then sends the voice information to the robot, and the robot receives and plays user A's audio information, so user A can communicate and converse with the salesperson.
  • By providing audio collection units on both the first electronic device and the second electronic device, the ability of the first electronic device to interact with its surroundings is enhanced.
  • the second data is a pitch and/or yaw angle measured by the second electronic device.
  • the obtaining unit 37 includes: a first query subunit and a first generation subunit.
  • the first query subunit is configured to query the preset first mapping relationship according to the measured pitch and/or yaw angle of the second electronic device, to obtain the pitch and/or yaw angle of the corresponding image acquisition unit of the first electronic device.
  • the first generation subunit is configured to generate a control instruction for controlling the image acquisition unit to adjust the pitch and/or yaw angle based on the obtained pitch and/or yaw angle of the image acquisition unit.
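The first mapping relationship is only described as a preset correspondence; a minimal sketch, assuming a 1:1 linear mapping clipped to the camera gimbal's mechanical limits, could look like this (the limits and field names are illustrative):

```python
def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def camera_instruction_from_headset(second_data):
    # First mapping, assumed 1:1 with illustrative mechanical limits:
    # headset pitch/yaw -> image acquisition unit pitch/yaw.
    pitch = clamp(second_data["pitch_deg"], -30.0, 30.0)
    yaw = clamp(second_data["yaw_deg"], -90.0, 90.0)
    # control instruction telling the image acquisition unit to re-aim
    return {"target": "image_acquisition_unit", "pitch_deg": pitch, "yaw_deg": yaw}

print(camera_instruction_from_headset({"pitch_deg": -42.0, "yaw_deg": 10.0}))
# {'target': 'image_acquisition_unit', 'pitch_deg': -30.0, 'yaw_deg': 10.0}
```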
  • the second data is the measured moving linear velocity and/or angular velocity of the second electronic device
  • the obtaining unit 37 includes: a second query subunit, and a second generating subunit.
  • the second query sub-unit is configured to query the preset second mapping relationship according to the measured moving linear velocity and/or angular velocity of the second electronic device to obtain a corresponding moving linear velocity and/or angular velocity of the first electronic device.
  • the second generation subunit is configured to generate a control instruction for controlling the first electronic device to adjust the moving linear velocity and/or the angular velocity based on the obtained moving linear velocity and/or angular velocity of the first electronic device.
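The second mapping relationship may likewise be a function rather than a table. A hedged sketch, assuming the operator's velocities are simply scaled down and saturated at the robot's limits (the 0.5 scale factor and the 1.5 m/s and 1.0 rad/s caps are illustrative values, not from the disclosure):

```python
def drive_instruction_from_headset(v_linear, v_angular):
    # Second mapping as a functional relationship: scale and saturate.
    # Assumes v_linear >= 0 (a walking speed); v_angular may be signed.
    v = min(0.5 * v_linear, 1.5)
    w = max(-1.0, min(1.0, 0.5 * v_angular))
    return {"target": "drive_unit", "linear_mps": v, "angular_rps": w}

print(drive_instruction_from_headset(4.0, 0.4))
# {'target': 'drive_unit', 'linear_mps': 1.5, 'angular_rps': 0.2}
```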
  • the second data is a pitch and/or a yaw angle measured by the second electronic device
  • the obtaining unit 37 includes: a third query subunit and a third generation subunit.
  • the third query subunit is configured to query the preset third mapping relationship according to the measured pitch and/or yaw angle of the second electronic device, to obtain the corresponding moving linear velocity and/or angular velocity of the first electronic device.
  • a third generation subunit configured to generate a control instruction for controlling the first electronic device to adjust the moving linear velocity and/or the angular velocity based on the obtained moving linear velocity and/or angular velocity of the first electronic device.
  • the second data is the measured moving linear velocity and/or angular velocity of the second electronic device
  • the obtaining unit 37 includes: a fourth query subunit and a fourth generating subunit.
  • a fourth query subunit configured to query the preset fourth mapping relationship according to the measured moving linear velocity and/or angular velocity of the second electronic device, to obtain the pitch and/or yaw angle of the corresponding image acquisition unit of the first electronic device.
  • a fourth generation subunit configured to generate a control instruction for controlling the image acquisition unit to adjust the pitch and/or yaw angle based on the obtained pitch and/or yaw angle of the image acquisition unit.
  • the first mapping relationship, the second mapping relationship, the third mapping relationship, and the fourth mapping relationship are all pre-stored in the first electronic device.
  • the first mapping relationship is the relationship between the pitch and/or yaw angles of the second electronic device and the first electronic device.
  • the second mapping relationship is a relationship between a moving linear velocity and/or an angular velocity of the second electronic device and the first electronic device.
  • the third mapping relationship is a relationship between a pitch and/or a yaw angle of the second electronic device and a moving linear velocity and/or angular velocity of the first electronic device.
  • the fourth mapping relationship is a relationship between the moving linear velocity and/or angular velocity of the second electronic device and the pitch and/or yaw angle of the first electronic device.
  • the first mapping relationship, the second mapping relationship, the third mapping relationship, and the fourth mapping relationship may all be a functional relationship.
  • the second mapping relationship is as shown in Table 1.

    Table 1
    Moving linear velocity v0 of the second electronic device    Moving linear velocity of the first electronic device
    v0 ≤ 0.1 m/s    0 m/s
    0.1 m/s < v0 ≤ 0.2 m/s    0.1 m/s
    0.2 m/s < v0 ≤ 1 m/s    0.2 m/s
    1 m/s < v0 ≤ 3 m/s    1 m/s
    3 m/s < v0    1.5 m/s
  • Suppose the first electronic device is a robot and the second electronic device is a VR device: user B wears the VR device and turns on the robot.
  • the image acquisition unit of the robot collects an image of the surrounding environment, generates first data based on the acquired image, and transmits the first data to the VR device, and the VR device receives the first data, and provides the environment image around the robot to the user B.
  • By viewing the images provided by the VR device, user B finds that there are very few people in the robot's environment and the road is open, so user B walks at a higher speed (greater than or equal to 3 m/s); assume user B walks at a speed of 4 m/s.
  • Since the VR device is worn by user B, the VR device detects its own moving linear velocity, obtains 4 m/s, and sends this moving linear velocity to the robot. After the robot receives the moving linear velocity of the VR device, the second query subunit looks up in Table 1 the robot's moving linear velocity corresponding to 4 m/s; since 3 m/s < 4 m/s, the corresponding moving linear velocity is 1.5 m/s.
  • the second generation subunit of the robot generates a control instruction based on the corresponding moving linear velocity of 1.5 m/s; the control instruction is used to control the robot to walk at a speed of 1.5 m/s. The robot executes the control instruction, and the drive unit drives the robot to walk at a speed of 1.5 m/s.
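The lookup performed by the second query subunit can be written directly from Table 1; the sketch below reproduces the 4 m/s to 1.5 m/s example above (only the function and variable names are invented):

```python
# Upper speed bound of the second electronic device -> robot speed (Table 1).
TABLE_1 = [(0.1, 0.0), (0.2, 0.1), (1.0, 0.2), (3.0, 1.0)]

def robot_speed_from_table(v0: float) -> float:
    for upper_bound, robot_speed in TABLE_1:
        if v0 <= upper_bound:
            return robot_speed
    return 1.5  # the 3 m/s < v0 row

assert robot_speed_from_table(4.0) == 1.5  # user B walking at 4 m/s
assert robot_speed_from_table(0.05) == 0.0
```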
  • In practical applications, the image acquisition unit 30 can be implemented by a camera, the drive unit 39 can be implemented by a power component, and the audio collection unit can be implemented by a microphone or a microphone array; the generating unit 32, the first sending unit 34, the first receiving unit 35, the obtaining unit 37, and the merging unit may each be implemented by a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA).
  • FIG. 4 is a schematic diagram of a second electronic device according to an embodiment of the present invention.
  • the second electronic device is a control device of the first electronic device. As shown in FIG. 4, the second electronic device includes: a second receiving unit 40 and a second sending unit 42.
  • the second receiving unit 40 is configured to receive the first data sent by the first electronic device, where the first data is generated by the first electronic device based on the image acquired by the image capturing unit.
  • the second sending unit 42 is configured to send, to the first electronic device, second data for controlling the first electronic device, where the second data is data obtained by the second electronic device based on detection of its own pose. After receiving the second data sent by the second electronic device, the first electronic device obtains, based on the second data, a control instruction for controlling the working state of the first electronic device and executes the control instruction, where the control instruction is used at least to control the drive unit of the first electronic device to drive the first electronic device to produce a displacement.
  • the second data is a measured pitch and/or yaw angle of the second electronic device
  • the second electronic device further includes: a first detecting unit.
  • the first detecting unit is configured to measure the pitch and/or yaw angle of the second electronic device before the second sending subunit sends, to the first electronic device, the second data for controlling the first electronic device.
  • the second sending unit is configured to send the pitch and/or yaw angle of the second electronic device to the first electronic device.
  • the second data is the measured moving linear velocity and/or angular velocity of the second electronic device
  • the second electronic device further includes: a second detecting unit.
  • the second detecting unit is configured to measure the moving linear velocity and/or angular velocity of the second electronic device before the third sending subunit sends, to the first electronic device, the second data for controlling the first electronic device.
  • the second sending unit is configured to send the moving linear velocity and/or angular velocity of the second electronic device to the first electronic device.
  • In practical applications, the second receiving unit 40 and the second sending unit 42 can be implemented by a CPU, an MPU, a DSP, or an FPGA.
  • the first detecting unit and the second detecting unit may be implemented by sensors.
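The patent leaves the sensor processing open. As one hedged sketch, a detecting unit built around a gyroscope could integrate angular rates into the pitch/yaw angles that form the second data; real devices would also fuse accelerometer data to correct drift, and all names below are illustrative.

```python
class HeadPoseDetector:
    # Sketch of a first detecting unit: integrate gyroscope rates (deg/s)
    # into pitch/yaw angles. Drift correction is omitted for brevity.
    def __init__(self):
        self.pitch_deg = 0.0
        self.yaw_deg = 0.0

    def update(self, pitch_rate, yaw_rate, dt_s):
        self.pitch_deg += pitch_rate * dt_s
        self.yaw_deg += yaw_rate * dt_s
        return {"pitch_deg": self.pitch_deg, "yaw_deg": self.yaw_deg}

detector = HeadPoseDetector()
second_data = detector.update(pitch_rate=-50.0, yaw_rate=0.0, dt_s=0.5)
print(second_data)  # {'pitch_deg': -25.0, 'yaw_deg': 0.0}
```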
  • FIG. 5 is an interaction diagram of the modules of the first electronic device and the second electronic device according to an embodiment of the present invention. Specifically, as shown in FIG. 5:
  • the first electronic device includes: a communication transmission module 12, a data processing module 14, an image acquisition module 16, a sound collection module 17, and a motion module 18.
  • the communication transmission module 12 can perform data transmission with the communication transmission module 22 of the second electronic device in a wireless communication manner.
  • the image acquisition module 16 can be the image acquisition unit described above.
  • the sound collection module 17 can be the audio collection unit described above.
  • the motion module 18 can be the above described drive unit.
  • the second electronic device includes: a communication transmission module 22, a data processing module 24, an image playback module 26, a sound playback and acquisition module 27, and a plurality of sensor combinations 28.
  • the communication transmission module 22 can perform data transmission with the communication transmission module 12 in a wireless communication manner.
  • the sound playback and acquisition module 27 can be the audio collection unit described above.
  • The arrows in FIG. 5 indicate the direction of data transfer. In the first electronic device, the data processing module 14 is responsible for processing data from the motion module 18, the image acquisition module 16, the sound collection module 17, and the communication transmission module 12, where the motion module 18 is responsible for the movement of the robot, and the image acquisition module 16 and the sound collection module 17 capture the images and sounds of the environment. The data processing module 14 processes the data with relevant algorithms and transmits it, directly or indirectly, to the virtual reality device through the communication transmission module.
  • the communication transmission module 22 on the second electronic device receives the data, which is processed by the data processing module 24; the images and sounds are then presented to the operator through the image playback module 26 and the sound playback and acquisition module 27.
  • At the same time, a plurality of sensors integrated on the second electronic device collect position and posture information of a certain body part of the operator (including the head), convert this information into control signals for the first electronic device, and transmit the signals to the first electronic device through the communication transmission module 22.
  • The following takes the first electronic device being a robot and the second electronic device being a virtual reality (VR) device as an example to explain how the information processing method provided by the embodiment of the present invention is performed.
  • (a) Formulate the control rules of the robot. For example, the operator bowing his or her head indicates that the robot moves forward; the operator turning the head to the left indicates that the robot turns left; the operator turning the head to the right indicates that the robot turns right; and so on (a code sketch of such a rule table follows these steps).
  • (b) Convert the rules in (a) into a program and integrate it into the robot.
  • (c) The operator wears the virtual reality device, turns on the robot, and controls the motion of the robot by changing the pose of one or more parts of the operator's body.
  • the virtual reality device integrates a plurality of sensors, such as a gyroscope, to sense the posture change of the operator's head.
  • Meanwhile, a plurality of cameras are integrated on the robot's body. These cameras convert the collected image information into stereoscopic image data through certain processing and transmit it to the virtual reality device through the robot's wireless communication module, allowing the operator to experience the robot's first-person perspective in real time. To enhance the experience, the ambient sound around the robot is transmitted to the virtual reality device at the same time, and the operator can also interact with surrounding people or other devices remotely via voice.
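The rule table from step (a) reduces to a small lookup once sign conventions are fixed. The sketch below assumes that negative pitch means a bowed head and positive yaw means the head turned left, with an illustrative 15-degree dead zone; none of these values come from the disclosure.

```python
def command_from_head_pose(pitch_deg, yaw_deg):
    # Rule (a) as code: bow the head -> forward, turn left/right -> turn.
    if pitch_deg < -15.0:
        return "forward"
    if yaw_deg > 15.0:
        return "turn_left"
    if yaw_deg < -15.0:
        return "turn_right"
    return "stop"

assert command_from_head_pose(-25.0, 0.0) == "forward"
assert command_from_head_pose(0.0, 20.0) == "turn_left"
```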
  • Because the teleoperation of the robot is combined with virtual reality, not only can the accuracy of operating the robot be improved, but the operator's enthusiasm and the entertainment value of the operation can also be increased.
  • For example, the operator can operate the robot to cross the road or walk through a shopping mall, and when encountering a problem can hold a voice call through the call function (a sound collection module is integrated on both the robot and the virtual reality device, and the two transmit data through wireless communication).
  • Wearing the virtual reality device, the user can control the robot to visit museums, shopping malls, and other places, and through the robot's vision system "tour the world's scenery without ever leaving home."
  • In a specific embodiment, the first electronic device includes a processor and a memory for storing a first computer program capable of running on the processor, where the processor is configured to perform the steps of the above method applied to the first electronic device when running the first computer program.
  • In another specific embodiment, the second electronic device includes a processor and a memory for storing a second computer program capable of running on the processor, where the processor is configured to perform the steps of the above method applied to the second electronic device when running the second computer program.
  • the memory may be implemented by any type of volatile or non-volatile storage device, or a combination thereof.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); the magnetic surface memory may be a disk memory or a tape memory.
  • the volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), synchronous static random access memory (SSRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), SyncLink dynamic random access memory (SLDRAM), and direct Rambus random access memory (DRRAM). The memories described in the embodiments of the present invention are intended to include, without being limited to, these and any other suitable types of memory.
  • the processor may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software.
  • the above processor may be a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the processor may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention.
  • a general purpose processor can be a microprocessor or any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor.
  • the software module can be located in a storage medium, the storage medium being located in the memory, the processor reading the information in the memory, and completing the steps of the foregoing methods in combination with the hardware thereof.
  • Embodiments of the present invention also provide a first computer-readable storage medium having stored thereon a first computer program, where the first computer program, when executed by a processor, implements the steps of the above method applied to the first electronic device.
  • Embodiments of the present invention further provide a second computer-readable storage medium having stored thereon a second computer program, where the second computer program, when executed by a processor, implements the steps of the above method applied to the second electronic device.
  • the computer-readable storage media described above may be memories such as an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a flash memory, a magnetic surface memory, an optical disc, or a CD-ROM, or various devices including one or any combination of these memories.
  • In the several embodiments provided by the present invention, it should be understood that the disclosed technical content may be implemented in other manners.
  • the device embodiments described above are only illustrative. The division of the units may be a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between units or modules may be electrical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • the software product includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • In the embodiment of the present invention, the image acquisition unit of the first electronic device collects images of the surrounding environment, generates first data based on the collected images, and sends the first data to the second electronic device; the second electronic device receives the first data, detects its own pose to obtain second data, and sends the second data to the first electronic device; the first electronic device then generates a control instruction based on the second data and produces a displacement according to the control instruction. Because the user can view images of the surrounding environment from the perspective of the first electronic device and adjust the pose of the second electronic device according to those images to control the motion of the first electronic device, the technical effect of providing the user with an immersive experience while controlling the movement of the first electronic device is achieved, thereby solving the technical problem in the prior art that the control of the motion of an electronic device cannot provide the user with an immersive experience.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An information processing method, a first electronic device, a second electronic device, and a storage medium. The method includes: obtaining an image collected by an image acquisition unit, and generating, based on the image, first data for sending to a second electronic device (S102), where the second electronic device is the control device of the first electronic device, and the second electronic device can receive the first data sent by the first electronic device (S202) and can deliver to the first electronic device second data for controlling the first electronic device, the second data being data obtained by the second electronic device based on detection of its own pose (S204); and receiving the second data delivered by the second electronic device, obtaining, based on the second data, a control instruction for controlling the working state of the first electronic device, and executing the control instruction (S104). This solution solves the technical problem in the prior art that the control of the motion of an electronic device cannot provide the user with an immersive experience.


Claims (20)

  1. An information processing method, applied to a first electronic device, the first electronic device comprising a drive unit, the drive unit being configured to provide a driving force for the first electronic device so that the first electronic device can produce a displacement, the first electronic device further comprising at least one image acquisition unit, the image acquisition unit being configured to collect images of an environment in which the first electronic device is located, the method comprising:
    obtaining an image collected by the image acquisition unit, and generating, based on the image, first data for sending to a second electronic device, wherein the second electronic device is a control device of the first electronic device, and the second electronic device can receive the first data sent by the first electronic device and can deliver, to the first electronic device, second data for controlling the first electronic device, the second data being data obtained by the second electronic device based on detection of its own pose; and
    receiving the second data delivered by the second electronic device, obtaining, based on the second data, a control instruction for controlling a working state of the first electronic device, and executing the control instruction, wherein the control instruction is used at least to control the drive unit of the first electronic device to drive the first electronic device to produce a displacement.
  2. The information processing method according to claim 1, wherein obtaining the image collected by the image acquisition unit and generating, based on the image, the first data for sending to the second electronic device comprises:
    generating a corresponding stereoscopic image from two-dimensional images respectively obtained by a plurality of the image acquisition units by executing a preset stereoscopic image construction algorithm, and sending the stereoscopic image to the second electronic device as the first data.
  3. The information processing method according to claim 2, wherein the first electronic device further has an audio collection unit, the audio collection unit being configured to collect audio data of the environment in which the first electronic device is located; and after the stereoscopic image is generated, the method further comprises:
    synchronously combining the stereoscopic image with the audio data, and sending the synchronously combined stereoscopic image and audio data to the second electronic device as the first data.
  4. The information processing method according to claim 1, wherein the second data is a pitch and/or yaw angle measured by the second electronic device, and
    obtaining, based on the second data, the control instruction for controlling the working state of the first electronic device comprises: querying a preset first mapping relationship according to the pitch and/or yaw angle measured by the second electronic device to obtain a pitch and/or yaw angle of a corresponding image acquisition unit of the first electronic device, and generating, based on the obtained pitch and/or yaw angle of the image acquisition unit, a control instruction for controlling the image acquisition unit to adjust its pitch and/or yaw angle.
  5. The information processing method according to claim 1, wherein the second data is a moving linear velocity and/or angular velocity measured by the second electronic device, and
    obtaining, based on the second data, the control instruction for controlling the working state of the first electronic device comprises: querying a preset second mapping relationship according to the moving linear velocity and/or angular velocity measured by the second electronic device to obtain a corresponding moving linear velocity and/or angular velocity of the first electronic device, and generating, based on the obtained moving linear velocity and/or angular velocity of the first electronic device, a control instruction for controlling the first electronic device to adjust its moving linear velocity and/or angular velocity.
  6. An information processing method, applied to a second electronic device, the second electronic device being a control device of a first electronic device, the first electronic device comprising a drive unit configured to provide a driving force for the first electronic device so that the first electronic device can produce a displacement, the first electronic device further comprising at least one image acquisition unit configured to collect images of an environment in which the first electronic device is located, the method comprising:
    receiving first data sent by the first electronic device, wherein the first data is generated by the first electronic device based on an image collected by the image acquisition unit; and
    delivering, to the first electronic device, second data for controlling the first electronic device, the second data being data obtained by the second electronic device based on detection of its own pose, wherein the first electronic device, after receiving the second data delivered by the second electronic device, obtains, based on the second data, a control instruction for controlling a working state of the first electronic device and executes the control instruction, the control instruction being used at least to control the drive unit of the first electronic device to drive the first electronic device to produce a displacement.
  7. The information processing method according to claim 6, wherein the second data is a pitch and/or yaw angle measured by the second electronic device;
    before delivering, to the first electronic device, the second data for controlling the first electronic device, the method further comprises: measuring the pitch and/or yaw angle of the second electronic device; and
    delivering, to the first electronic device, the second data for controlling the first electronic device comprises: delivering the pitch and/or yaw angle of the second electronic device to the first electronic device.
  8. The information processing method according to claim 6, wherein the second data is a moving linear velocity and/or angular velocity measured by the second electronic device;
    before delivering, to the first electronic device, the second data for controlling the first electronic device, the method further comprises: measuring the moving linear velocity and/or angular velocity of the second electronic device; and
    delivering, to the first electronic device, the second data for controlling the first electronic device comprises: delivering the moving linear velocity and/or angular velocity of the second electronic device to the first electronic device.
  9. A first electronic device, comprising:
    an image acquisition unit, configured to collect images of an environment in which the first electronic device is located;
    a generating unit, configured to generate, based on the image, first data for sending to a second electronic device, wherein the second electronic device is a control device of the first electronic device, and the second electronic device can receive the first data sent by the first electronic device and can deliver, to the first electronic device, second data for controlling the first electronic device, the second data being data obtained by the second electronic device based on detection of its own pose;
    a first sending unit, configured to send the first data to the second electronic device;
    a first receiving unit, configured to receive the second data delivered by the second electronic device;
    an obtaining unit, configured to obtain, based on the second data, a control instruction for controlling a working state of the first electronic device, wherein the control instruction is used at least to control a drive unit of the first electronic device to drive the first electronic device to produce a displacement; and
    the drive unit, configured to execute the control instruction and provide a driving force for the first electronic device so that the first electronic device can produce a displacement.
  10. The first electronic device according to claim 9, wherein
    the first electronic device comprises a plurality of the image acquisition units, the plurality of image acquisition units being configured to collect two-dimensional images of the environment in which the first electronic device is located;
    the generating unit comprises: a generating subunit, configured to generate a corresponding stereoscopic image from the two-dimensional images respectively obtained by the plurality of image acquisition units by executing a preset stereoscopic image construction algorithm; and
    the first sending unit is configured to send the stereoscopic image to the second electronic device as the first data.
  11. The first electronic device according to claim 10, wherein the first electronic device further comprises:
    an audio collection unit, configured to collect audio data of the environment in which the first electronic device is located; and
    a merging unit, configured to synchronously combine the stereoscopic image with the audio data;
    the first sending unit being further configured to send the synchronously combined stereoscopic image and audio data to the second electronic device as the first data.
  12. The first electronic device according to claim 9, wherein the second data is a pitch and/or yaw angle measured by the second electronic device, and the obtaining unit comprises:
    a first query subunit, configured to query a preset first mapping relationship according to the pitch and/or yaw angle measured by the second electronic device to obtain a pitch and/or yaw angle of a corresponding image acquisition unit of the first electronic device; and
    a first generation subunit, configured to generate, based on the obtained pitch and/or yaw angle of the image acquisition unit, a control instruction for controlling the image acquisition unit to adjust its pitch and/or yaw angle.
  13. The first electronic device according to claim 9, wherein the second data is a moving linear velocity and/or angular velocity measured by the second electronic device, and the obtaining unit comprises:
    a second query subunit, configured to query a preset second mapping relationship according to the moving linear velocity and/or angular velocity measured by the second electronic device to obtain a corresponding moving linear velocity and/or angular velocity of the first electronic device; and
    a second generation subunit, configured to generate, based on the obtained moving linear velocity and/or angular velocity of the first electronic device, a control instruction for controlling the first electronic device to adjust its moving linear velocity and/or angular velocity.
  14. A second electronic device, wherein the second electronic device is a control device of a first electronic device, the first electronic device comprising a drive unit configured to provide a driving force for the first electronic device so that the first electronic device can produce a displacement, the first electronic device further comprising at least one image acquisition unit configured to collect images of an environment in which the first electronic device is located, the second electronic device comprising:
    a second receiving unit, configured to receive first data sent by the first electronic device, wherein the first data is generated by the first electronic device based on an image collected by the image acquisition unit; and
    a second sending unit, configured to deliver, to the first electronic device, second data for controlling the first electronic device, the second data being data obtained by the second electronic device based on detection of its own pose, wherein the first electronic device, after receiving the second data delivered by the second electronic device, obtains, based on the second data, a control instruction for controlling a working state of the first electronic device and executes the control instruction, the control instruction being used at least to control the drive unit of the first electronic device to drive the first electronic device to produce a displacement.
  15. The second electronic device according to claim 14, wherein the second data is a pitch and/or yaw angle measured by the second electronic device, and the second electronic device further comprises:
    a first detecting unit, configured to measure the pitch and/or yaw angle of the second electronic device before the second sending subunit delivers, to the first electronic device, the second data for controlling the first electronic device;
    the second sending unit being further configured to deliver the pitch and/or yaw angle of the second electronic device to the first electronic device.
  16. The second electronic device according to claim 14, wherein the second data is a moving linear velocity and/or angular velocity measured by the second electronic device, and the second electronic device further comprises:
    a second detecting unit, configured to measure the moving linear velocity and/or angular velocity of the second electronic device before the third sending subunit delivers, to the first electronic device, the second data for controlling the first electronic device;
    the second sending unit being further configured to deliver the moving linear velocity and/or angular velocity of the second electronic device to the first electronic device.
  17. A first electronic device, comprising a processor and a memory for storing a first computer program capable of running on the processor, wherein the processor is configured to perform the steps of the method according to any one of claims 1 to 5 when running the first computer program.
  18. A first computer-readable storage medium having stored thereon a first computer program, wherein the first computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
  19. A second electronic device, comprising a processor and a memory for storing a second computer program capable of running on the processor, wherein the processor is configured to perform the steps of the method according to any one of claims 6 to 8 when running the second computer program.
  20. A second computer-readable storage medium having stored thereon a second computer program, wherein the second computer program, when executed by a processor, implements the steps of the method according to any one of claims 6 to 8.
PCT/CN2017/111074 2016-10-19 2017-11-15 Information processing method, electronic device, and storage medium WO2018072760A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201610913186 2016-10-19
CN201610913186.9 2016-10-19
CN201611005964.0 2016-11-15
CN201611005964.0A CN106569429A (zh) 2016-10-19 2016-11-15 Information processing method, first electronic device and second electronic device

Publications (1)

Publication Number Publication Date
WO2018072760A1 (zh)

Family

ID=58541932

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/111074 WO2018072760A1 (zh) 2016-10-19 2017-11-15 一种信息处理方法及电子设备、存储介质

Country Status (2)

Country Link
CN (1) CN106569429A (zh)
WO (1) WO2018072760A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106569429A (zh) * 2016-10-19 2017-04-19 纳恩博(北京)科技有限公司 信息处理方法、第一电子设备和第二电子设备
CN109507904B (zh) * 2018-12-18 2022-04-01 珠海格力电器股份有限公司 家居设备管理方法、服务器、及管理系统
CN111209050A (zh) * 2020-01-10 2020-05-29 北京百度网讯科技有限公司 用于切换电子设备的工作模式的方法和装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103480152A (zh) * 2013-08-31 2014-01-01 中山大学 A remote-controllable telepresence mobile system
US20150123966A1 (en) * 2013-10-03 2015-05-07 Compedia - Software And Hardware Development Limited Interactive augmented virtual reality and perceptual computing platform
CN105373137A (zh) * 2015-11-03 2016-03-02 上海酷睿网络科技股份有限公司 Unmanned driving system
CN205210690U (zh) * 2015-11-03 2016-05-04 上海酷睿网络科技股份有限公司 Unmanned driving system
CN106569429A (zh) * 2016-10-19 2017-04-19 纳恩博(北京)科技有限公司 Information processing method, first electronic device and second electronic device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113856935A (zh) * 2020-10-27 2021-12-31 上海飞机制造有限公司 Human-machine collaborative control spraying system and method
CN113856935B (zh) 2020-10-27 2023-08-04 上海飞机制造有限公司 Human-machine collaborative control spraying system and method

Also Published As

Publication number Publication date
CN106569429A (zh) 2017-04-19

Similar Documents

Publication Publication Date Title
US11853639B2 (en) Sharing neighboring map data across devices
WO2018072760A1 (zh) Information processing method, electronic device, and storage medium
JP2022533309A (ja) Image-based localization
TWI476633B (zh) System and method for transmitting haptic information
US20130169626A1 (en) Distributed asynchronous localization and mapping for augmented reality
CN114127837A (zh) Content provision system and method
JP6150429B2 (ja) Robot control system, robot, output control program, and output control method
JP6374984B2 (ja) Method for localizing a robot in a localization plane
TW202115366A (zh) System and method for probabilistic multi-robot SLAM
JP7316282B2 (ja) Systems and methods for augmented reality
WO2018076777A1 (zh) Robot positioning method and apparatus, and robot
US11279036B2 (en) Methods and systems for implementing customized motions based on individual profiles for identified users
US20150138301A1 (en) Apparatus and method for generating telepresence
TW201915445A (zh) Positioning method, positioner, and positioning system for a head-mounted display device
JPWO2019225548A1 (ja) Remote operation system, information processing method, and program
WO2020114214A1 (zh) Blind guiding method and apparatus, storage medium, and electronic device
US10178370B2 (en) Using multiple cameras to stitch a consolidated 3D depth map
US20220400155A1 (en) Method and system for remote collaboration
Zhang et al. Binocular vision sensor (Kinect)-based pedestrian following mobile robot
TWI836498B (zh) Method, system, and recording medium for accessory pairing
WO2023276215A1 (ja) Information processing device, information processing method, and program
WO2022044900A1 (ja) Information processing device, information processing method, and recording medium
CN116823928A (zh) Positioning of a control device, and apparatus, device, storage medium, and computer program product
JP7266128B2 (ja) Three-dimensional map generation method and system
US20240013487A1 (en) Method and device for generating a synthesized reality reconstruction of flat video content

Legal Events

Date Code Title Description
121: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 17862806; country of ref document: EP; kind code of ref document: A1)
NENP: non-entry into the national phase (ref country code: DE)
122: PCT application non-entry in European phase (ref document number: 17862806; country of ref document: EP; kind code of ref document: A1)