WO2017147909A1 - Control method and apparatus for a target device - Google Patents

Control method and apparatus for a target device

Info

Publication number
WO2017147909A1
WO2017147909A1 · PCT/CN2016/075667 · CN2016075667W
Authority
WO
WIPO (PCT)
Prior art keywords
user
target device
input information
image
user interface
Prior art date
Application number
PCT/CN2016/075667
Other languages
English (en)
French (fr)
Inventor
辛志华 (Xin Zhihua)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to CN201680066789.8A priority Critical patent/CN108353151A/zh
Priority to PCT/CN2016/075667 priority patent/WO2017147909A1/zh
Publication of WO2017147909A1 publication Critical patent/WO2017147909A1/zh

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 - Programme-control systems
    • G05B19/02 - Programme-control systems electric
    • G05B19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to the field of communications, and in particular, to a method and apparatus for controlling a target device.
  • IoT: Internet of Things
  • The Internet of Things is the network that connects things. This has two meanings: first, the core and foundation of the Internet of Things is still the Internet, of which it is an extension and expansion; second, its endpoints extend to any object, enabling information exchange and communication between things.
  • Through communication and sensing technologies such as intelligent perception, identification technology, and pervasive computing, the Internet of Things is widely applied in the convergence of networks. It is also called the third wave of the world information industry, after the computer and the Internet.
  • In the prior art, the user interface (UI) of the control device describes devices and their positions in text for the user to select, for example, "bedroom, lamp 01".
  • With such a text-based user interface, if the target device must be selected among multiple devices of different types or of the same type, the user has to picture the layout of the scene where the target device is located in order to find it.
  • Moreover, the user does not necessarily know the name of each device as displayed in the user interface, so the user may need multiple attempts to select the target device, and the user experience is poor.
  • The purpose of the present application is to provide an improved control method for a target device, to reduce the number of attempts the user needs to select the target device and improve the user experience.
  • the present application provides a method of controlling a target device.
  • The method includes: acquiring, through a user interface, first input information of a user, where the user interface includes an image of a three-dimensional scene, the three-dimensional scene includes at least one device controllable by the user, and the first input information is used to select, among the at least one device, the location of the target device in the image; determining the target device according to the first input information; and controlling the target device.
  • The controlling the target device includes: acquiring, through the user interface, second input information input by the user, where the user interface presents the function types of the target device that are controllable by the user, and the second input information is used to control an operating parameter of the target device; and controlling the target device according to the second input information.
  • By inputting the second input information through the user interface, the user can further control other functions of the target device, thereby improving the user experience.
  • The controlling the target device further includes: acquiring, through the user interface, third input information input by the user, where the user interface presents management items of the target device that are available to the user for device management; and managing the target device according to the third input information.
  • By inputting the third input information through the user interface, the user can manage the target device bound in the user interface, which can improve the user experience.
  • The determining the target device according to the first input information includes: determining, according to the first input information, the coordinates of the target device in the image; determining, according to the coordinates, the device identifier of the target device by using a pre-stored correspondence between coordinates and device identifiers; and determining the target device according to the device identifier of the target device.
  • the image comprises a two-dimensional photo, a panoramic photo, or a 360-degree spherical photo.
  • Users can control the target device through a user interface based on 2D photos, panoramic photos, or 360-degree spherical photos.
  • A panoramic photo or a 360-degree spherical photo can give users a strong sense of immersion in the scene and improve the user experience.
  • The obtaining, through the user interface, the first input information of the user includes: acquiring the first input information of the user through a user interface of a virtual reality (VR) device, where the image of the three-dimensional scene in the user interface is a stereoscopic image, and the first input information is information input by the user through an interaction device of the VR device.
  • The user can input the first input information through the user interface of the virtual reality device to control the target device.
  • The user interface of the virtual reality device can give the user a strong sense of immersion in the scene and improve the user experience.
  • The stereoscopic image is a 360-degree spherical image.
  • The 360-degree spherical image is obtained by capturing planar images of the three-dimensional scene and projecting them onto the surface of a sphere model.
  • The user controls the target device through a user interface presented as a 360-degree spherical image, which can give the user a strong sense of immersion in the scene and improve the user experience.
  • The present application further provides a control device for a target device, where the device includes: a first acquiring module, configured to acquire first input information of a user through a user interface, where the user interface includes an image of a three-dimensional scene, the three-dimensional scene includes at least one device controllable by the user, and the first input information is used to select, among the at least one device, the location of the target device in the image; a determining module, configured to determine the target device according to the first input information acquired by the first acquiring module; and a control module, configured to control the target device determined by the determining module.
  • The control device of the target device provides the user with an image-based user interface, so that the user can select the target device to be managed quickly, accurately, and intuitively, avoiding the multiple attempts required in the prior art to select a target device from a text description, thereby improving the user experience.
  • The control module is specifically configured to: acquire, through the user interface, second input information input by the user, where the user interface presents the function types of the target device that are controllable by the user, and the second input information is used to control an operating parameter of the target device; and control the target device according to the second input information.
  • By inputting the second input information through the user interface, the user can further control other functions of the target device, thereby improving the user experience.
  • The device further includes: a second acquiring module, configured to acquire, through the user interface, third input information input by the user, where the user interface presents management items of the target device that are available to the user for device management; and a management module, configured to manage the target device according to the third input information.
  • By inputting the third input information through the user interface, the user can manage the target device bound in the user interface, which can improve the user experience.
  • The determining module is specifically configured to: determine, according to the first input information, the coordinates of the target device in the image; determine, according to the coordinates, the device identifier of the target device by using the pre-stored correspondence between coordinates and device identifiers; and determine the target device according to the device identifier of the target device.
  • the image includes a two-dimensional photo, a panoramic photo, or a 360-degree spherical photo.
  • Users can control the target device through a user interface based on 2D photos, panoramic photos, or 360-degree spherical photos.
  • A panoramic photo or a 360-degree spherical photo can give users a strong sense of immersion in the scene and improve the user experience.
  • The first acquiring module is specifically configured to: acquire the first input information of the user through a user interface of the virtual reality device, where the image of the three-dimensional scene in the user interface is a stereoscopic image, and the first input information is information input by the user through an interaction device of the VR device.
  • the user can input the first input information through the user interface of the virtual reality device to control the target device.
  • The user interface of the virtual reality device can give the user a strong sense of immersion in the scene and improve the user experience.
  • the stereoscopic image is a 360-degree spherical image
  • The 360-degree spherical image is obtained by capturing planar images of the three-dimensional scene and projecting them onto the surface of a sphere model.
  • The user controls the target device through a user interface presented as a 360-degree spherical image, which can give the user a strong sense of immersion in the scene and improve the user experience.
  • The present application further provides a control device for a target device, the device comprising a memory, a processor, an input/output interface, a communication interface, and a bus system, where the memory, the processor, the input/output interface, and the communication interface are connected through the bus system; the input/output interface is configured to acquire first input information of a user through a user interface, where the user interface includes an image of a three-dimensional scene, the three-dimensional scene includes at least one device controllable by the user, and the first input information is used to select, among the at least one device, the location of the target device in the image; and the processor is configured to determine the target device according to the first input information acquired by the input/output interface, and to control the determined target device.
  • The processor is specifically configured to: acquire, through the user interface, second input information input by the user, where the user interface presents the function types of the target device that are controllable by the user, and the second input information is used to control an operating parameter of the target device; and control the target device according to the second input information.
  • By inputting the second input information through the user interface, the user can further control other functions of the target device, thereby improving the user experience.
  • The input/output interface is further configured to acquire, through the user interface, third input information input by the user, where the user interface presents management items of the target device that are available to the user for device management; and the processor is further configured to manage the target device according to the third input information.
  • By inputting the third input information through the user interface, the user can manage the target device bound in the user interface, which can improve the user experience.
  • The processor is specifically configured to: determine, according to the first input information, the coordinates of the target device in the image; determine, according to the coordinates, the device identifier of the target device by using the pre-stored correspondence between coordinates and device identifiers; and determine the target device according to the device identifier of the target device.
  • the image includes a two-dimensional photo, a panoramic photo, or a 360-degree spherical photo.
  • Users can control the target device through a user interface based on 2D photos, panoramic photos or 360-degree spherical photos.
  • A panoramic photo or a 360-degree spherical photo can give users a strong sense of immersion in the scene and improve the user experience.
  • The input/output interface is specifically configured to: acquire the first input information of the user through a user interface of the virtual reality device, where the image of the three-dimensional scene in the user interface is a stereoscopic image, and the first input information is information input by the user through an interaction device of the VR device.
  • the user can input the first input information through the user interface of the virtual reality device to control the target device.
  • The user interface of the virtual reality device can give the user a strong sense of immersion in the scene and improve the user experience.
  • the stereoscopic image is a 360-degree spherical image
  • The 360-degree spherical image is obtained by capturing planar images of the three-dimensional scene and projecting the planar images onto the surface of a sphere model.
  • The user controls the target device through a user interface presented as a 360-degree spherical image, which can give the user a strong sense of immersion in the scene and improve the user experience.
  • the present application provides a computer readable storage medium for storing program code of a control method of a target device, the program code comprising instructions for performing the method of the first aspect.
  • The device identifier of the target device may include the name of the scene containing the device and the name of the device.
  • the present application provides an improved control scheme for a device by which a user can select a target device quickly, accurately, and intuitively, thereby improving the user experience.
  • FIG. 1 shows a schematic flow chart of a control method of a target device according to an embodiment of the present invention.
  • FIG. 2 shows a schematic diagram of a menu for controlling a function of a target device in accordance with an embodiment of the present invention.
  • FIG. 3 shows a schematic diagram of a menu for managing a target device in accordance with an embodiment of the present invention.
  • FIG. 4 is a schematic diagram showing the principle of mapping the coordinates of a touch point based on a panoramic photo to coordinates in an image.
  • FIG. 5 is a schematic block diagram of a control device of a target device according to an embodiment of the present invention.
  • FIG. 6 is a schematic block diagram of a control device of a target device according to an embodiment of the present invention.
  • FIG. 1 shows a schematic flowchart of a method of controlling a target device according to an embodiment of the present invention, and the method of FIG. 1 may be performed by a terminal device.
  • the method of Figure 1 includes:
  • 110: Acquire, through a user interface, first input information of a user, where the user interface includes an image of a three-dimensional scene, the three-dimensional scene includes at least one device controllable by the user, and the first input information is used to select, among the at least one device, the location of the target device in the image.
  • 120: Determine the target device according to the first input information.
  • 130: Control the target device.
  • The image of the three-dimensional scene may be constructed based on a photo of the scene in which the target device is located, a rendering of the scene, an engineering drawing of the scene, or a combination of two or more of these.
  • The image of the three-dimensional scene may be a two-dimensional (2D) photo, a panoramic photo, or a 360-degree spherical photo.
  • The first input information may be a touch input by the user on the touch screen presenting the user interface, or may be voice information input by the user into the user interface; the embodiment of the present invention does not specifically limit the input method of the first input information.
  • The method for controlling a target device in the embodiment of the present invention provides the user with an image-based user interface for managing devices, so that the user can quickly, accurately, and intuitively select the device to be managed, that is, the target device, avoiding the multiple attempts required in the prior art to select a target device from a text description, thereby improving the user experience.
  • Step 130 may include: acquiring, through the user interface, second input information input by the user, where the user interface presents the function types of the target device that are available to the user, and the second input information is used to control an operating parameter of the target device; and controlling the target device according to the second input information.
  • FIG. 2 shows a schematic diagram of a menu for controlling the function of the target device according to an embodiment of the present invention.
  • the functions of the target device are presented to the user in the form of icons. It should be understood that the menu may also be represented in the form of text.
  • the embodiment of the present invention does not specifically limit the presentation form of the menu to the user.
  • the user can control the operating parameters of the device through this menu. For example, after the user selects the air conditioner 200 in the image through the user interface, the user interface pops up a menu 210 for controlling the air conditioning operation parameter, and the user can control the temperature 220 of the air conditioner and the operation mode 230.
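The flow just described (the user selects the air conditioner 200 in the image, then sets operating parameters through menu 210) can be sketched in code. The patent specifies no implementation, so the following is only an illustrative sketch: the `Device` class, the `apply` method, and the identifier `"aircon-200"` are all hypothetical names.

```python
# Hypothetical sketch of handling "second input information": after the target
# device is selected, the UI presents its controllable function types and
# forwards the chosen operating parameters to the device.

class Device:
    def __init__(self, device_id, function_types):
        self.device_id = device_id
        self.function_types = function_types  # e.g. {"temperature", "mode"}
        self.params = {}

    def apply(self, second_input):
        """Apply the operating parameters carried by the second input information."""
        for name, value in second_input.items():
            if name not in self.function_types:
                raise ValueError(
                    f"{name} is not a controllable function of {self.device_id}")
            self.params[name] = value

# The user sets temperature 220 and operating mode 230 through menu 210.
air_conditioner = Device("aircon-200", {"temperature", "mode"})
air_conditioner.apply({"temperature": 24, "mode": "cool"})
print(air_conditioner.params)  # {'temperature': 24, 'mode': 'cool'}
```

Rejecting parameters outside the bound function types mirrors the idea that the menu only exposes the functions the target device makes available to the user.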
  • The method shown in FIG. 1 may further include: acquiring, through the user interface, third input information input by the user, where the user interface presents management items of the target device that are available to the user for device management; and managing the target device according to the third input information.
  • the user can press and hold the target device through the user interface, and the user interface can pop up a menu for managing the device.
  • FIG. 3 shows a schematic diagram of a menu for managing a target device according to an embodiment of the present invention, in which the functions of the target device are presented to the user in the form of icons.
  • the menu for managing the device may also be presented to the user in the form of text.
  • the embodiment of the present invention does not specifically limit the presentation form of the menu to the user.
  • The second input information may be the same as the third input information; that is, the user may input either the second input information or the third input information, so that the user interface simultaneously presents a menu for managing the device and a menu for controlling the device's functions.
  • the ceiling light 300 will be described as an example.
  • The user can control the brightness of the ceiling light 300 with the brightness-adjustment button 310 in the user interface shown in FIG. 3, and can control the on/off state of the ceiling light 300 with the switch-state button 320.
  • The user can also control the color of the light emitted when the ceiling light 300 is on with the color-adjustment button 330, and can rename the ceiling light 300 through the user interface shown in FIG. 3 to change its device name.
  • The user can also unbind the bound ceiling light 300 with the unbind button 350 in the user interface shown in FIG. 3; after unbinding, the device identifier of the ceiling light 300 is returned to the list of unbound devices for the next binding operation.
  • Step 120 may include: determining, according to the first input information, the coordinates of the target device in the image; determining, according to the coordinates, the device identifier of the target device by using a pre-stored correspondence between coordinates and device identifiers; and determining the target device according to the device identifier of the target device.
  • The pre-stored correspondence between coordinates and device identifiers may be generated by the user selecting an unbound device identifier from the unbound device library and binding it to the corresponding coordinates in the image.
  • When binding, the device may also be bound to the functions that the user can manage, that is, to the types of operating parameters that the user can control through the menu.
  • During binding, the user may be helped to confirm whether the selected device identifier is the one to bind to the selected coordinates. For example, if the user needs to bind a bedroom lamp, then after the user selects the device identifier corresponding to the lamp, the lamp can flash to help the user confirm that the selected device identifier corresponds to the device the user wishes to bind, that is, the target device.
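The binding, unbinding, and coordinate-to-identifier lookup of step 120 can be sketched as follows. This is a minimal illustration under assumed names (`bind`, `unbind`, `lookup`, the `"bedroom/..."` identifiers); in particular, the tolerance-based matching of a touch point against stored coordinates is an assumption, not something the patent specifies.

```python
# Illustrative sketch: coordinates chosen by the user are matched against a
# pre-stored coordinate-to-identifier table that was built when unbound
# devices were bound to positions in the image.

unbound_devices = {"bedroom/lamp-01", "bedroom/aircon-200"}
bindings = {}  # (x, y) image coordinates -> device identifier

def bind(coord, device_id):
    """Bind an unbound device identifier to coordinates in the image."""
    unbound_devices.remove(device_id)
    bindings[coord] = device_id

def unbind(device_id):
    """Return a bound device to the unbound list for the next binding operation."""
    for coord, bound_id in list(bindings.items()):
        if bound_id == device_id:
            del bindings[coord]
            unbound_devices.add(device_id)

def lookup(coord, tolerance=20):
    """Determine the target device identifier from touch coordinates."""
    for (bx, by), device_id in bindings.items():
        if abs(coord[0] - bx) <= tolerance and abs(coord[1] - by) <= tolerance:
            return device_id
    return None  # no bound device near this point

bind((120, 340), "bedroom/lamp-01")
print(lookup((125, 332)))  # bedroom/lamp-01
```

A touch close enough to a bound position resolves to that device's identifier, which is then used to determine and control the target device.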
  • The above image may be a 2D photo, a panoramic photo, or a 360-degree spherical photo.
  • When the image is a 2D photo, the coordinates of the user's touch point on the touch screen presenting the user interface have a linear mapping to coordinates in the image, satisfying x1 = a·x + b and y1 = c·y + d, where (x1, y1) are the coordinates in the image, (x, y) are the coordinates of the touch point, and a, b, c, and d are constants.
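The linear touch-to-image mapping above, x1 = a·x + b and y1 = c·y + d, can be sketched directly. The concrete constants below are illustrative only: they assume the image is displayed at half resolution with a small offset, which is one plausible way such constants could arise.

```python
# Minimal sketch of the linear mapping from touch-screen coordinates to
# image coordinates described in the patent.

def touch_to_image(x, y, a, b, c, d):
    """Map touch-screen coordinates (x, y) to image coordinates (x1, y1)."""
    return a * x + b, c * y + d

# Example (assumed values): the image is shown at half resolution, offset
# 10 px from the screen origin, so image = 2 * (screen - 10) = 2*screen - 20.
x1, y1 = touch_to_image(210, 410, a=2, b=-20, c=2, d=-20)
print((x1, y1))  # (400, 800)
```

In practice a, b, c, and d would be derived from the display scale and offset of the image on the touch screen.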
  • FIG. 4 shows a schematic diagram of the principle of mapping touch-point coordinates to image coordinates for a panoramic photo. Since a panoramic photo uses a cylindrical projection, as shown in FIG. 4, the cylinder can be unrolled into a plane along the x-axis, and the mapping between touch-point coordinates and image coordinates then also satisfies the linear relationship.
  • The mapping relationship from the coordinates of the touch point to coordinates in the image can be determined by mapping-calculation software based on the projection type of the image and the relevant parameters of the image file format.
  • the mapping relationship between the coordinates of the touch point and the coordinates in the image is not specifically limited in the embodiment of the present invention.
  • The acquiring, through the user interface, the first input information of the user includes: acquiring the first input information of the user through a user interface of a virtual reality (VR) device, where the image of the three-dimensional scene in the user interface is a stereoscopic image, and the first input information is information input by the user through an interaction device of the VR device.
  • VR: virtual reality.
  • The above VR device may be a three-dimensional visual display device such as a 3D display system, a large projection system, or a head-mounted stereo display.
  • the VR interaction device can be a data glove, a 3D input device (eg, a three-dimensional mouse), a motion capture device, an eye tracker, a force feedback device, and the like.
  • the stereoscopic image is a 360-degree spherical image
  • The 360-degree spherical image is obtained by capturing planar images of the three-dimensional scene and projecting the planar images onto the surface of a sphere model.
  • The above 360-degree spherical photo can be formed by stitching together source images captured by a camera rotating around its nodal point; that is, a mapping is established between the coordinates of the source images and spherical coordinates, and the source images are stitched into a 360-degree spherical photo.
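Establishing a mapping between source-image coordinates and spherical coordinates, as described above, can be sketched as follows. An equirectangular projection is assumed here purely for illustration; the patent does not fix any particular projection, and the function names are hypothetical.

```python
# Hedged sketch: map a pixel of an assumed equirectangular 360-degree photo
# to spherical angles, then project onto the surface of a unit sphere model.

import math

def pixel_to_sphere(u, v, width, height):
    """Map an equirectangular pixel (u, v) to spherical angles (lon, lat) in radians."""
    lon = (u / width) * 2 * math.pi - math.pi    # longitude in [-pi, pi)
    lat = math.pi / 2 - (v / height) * math.pi   # latitude in [-pi/2, pi/2]
    return lon, lat

def sphere_to_point(lon, lat, radius=1.0):
    """Project spherical angles onto the surface of a sphere model."""
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.cos(lat) * math.sin(lon)
    z = radius * math.sin(lat)
    return x, y, z

# The centre pixel of a 4096x2048 photo lands at longitude 0, latitude 0,
# i.e. the point (1, 0, 0) on the unit sphere.
lon, lat = pixel_to_sphere(2048, 1024, 4096, 2048)
print(sphere_to_point(lon, lat))
```

Stitching software applies such a mapping to every source image so that, once projected onto the sphere, the images join into a single 360-degree spherical photo.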
  • Optionally, the image may be a panoramic photo or a 360-degree spherical photo.
  • the VR device can sense the rotation of the user's head through the gyro sensor, and present the scene to the user in a stereoscopic manner.
  • The user can select and control the target device through the interaction device of the VR device, which can provide a better experience for the user.
  • The control method of the target device according to the embodiments of the present invention is described in detail above with reference to FIG. 1 to FIG. 4; the control device of the target device according to the embodiments of the present invention is described in detail below with reference to FIG. 5 and FIG. 6. It should be understood that the apparatuses shown in FIG. 5 and FIG. 6 can implement the steps in FIG. 1; to avoid repetition, details are not described again herein.
  • FIG. 5 is a schematic block diagram of a control device of a target device according to an embodiment of the present invention.
  • The apparatus 500 shown in FIG. 5 includes a first acquiring module 510, a determining module 520, and a control module 530.
  • The first acquiring module 510 is configured to acquire first input information of a user through a user interface, where the user interface includes an image of a three-dimensional scene, the three-dimensional scene includes at least one device controllable by the user, and the first input information is used to select, among the at least one device, the location of the target device in the image;
  • the determining module 520 is configured to determine the target device according to the first input information acquired by the first acquiring module 510; and
  • the control module 530 is configured to control the target device determined by the determining module 520.
  • The apparatus provides the user with an image-based user interface, so that the user can quickly, accurately, and intuitively select the device to be managed, that is, the target device, avoiding the multiple attempts required in the prior art to select a target device from a text description, thereby improving the user experience.
  • FIG. 6 is a schematic block diagram of a control device of a target device according to an embodiment of the present invention.
  • the apparatus 600 shown in FIG. 6 includes a memory 610, a processor 620, an input/output interface 630, a communication interface 640, and a bus system 650.
  • the memory 610, the processor 620, the input/output interface 630, and the communication interface 640 are connected by a bus system 650.
  • the memory 610 is configured to store instructions
  • the processor 620 is configured to execute the instructions stored by the memory 610.
  • The processor 620 controls the input/output interface 630 to receive input data and information and to output data such as operation results, and controls the communication interface 640 to send signals.
  • The input/output interface 630 is configured to acquire first input information of the user through a user interface, where the user interface includes an image of a three-dimensional scene, the three-dimensional scene includes at least one device controllable by the user, and the first input information is used to select, among the at least one device, the location of the target device in the image;
  • the processor 620 is configured to determine the target device according to the first input information acquired by the input/output interface 630, and to control the determined target device.
  • The apparatus provides the user with an image-based user interface, so that the user can quickly, accurately, and intuitively select the device to be managed, that is, the target device, avoiding the multiple attempts required in the prior art to select a target device from a text description, thereby improving the user experience.
  • the device 600 shown in FIG. 6 may be a terminal device, and the input/output interface 630 may be a touch screen of the terminal device 600, and the terminal device 600 may present the user interface through the touch screen to acquire a user's The first input information.
  • The processor 620 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, configured to execute related programs to implement the technical solutions provided by the embodiments of the present invention.
  • The communication interface 640 enables communication between the device 600 and other devices or communication networks using a transceiver apparatus such as, but not limited to, a transceiver.
  • the memory 610 can include read only memory and random access memory and provides instructions and data to the processor 620.
  • A portion of the memory 610 may also include non-volatile random access memory.
  • The memory 610 may also store device type information.
  • the bus system 650 may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. However, for clarity of description, various buses are labeled as bus system 650 in the figure.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the processor 620 or an instruction in a form of software.
  • The steps of the control method of the target device disclosed in the embodiments of the present invention may be performed directly by a hardware processor, or performed by a combination of hardware and software modules in the processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 610, and the processor 620 reads the information in the memory 610 and completes the steps in the method shown in FIG. 1 in combination with its hardware. To avoid repetition, it will not be described in detail here.
  • "B corresponding to A" means that B is associated with A, and B can be determined according to A. However, it should also be understood that determining B according to A does not mean determining B only according to A; B may also be determined according to A and/or other information.
  • The sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation process of the embodiments of the present invention.
  • The disclosed systems, apparatuses, and methods may be implemented in other manners.
  • The apparatus embodiments described above are merely illustrative.
  • The division into units is merely a division by logical function; in actual implementation there may be other divisions. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • The mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
  • The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • The functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • If the functions are implemented in the form of a software functional unit and sold or used as a standalone product, they may be stored in a computer-readable storage medium.
  • The technical solution of the present invention essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention discloses a device control method and apparatus. The method includes: acquiring first input information of a user through a user interface, where the user interface includes an image of a three-dimensional scene, the three-dimensional scene includes at least one device that can be controlled by the user, and the first input information is used to select, from the at least one device, the position of a target device in the image; determining the target device according to the first input information; and controlling the target device. By providing the user with a device control solution through an image-based user interface, the user can select the device to be managed, i.e., the target device, quickly, accurately, and intuitively, thereby improving the user experience.

Description

Control Method and Apparatus for a Target Device
Technical Field
The present invention relates to the field of communications, and in particular, to a control method and apparatus for a target device.
Background
The Internet of Things (IoT) is an important part of the new generation of information technology and an important stage of development in the "information age". As the name implies, the Internet of Things is the Internet of interconnected things. This has two meanings: first, the core and foundation of the IoT is still the Internet, of which the IoT is an extension and expansion; second, its user end extends and expands to connections between any items, which exchange information and communicate with one another — that is, things sensing each other. Through communication and sensing technologies such as intelligent perception, identification technology, and ubiquitous computing, the IoT is widely applied in the convergence of networks, and is therefore called the third wave of development of the world's information industry, following the computer and the Internet.
With the popularization of the IoT market, users need to manage an increasing number of devices. In the prior art, the user interface (UI) for controlling devices describes the location of each device in text for the user to select from, for example: "Bedroom, Light 01". When a user selects a target device through a text-based user interface, and the target device has to be picked out from multiple devices of different types or of the same type, the user must mentally picture the spatial layout of the scene where the target device is located in order to find it. However, the user does not necessarily know the correspondence between each physical device and the device name displayed in the user interface. Therefore, when controlling a target device through a text-based user interface, the user may need multiple attempts to select the target device, and the user experience is poor.
Summary
An objective of the present application is to provide an improved control method for a target device, so as to reduce the number of attempts a user needs to select a target device and improve the user experience.
According to a first aspect, the present application provides a control method for a target device. The method includes: acquiring first input information of a user through a user interface, where the user interface includes an image of a three-dimensional scene, the three-dimensional scene includes at least one device that can be controlled by the user, and the first input information is used to select, from the at least one device, the position of a target device in the image; determining the target device according to the first input information; and controlling the target device.
By providing the user with a target device control method through an image-based user interface, the user can select the target device to be managed quickly, accurately, and intuitively, avoiding the repeated attempts to select the target device that arise from text-based selection in the prior art, thereby improving the user experience.
With reference to the first aspect, in a possible implementation of the first aspect, controlling the target device includes: acquiring, through the user interface, second input information entered by the user, where the user interface presents to the user the function types of the target device that the user can control, and the second input information is used to control an operating parameter of the target device; and controlling the target device according to the second input information.
With the second input information entered through the user interface, the user can further control other functions of the target device, improving the user experience.
With reference to the first aspect and any of the foregoing implementations, in a possible implementation of the first aspect, controlling the target device further includes: acquiring, through the user interface, third input information entered by the user, where the user interface presents to the user the management items of the target device available to the user for device management; and managing the target device according to the third input information.
By entering the third input information through the user interface, the user can manage the target device bound in the user interface, which can improve the user experience.
With reference to the first aspect and any of the foregoing implementations, in a possible implementation of the first aspect, determining the target device according to the first input information includes: determining the coordinates of the target device in the image according to the first input information; determining the device identifier of the target device from the coordinates by using a pre-stored correspondence between coordinates and device identifiers; and determining the target device according to its device identifier.
With reference to the first aspect and any of the foregoing implementations, in a possible implementation of the first aspect, the image includes a two-dimensional photo, a panoramic photo, or a 360-degree spherical photo.
The user can control the target device through a user interface rendered from a two-dimensional photo, a panoramic photo, or a 360-degree spherical photo. A panoramic photo or a 360-degree spherical photo can give the user a strong sense of immersion in the scene, improving the user experience.
With reference to the first aspect and any of the foregoing implementations, in a possible implementation of the first aspect, acquiring the first input information of the user through the user interface includes: acquiring the first input information of the user through the user interface of a virtual reality (VR) device, where the image of the three-dimensional scene in the user interface is a stereoscopic image, and the first input information is entered by the user through an interaction device of the VR device.
The user can enter the first input information through the user interface of a VR device to control the target device. The user interface of a VR device can give the user a strong sense of immersion in the scene, improving the user experience.
With reference to the first aspect and any of the foregoing implementations, in a possible implementation of the first aspect, the stereoscopic image is a 360-degree spherical image, and the 360-degree spherical image is obtained by capturing planar images of the three-dimensional scene and projecting the planar images of the three-dimensional scene onto the surface of a sphere model.
By controlling the target device through a user interface rendered from a 360-degree spherical image, the user can be given a strong sense of immersion in the scene, improving the user experience.
According to a second aspect, the present application provides a control apparatus for a target device. The apparatus includes: a first acquisition module, configured to acquire first input information of a user through a user interface, where the user interface includes an image of a three-dimensional scene, the three-dimensional scene includes at least one device that can be controlled by the user, and the first input information is used to select, from the at least one device, the position of a target device in the image; a determination module, configured to determine the target device according to the first input information acquired by the acquisition module; and a control module, configured to control the target device determined by the determination module.
By providing the user with a target device control apparatus through an image-based user interface, the user can select the target device to be managed quickly, accurately, and intuitively, avoiding the repeated attempts to select the target device that arise from text-based selection in the prior art, thereby improving the user experience.
With reference to the second aspect, in a possible implementation of the second aspect, the control module is specifically configured to: acquire, through the user interface, second input information entered by the user, where the user interface presents to the user the function types of the target device that the user can control, and the second input information is used to control an operating parameter of the target device; and control the target device according to the second input information.
With the second input information entered through the user interface, the user can further control other functions of the target device, improving the user experience.
With reference to the second aspect and any of the foregoing possible implementations, in a possible implementation of the second aspect, the apparatus further includes: a second acquisition module, configured to acquire, through the user interface, third input information entered by the user, where the user interface presents to the user the management items of the target device available to the user for device management; and a management module, configured to manage the target device according to the third input information.
By entering the third input information through the user interface, the user can manage the target device bound in the user interface, which can improve the user experience.
With reference to the second aspect and any of the foregoing possible implementations, in a possible implementation of the second aspect, the determination module is specifically configured to: determine the coordinates of the target device in the image according to the first input information; determine the device identifier of the target device from the coordinates by using a pre-stored correspondence between coordinates and device identifiers; and determine the target device according to its device identifier.
With reference to the second aspect and any of the foregoing possible implementations, in a possible implementation of the second aspect, the image includes a two-dimensional photo, a panoramic photo, or a 360-degree spherical photo.
The user can control the target device through a user interface rendered from a two-dimensional photo, a panoramic photo, or a 360-degree spherical photo. A panoramic photo or a 360-degree spherical photo can give the user a strong sense of immersion in the scene, improving the user experience.
With reference to the second aspect and any of the foregoing implementations, in a possible implementation of the second aspect, the first acquisition module is specifically configured to: acquire the first input information of the user through the user interface of a virtual reality (VR) device, where the image of the three-dimensional scene in the user interface is a stereoscopic image, and the first input information is entered by the user through an interaction device of the VR device.
The user can enter the first input information through the user interface of a VR device to control the target device. The user interface of a VR device can give the user a strong sense of immersion in the scene, improving the user experience.
With reference to the second aspect and any of the foregoing implementations, in a possible implementation of the second aspect, the stereoscopic image is a 360-degree spherical image, and the 360-degree spherical image is obtained by capturing planar images of the three-dimensional scene and projecting the planar images of the three-dimensional scene onto the surface of a sphere model.
By controlling the target device through a user interface rendered from a 360-degree spherical image, the user can be given a strong sense of immersion in the scene, improving the user experience.
According to a third aspect, the present application provides a control apparatus for a target device. The apparatus includes a memory, a processor, an input/output interface, a communication interface, and a bus system, where the memory, the processor, the input/output interface, and the communication interface are connected through the bus system. The input/output interface is configured to acquire first input information of a user through a user interface, where the user interface includes an image of a three-dimensional scene, the three-dimensional scene includes at least one device that can be controlled by the user, and the first input information is used to select, from the at least one device, the position of a target device in the image. The processor is configured to determine the target device according to the first input information acquired by the input/output interface, and to control the determined target device.
With reference to the third aspect, in a possible implementation of the third aspect, the processor is specifically configured to: acquire, through the user interface, second input information entered by the user, where the user interface presents to the user the function types of the target device that the user can control, and the second input information is used to control an operating parameter of the target device; and control the target device according to the second input information.
With the second input information entered through the user interface, the user can further control other functions of the target device, improving the user experience.
With reference to the third aspect and any of the foregoing possible implementations, in a possible implementation of the third aspect, the input/output interface is further configured to acquire, through the user interface, third input information entered by the user, where the user interface presents to the user the management items of the target device available to the user for device management; and the processor is further configured to manage the target device according to the third input information.
By entering the third input information through the user interface, the user can manage the target device bound in the user interface, which can improve the user experience.
With reference to the third aspect and any of the foregoing possible implementations, in a possible implementation of the third aspect, the processor is specifically configured to: determine the coordinates of the target device in the image according to the first input information; determine the device identifier of the target device from the coordinates by using a pre-stored correspondence between coordinates and device identifiers; and determine the target device according to its device identifier.
With reference to the third aspect and any of the foregoing possible implementations, in a possible implementation of the third aspect, the image includes a two-dimensional photo, a panoramic photo, or a 360-degree spherical photo.
The user can control the target device through a user interface rendered from a two-dimensional photo, a panoramic photo, or a 360-degree spherical photo. A panoramic photo or a 360-degree spherical photo can give the user a strong sense of immersion in the scene, improving the user experience.
With reference to the third aspect and any of the foregoing possible implementations, in a possible implementation of the third aspect, the input/output interface is specifically configured to: acquire the first input information of the user through the user interface of a virtual reality (VR) device, where the image of the three-dimensional scene in the user interface is a stereoscopic image, and the first input information is entered by the user through an interaction device of the VR device.
The user can enter the first input information through the user interface of a VR device to control the target device. The user interface of a VR device can give the user a strong sense of immersion in the scene, improving the user experience.
With reference to the third aspect and any of the foregoing possible implementations, in a possible implementation of the third aspect, the stereoscopic image is a 360-degree spherical image, and the 360-degree spherical image is obtained by capturing planar images of the three-dimensional scene and projecting the planar images of the three-dimensional scene onto the surface of a sphere model.
By controlling the target device through a user interface rendered from a 360-degree spherical image, the user can be given a strong sense of immersion in the scene, improving the user experience.
According to a fourth aspect, the present application provides a computer-readable storage medium, where the computer storage medium is configured to store program code for the control method for a target device, and the program code includes instructions for performing the method in the first aspect.
In some implementations, the foregoing device identifier and/or target device identifier may include the name of the scene where the device is located and the name of the device.
The present application provides an improved device control solution that enables the user to select the target device quickly, accurately, and intuitively, thereby improving the user experience.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings required for the embodiments. Apparently, the accompanying drawings described below are merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a control method for a target device according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a menu for controlling functions of a target device according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of a menu for managing a target device according to an embodiment of the present invention.
FIG. 4 is a schematic diagram of the principle of mapping the coordinates of a touch point to coordinates in the image, based on a panoramic photo.
FIG. 5 is a schematic block diagram of a control apparatus for a target device according to an embodiment of the present invention.
FIG. 6 is a schematic block diagram of a control apparatus for a target device according to an embodiment of the present invention.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
FIG. 1 is a schematic flowchart of a control method for a target device according to an embodiment of the present invention. The method in FIG. 1 may be performed by a terminal device. The method in FIG. 1 includes:
110. Acquire first input information of a user through a user interface, where the user interface includes an image of a three-dimensional scene, the three-dimensional scene includes at least one device that can be controlled by the user, and the first input information is used to select, from the at least one device, the position of a target device in the image.
It should be understood that the image of the three-dimensional scene may be constructed based on any one or a combination of two or more of the following: a photo of the scene where the target device is located, a rendering of the scene, and an engineering drawing of the scene. The image of the three-dimensional scene may be a two-dimensional (2D) photo, a panoramic photo, or a 360-degree spherical photo.
It should also be understood that the first input information may be entered by the user by touching the touch screen on which the user interface is presented, or may be entered into the user interface by voice. This embodiment of the present invention does not specifically limit the input manner of the first input information.
120. Determine the target device according to the first input information.
130. Control the target device.
The control method for a target device in this embodiment of the present invention can provide the user with a device management method through an image-based user interface, so that the user can select the device to be managed, i.e., the target device, quickly, accurately, and intuitively, avoiding the repeated attempts to select the target device that arise from text-based selection in the prior art, thereby improving the user experience.
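Steps 110 to 130 can be sketched end to end as a small handler. All names and the "toggle" command below are illustrative assumptions for the sketch; the patent does not prescribe an API:

```python
# Hypothetical sketch of steps 110-130: acquire the user's selection,
# resolve the target device, and control it. Names are illustrative only.

def handle_touch(touch_point, bindings, send_command):
    """touch_point: (x, y) image coordinates selected by the user (step 110).
    bindings: {(x, y): device_id}, a pre-stored coordinate-to-identifier map.
    send_command: callable used to control the resolved device (step 130)."""
    device_id = bindings.get(touch_point)      # step 120: determine the target device
    if device_id is None:
        return None                            # no device bound at this position
    send_command(device_id, "toggle")          # step 130: control the target device
    return device_id

# Usage with a toy binding table:
sent = []
bindings = {(120, 80): "bedroom/light01"}
result = handle_touch((120, 80), bindings, lambda d, c: sent.append((d, c)))
```

In practice the binding table would come from the binding procedure described later in the description; the exact-match lookup here is the simplest possible resolution strategy.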
Optionally, in an embodiment, step 130 may include: acquiring, through the user interface, second input information entered by the user, where the user interface presents to the user the function types of the target device that the user can control, and the second input information is used to control an operating parameter of the target device; and controlling the target device according to the second input information.
Specifically, when the user selects the target device in the user interface, the user interface may pop up a menu for controlling the operating parameters of the device. FIG. 2 is a schematic diagram of a menu for controlling functions of a target device according to an embodiment of the present invention. In the menu shown in FIG. 2, the functions of the target device are presented to the user as icons. It should be understood that the menu may also be presented in text form; this embodiment of the present invention does not specifically limit the presentation form of the menu. As shown in FIG. 2, the user can control the operating parameters of the device through the menu. For example, after the user selects the air conditioner 200 in the image through the user interface, the user interface pops up a menu 210 for controlling the operating parameters of the air conditioner, through which the user can control the temperature 220 and the operating mode 230 of the air conditioner.
Optionally, in an embodiment, the method shown in FIG. 1 may include: acquiring, through the user interface, third input information entered by the user, where the user interface presents to the user the management items of the target device available to the user for device management; and managing the target device according to the third input information.
Specifically, the user may long-press the target device in the user interface, and the user interface may pop up a menu for managing the device. FIG. 3 is a schematic diagram of a menu for managing a target device according to an embodiment of the present invention. In the menu shown in FIG. 3, the functions of the target device are presented to the user as icons. It should be understood that this device management menu may also be presented to the user in text form; this embodiment of the present invention does not specifically limit its presentation form. It should also be understood that the second input information may be the same as the third input information; that is, by entering the second input information or the third input information, the user can make the user interface simultaneously present the device management menu and the menu for controlling device functions.
In the menu for managing a target device shown in FIG. 3, the ceiling light 300 is used as an example. The user can control the brightness of the ceiling light 300 through the brightness adjustment button 310 in the user interface shown in FIG. 3; control its on/off state through the switch button 320; control the light color when it is on through the color button 330; change the device name through the rename button 340; and unbind the bound ceiling light 300 through the unbind button 350. After unbinding, the device identifier of the ceiling light 300 can be stored back in the list of unbound devices for a subsequent binding operation.
Optionally, in an embodiment, step 120 may include: determining the coordinates of the target device in the image according to the first input information; determining the device identifier of the target device from the coordinates by using a pre-stored correspondence between coordinates and device identifiers; and determining the target device according to its device identifier.
Specifically, the pre-stored correspondence between coordinates and device identifiers may be generated by the user selecting, from a library of unbound devices, a device identifier not yet bound in the image, and binding that device identifier to the corresponding coordinates in the image.
It should be understood that, when a device is registered in the unbound device library, the device may first be bound with the functions it offers for user management; that is, the device may be bound with the types of operating parameters that the user can control in the menu for controlling the device's operating parameters.
It should also be understood that, during the process of binding a device identifier to coordinates, if the device has an identification mode, it can help the user confirm whether the selected device identifier is the one the user wants to bind to the selected coordinates. For example, if the user wants to bind a lamp in the bedroom, after the user selects the device identifier corresponding to the lamp, the lamp can blink to help the user confirm whether the selected device identifier corresponds to the device the user wants to bind, i.e., the target device.
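A minimal sketch of the coordinate-to-identifier resolution described above. Because a tap rarely lands exactly on the stored binding coordinates, this hypothetical version resolves to the nearest bound device within a search radius; the data layout, names, and radius value are assumptions, not taken from the patent:

```python
import math

def resolve_device(tap, bindings, radius=40.0):
    """tap: (x, y) in image coordinates; bindings: {device_id: (x, y)} stored
    when the user bound each identifier to a position in the image.
    Returns the identifier of the nearest bound device within `radius`,
    or None when no binding is close enough."""
    best_id, best_dist = None, radius
    for device_id, (bx, by) in bindings.items():
        dist = math.hypot(tap[0] - bx, tap[1] - by)
        if dist <= best_dist:
            best_id, best_dist = device_id, dist
    return best_id

# Usage: a tap near the lamp resolves to it; a tap in empty space resolves to nothing.
bindings = {"bedroom/lamp": (100.0, 200.0), "bedroom/ac": (400.0, 120.0)}
hit = resolve_device((110.0, 205.0), bindings)
miss = resolve_device((250.0, 300.0), bindings)
```

Tolerance-based hit-testing like this is one simple way to make image bindings usable on a touch screen; an implementation could equally store per-device bounding boxes instead of points.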
Optionally, in an embodiment, the image may be a 2D photo, a panoramic photo, or a 360-degree spherical photo.
Specifically, when the image is a 2D photo, the mapping from the coordinates of the user's touch point on the touch screen presenting the user interface to the coordinates in the image is linear and satisfies:
    x1 = a·x + b
    y1 = c·y + d
where (x1, y1) are the coordinates in the image, (x, y) are the coordinates of the touch point, and a, b, c, and d are constants.
When the image is a panoramic photo, FIG. 4 shows the principle of mapping touch-point coordinates to image coordinates. Since a panoramic photo is in cylindrical mode, as shown in FIG. 4, the cylinder can be unrolled into a planar mode by extending the x-axis, and the mapping from touch-point coordinates to image coordinates can then also satisfy a linear relationship.
When the image is a 360-degree spherical photo, the mapping from touch-point coordinates to image coordinates can be determined by mapping-calculation software based on the projection type and the relevant parameters of the image file format. This embodiment of the present invention does not specifically limit the mapping from touch-point coordinates to image coordinates.
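For the 2D-photo case, the linear relation above is a per-axis affine transform. A sketch, with made-up constants standing in for the screen-to-image scale and offset:

```python
def touch_to_image(x, y, a, b, c, d):
    """Map touch-screen coordinates (x, y) to image coordinates (x1, y1)
    using the per-axis linear relation x1 = a*x + b, y1 = c*y + d."""
    return a * x + b, c * y + d

# Illustrative example: the image is shown at half size (so a = c = 2.0),
# with a vertical offset of 100 image pixels; these values are invented.
x1, y1 = touch_to_image(540, 300, 2.0, 0.0, 2.0, 100.0)
```

In practice the constants would be derived from how the photo is scaled and panned inside the view; the same form also covers the unrolled cylindrical (panoramic) case described above.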
Optionally, in an embodiment, acquiring the first input information of the user through the user interface includes: acquiring the first input information of the user through the user interface of a virtual reality (VR) device, where the image of the three-dimensional scene in the user interface is a stereoscopic image, and the first input information is entered by the user through an interaction device of the VR device.
It should be understood that the VR device may be a three-dimensional visual display device, such as a 3D display system, a large projection system, or a head-mounted stereoscopic display. The VR interaction device may be a data glove, a 3D input device (for example, a 3D mouse), a motion capture device, an eye tracker, a force feedback device, or the like.
Optionally, in an embodiment, the stereoscopic image is a 360-degree spherical image, and the 360-degree spherical image is obtained by capturing planar images of the three-dimensional scene and projecting the planar images of the three-dimensional scene onto the surface of a sphere model.
Specifically, the 360-degree spherical photo may be stitched from source images captured by rotating a camera around its nodal point. That is, a mapping between source-image coordinates and spherical coordinates is established, and the source images are stitched into a 360-degree spherical photo.
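The source-image-to-sphere mapping mentioned above can be illustrated with the common equirectangular convention, in which each pixel of the stitched panorama corresponds to a direction on a unit sphere. This is one conventional projection choice, assumed here for illustration only:

```python
import math

def equirect_to_sphere(u, v, width, height):
    """Map pixel (u, v) of an equirectangular image of size width x height
    to a point (x, y, z) on the unit sphere: u spans longitude [-pi, pi),
    v spans latitude [pi/2, -pi/2]."""
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

# Example: the centre pixel maps to longitude 0, latitude 0,
# i.e. the point (1, 0, 0) on the unit sphere.
p = equirect_to_sphere(512, 256, 1024, 512)
```

Rendering such an image on the inside of a sphere model around the viewer is what gives the 360-degree spherical photo its immersive effect; the inverse of this mapping is what hit-testing software would use to turn a view direction back into image coordinates.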
In a VR device, the image may be a panoramic photo or a 360-degree spherical photo. The VR device can sense the rotation of the user's head through a gyro sensor and present the scene to the user stereoscopically, and the user can select and control the target device through the interaction device of the VR device, which can provide a better experience.
The control method for a target device according to the embodiments of the present invention has been described in detail above with reference to FIG. 1 to FIG. 4. The control apparatus for a target device according to the embodiments of the present invention is described in detail below with reference to FIG. 5 and FIG. 6. It should be understood that the apparatuses shown in FIG. 5 and FIG. 6 can implement the steps in FIG. 1; to avoid repetition, details are not described here again.
FIG. 5 is a schematic block diagram of a control apparatus for a target device according to an embodiment of the present invention. The apparatus 500 shown in FIG. 5 includes a first acquisition module 510, a determination module 520, and a control module 530.
The first acquisition module 510 is configured to acquire first input information of a user through a user interface, where the user interface includes an image of a three-dimensional scene, the three-dimensional scene includes at least one device that can be controlled by the user, and the first input information is used to select, from the at least one device, the position of a target device in the image.
The determination module 520 is configured to determine the target device according to the first input information acquired by the acquisition module 510.
The control module 530 is configured to control the target device determined by the determination module 520.
By providing the user with a device control apparatus through an image-based user interface, the user can select the device to be managed, i.e., the target device, quickly, accurately, and intuitively, avoiding the repeated attempts to select the target device that arise from text-based selection in the prior art, thereby improving the user experience.
FIG. 6 is a schematic block diagram of a control apparatus for a target device according to an embodiment of the present invention. The apparatus 600 shown in FIG. 6 includes a memory 610, a processor 620, an input/output interface 630, a communication interface 640, and a bus system 650. The memory 610, the processor 620, the input/output interface 630, and the communication interface 640 are connected through the bus system 650. The memory 610 is configured to store instructions, and the processor 620 is configured to execute the instructions stored in the memory 610, so as to control the input/output interface 630 to receive input data and information and to output data such as operation results, and to control the communication interface 640 to send signals.
The input/output interface 630 is configured to acquire first input information of a user through a user interface, where the user interface includes an image of a three-dimensional scene, the three-dimensional scene includes at least one device that can be controlled by the user, and the first input information is used to select, from the at least one device, the position of a target device in the image.
The processor 620 is configured to determine the target device according to the first input information acquired by the input/output interface 630, and to control the determined target device.
By providing the user with a device control apparatus through an image-based user interface, the user can select the device to be managed, i.e., the target device, quickly, accurately, and intuitively, avoiding the repeated attempts to select the target device that arise from text-based selection in the prior art, thereby improving the user experience.
It should be understood that the apparatus 600 shown in FIG. 6 may be a terminal device, and the input/output interface 630 may be a touch screen of the terminal device 600; the terminal device 600 may present the foregoing user interface through the touch screen to acquire the user's first input information.
It should be understood that, in this embodiment of the present invention, the processor 620 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present invention.
It should also be understood that the communication interface 640 uses a transceiver apparatus such as, but not limited to, a transceiver to enable communication between the apparatus 600 and other devices or communication networks.
The memory 610 may include a read-only memory and a random access memory, and provides instructions and data to the processor 620. A portion of the processor 620 may further include a non-volatile random access memory. For example, the processor 620 may also store information about the device type.
In addition to a data bus, the bus system 650 may include a power bus, a control bus, a status signal bus, and the like. However, for clarity of description, the various buses are all labeled as the bus system 650 in the figure.
In an implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 620 or by instructions in the form of software. The steps of the control method for a target device disclosed in the embodiments of the present invention may be directly embodied as being executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 610; the processor 620 reads the information in the memory 610 and completes the steps of the method shown in FIG. 1 in combination with its hardware. To avoid repetition, details are not described here again.
It should be understood that, in the embodiments of the present invention, "B corresponding to A" means that B is associated with A, and B can be determined according to A. However, it should also be understood that determining B according to A does not mean determining B only according to A; B may also be determined according to A and/or other information.
It should be understood that the term "and/or" in this specification describes only an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may indicate the following three cases: only A exists, both A and B exist, and only B exists. In addition, the character "/" in this specification generally indicates an "or" relationship between the associated objects.
It should be understood that, in the various embodiments of the present invention, the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation process of the embodiments of the present invention.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed in this specification can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation shall not be considered beyond the scope of the present invention.
A person skilled in the art may clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described here again.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a division by logical function; in actual implementation there may be other divisions. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of a software functional unit and sold or used as a standalone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (21)

  1. A control method for a target device, comprising:
    acquiring first input information of a user through a user interface, wherein the user interface comprises an image of a three-dimensional scene, the three-dimensional scene comprises at least one device controllable by the user, and the first input information is used to select, from the at least one device, the position of a target device in the image;
    determining the target device according to the first input information; and
    controlling the target device.
  2. The method according to claim 1, wherein controlling the target device comprises:
    acquiring, through the user interface, second input information entered by the user, wherein the user interface presents to the user the function types of the target device controllable by the user, and the second input information is used to control an operating parameter of the target device; and
    controlling the target device according to the second input information.
  3. The method according to claim 1 or 2, wherein the method further comprises:
    acquiring, through the user interface, third input information entered by the user, wherein the user interface presents to the user the management items of the target device available to the user for device management; and
    managing the target device according to the third input information.
  4. The method according to any one of claims 1 to 3, wherein determining the target device according to the first input information comprises:
    determining coordinates of the target device in the image according to the first input information;
    determining a device identifier of the target device from the coordinates by using a pre-stored correspondence between coordinates and device identifiers; and
    determining the target device according to the device identifier of the target device.
  5. The method according to any one of claims 1 to 4, wherein the image comprises a two-dimensional (2D) photo, a panoramic photo, or a 360-degree spherical photo.
  6. The method according to any one of claims 1 to 5, wherein acquiring the first input information of the user through the user interface comprises:
    acquiring the first input information of the user through a user interface of a virtual reality (VR) device, wherein the image of the three-dimensional scene in the user interface is a stereoscopic image, and the first input information is information entered by the user through an interaction device of the VR device.
  7. The method according to claim 6, wherein the stereoscopic image is a 360-degree spherical image, and the 360-degree spherical image is obtained by capturing planar images of the three-dimensional scene and projecting the planar images of the three-dimensional scene onto the surface of a sphere model.
  8. A control apparatus for a target device, comprising:
    a first acquisition module, configured to acquire first input information of a user through a user interface, wherein the user interface comprises an image of a three-dimensional scene, the three-dimensional scene comprises at least one device controllable by the user, and the first input information is used to select, from the at least one device, the position of a target device in the image;
    a determination module, configured to determine the target device according to the first input information acquired by the acquisition module; and
    a control module, configured to control the target device determined by the determination module.
  9. The apparatus according to claim 8, wherein the control module is specifically configured to:
    acquire, through the user interface, second input information entered by the user, wherein the user interface presents to the user the function types of the target device controllable by the user, and the second input information is used to control an operating parameter of the target device; and
    control the target device according to the second input information.
  10. The apparatus according to claim 8 or 9, wherein the apparatus further comprises:
    a second acquisition module, configured to acquire, through the user interface, third input information entered by the user, wherein the user interface presents to the user the management items of the target device available to the user for device management; and
    a management module, configured to manage the target device according to the third input information.
  11. The apparatus according to any one of claims 8 to 10, wherein the determination module is specifically configured to:
    determine coordinates of the target device in the image according to the first input information;
    determine a device identifier of the target device from the coordinates by using a pre-stored correspondence between coordinates and device identifiers; and
    determine the target device according to the device identifier of the target device.
  12. The apparatus according to any one of claims 8 to 11, wherein the image comprises a two-dimensional (2D) photo, a panoramic photo, or a 360-degree spherical photo.
  13. The apparatus according to any one of claims 8 to 12, wherein the first acquisition module is specifically configured to:
    acquire the first input information of the user through a user interface of a virtual reality (VR) device, wherein the image of the three-dimensional scene in the user interface is a stereoscopic image, and the first input information is information entered by the user through an interaction device of the VR device.
  14. The apparatus according to claim 13, wherein the stereoscopic image is a 360-degree spherical image, and the 360-degree spherical image is obtained by capturing planar images of the three-dimensional scene and projecting the planar images of the three-dimensional scene onto the surface of a sphere model.
  15. A control apparatus for a target device, comprising: a memory, a processor, an input/output interface, a communication interface, and a bus system, wherein the memory, the processor, the input/output interface, and the communication interface are connected through the bus system;
    the input/output interface is configured to acquire first input information of a user through a user interface, wherein the user interface comprises an image of a three-dimensional scene, the three-dimensional scene comprises at least one device controllable by the user, and the first input information is used to select, from the at least one device, the position of a target device in the image; and
    the processor is configured to determine the target device according to the first input information acquired by the input/output interface, and to control the determined target device.
  16. The apparatus according to claim 15, wherein the processor is specifically configured to:
    acquire, through the user interface, second input information entered by the user, wherein the user interface presents to the user the function types of the target device controllable by the user, and the second input information is used to control an operating parameter of the target device; and
    control the target device according to the second input information.
  17. The apparatus according to claim 15 or 16, wherein the input/output interface is further configured to:
    acquire, through the user interface, third input information entered by the user, wherein the user interface presents to the user the management items of the target device available to the user for device management; and
    the processor is further configured to manage the target device according to the third input information.
  18. The apparatus according to any one of claims 15 to 17, wherein the processor is specifically configured to:
    determine coordinates of the target device in the image according to the first input information;
    determine a device identifier of the target device from the coordinates by using a pre-stored correspondence between coordinates and device identifiers; and
    determine the target device according to the device identifier of the target device.
  19. The apparatus according to any one of claims 15 to 18, wherein the image comprises a two-dimensional (2D) photo, a panoramic photo, or a 360-degree spherical photo.
  20. The apparatus according to any one of claims 15 to 19, wherein the input/output interface is specifically configured to:
    acquire the first input information of the user through a user interface of a virtual reality (VR) device, wherein the image of the three-dimensional scene in the user interface is a stereoscopic image, and the first input information is information entered by the user through an interaction device of the VR device.
  21. The apparatus according to claim 20, wherein the stereoscopic image is a 360-degree spherical image, and the 360-degree spherical image is obtained by capturing planar images of the three-dimensional scene and projecting the planar images of the three-dimensional scene onto the surface of a sphere model.
PCT/CN2016/075667 2016-03-04 2016-03-04 目标设备的控制方法和装置 WO2017147909A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680066789.8A CN108353151A (zh) 2016-03-04 2016-03-04 目标设备的控制方法和装置
PCT/CN2016/075667 WO2017147909A1 (zh) 2016-03-04 2016-03-04 目标设备的控制方法和装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/075667 WO2017147909A1 (zh) 2016-03-04 2016-03-04 目标设备的控制方法和装置

Publications (1)

Publication Number Publication Date
WO2017147909A1 true WO2017147909A1 (zh) 2017-09-08

Family

ID=59743403

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/075667 WO2017147909A1 (zh) 2016-03-04 2016-03-04 目标设备的控制方法和装置

Country Status (2)

Country Link
CN (1) CN108353151A (zh)
WO (1) WO2017147909A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109507904A (zh) * 2018-12-18 2019-03-22 珠海格力电器股份有限公司 家居设备管理方法、服务器、及管理系统
CN110047135A (zh) * 2019-04-22 2019-07-23 广州影子科技有限公司 养殖任务的管理方法、管理装置及管理系统
CN110191145A (zh) * 2018-02-23 2019-08-30 三星电子株式会社 移动装置中的用于控制连接装置的方法和系统
CN110780598A (zh) * 2019-10-24 2020-02-11 深圳传音控股股份有限公司 一种智能设备控制方法、装置、电子设备及可读存储介质
CN112292657A (zh) * 2018-05-02 2021-01-29 苹果公司 围绕计算机模拟现实布景进行移动

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112968819B (zh) * 2021-01-18 2022-07-22 珠海格力电器股份有限公司 基于tof的家电设备控制方法及装置

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662374A (zh) * 2012-05-11 2012-09-12 刘书军 基于实景界面的家居控制系统及方法
CN103246267A (zh) * 2013-04-29 2013-08-14 鸿富锦精密工业(深圳)有限公司 具有三维用户界面的远程控制装置及其界面生成方法
CN103294024A (zh) * 2013-04-09 2013-09-11 宁波杜亚机电技术有限公司 智能家居系统控制方法
US20140257532A1 (en) * 2013-03-05 2014-09-11 Electronics And Telecommunications Research Institute Apparatus for constructing device information for control of smart appliances and method thereof
CN104181884A (zh) * 2014-08-11 2014-12-03 厦门立林科技有限公司 一种基于全景视图的智能家居控制装置及方法
CN105022281A (zh) * 2015-07-29 2015-11-04 中国电子科技集团公司第十五研究所 一种基于虚拟现实的智能家居控制系统
CN105141913A (zh) * 2015-08-18 2015-12-09 华为技术有限公司 可视化远程控制可触控设备的方法、系统和相关设备
CN105373001A (zh) * 2015-10-29 2016-03-02 小米科技有限责任公司 电子设备的控制方法及装置

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104468837B (zh) * 2014-12-29 2018-04-27 小米科技有限责任公司 智能设备的绑定方法和装置

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662374A (zh) * 2012-05-11 2012-09-12 刘书军 基于实景界面的家居控制系统及方法
US20140257532A1 (en) * 2013-03-05 2014-09-11 Electronics And Telecommunications Research Institute Apparatus for constructing device information for control of smart appliances and method thereof
CN103294024A (zh) * 2013-04-09 2013-09-11 宁波杜亚机电技术有限公司 智能家居系统控制方法
CN103246267A (zh) * 2013-04-29 2013-08-14 鸿富锦精密工业(深圳)有限公司 具有三维用户界面的远程控制装置及其界面生成方法
CN104181884A (zh) * 2014-08-11 2014-12-03 厦门立林科技有限公司 一种基于全景视图的智能家居控制装置及方法
CN105022281A (zh) * 2015-07-29 2015-11-04 中国电子科技集团公司第十五研究所 一种基于虚拟现实的智能家居控制系统
CN105141913A (zh) * 2015-08-18 2015-12-09 华为技术有限公司 可视化远程控制可触控设备的方法、系统和相关设备
CN105373001A (zh) * 2015-10-29 2016-03-02 小米科技有限责任公司 电子设备的控制方法及装置

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110191145A (zh) * 2018-02-23 2019-08-30 三星电子株式会社 移动装置中的用于控制连接装置的方法和系统
CN112292657A (zh) * 2018-05-02 2021-01-29 苹果公司 围绕计算机模拟现实布景进行移动
CN109507904A (zh) * 2018-12-18 2019-03-22 珠海格力电器股份有限公司 家居设备管理方法、服务器、及管理系统
CN109507904B (zh) * 2018-12-18 2022-04-01 珠海格力电器股份有限公司 家居设备管理方法、服务器、及管理系统
CN110047135A (zh) * 2019-04-22 2019-07-23 广州影子科技有限公司 养殖任务的管理方法、管理装置及管理系统
CN110780598A (zh) * 2019-10-24 2020-02-11 深圳传音控股股份有限公司 一种智能设备控制方法、装置、电子设备及可读存储介质

Also Published As

Publication number Publication date
CN108353151A (zh) 2018-07-31

Similar Documents

Publication Publication Date Title
US9311756B2 (en) Image group processing and visualization
US11513608B2 (en) Apparatus, method and recording medium for controlling user interface using input image
JP6529659B2 (ja) 情報処理方法、端末及びコンピュータ記憶媒体
WO2017147909A1 (zh) 目标设备的控制方法和装置
US10068373B2 (en) Electronic device for providing map information
JP5942456B2 (ja) 画像処理装置、画像処理方法及びプログラム
KR101737725B1 (ko) 컨텐츠 생성 툴
WO2018153074A1 (zh) 一种预览图像的显示方法及终端设备
US9268410B2 (en) Image processing device, image processing method, and program
JP2022537614A (ja) マルチ仮想キャラクターの制御方法、装置、およびコンピュータプログラム
US20150187137A1 (en) Physical object discovery
US10416783B2 (en) Causing specific location of an object provided to a device
EP4195664A1 (en) Image processing method, mobile terminal, and storage medium
JP2018026064A (ja) 画像処理装置、画像処理方法、システム
JP2013164697A (ja) 画像処理装置、画像処理方法、プログラム及び画像処理システム
CN112767248A (zh) 红外相机图片拼接方法、装置、设备及可读存储介质
US10573090B2 (en) Non-transitory computer-readable storage medium, display control method, and display control apparatus
JP6304305B2 (ja) 画像処理装置、画像処理方法及びプログラム
CN112767484B (zh) 定位模型的融合方法、定位方法、电子装置
CN112988007B (zh) 三维素材的交互方法及装置
CN115222923A (zh) 在漫游制作应用中切换视点的方法、装置、设备和介质
JP2021086355A (ja) 情報処理方法、プログラム、及び情報処理装置

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16892100

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16892100

Country of ref document: EP

Kind code of ref document: A1