WO2022037412A1 - Method and apparatus for managing IoT devices - Google Patents
Method and apparatus for managing IoT devices
- Publication number
- WO2022037412A1 (PCT/CN2021/110623)
- Authority
- WO
- WIPO (PCT)
Classifications
- G06F9/451—Execution arrangements for user interfaces
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
- G06F3/0486—Drag-and-drop
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text
- G06T19/006—Mixed reality
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
- H04W4/70—Services for machine-to-machine communication [M2M] or machine type communication [MTC]
- G06F2203/04808—Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously
- G16Y40/35—Management of things, i.e. controlling in accordance with a policy or in order to achieve specified objectives
Definitions
- the embodiments of the present application relate to the field of electronic technologies, and in particular, to a method and apparatus for managing IoT devices.
- the Internet of Things (IoT) is the network that connects all things: it combines various sensors with the Internet to enable interconnection among people, machines, and things.
- One way to manage IoT devices is to add an IoT device management list on the phone and manage the devices through that list. For example, the user opens the application (APP) for managing IoT devices, taps the icon of the target IoT device among the IoT device icons displayed in the APP, and then enters the management interface of the target IoT device to select the corresponding function, thereby completing the management of that device.
- the existing technology can only manage IoT devices separately and cannot control multiple devices in a single operation. How to manage multiple IoT devices at the same time is therefore a problem that needs to be solved.
- the embodiment of the present application provides a method for managing IoT devices, which can seamlessly switch services between IoT devices.
- a method for managing IoT devices, comprising: acquiring a first trigger signal; displaying a virtual device interface according to the first trigger signal, the virtual device interface including virtual device information of at least two IoT devices; acquiring an operation signal, where the operation signal is a signal triggered by a user on the virtual device interface to control the at least two IoT devices to interact; and executing a processing method corresponding to the operation signal.
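The claimed flow above (acquire a trigger signal, display a virtual device interface, acquire an operation signal, execute the corresponding processing method) can be sketched as a small dispatcher. This is a hedged illustration only: all class, method, and signal names are assumptions, not taken from the patent.

```python
# Minimal sketch of the claimed control flow. Names are illustrative.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class VirtualDeviceInfo:
    device_id: str
    icon: str                                            # virtual device icon
    port_icons: List[str] = field(default_factory=list)  # logical port icons

class IoTManager:
    def __init__(self) -> None:
        self.handlers: Dict[str, Callable[[dict], str]] = {}
        self.interface_shown = False

    def on_trigger(self, devices: List[VirtualDeviceInfo]) -> List[str]:
        """Display the virtual device interface for at least two IoT devices."""
        assert len(devices) >= 2, "interface shows at least two IoT devices"
        self.interface_shown = True
        return [d.device_id for d in devices]

    def register(self, op_type: str, handler: Callable[[dict], str]) -> None:
        """Map an operation-signal type to its processing method."""
        self.handlers[op_type] = handler

    def on_operation(self, signal: dict) -> str:
        """Execute the processing method corresponding to the operation signal."""
        return self.handlers[signal["type"]](signal)

mgr = IoTManager()
mgr.register("drag_port", lambda s: f"migrate {s['port']}: {s['src']} -> {s['dst']}")
shown = mgr.on_trigger([VirtualDeviceInfo("tv", "tv.png", ["mic"]),
                        VirtualDeviceInfo("phone", "phone.png", ["mic", "camera"])])
result = mgr.on_operation({"type": "drag_port", "port": "mic",
                           "src": "tv", "dst": "phone"})
```

A drag or click on the interface would arrive as such an operation signal and be routed to the registered handler.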
- the apparatus for executing the above method may be one of at least two IoT devices, or may be a different apparatus from the at least two IoT devices.
- the first trigger signal can be an electrical signal generated by sliding a finger on the touch screen, or a physical action (such as a two-finger pinch action) captured by the camera of the execution device, or an infrared signal generated by a control device such as a remote control.
- the application does not limit the specific form of the first trigger signal.
- the virtual device interface may be an interface displayed on the screen of the execution device, or an interface presented by the execution device through augmented reality (AR) or virtual reality (VR) technology; the virtual device information may be in the form of images or in the form of text, and this application does not limit the specific forms of the virtual device interface and the virtual device information.
- because the virtual device information of at least two IoT devices is displayed in the same interface, the user can perform operations on the virtual device interface to control the at least two IoT devices to interact. For example, the user can trigger the execution device to generate the operation signal by dragging or clicking.
- based on the processing method corresponding to the operation signal, the execution device can control the at least two IoT devices to perform interaction such as device sharing and function migration. With the above method, the user does not need to open the management interface of each IoT device separately to control different IoT devices to interact, thereby simplifying user interaction.
- the virtual device information of the at least two IoT devices includes: virtual device icons and logical port icons of the at least two IoT devices.
- virtual device information in the form of icons is more intuitive than virtual device information in the form of text, and can enhance the user experience.
- the at least two IoT devices include a first IoT device and a second IoT device
- the operation signal includes: the user drags a logical port icon of the first IoT device onto the virtual device icon of the second IoT device; and the executing of the processing method corresponding to the operation signal includes: migrating the function corresponding to the logical port icon of the first IoT device to the second IoT device, wherein the second IoT device has the function corresponding to the logical port icon of the first IoT device.
- the user can drag the logical port icon on the display screen of the first IoT device to generate the operation signal, and can also drag the logical port icon in the VR interface or AR interface to generate the operation signal.
- the specific method is not limited.
- the logical port icon in this embodiment is, for example, a microphone icon, and the function corresponding to the microphone icon is a sound pickup function.
- the first IoT device can transfer the sound pickup function to the second IoT device and use the microphone of the second IoT device to pick up the user's voice. When the distance between the user and the first IoT device is large and the distance between the user and the second IoT device is short, this improves the sound pickup effect. Therefore, in this embodiment, the user can obtain a better experience with a simple operation (dragging the logical port icon) in a specific scenario.
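As a hedged sketch of this embodiment, the snippet below models port-function migration with the capability check the claim requires: the drag succeeds only if the target device also has the corresponding logical port. The device and port names are invented for the example.

```python
# Illustrative model of migrating the function behind a logical port icon,
# e.g. dragging a far TV's microphone icon onto a nearby phone.
class Device:
    def __init__(self, name, ports):
        self.name = name
        self.ports = set(ports)   # logical ports the device offers
        self.active = set(ports)  # ports currently serving the user

def migrate_port(port, src, dst):
    """Move the function behind `port` from src to dst; refuse if dst
    lacks the corresponding capability."""
    if port not in dst.ports:
        return False
    src.active.discard(port)
    dst.active.add(port)
    return True

tv = Device("smart_tv", {"mic", "speaker", "camera"})
phone = Device("phone", {"mic", "speaker"})
ok = migrate_port("mic", tv, phone)
denied = migrate_port("camera", tv, phone)  # this phone has no camera port
```

After the drag, sound pickup is served by the phone's microphone while the TV's stays idle; a drag onto a device without the port is simply rejected.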
- the at least two IoT devices include a first IoT device and a second IoT device
- the operation signal includes: the user drags the virtual device icon of the first IoT device onto the virtual device icon of the second IoT device; and the executing of the processing method corresponding to the operation signal includes: migrating the function of the target application of the first IoT device to the second IoT device, wherein the target application is an application that the first IoT device is running, and the target application is installed on the second IoT device.
- the target application is, for example, a video chat APP.
- when the video chat APP is running on the first IoT device, the user can migrate the function of the video chat APP by dragging the virtual device icon of the first IoT device onto the virtual device icon of the second IoT device.
- the first IoT device is, for example, a smart TV
- the second IoT device is, for example, a mobile phone, and the user can use the mobility of the mobile phone for a more convenient video chat. Therefore, in this embodiment, the user can obtain a better experience with a simple operation (dragging the virtual device icon) in a specific scenario.
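A minimal sketch of this app-migration embodiment, with invented device records: migration requires the target application to be running on the source and installed on the destination, exactly as the claim states.

```python
# Illustrative model of migrating a running app's function between devices.
def migrate_app(app, src, dst):
    """Migrate `app` (e.g. a video chat APP) from src to dst; requires it
    to be running on src and installed on dst."""
    if app not in src["running"] or app not in dst["installed"]:
        return False
    src["running"].remove(app)
    dst["running"].add(app)
    return True

smart_tv = {"installed": {"video_chat"}, "running": {"video_chat"}}
phone = {"installed": {"video_chat"}, "running": set()}
moved = migrate_app("video_chat", smart_tv, phone)
```

The ongoing video chat continues on the phone, which the user can carry around, while the TV stops serving it.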
- the at least two IoT devices include a first IoT device and a second IoT device
- the operation signal includes: the user drags the virtual device icon of the first IoT device onto the virtual device icon of the second IoT device; and the executing of the processing method corresponding to the operation signal includes: establishing a communication connection between the target application of the first IoT device and the target application of the second IoT device, wherein the first IoT device does not run the target application before the operation signal is acquired.
- the user's drag operation may thus establish a communication connection between the target application of the first IoT device and the target application of the second IoT device.
- the target application may be a preset APP or an APP selected by the user in real time.
- the target application is a video chat APP
- the user can implement a video chat between the first IoT device and the second IoT device without having to open the video chat APP first. Therefore, in this embodiment, the user can obtain a better experience with a simple operation (dragging the virtual device icon) in a specific scenario.
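A hedged sketch of this variant, using invented device records: the drag implicitly launches the preset target application on both devices and returns a link between them, and nothing happens if either side lacks the app.

```python
# Illustrative model: one drag establishes a connection between the target
# app on two devices, launching it where it was not yet running.
def connect_target_app(app, first, second):
    """Start `app` on both devices and link them; return None if either
    device does not have the app installed."""
    if app not in first["installed"] or app not in second["installed"]:
        return None
    first["running"].add(app)    # launched implicitly by the drag
    second["running"].add(app)
    return (first["name"], second["name"], app)

a = {"name": "tv", "installed": {"video_chat"}, "running": set()}
b = {"name": "phone", "installed": {"video_chat"}, "running": set()}
link = connect_target_app("video_chat", a, b)
```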
- the at least two IoT devices include a first IoT device and a second IoT device
- the operation signal includes: the user drags, with two fingers, the logical port icon of the first IoT device and the logical port icon of the second IoT device so that the two icons are merged; and the executing of the processing method corresponding to the operation signal includes: sharing the function corresponding to the logical port icon of the first IoT device and the function corresponding to the logical port icon of the second IoT device.
- the user can thus share the function of logical ports by dragging the logical port icons of two devices together. For example, when user A is using a mobile phone to make a video call with user C, and user B wants to join the video call through a smart TV, user A can drag the microphone icon of the mobile phone and the microphone icon of the smart TV together so that user B joins the video call.
- This embodiment can enable the user to obtain a better experience with a simple operation (dragging the logical port icons of the virtual devices) in a specific scenario.
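The two-finger merge can be sketched as adding both devices to one shared port set, so both microphones feed the same call. The session structure and names below are assumptions for illustration.

```python
# Illustrative model: merging two devices' logical port icons makes both
# devices members of one shared port (e.g. both mics in one video call).
def merge_ports(port, dev_a, dev_b, shared):
    """Record that dev_a and dev_b now share the function behind `port`."""
    shared.setdefault(port, set()).update({dev_a, dev_b})
    return sorted(shared[port])

shared = {"mic": {"phone"}}  # user A's phone mic is already in the call
members = merge_ports("mic", "phone", "smart_tv", shared)
```

After the gesture, audio from both the phone and the smart TV is mixed into the call, letting user B join from the TV.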
- the at least two IoT devices include a first IoT device and a second IoT device
- the operation signal includes: the user clicks the virtual device icon of the second IoT device; and the executing of the processing method corresponding to the operation signal includes: establishing a control event mapping relationship between the first IoT device and the second IoT device, wherein the first IoT device is a preset control device, and the second IoT device is a controlled device.
- the first IoT device is, for example, a smart TV
- the second IoT device is, for example, a mobile phone.
- the user can use the mobile phone to control the smart TV.
- this embodiment can enable the user to obtain a better experience in a specific scenario.
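One way to picture the control event mapping relationship is as a lookup table translating events on the controlling device into commands for the controlled device. The event and command names below are invented for the example.

```python
# Illustrative control-event mapping between a controlling device and a
# controlled device (e.g. phone gestures driving a smart TV).
mapping = {
    "tap": "confirm",
    "swipe_left": "previous_channel",
    "swipe_right": "next_channel",
}

def forward_event(event, mapping):
    """Translate one input event into a command for the controlled device;
    events with no mapping are ignored."""
    return mapping.get(event, "ignored")

cmd = forward_event("swipe_right", mapping)
```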
- the acquiring of the first trigger signal includes: acquiring the first trigger signal through a touch screen, where the first trigger signal is a trigger signal generated by the user performing a preset action on the touch screen.
- the acquiring of the first trigger signal includes: acquiring the first trigger signal through a camera, where the first trigger signal is a trigger signal generated by the user performing a preset action in the air.
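Whether captured by a touch screen or by a camera, a two-finger pinch could be recognized by comparing fingertip distances between the first and last samples of the gesture. The 0.5 shrink ratio below is an arbitrary assumption, not a value from the patent.

```python
# Illustrative pinch detector: a pinch is a pair of touch points whose
# separation shrinks markedly from the start to the end of the gesture.
def is_pinch(start, end, ratio=0.5):
    """start/end are [(x1, y1), (x2, y2)] fingertip samples."""
    def gap(touch_points):
        (x1, y1), (x2, y2) = touch_points
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return gap(end) < ratio * gap(start)

pinched = is_pinch([(0, 0), (10, 0)], [(4, 0), (6, 0)])
```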
- the method further includes: exiting the virtual device interface.
- the exiting the virtual device interface includes: acquiring a second trigger signal; and exiting the virtual device interface according to the second trigger signal.
- an apparatus for managing IoT devices, including units composed of software and/or hardware, where the units are configured to execute any one of the methods of the first aspect.
- an electronic device, including a processor and a memory, where the memory is configured to store a computer program and the processor is configured to call and run the computer program from the memory, so that the electronic device executes any one of the methods of the first aspect.
- a computer-readable medium storing program code which, when executed on an electronic device, causes the electronic device to execute any one of the methods of the first aspect.
- a computer program product including computer program code which, when run on an electronic device, causes the electronic device to perform any one of the methods of the first aspect.
- FIG. 1 is a schematic diagram of an IoT system suitable for an embodiment of the present application
- FIG. 2 is a schematic diagram of a hardware system of an IoT device provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of a software system of an IoT device provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of a topology connection of logical devices of several IoT devices provided by an embodiment of the present application;
- FIG. 5 is a method for entering a display interface of a logic device through a smart TV provided by an embodiment of the present application
- FIG. 6 is a method for entering a display interface of a logical device through a mobile phone provided by an embodiment of the present application
- FIG. 8 is a schematic diagram of a display interface of a logic device provided by an embodiment of the present application.
- FIG. 9 is a schematic diagram of a method for setting a video call provided by an embodiment of the present application.
- FIG. 10 is a schematic diagram of a method for setting a shared Bluetooth headset provided by an embodiment of the present application.
- FIG. 11 is a schematic diagram of another method for setting a video call provided by an embodiment of the present application.
- FIG. 12 is a schematic diagram of another method for setting a multi-party video call provided by an embodiment of the present application.
- FIG. 13 is a schematic diagram of a method for setting a camera provided by an embodiment of the present application.
- FIG. 14 is a schematic diagram of a method for migrating an APP function provided by an embodiment of the present application.
- FIG. 15 is a schematic diagram of another method for migrating an APP function provided by an embodiment of the present application.
- FIG. 16 is a schematic diagram of a method for establishing a video call provided by an embodiment of the present application.
- FIG. 17 is a schematic diagram of another method for establishing a video call provided by an embodiment of the present application.
- FIG. 18 is a schematic diagram of a method for controlling a smart TV through a mobile phone provided by an embodiment of the present application
- FIG. 19 is a schematic diagram of an electronic device for managing IoT devices provided by an embodiment of the present application.
- the IoT system 100 includes a smart TV 101, a mobile phone 102, a smart speaker 103, and a router 104. These devices may be referred to as IoT devices.
- the user can send an instruction to the smart TV 101 through the mobile phone 102, the instruction is forwarded and transmitted to the smart TV 101 via the router 104, and the smart TV 101 performs corresponding operations according to the instruction, such as turning on the camera, screen, microphone and speaker.
- the user can also send an instruction to the smart speaker 103 through the mobile phone 102, the instruction is transmitted to the smart speaker 103 through the Bluetooth connection between the mobile phone 102 and the smart speaker 103, and the smart speaker 103 performs corresponding operations according to the instruction, such as turning on the microphone and speaker.
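The two forwarding paths just described (via the router to the TV, via a direct Bluetooth link to the speaker) can be sketched as a simple transport choice. The link tables and device names are invented for the example.

```python
# Illustrative transport selection for an instruction sent from the phone:
# prefer a direct Bluetooth link; otherwise forward through the router.
def route(target, bluetooth_links, routed_devices):
    if ("phone", target) in bluetooth_links:
        return ["phone", target]
    if target in routed_devices:
        return ["phone", "router", target]
    return None  # no known path to the target

bt = {("phone", "smart_speaker")}
routed = {"smart_tv"}
path_tv = route("smart_tv", bt, routed)
path_speaker = route("smart_speaker", bt, routed)
```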
- the IoT system 100 is one example of an IoT system suitable for the present application, but not the only one.
- the IoT devices may also communicate through a wired connection, and the user may control the smart TV 101 and the smart speaker 103 through an AR device or a VR device.
- the following uses FIG. 2 as an example to introduce the hardware structure of the IoT device provided by the embodiments of the present application.
- the IoT device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
- the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
- the structure shown in FIG. 2 does not constitute a specific limitation on IoT devices.
- the IoT device may include more or fewer components than those shown in FIG. 2, or a combination of some of the components shown in FIG. 2, or sub-components of some of the components shown in FIG. 2.
- the components shown in FIG. 2 may be implemented in hardware, software, or a combination of software and hardware.
- Processor 110 may include one or more processing units.
- the processor 110 may include at least one of the following processing units: an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and a neural-network processing unit (NPU).
- the different processing units may be independent devices or integrated devices.
- the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
- a memory may also be provided in the processor 110 for storing instructions and data.
- the memory in the processor 110 is a cache. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thereby improves system efficiency.
- the processor 110 may include one or more interfaces.
- the processor 110 may include at least one of the following interfaces: an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM interface, and a USB interface.
- the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
- the processor 110 may contain multiple sets of I2C buses.
- the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193 and the like through different I2C bus interfaces.
- the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface, so as to realize the touch function of the IoT device.
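The master-addressed register traffic described above can be pictured with a toy in-memory bus model: each peripheral answers at its address, and the master reads or writes its registers. This is not a real driver; the addresses and register values are invented.

```python
# Toy model of I2C-style transactions: a bus holds peripherals keyed by
# address, each exposing a register map the master can read and write.
class I2CBus:
    def __init__(self):
        self.peripherals = {}

    def attach(self, address, registers):
        self.peripherals[address] = dict(registers)

    def read(self, address, register):
        return self.peripherals[address][register]

    def write(self, address, register, value):
        self.peripherals[address][register] = value

bus = I2CBus()
bus.attach(0x48, {0x00: 0x2A})  # e.g. a touch controller at address 0x48
value = bus.read(0x48, 0x00)    # the processor polls a status register
```

On real hardware, SDA carries these data bytes while SCL clocks them, and each transaction starts with the 7-bit peripheral address.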
- the I2S interface can be used for audio communication.
- the processor 110 may contain multiple sets of I2S buses.
- the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
- the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
- the PCM interface can also be used for audio communication, sampling, quantizing, and encoding analog signals.
- the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
- the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
- the UART interface is a universal serial data bus used for asynchronous communication.
- the bus may be a bidirectional communication bus that converts the data to be transmitted between serial and parallel form.
- a UART interface is typically used to connect the processor 110 with the wireless communication module 160 .
- the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
- the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
- the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
- MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
- the processor 110 communicates with the camera 193 through a CSI interface, so as to realize the photographing function of the IoT device.
- the processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the IoT device.
- the GPIO interface can be configured by software.
- the GPIO interface can be configured as a control signal interface or as a data signal interface.
- the GPIO interface may be used to connect the processor 110 with the camera 193 , the display screen 194 , the wireless communication module 160 , the audio module 170 and the sensor module 180 .
- the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface or MIPI interface.
- the USB interface 130 is an interface conforming to the USB standard specification, for example, it may be a mini (Mini) USB interface, a micro (Micro) USB interface or a USB Type C (USB Type C) interface.
- the USB interface 130 can be used to connect a charger to charge the IoT device, and can also be used to transmit data between the IoT device and peripheral devices, and can also be used to connect an earphone to play audio through the earphone.
- the USB interface 130 can also be used to connect other electronic devices, such as AR devices.
- connection relationship between the modules shown in FIG. 2 is only a schematic illustration, and does not constitute a limitation on the connection relationship between the modules of the IoT device.
- each module of the IoT device may also adopt a combination of multiple connection manners in the foregoing embodiments.
- the charge management module 140 is used to receive power from the charger.
- the charger may be a wireless charger or a wired charger.
- the charging management module 140 may receive current from the wired charger through the USB interface 130 .
- the charging management module 140 may receive electromagnetic waves (current paths are shown as dotted lines) through the wireless charging coil of the IoT device. While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
- the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
- the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
- the power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle times, and battery health (eg, leakage, impedance).
- the power management module 141 may be provided in the processor 110, or the power management module 141 and the charging management module 140 may be provided in the same device.
- the wireless communication function of the IoT device may be implemented by components such as the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, and a baseband processor.
- Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
- Each antenna in an IoT device can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
- the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
- the mobile communication module 150 can provide a wireless communication solution applied on the IoT device, for example, at least one of the following solutions: a second generation (2G) mobile communication solution, a third generation (3G) mobile communication solution, a fourth generation (4G) mobile communication solution, and a fifth generation (5G) mobile communication solution.
- the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
- the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and then transmit them to a modulation and demodulation processor for demodulation.
- the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and the amplified signal is converted into electromagnetic waves through the antenna 1 and radiated out.
- at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
- at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
- the modem processor may include a modulator and a demodulator.
- the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
- the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
- the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
- the application processor outputs sound signals through audio devices (eg, speaker 170A, receiver 170B), or displays images or video through display screen 194 .
- the modem processor may be a stand-alone device.
- the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
- the wireless communication module 160 can also provide a wireless communication solution applied on the IoT device, such as at least one of the following solutions: wireless local area network (WLAN), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR).
- the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
- the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
- the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation and amplification on the signal, and the signal is converted into electromagnetic waves and radiated out through the antenna 2 .
- the antenna 1 of the IoT device is coupled to the mobile communication module 150
- the antenna 2 of the IoT device is coupled to the wireless communication module 160 .
- the IoT device can implement display functions through the GPU, the display screen 194, and the application processor.
- the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
- the GPU is used to perform mathematical and geometric calculations for graphics rendering.
- Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
- Display screen 194 may be used to display images or video.
- Display screen 194 includes a display panel.
- the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini light-emitting diode (Mini LED), a micro light-emitting diode (Micro LED), a micro OLED (Micro OLED), or quantum dot light-emitting diodes (QLED).
- the IoT device may include 1 or N display screens 194, where N is a positive integer greater than 1.
- the IoT device can realize the shooting function through ISP, camera 193, video codec, GPU, display screen 194, and application processor.
- the ISP is used to process the data fed back by the camera 193 .
- when the shutter is opened, light is transmitted through the lens to the camera's photosensitive element; the light signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
- ISP can algorithmically optimize the noise, brightness and color of the image, and ISP can also optimize parameters such as exposure and color temperature of the shooting scene.
- the ISP may be provided in the camera 193 .
- Camera 193 is used to capture still images or video.
- the object is projected through the lens to generate an optical image onto the photosensitive element.
- the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
- the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
- the ISP outputs the digital image signal to the DSP for processing.
- DSP converts digital image signals into standard red green blue (RGB), YUV and other formats of image signals.
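The DSP's conversion between YUV and RGB mentioned above follows standard colorimetry. A sketch using the common full-range BT.601 coefficients (the exact matrix a given ISP/DSP uses is implementation specific and not stated in the source):

```python
def yuv_to_rgb(y: float, u: float, v: float) -> tuple:
    """Convert one full-range BT.601 YUV sample to an (R, G, B) triple.

    y is in [0, 255]; u and v are chroma samples centered on 128.
    """
    u -= 128.0
    v -= 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    clamp = lambda c: max(0, min(255, round(c)))
    return clamp(r), clamp(g), clamp(b)

# Pure gray: chroma at the midpoint leaves R = G = B = Y.
print(yuv_to_rgb(128, 128, 128))  # (128, 128, 128)
```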
- the IoT device may include 1 or N cameras 193 , where N is a positive integer greater than 1.
- a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the IoT device selects the frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy, etc.
- Video codecs are used to compress or decompress digital video.
- IoT devices can support one or more video codecs. In this way, IoT devices can play or record videos in multiple encoding formats, such as: Moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
- NPU is a processor that draws on the structure of biological neural network. For example, it can quickly process input information by drawing on the transmission mode between neurons in the human brain, and it can also continuously learn by itself. Through the NPU, functions such as intelligent cognition of IoT devices can be realized, such as image recognition, face recognition, speech recognition and text understanding.
- the external memory interface 120 may be used to connect an external memory card, such as a secure digital (SD) card, to expand the storage capacity of the IoT device.
- the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, files such as music and videos can be saved in the external memory card.
- Internal memory 121 may be used to store computer executable program code, which includes instructions.
- the internal memory 121 may include a storage program area and a storage data area.
- the storage program area may store an operating system, an application program required for at least one function (eg, a sound playback function and an image playback function).
- the storage data area can store data (for example, audio data and phone book) created during the use of the IoT device.
- the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as: at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
- the processor 110 executes various functional applications and data processing of the IoT device by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
- the IoT device can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone jack 170D, and the application processor.
- audio functions such as music playback and recording
- the audio module 170 is used for converting digital audio information into analog audio signal output, and can also be used for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 or some functional modules of the audio module 170 may be provided in the processor 110 .
- Speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals. The IoT device can play music or take hands-free calls through the speaker 170A.
- the receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
- the voice can be answered by placing the receiver 170B close to the ear.
- Microphone 170C, also known as a "mic" or "sound transducer", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can input a sound signal into the microphone 170C by speaking close to it.
- the IoT device may be provided with at least one microphone 170C. In other embodiments, the IoT device may be provided with two microphones 170C to realize the noise reduction function. In other embodiments, the IoT device may also be provided with three, four or more microphones 170C to realize functions such as identifying sound sources and directional recording.
- the earphone jack 170D is used to connect wired earphones.
- the earphone interface 170D can be the USB interface 130, or can be a 3.5mm open mobile terminal platform (OMTP) standard interface, a cellular telecommunications industry association of the USA (CTIA) standard interface.
- the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
- the pressure sensor 180A may be provided on the display screen 194 .
- the capacitive pressure sensor may include at least two parallel plates with conductive materials.
- touch operations acting on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
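The threshold behavior described above amounts to a simple dispatch on touch intensity. A minimal sketch; the threshold value and action names are illustrative, not from the source:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # illustrative value; device-specific in practice

def sms_icon_action(pressure: float) -> str:
    """Map touch intensity on the SMS icon to an operation instruction."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_messages"        # light press: open the message list
    return "compose_new_message"      # firm press: create a new message

print(sms_icon_action(0.2))  # view_messages
print(sms_icon_action(0.8))  # compose_new_message
```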
- the gyroscope sensor 180B can be used to determine the motion attitude of the IoT device.
- the angular velocities of the IoT device about three axes (i.e., the x-axis, the y-axis, and the z-axis) can be determined by the gyroscope sensor 180B.
- the gyro sensor 180B can be used for image stabilization. For example, when the shutter is pressed, the gyroscope sensor 180B detects the shaking angle of the IoT device, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to offset the shaking of the IoT device through reverse motion to achieve anti-shake.
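The compensation distance computed from the shake angle can be approximated with simple trigonometry: under a thin-lens, small-angle model, the image shift on the sensor is roughly the focal length times the tangent of the shake angle. This is a sketch of that model, not the device's actual anti-shake algorithm; the 26 mm focal length is an illustrative value:

```python
import math

def ois_compensation_mm(focal_length_mm: float, shake_deg: float) -> float:
    """Approximate lens shift needed to cancel a given shake angle.

    Uses the thin-lens small-angle model: shift ≈ f * tan(theta).
    """
    return focal_length_mm * math.tan(math.radians(shake_deg))

# A 0.1-degree shake on a 26 mm-equivalent lens needs a sub-0.05 mm shift.
shift = ois_compensation_mm(26.0, 0.1)
```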
- the gyro sensor 180B can also be used in scenarios such as navigation and somatosensory games.
- the air pressure sensor 180C is used to measure air pressure.
- the IoT device calculates altitude from the air pressure value measured by the air pressure sensor 180C to aid in positioning and navigation.
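The pressure-to-altitude calculation is typically done with the international barometric formula. A sketch, assuming the standard sea-level reference pressure of 1013.25 hPa (the source does not specify the formula the device uses):

```python
def altitude_m(pressure_hpa: float, sea_level_hpa: float = 1013.25) -> float:
    """Estimate altitude from barometric pressure (international formula)."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

print(round(altitude_m(1013.25)))  # 0
print(round(altitude_m(899.0)))    # on the order of 1000 m
```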
- the magnetic sensor 180D includes a Hall sensor. The IoT device can use the magnetic sensor 180D to detect the opening and closing of a flip holster. In some embodiments, when the IoT device is a flip phone, the IoT device can detect the opening and closing of the flip cover according to the magnetic sensor 180D. The IoT device can set features such as automatic unlocking of the flip cover based on the detected opening or closing state of the holster or the flip cover.
- the acceleration sensor 180E can detect the magnitude of the acceleration of the IoT device in various directions (generally the x-axis, the y-axis and the z-axis). The magnitude and direction of gravity can be detected when the IoT device is stationary. The acceleration sensor 180E can also be used to identify the posture of the IoT device, as an input parameter for applications such as horizontal and vertical screen switching and pedometer.
- the distance sensor 180F is used to measure the distance.
- IoT devices can measure distance via infrared or laser. In some embodiments, such as in a shooting scene, the IoT device may utilize a distance sensor 180F for ranging for fast focus.
- Proximity light sensor 180G may include, for example, light-emitting diodes (LEDs) and light detectors, such as photodiodes.
- the LEDs may be infrared LEDs.
- IoT devices emit infrared light outward through LEDs.
- IoT devices use photodiodes to detect infrared reflected light from nearby objects. When reflected light is detected, IoT devices can determine the presence of objects nearby. When no reflected light is detected, IoT devices can determine that there are no objects nearby.
- the IoT device can use the proximity light sensor 180G to detect whether the user holds the IoT device close to the ear to talk, so as to automatically turn off the screen to save power.
- Proximity light sensor 180G can also be used for automatic unlocking and automatic screen locking in holster mode or pocket mode.
- the ambient light sensor 180L is used to sense ambient light brightness.
- the IoT device can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
- the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
- the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the IoT device is in the pocket to prevent accidental touch.
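One common way to realize the adaptive brightness adjustment described above is a logarithmic mapping from measured illuminance to backlight level, since perceived brightness is roughly logarithmic in lux. A sketch with illustrative anchor points; real devices use tuned, device-specific curves:

```python
import math

def backlight_percent(lux: float, dark_lux: float = 1.0,
                      bright_lux: float = 10000.0) -> float:
    """Map ambient illuminance to a 0-100% backlight level on a log scale."""
    lux = max(dark_lux, min(bright_lux, lux))
    frac = (math.log10(lux) - math.log10(dark_lux)) / (
        math.log10(bright_lux) - math.log10(dark_lux))
    return 100.0 * frac

print(backlight_percent(1))      # 0.0   (dark room)
print(backlight_percent(10000))  # 100.0 (direct sunlight)
```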
- the fingerprint sensor 180H is used to collect fingerprints. IoT devices can use the collected fingerprint characteristics to unlock, access app locks, take pictures, and answer incoming calls.
- the temperature sensor 180J is used to detect the temperature.
- the IoT device utilizes the temperature detected by the temperature sensor 180J to implement a temperature handling strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the IoT device reduces the performance of the processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection.
- when the temperature is lower than another threshold, the IoT device heats the battery 142 to avoid an abnormal shutdown caused by the low temperature.
- when the temperature is lower than yet another threshold, the IoT device boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by the low temperature.
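The thermal strategy in these embodiments boils down to threshold comparisons. A minimal sketch; the threshold values and action names are illustrative, not from the source:

```python
HIGH_TEMP_C = 45.0   # illustrative throttle threshold
LOW_TEMP_C = 0.0     # illustrative low-temperature threshold

def thermal_actions(temp_c: float) -> list:
    """Return the protective actions for a reported temperature."""
    actions = []
    if temp_c > HIGH_TEMP_C:
        actions.append("throttle_nearby_processor")  # reduce performance
    elif temp_c < LOW_TEMP_C:
        actions.append("heat_battery")               # avoid cold shutdown
        actions.append("boost_battery_voltage")
    return actions

print(thermal_actions(50.0))   # ['throttle_nearby_processor']
print(thermal_actions(-5.0))   # ['heat_battery', 'boost_battery_voltage']
```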
- the touch sensor 180K is also called a touch device.
- the touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also referred to as a "touch-controlled screen".
- the touch sensor 180K is used to detect a touch operation on or near it.
- the touch sensor 180K may pass the detected touch operation to the application processor to determine the type of touch event.
- Visual output related to touch operations may be provided through display screen 194 .
- the touch sensor 180K may also be disposed on the surface of the IoT device and disposed at a different location from the display screen 194 .
- the bone conduction sensor 180M can acquire vibration signals.
- the bone conduction sensor 180M can acquire the vibration signal of the bone block that vibrates with the human voice.
- the bone conduction sensor 180M can also be placed against the pulse of the human body to receive the blood pressure pulse signal.
- the bone conduction sensor 180M can also be disposed in the earphone, combined with the bone conduction earphone.
- the audio module 170 can analyze the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 180M, so as to realize the voice function.
- the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the function of heart rate detection.
- the keys 190 include a power key and a volume key.
- the key 190 may be a mechanical key or a touch key.
- IoT devices can receive key input signals and implement functions related to the key input signals.
- the motor 191 may generate vibration.
- the motor 191 can be used for incoming call alerts, and can also be used for touch feedback.
- the motor 191 can generate different vibration feedback effects for touch operations acting on different applications.
- for touch operations acting on different areas of the display screen 194, the motor 191 can also generate different vibration feedback effects.
- different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can correspond to different vibration feedback effects.
- the touch vibration feedback effect can also support customization.
- the indicator 192 can be an indicator light, which can be used to indicate the charging status and power change, and can also be used to indicate messages, missed calls, and notifications.
- the SIM card interface 195 is used to connect a SIM card.
- the SIM card can be inserted into the SIM card interface 195 to achieve contact with the IoT device, or can be pulled out from the SIM card interface 195 to achieve separation from the IoT device.
- the IoT device can support 1 or N SIM card interfaces, where N is a positive integer greater than 1. Multiple cards can be inserted into the same SIM card interface 195 at the same time, and the types of the multiple cards can be the same or different.
- the SIM card interface 195 is also compatible with external memory cards. IoT devices interact with the network through the SIM card to realize functions such as calls and data communication.
- the IoT device adopts an embedded SIM (embedded-SIM, eSIM) card, and the eSIM card can be embedded in the IoT device and cannot be separated from the IoT device.
- the hardware system of the IoT device is described in detail above, and the software system of the IoT device provided by the embodiments of the present application is introduced below.
- the software system of the IoT device may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
- the embodiments of the present application take the layered architecture as an example to exemplarily describe the software system of the IoT device.
- the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
- the software system is divided into three layers, which are an application layer, an operating system layer, and a logical device layer from top to bottom.
- the application layer can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and SMS.
- the operating system layer provides an application programming interface (API) and background services for the APP of the application layer, and the background services are, for example, some predefined functions.
- when a touch operation is received, the corresponding hardware interrupt is sent to the operating system layer, and the operating system layer processes the touch operation into an original input event.
- the original input event includes, for example, information such as the touch coordinates and the timestamp of the touch operation. The operating system layer then identifies the control corresponding to the original input event and notifies the APP corresponding to that control. For example, if the above touch operation is a single-click operation and the APP corresponding to the control is the camera APP, the camera APP can call the background service through the API, transmit control instructions to the logical port management module, and control the camera 193 to shoot through the logical port management module.
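The event flow above (hardware interrupt → original input event → control lookup → APP notification) can be pictured as a small dispatcher. This is a hypothetical sketch: the control layout, hit-test logic, and callback names are illustrative, not from the source:

```python
import time

class InputEvent:
    """Original input event produced by the operating system layer."""
    def __init__(self, x: int, y: int):
        self.x, self.y = x, y
        self.timestamp = time.time()  # timestamp of the touch operation

# Hypothetical control layout: screen regions mapped to APP callbacks.
controls = {
    "camera_shutter": ((0, 0, 100, 100), lambda ev: "camera: shoot"),
}

def dispatch(ev: InputEvent) -> str:
    """Identify the control under the touch point and notify its APP."""
    for name, ((x0, y0, x1, y1), callback) in controls.items():
        if x0 <= ev.x < x1 and y0 <= ev.y < y1:
            return callback(ev)
    return "no control hit"

print(dispatch(InputEvent(50, 50)))    # camera: shoot
print(dispatch(InputEvent(500, 500)))  # no control hit
```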
- the logical device layer includes three main modules: the logical device port management module, the logical device management module and the logical device user interface (UI) module.
- the logical port management module is used to manage the routing of each logical port and to realize function sharing and function referencing for logical ports; the port of a remote IoT device can be referenced through the network connection.
- the smart TV 101 sets the status of the camera function to shareable
- the logical port management module of the mobile phone 102 references the camera function of the smart TV 101 through the network connection; then, the APP on the mobile phone 102 can use the camera of the smart TV 101 to perform operations such as video chatting.
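The sharing and referencing of remote logical ports described above can be pictured as a registry keyed by device and port. A hypothetical sketch; the class and method names are not from the source:

```python
class LogicalPortRegistry:
    """Track local shareable ports and references to remote ports."""

    def __init__(self):
        # (device, port) -> {"shareable": bool}
        self.ports = {}

    def share(self, device: str, port: str):
        """Mark a port as shareable, like the smart TV setting its camera."""
        self.ports[(device, port)] = {"shareable": True}

    def reference(self, device: str, port: str) -> dict:
        """Reference a remote shareable port over the network connection."""
        entry = self.ports.get((device, port))
        if not entry or not entry["shareable"]:
            raise PermissionError("port is not shared")
        return {"device": device, "port": port, "remote": True}

reg = LogicalPortRegistry()
reg.share("smart_tv_101", "camera")          # TV marks its camera shareable
ref = reg.reference("smart_tv_101", "camera")  # phone 102 references it
```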
- the functions of the logical device management module include the addition, deletion and permission management of IoT devices.
- the logical device UI module is used to display the logical device list to the user in a visual form, so that the user can manage IoT devices.
- the logical device management module can activate the function /dev/mic1 based on the user operation information and add microphone 1 to the logical device list, so that the logical port management module can use the port of microphone 1 to pick up sound;
- similarly, the logical device management module can activate the function /dev/mic2 and add microphone 2 to the logical device list, so that the logical port management module can use the port of microphone 2 to pick up sound.
- IoT devices can be virtualized as logical devices.
- the topology of the logical devices of the smart TV 101 , the mobile phone 102 and the smart speaker 103 is shown in FIG. 4 .
- the modules with user interaction functions are usually microphones, speakers, cameras and screens. Therefore, the logical devices of the smart TV 101 and the mobile phone 102 may include logical ports corresponding to the above modules.
- the modules with user interaction functions are usually microphones and speakers. Therefore, the logical ports included in the logic device of the smart speaker 103 may be microphones and speakers.
- the topology shown in FIG. 4 may be generated by the handset 102 .
- the mobile phone 102 can send instruction information to the smart TV 101 and the smart speaker 103, instructing the smart TV 101 and the smart speaker 103 to report their respective capability information, where the capability information indicates the functions supported by each IoT device.
- the capability information reported by the smart TV 101 indicates that the functions supported by the smart TV 101 include a microphone, a speaker, a camera and a screen
- the capability information reported by the smart speaker 103 indicates that the functions supported by the smart speaker include a microphone and a speaker.
- the mobile phone 102 can also send a query request to the server, and obtain the capability information of the smart TV 101 and the smart speaker 103 from the server according to the device brand and/or device model.
- the mobile phone 102 can synchronize the capability information of the smart TV 101 and the smart speaker 103 .
- the smart TV 101 and the smart speaker 103 can periodically send capability information to the mobile phone 102; or the mobile phone 102 can periodically query the capabilities of the smart TV 101 and the smart speaker 103; or the smart TV 101 and the smart speaker 103 can send capability information to the mobile phone 102 when their own supported functions change.
- the devices also need to synchronize the device status (for example, the power of the logical device, whether the screen of the logical device is off, whether the logical port is occupied, etc.).
- the synchronization method can optionally refer to the synchronization of capability information, which is not limited in this application.
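The capability synchronization described above can be modeled as devices reporting their function lists to a coordinating device, which merges them into one topology. A minimal sketch; the message shape and device names follow the example in the text but are otherwise illustrative:

```python
def build_topology(reports: dict) -> dict:
    """Merge per-device capability reports into a logical-device topology.

    Each report maps a device name to the list of functions it supports.
    """
    return {device: sorted(functions) for device, functions in reports.items()}

# Capability information as reported to the mobile phone 102:
reports = {
    "smart_tv_101":      ["microphone", "speaker", "camera", "screen"],
    "smart_speaker_103": ["microphone", "speaker"],
}
topology = build_topology(reports)
print(topology["smart_speaker_103"])  # ['microphone', 'speaker']
```

Device status (power, screen state, port occupancy) could be synchronized through the same report/merge pattern, as the text suggests.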
- the user can enter the logical device display interface through the smart TV 101 or the smart speaker 103. No matter which IoT device enters the logical device display interface, the user can see the status of each IoT device, and can manage each IoT device in the same way.
- FIG. 5 shows a method for entering the display interface of the logical device through the smart TV 101 .
- the user can make a two-finger pinch action when the smart TV 101 is in any display interface, and the action is used to trigger the smart TV 101 to enter the logic device display interface.
- the smart TV 101 can capture the pinch action through the camera, or through a screen with a touch function; that is, the user can make a pinch action in the air, and the smart TV 101 can be triggered through the camera to enter the logical device display interface.
- the user can also make a two-finger pinch action on the screen with touch function, and trigger the smart TV 101 to enter the logic device display interface through the screen.
- after the processor of the smart TV 101 detects the two-finger pinch action, it can shrink the current display interface and display a small-size picture of the current display interface on the screen as the logical device of the smart TV 101.
- the user can also trigger the smart TV 101 to enter the logical device display interface through sound, a remote control or other actions.
- the present application does not limit the specific manner of triggering the smart TV 101 to enter the logical device display interface.
- FIG. 6 shows a method of entering the display interface of the logical device through the mobile phone 102 .
- the user can click or double-click the floating button when the mobile phone 102 is in any display interface; clicking or double-clicking the floating button triggers the mobile phone 102 to enter the logical device display interface.
- the floating button can be set to a translucent state and can be dragged to any location on the screen of the mobile phone 102.
- FIG. 7 shows another method of entering the display interface of the logical device through the mobile phone 102 .
- the user can long press the screen to enter the logical device display interface, and the position where the finger presses can be any position on the screen.
- the display interfaces of the virtual devices of the smart TV 101 and the mobile phone 102 are shown in FIG. 8 .
- the lower parts of the virtual devices of the smart TV 101 and the mobile phone 102 both display their respective logical ports, and these logical ports can be displayed on the screen in the form of 2D icons.
- the four 2D icons displayed below the virtual device of the smart TV 101 are, from left to right, the microphone, speaker, camera, and screen; the four 2D icons displayed below the virtual device of the mobile phone 102 are likewise, from left to right, the microphone, speaker, camera, and screen.
- the logical port can also be displayed on the screen in the form of a 3D model. If the user is currently using an AR device, the logical port of the 3D model can also be displayed to the user through the AR device.
- the audio and video in the examples of this application are managed separately.
- microphones and speakers are mainly used for audio capture and playback, and when transmitting data across devices, raw audio data can be transmitted.
- the camera and the display screen are mainly used for video collection and playback.
- the video data transmission can be realized through video encoding and decoding.
- if the audio and video in the embodiments of the present application need to be transmitted at the same time, an encapsulated screen projection protocol, an audio and video transmission protocol, or the like is optionally used.
- the smart TV 101 can synchronously display the real-time status of each IoT device on the corresponding virtual device. As shown in FIG. 8, the user is using the smart TV 101 to make a video call, and the current video call content can be displayed on the virtual device of the smart TV 101; when the mobile phone 102 is in the locked-screen state, the lock screen image can be displayed on the virtual device of the mobile phone 102.
- when the user needs to exit the logical device display interface, the user can click a blank area of the logical device display interface, click the virtual device, or click a virtual or physical return key to exit the logical device display interface.
- the present application does not limit the specific manner of exiting the logical device display interface.
- the method for entering and exiting the display interface of the logical device is described in detail above.
- the operation method of the display interface of the logical device will be described below.
- a video call is a common application scenario.
- the user can see the other party's picture through the screen and hear the other party's voice through the speaker, while the user's own picture and voice can be transmitted to the other party through the camera and microphone.
- the smart TV 101 and the mobile phone 102 have different advantages in video calls.
- the smart TV 101 has a larger screen and a wider camera viewing angle
- the mobile phone 102 can be moved flexibly. Users can make video calls in different ways in different scenarios to meet individual needs.
- FIG. 9 shows a setting method of a video call.
- the user is currently using the smart TV 101 to make a video call.
- the microphone of the smart TV 101 has poor sound pickup effect, and the user can use the microphone of the mobile phone 102 to pick up sound.
- the user can pinch two fingers together in the air. After the camera of the smart TV 101 captures the action, the smart TV 101 enters the logical device display interface and displays the virtual devices of the smart TV 101 and the mobile phone 102. The user can select the microphone icon of the smart TV 101 and perform a drag operation in the air, dragging the microphone icon of the smart TV 101 onto the microphone icon of the mobile phone 102, or onto the virtual device icon of the mobile phone 102 (hereinafter referred to as the "virtual device"). After the smart TV 101 detects the drag operation, it sends a request message to the mobile phone 102, requesting to use the microphone of the mobile phone 102.
- after the smart TV 101 obtains the audio data from the mobile phone 102, the audio data can be encapsulated with the video data obtained by the smart TV 101 and sent to the opposite end of the video call. In this embodiment, the current video call does not need to be closed to configure the microphone function migration, which enhances the user experience.
- a connection line is added between the microphone icon of the smart TV 101 and the microphone icon of the mobile phone 102, and the mobile phone 102 can also display a microphone icon on its screen, to remind the user that the microphone function migration between the smart TV 101 and the mobile phone 102 has been completed, enhancing the user's experience.
- the user can click the exit button of the remote control to exit the logical device display interface.
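The microphone migration flow described for FIG. 9 could be sketched as the following request/response exchange. This is a hedged illustration: the `Phone` and `SmartTV` classes, the message fields, and the byte-string payloads are assumptions, not the patent's actual implementation.

```python
class Phone:
    """Plays the role of the mobile phone 102 receiving the migration request."""

    def __init__(self):
        self.microphone_open = False

    def handle_request(self, request):
        # On receiving the request message, start the sound pickup function.
        if request["port"] == "microphone":
            self.microphone_open = True
            return {"accepted": True}
        return {"accepted": False}

    def capture_audio(self):
        # Audio picked up by the phone and transmitted back to the TV.
        return b"audio-from-phone" if self.microphone_open else b""


class SmartTV:
    """Plays the role of the smart TV 101 that detected the drag operation."""

    def __init__(self, peer):
        self.peer = peer

    def migrate_microphone(self):
        # Sent after the drag of the microphone icon is detected.
        reply = self.peer.handle_request({"port": "microphone"})
        return reply["accepted"]

    def send_frame(self):
        # Encapsulate the phone's audio with the TV's own video before
        # sending it to the opposite end of the video call.
        return {"audio": self.peer.capture_audio(),
                "video": b"video-from-tv"}


phone = Phone()
tv = SmartTV(phone)
accepted = tv.migrate_microphone()
frame = tv.send_frame()
```

The same exchange, with a second request for the speaker port, covers the Bluetooth-headset scenario of FIG. 10.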
- the smart TV 101 can also use the microphone and speaker of a Bluetooth headset connected to the mobile phone 102 to facilitate video calls when the user cannot hold the mobile phone 102 with both hands.
- the user can pinch two fingers together in the air. After the camera of the smart TV 101 captures the action, the smart TV 101 enters the logical device display interface and displays the virtual devices of the smart TV 101 and the mobile phone 102.
- the mobile phone 102 is connected with a Bluetooth headset, and the virtual device of the mobile phone 102 includes a Bluetooth icon.
- the user can select the microphone icon of the smart TV 101, and perform a drag operation in the air.
- the user can drag the microphone icon and the speaker icon of the smart TV 101 to the Bluetooth icon of the mobile phone 102 respectively, instructing the mobile phone 102 to open the microphone and speaker of the Bluetooth headset for use by the smart TV 101.
- the user can also drag the microphone icon and speaker icon of the smart TV 101 to the virtual device of the mobile phone 102 respectively, and the mobile phone 102 decides whether to open the microphone and speaker of the Bluetooth headset to the smart TV 101 for use.
- after the smart TV 101 detects the operation of dragging the microphone icon, it sends a request message to the mobile phone 102, requesting to use the microphone of the mobile phone 102. After the mobile phone 102 receives the request message, it starts the sound pickup function, obtains the user's voice, and transmits it to the smart TV 101. After the smart TV 101 obtains the audio data from the mobile phone 102, the audio data can be encapsulated with the video data obtained by the smart TV 101 and sent to the opposite end of the video call.
- after the smart TV 101 detects the operation of dragging the speaker icon, it sends a request message to the mobile phone 102 again, requesting to use the speaker of the mobile phone 102. After the mobile phone 102 receives the request message, it starts the speaker function and plays the audio data obtained from the smart TV 101.
- the embodiment shown in FIG. 10 does not require closing the current video call to configure the microphone and speaker function migration, which enhances the user experience. After the migration settings are completed, the user can click the exit button on the remote control to exit the logical device display interface.
- FIG. 11 shows another setting method of a video call.
- the user is currently using the mobile phone 102 to make a video call.
- the screen of the smart TV 101 can be used to watch the video call to obtain better visual effects.
- the user can long press the screen of the mobile phone 102 , and after detecting the action, the mobile phone 102 enters the logical device display interface, and displays the virtual devices of the smart TV 101 and the mobile phone 102 .
- the user can drag the screen icon of the mobile phone 102 to the virtual device of the smart TV 101 .
- the mobile phone 102 sends a request message to the smart TV 101 based on the drag operation, requesting to project the screen of the video call to the smart TV 101.
- after receiving the request message, the smart TV 101 starts the screen projection function, obtains the video data of the video call from the mobile phone 102, and displays the picture of the video call on its screen, while the mobile phone 102 continues to process the audio data of the video call.
- This embodiment does not need to close the current video call to perform screen projection settings, which enhances the user experience.
- the user can click the blank space of the logical device display interface to exit the logical device display interface.
- FIG. 12 shows yet another method for setting a video call provided by the present application, and the method is applied to a three-party video call scenario.
- User A is currently using the mobile phone 102 to make a video call with user C, and user B wishes to join the video call through the smart TV 101, where user A and user B are in the same geographical location.
- User A can press and hold the screen of the mobile phone 102 for a long time. After detecting the action, the mobile phone 102 enters the logical device display interface and displays the virtual devices of the smart TV 101 and the mobile phone 102 . User A can use two fingers to drag the virtual devices of the smart TV 101 and the mobile phone 102 at the same time.
- after detecting the drag operation, the mobile phone 102 sends a request message to the smart TV 101, requesting the smart TV 101 to share its microphone, camera and speaker based on the currently running video call APP; the smart TV 101 then sends user B's media data (such as video data and audio data) to the mobile phone 102. The mobile phone 102 can package the media data of user B with the media data of user A and send it to user C, and package the media data of user C with the media data of user A and send it to the smart TV 101, so that user B joins the video call between user A and user C.
- User A can also use two fingers to drag the camera icons of the smart TV 101 and the mobile phone 102 at the same time, so that the smart TV 101 and the mobile phone 102 share the camera independently. This embodiment does not need to close the current video call to set the video call, which enhances the user experience.
- the user can click the blank space of the logical device display interface to exit the logical device display interface.
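The media relaying in the three-party scenario above could be sketched as follows. This is a hedged illustration: the `package` helper and the byte-string payloads are assumptions introduced for the example, not the patent's actual packaging format.

```python
def package(*streams):
    """Encapsulate several named media streams into one outgoing bundle."""
    return {name: data for name, data in streams}

media_a = b"A-audio-video"   # captured locally by the mobile phone 102
media_b = b"B-audio-video"   # received from the smart TV 101 (user B)
media_c = b"C-audio-video"   # received from the far end (user C)

# The phone packages B's media with its own before sending to user C,
# and packages C's media with its own before sending to the smart TV,
# so user B joins the call between user A and user C.
to_user_c = package(("A", media_a), ("B", media_b))
to_smart_tv = package(("A", media_a), ("C", media_c))
```

Each endpoint thus receives the media of the two parties it cannot capture itself.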
- when the user is using the mobile phone 102 to make a video call, the user can use the camera of the smart TV 101 so that the other party can see a picture with a wider viewing angle.
- the method of using the camera of the smart TV 101 is shown in FIG. 13 .
- the user can long press the screen of the mobile phone 102 , and after detecting the action, the mobile phone 102 enters the logical device display interface, and displays the virtual devices of the smart TV 101 and the mobile phone 102 .
- the user can drag the camera icon of the mobile phone 102 to the virtual device of the smart TV 101 .
- the mobile phone 102 sends a request message to the smart TV 101 based on the drag operation, requesting to obtain the picture captured by the camera of the smart TV 101.
- the smart TV 101 starts the camera to shoot, and sends the captured picture to the mobile phone 102 , the local video picture of the mobile phone 102 (the video picture displayed in the upper right corner of the mobile phone 102 ) is the same as the video picture displayed by the smart TV 101 .
- This embodiment does not need to close the current video call to set the camera, which enhances the user experience.
- the user can click the blank space of the logical device display interface to exit the logical device display interface.
- the user can migrate the status of the APP from the smart TV 101 to the mobile phone 102 , or the user can migrate the status of the APP from the mobile phone 102 to the smart TV 101 .
- the mobile phone 102 has the advantage of being more mobile than the smart TV 101, and the user can migrate the ongoing video call from the smart TV 101 to the mobile phone 102 for better mobility.
- the migration process of video calls is shown in Figure 14.
- the user can make a two-finger pinch action on the screen of the smart TV 101 to trigger the smart TV 101 to enter the logical device display interface; then the user can click to select the virtual device of the smart TV 101 and drag it to the virtual device of the mobile phone 102; after the smart TV 101 detects the drag operation, it sends a request message to the mobile phone 102, requesting to migrate the video call to the mobile phone 102; after the mobile phone 102 receives the request message, it executes the video call migration process; after the migration is completed, the virtual device of the mobile phone 102 displays the video call interface, and the virtual device of the smart TV 101 removes the video call interface.
- This embodiment does not need to close the current video call to perform the migration setting of the video call, which enhances the user experience.
- the user can click the blank space of the logical device display interface to exit the logical device display interface.
- the smart TV 101 has the advantage of a larger screen compared to the mobile phone 102, and the user can migrate the ongoing video call from the mobile phone 102 to the smart TV 101 to obtain a better visual experience.
- the migration process of video calls is shown in Figure 15.
- the user can long press the screen of the mobile phone 102 to trigger the mobile phone 102 to enter the logical device display interface; then the user can click to select the virtual device of the mobile phone 102 and drag it to the virtual device of the smart TV 101; after the mobile phone 102 detects the drag operation, it sends a request message to the smart TV 101, requesting to migrate the video call to the smart TV 101; after receiving the request message, the smart TV 101 executes the video call migration process; after the migration is completed, the virtual device of the smart TV 101 displays the video call interface, and the virtual device of the mobile phone 102 removes the video call interface.
- This embodiment does not need to close the current video call to perform the migration setting of the video call, which enhances the user experience.
- the user can click the blank space of the logical device display interface to exit the logical device display interface.
- the logical device display interface can also be used in other scenarios to obtain a better user experience.
- Figure 16 shows a method of establishing a video call. If the user wishes to use the mobile phone 102 to establish a video call with the smart TV 101, the operation can be performed as described below.
- the user can long press the screen of the mobile phone 102 , and after detecting the action, the mobile phone 102 enters the logical device display interface, and displays the virtual devices of the smart TV 101 and the mobile phone 102 .
- the user can drag the virtual device of the mobile phone 102 to the virtual device of the smart TV 101 .
- the mobile phone 102 sends a request message to the smart TV 101 based on the drag operation, requesting to establish a video call connection with the smart TV 101.
- after receiving the request message, the smart TV 101 can display a video call establishment request dialog box on the screen, so that the user of the smart TV 101 can choose to accept or reject the video call; the smart TV 101 can also directly establish the video call according to preset information and send the captured picture to the mobile phone 102, so that the user can see the environment in which the smart TV 101 is located (e.g., the user's home environment).
- This embodiment adopts an intuitive way to establish a video call, which enhances the user's experience.
- the user can click the blank space of the logical device display interface to exit the logical device display interface.
- the user can enter the logical device display interface again and click the arrow between the virtual device of the smart TV 101 and the virtual device of the mobile phone 102 to disconnect the video call.
- the user can use the smart TV in one residence to establish a video call with the smart TV in another residence.
- Figure 17 shows another method of establishing a video call.
- the user, the smart TV 101 and the mobile phone 102 are located in one residence, and the smart TV 105 is located in another residence. If the user wishes to use the mobile phone 102 to establish a video call between the smart TV 101 and the smart TV 105, the user can operate as follows.
- the user can long press the screen of the mobile phone 102 .
- after the mobile phone 102 detects the action, it enters the logical device display interface and displays the virtual devices of the smart TV 101, the smart TV 105 and the mobile phone 102.
- the user can drag the virtual device of the smart TV 101 to the virtual device of the smart TV 105 .
- the mobile phone 102 sends a notification message to the smart TV 101 based on the drag operation to notify the smart TV 101 to establish a video call connection with the smart TV 105.
- after receiving the notification message, the smart TV 101 sends a video call establishment request to the smart TV 105. After receiving the request, the smart TV 105 can display a video call establishment request dialog box on the screen, so that the user of the smart TV 105 (such as the user's family) can choose to accept or reject the video call; the smart TV 105 can also directly establish the video call according to preset information and send the captured picture to the smart TV 101, so that the user can see the environment where the smart TV 105 is located. This embodiment adopts an intuitive way of establishing a video call, which enhances the user's experience.
- the user may click on the blank space of the virtual device or logical device display interface of the mobile phone 102 to exit the logical device display interface.
- the above describes how to use the logical device display interface in several video call scenarios. Users can also use the logical device display interface to perform other operations. For example, some smart TVs have non-touch screens, and using the remote control to input content on the smart TV is inconvenient; in this case, the user can use the mobile phone to input content on the smart TV.
- the operation method is shown in Figure 18.
- the user can long press the screen of the mobile phone 102 , and after detecting the action, the mobile phone 102 enters the logical device display interface, and displays the virtual devices of the smart TV 101 and the mobile phone 102 .
- the user can click the virtual device of the smart TV 101.
- after the mobile phone 102 detects the click action, it exits the logical device display interface and displays the screen of the smart TV 101 on the screen of the mobile phone 102.
- the mobile phone 102 also needs to map the control event to the smart TV 101. That is, the touch screen event (TouchEvent) of the mobile phone 102 is converted into a touch screen event for the smart TV 101 , so that a click operation or an input operation can be performed on the smart TV 101 through the mobile phone 102 .
- the mobile phone 102 can send the coordinate information of its touch screen event to the smart TV 101, and the smart TV 101 performs mapping according to the screen parameters, determines the equivalent position of the coordinate information on its own screen, and then generates the touch event corresponding to this equivalent position.
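The coordinate mapping just described could be sketched as a simple scaling between screen resolutions. This is a hedged illustration: the screen sizes and the linear-scaling rule are assumptions for the example, since the patent does not specify the mapping formula.

```python
def map_touch(x, y, src_size, dst_size):
    """Scale a touch coordinate from the source screen to the target screen.

    The phone sends (x, y) from its own touch screen event; the TV maps it
    to the equivalent position on its own screen before synthesizing the
    corresponding touch event.
    """
    src_w, src_h = src_size
    dst_w, dst_h = dst_size
    return (round(x * dst_w / src_w), round(y * dst_h / src_h))


phone_screen = (1080, 2340)   # phone resolution (assumed)
tv_screen = (3840, 2160)      # TV resolution (assumed)

# A tap at the centre of the phone screen maps to the centre of the TV screen.
tx, ty = map_touch(540, 1170, phone_screen, tv_screen)
```

A real implementation would also have to account for aspect-ratio letterboxing and screen orientation, which this sketch ignores.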
- the user can long press the screen of the mobile phone 102 again to enter the logical device display interface, and then click the virtual device of the mobile phone 102 or the virtual device of the smart TV 101 to terminate the control of the smart TV 101 by the mobile phone 102.
- the corresponding apparatuses include corresponding hardware structures and/or software modules for performing each function.
- the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer-software-driven hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
- the present application can divide the functional units of the apparatus for managing IoT devices according to the above method examples. For example, each function can be assigned to a separate functional unit, or two or more functions can be integrated into one unit.
- the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units. It should be noted that the division of units in this application is schematic, and is only a logical function division, and other division methods may be used in actual implementation.
- FIG. 19 shows a schematic structural diagram of an electronic device for managing IoT devices provided by the present application.
- the electronic device 1900 may be used to implement the methods described in the above method embodiments.
- the electronic device 1900 includes one or more processors 1901, and the one or more processors 1901 can support the electronic device 1900 to implement the methods in the method embodiments.
- the processor 1901 may be a general-purpose processor or a special-purpose processor, for example, the processor 1901 may be a central processing unit (CPU).
- the CPU may be used to control the electronic device 1900 and execute software programs to implement the function of managing IoT devices.
- the processor 1901 can also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, for example, discrete gates, transistor logic devices, or discrete hardware components. This application does not limit the specific type of the processor.
- the electronic device 1900 may further include a communication module 1905 and an input module 1906, wherein the communication module 1905 is used to implement the input (reception) and/or output (transmission) of signals exchanged with the IoT devices, and the input module 1906 is used to implement the user input function.
- the communication module 1905 may be a transceiver or a communication interface of the electronic device 1900; the electronic device 1900 transmits or receives wireless signals through the transceiver, or transmits or receives wired signals through the communication interface, and the wireless or wired signals can be used to control the IoT devices;
- the input module 1906 can be a touch screen or a camera of the electronic device 1900, and the electronic device 1900 can obtain a trigger signal input by a user through the touch screen or the camera.
- the electronic device 1900 may include one or more memories 1902 on which programs 1904 are stored.
- the programs 1904 can be executed by the processor 1901 to generate instructions 1903, so that the processor 1901 executes the methods described in the above method embodiments according to the instructions 1903.
- the input module 1906 is used to: obtain the first trigger signal
- the processor 1901 is configured to: display a virtual device interface according to the first trigger signal, where the virtual device interface includes virtual device information of at least two Internet of Things IoT devices;
- the input module 1906 is further configured to: acquire an operation signal, where the operation signal is a signal triggered by the user on the virtual device interface to control the at least two IoT devices to interact;
- the processor 1901 is further configured to: execute a processing method corresponding to the operation signal.
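The claimed flow, acquiring an operation signal and then executing the corresponding processing method, could be organized as a dispatch table like the following hedged sketch. The signal kinds, handler names, and return strings are assumptions introduced for illustration, not the patent's actual interfaces.

```python
def migrate_port(src, dst, port):
    # e.g. dragging a logical port icon onto another device's virtual device.
    return f"migrated {port} from {src} to {dst}"

def migrate_app(src, dst):
    # e.g. dragging one virtual device icon onto another while an app runs.
    return f"migrated running app from {src} to {dst}"

def establish_call(a, b):
    # e.g. the same drag when no target app is running on the source device.
    return f"established video call between {a} and {b}"

HANDLERS = {
    "drag_port_to_device": lambda s: migrate_port(s["src"], s["dst"], s["port"]),
    "drag_device_to_device": lambda s: migrate_app(s["src"], s["dst"]),
    "drag_idle_device_to_device": lambda s: establish_call(s["src"], s["dst"]),
}

def execute(signal):
    """Execute the processing method corresponding to the operation signal."""
    return HANDLERS[signal["kind"]](signal)


result = execute({"kind": "drag_port_to_device",
                  "src": "smart_tv_101", "dst": "mobile_phone_102",
                  "port": "microphone"})
```

A table-driven dispatch like this keeps each operation signal decoupled from the others, so new interactions can be added without touching existing handlers.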
- data (such as virtual device information of IoT devices) may also be stored in the memory 1902 .
- the processor 1901 may also read data stored in the memory 1902 , the data may be stored at the same storage address as the program 1904 , or the data may be stored at a different storage address from the program 1904 .
- the processor 1901 and the memory 1902 can be provided separately, or can be integrated together, for example, integrated on a system on chip (system on chip, SOC).
- the present application further provides a computer program product, which implements the method described in any method embodiment in the present application when the computer program product is executed by the processor 1901 .
- the computer program product can be stored in the memory 1902 , such as a program 1904 , and the program 1904 is finally converted into an executable object file that can be executed by the processor 1901 after processing such as preprocessing, compilation, assembly, and linking.
- the present application also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a computer, implements the method described in any method embodiment of the present application.
- the computer program can be a high-level language program or an executable object program.
- the computer-readable storage medium is, for example, the memory 1902 .
- the memory 1902 may be volatile memory or non-volatile memory, or the memory 1902 may include both volatile memory and non-volatile memory.
- the non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM) or flash memory.
- Volatile memory may be random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM).
- the disclosed systems, devices and methods may be implemented in other manners.
- some features of the method embodiments described above may be omitted, or not implemented.
- the apparatus embodiments described above are only illustrative, and the division of units is only a logical function division. In actual implementation, there may be other division methods, and multiple units or components may be combined or integrated into another system.
- the coupling between the various units or the coupling between the various components may be direct coupling or indirect coupling, and the above-mentioned coupling includes electrical, mechanical or other forms of connection.
- the sequence numbers of the processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Computer Hardware Design (AREA)
- Computer Graphics (AREA)
- User Interface Of Digital Computer (AREA)
- Telephone Function (AREA)
Abstract
An embodiment of the present application provides a method and apparatus for managing Internet of Things (IoT) devices. The method includes: acquiring a first trigger signal; displaying a virtual device interface according to the first trigger signal, where the virtual device interface includes virtual device information of at least two IoT devices; acquiring an operation signal, where the operation signal is a signal triggered by a user on the virtual device interface to control the at least two IoT devices to interact; and executing a processing method corresponding to the operation signal. Since the virtual device information of the at least two IoT devices is displayed in the same interface, the user can perform operations on the virtual device interface to control the at least two IoT devices to interact, thereby realizing seamless service switching between IoT devices.
Description
This application claims priority to Chinese patent application No. 202010846926.8, entitled "Method and apparatus for managing Internet of Things devices", filed with the State Intellectual Property Office on August 18, 2020, the entire contents of which are incorporated herein by reference.
The embodiments of the present application relate to the field of electronic technology, and in particular to a method and apparatus for managing Internet of Things devices.
The Internet of Things (IoT), i.e. the internet of everything, is a network formed by combining various sensors with the internet, enabling interconnection among people, machines and things.
One method of managing IoT devices is to add an IoT device management list on a mobile phone and manage the IoT devices through the list. For example, the user opens an application (APP) for managing IoT devices, clicks the icon of a target IoT device among the IoT device icons displayed in the APP, and then enters the management interface of the target IoT device to select the corresponding function, thereby completing the management of the IoT device.
The prior art can only manage IoT devices separately and cannot control multiple devices simultaneously in a single operation. Therefore, managing multiple IoT devices at the same time is a problem that needs to be solved.
Summary of the Invention
An embodiment of the present application provides a method for managing IoT devices, which can seamlessly switch services between IoT devices.
In a first aspect, a method for managing IoT devices is provided, including: acquiring a first trigger signal; displaying a virtual device interface according to the first trigger signal, where the virtual device interface includes virtual device information of at least two IoT devices; acquiring an operation signal, where the operation signal is a signal triggered by a user on the virtual device interface to control the at least two IoT devices to interact; and executing a processing method corresponding to the operation signal.
The apparatus performing the above method may be one of the at least two IoT devices, or an apparatus different from the at least two IoT devices. The first trigger signal may be an electrical signal generated by a finger sliding on a touch screen, a body action captured by a camera of the performing apparatus (such as a two-finger pinch action), or an infrared signal generated by a control apparatus such as a remote control; the present application does not limit the specific form of the first trigger signal. The virtual device interface may be an interface displayed on the screen of the performing apparatus, or an interface displayed by the performing apparatus through augmented reality (AR) or virtual reality (VR) technology; the virtual device information may be in the form of images or text; the present application does not limit the specific forms of the virtual device interface and the virtual device information. Since the virtual device information of the at least two IoT devices is displayed in the same interface, the user can perform operations on the virtual device interface to control the at least two IoT devices to interact. For example, the user can trigger the performing device to generate an operation signal by dragging or clicking, and the performing device can, based on the processing method corresponding to the operation signal, control the at least two IoT devices to interact through device sharing, function migration, and the like. Based on the above method, the user does not need to separately open the management interfaces of different IoT devices to control them to interact, thereby realizing seamless service switching between IoT devices.
Optionally, the virtual device information of the at least two IoT devices includes: virtual device icons and logical port icons of the at least two IoT devices.
Virtual device information in icon form is more intuitive than virtual device information in text form, which can enhance the user experience.
Optionally, the at least two IoT devices include a first IoT device and a second IoT device, and the operation signal includes: the user dragging the logical port icon of the first IoT device to the virtual device icon of the second IoT device; the executing a processing method corresponding to the operation signal includes: migrating the function corresponding to the logical port icon of the first IoT device to the second IoT device, where the second IoT device has the function corresponding to the logical port icon of the first IoT device.
The user can drag the logical port icon on the display screen of the first IoT device to generate the operation signal, or drag the logical port icon in a VR or AR interface to generate the operation signal; the present application does not limit the specific manner of generating the operation signal by dragging the logical port icon. The logical port icon in this embodiment is, for example, a microphone icon, whose corresponding function is sound pickup. The first IoT device can migrate the sound pickup function to the second IoT device and use the microphone of the second IoT device to transmit the user's voice; when the user is far from the first IoT device but close to the second IoT device, the sound pickup effect can be improved. Therefore, this embodiment enables the user to obtain a better experience in specific scenarios with a simple operation (dragging the icon of a logical port).
可选地,所述至少两个IoT设备包括第一IoT设备和第二IoT设备,所述操作信号包括:所述用户拖动所述第一IoT设备的虚拟设备图标至所述第二IoT设备的虚拟设备图标;所述执行与所述操作信号对应的处理方法,包括:将所述第一IoT设备的目标应用的功能迁移至所述第二IoT设备,其中,目标应用为所述第一IoT设备正在运行的应用,并且,所述第二IoT设备安装有所述目标应用。
目标应用例如是视频聊天APP,当视频聊天APP正在第一IoT设备上运行时,用户可以通过将第一IoT设备的虚拟设备图标拖动至第二IoT设备的虚拟设备图标,将视频聊天APP的功能无缝迁移至第二IoT设备,其中,第一IoT设备例如是智能电视,第二IoT设备例如是手机,用户可以利用手机的移动性实现更方便的视频聊天。因此,本实施例可以在特定场景中以简单的操作(拖动虚拟设备的图标)使用户获得更好的体验。
可选地,所述至少两个IoT设备包括第一IoT设备和第二IoT设备,所述操作信号包括:所述用户拖动所述第一IoT设备的虚拟设备图标至所述第二IoT设备的虚拟设备图标;所述执行与所述操作信号对应的处理方法,包括:建立所述第一IoT设备的目标应用与所述第二IoT设备的目标应用之间的通信连接,其中,所述第一IoT设备在获取所述操作信号前未运行所述目标应用。
当第一IoT设备未运行目标应用时,用户的拖动操作可能是想在第一IoT设备的目标应用和第二IoT设备的目标应用之间建立通信连接。目标应用可以是预设的APP,也可以是用户实时选择的APP。当目标应用为视频聊天APP时,用户无需打开视频聊天APP即可在第一IoT设备与第二IoT设备之间实现视频聊天。因此,本实施例可以在特定场景中以简单的操作(拖动虚拟设备的图标)使用户获得更好的体验。
可选地,所述至少两个IoT设备包括第一IoT设备和第二IoT设备,所述操作信号包括:所述用户通过双指拖动所述第一IoT设备的逻辑端口图标和所述第二IoT设备的逻辑端口图标进行合并;所述执行与所述操作信号对应的处理方法,包括:共享所述第一IoT设备的逻辑端口图标的功能和所述第二IoT设备的逻辑端口图标的功能。
用户可以通过拖动两个逻辑设备的端口图标实现共享逻辑端口的功能。例如,当用户A正在使用手机与用户C进行视频通话时,用户B希望通过智能电视加入该视频通话,则用户A可以拖动手机的麦克风图标和智能电视的麦克风图标使得用户B加入该视频通话。本实施例可以在特定场景中以简单的操作(拖动虚拟设备的逻辑端口的图标)使用户获得更好的体验。
可选地,所述至少两个IoT设备包括第一IoT设备和第二IoT设备,所述操作信号包括:所述用户点击所述第二IoT设备的虚拟设备图标;所述执行与所述操作信号对应的处理方法,包括:建立所述第一IoT设备与所述第二IoT设备的控制事件映射关系,其中,所述第一IoT设备为预设的控制设备,所述第二IoT设备为被控制设备。
第一IoT设备例如是智能电视,第二IoT设备例如是手机,用户可以利用手机实现对智能电视的控制,例如,可以利用手机键盘在智能电视的浏览器中输入网址,相比于通过遥控器控制智能电视,本实施例可以在特定的场景中使用户获得更好的体验。
可选地,所述获取第一触发信号,包括:通过触控屏幕获取所述第一触发信号,所述第一触发信号为所述用户在所述触控屏幕上执行预设动作生成的触发信号。
可选地,所述获取第一触发信号,包括:通过摄像头获取所述第一触发信号,所述第一触发信号为所述用户在空中执行预设动作生成的触发信号。
可选地,还包括:退出所述虚拟设备界面。
可选地,所述退出所述虚拟设备界面,包括:获取第二触发信号;根据所述第二触发信号退出所述虚拟设备界面。
第二方面,提供了一种管理IoT设备的装置,包括由软件和/或硬件组成的单元,该单元用于执行第一方面所述的技术方案中任意一种方法。
第三方面,提供了一种电子设备,包括处理器和存储器,该存储器用于存储计算机程序,该处理器用于从存储器中调用并运行该计算机程序,使得该电子设备执行第一方面所述的技术方案中任意一种方法。
第四方面,提供了一种计算机可读介质,所述计算机可读介质存储有程序代码,当所述计算机程序代码在电子设备上运行时,使得该电子设备执行第一方面所述的技术方案中任意一种方法。
第五方面,提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码在电子设备上运行时,使得该电子设备执行第一方面所述的技术方案中任意一种方法。
图1是一种适用于本申请实施例的IoT系统的示意图;
图2是本申请实施例提供的一种IoT设备的硬件系统示意图;
图3是本申请实施例提供的一种IoT设备的软件系统示意图;
图4是本申请实施例提供的几种IoT设备的逻辑设备的拓扑结构示意图;
图5是本申请实施例提供的一种通过智能电视进入逻辑设备显示界面的方法;
图6是本申请实施例提供的一种通过手机进入逻辑设备显示界面的方法;
图7是本申请实施例提供的另一种通过手机进入逻辑设备显示界面的方法;
图8是本申请实施例提供的一种逻辑设备显示界面的示意图;
图9是本申请实施例提供的一种设置视频通话的方法的示意图;
图10是本申请实施例提供的一种设置共享蓝牙耳机的方法的示意图;
图11是本申请实施例提供的另一种设置视频通话的方法的示意图;
图12是本申请实施例提供的另一种设置多方视频通话的方法的示意图;
图13是本申请实施例提供的一种设置摄像头的方法的示意图;
图14是本申请实施例提供的一种迁移APP功能的方法的示意图;
图15是本申请实施例提供的另一种迁移APP功能的方法的示意图;
图16是本申请实施例提供的一种建立视频通话的方法的示意图;
图17是本申请实施例提供的另一种建立视频通话的方法的示意图;
图18是本申请实施例提供的一种通过手机控制智能电视的方法的示意图;
图19是本申请实施例提供的一种管理IoT设备的电子设备的示意图。
下面将结合附图,对本申请实施例中的技术方案进行描述。
图1是一种适用于本申请实施例的IoT系统100的示意图,IoT系统100包括智能电视101、手机102、智能音箱103和路由器104,这些设备可以称为IoT设备。
用户可以通过手机102向智能电视101发送指令,该指令经由路由器104转发传输至智能电视101,智能电视101根据该指令执行相应的操作,如打开摄像头、屏幕、麦克风和扬声器。
用户也可以通过手机102向智能音箱103发送指令,该指令通过手机102与智能音箱103之间的蓝牙连接传输至智能音箱103,智能音箱103根据该指令执行相应的操作,如打开麦克风和扬声器。
IoT系统100是适用于本申请的IoT系统的一个示例而非全部。例如,适用于本申请实施例的IoT系统中,IoT设备之间还可以通过有线连接方式进行通信;用户可以通过AR设备或VR设备控制智能电视101和智能音箱103。
下面以图2为例介绍本申请实施例提供的IoT设备的硬件结构。
IoT设备可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
需要说明的是,图2所示的结构并不构成对IoT设备的具体限定。在本申请另一些实施例中,IoT设备可以包括比图2所示的部件更多或更少的部件,或者,IoT设备可以包括图2所示的部件中某些部件的组合,或者,IoT设备可以包括图2所示的部件中某些部件的子部件。图2所示的部件可以以硬件、软件、或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元。例如,处理器110可以包括以下处理单元中的至少一个:应用处理器(application processor,AP)、调制解调处理器、图形处理器(graphics processing unit,GPU)、图像信号处理器(image signal processor,ISP)、控制器、视频编解码器、数字信号处理器(digital signal processor,DSP)、基带处理器、神经网络处理器(neural-network processing unit,NPU)。其中,不同的处理单元可以是独立的器件,也可以是集成的器件。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。例如,处理器110可以包括以下接口中的至少一个:内部集成电路(inter-integrated circuit,I2C)接口、内部集成电路音频(inter-integrated circuit sound,I2S)接口、脉冲编码调制(pulse code modulation,PCM)接口、通用异步接收传输器(universal asynchronous receiver/transmitter,UART)接口、移动产业处理器接口(mobile industry processor interface,MIPI)、通用输入输出(general-purpose input/output,GPIO)接口、SIM接口、USB接口。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(serial clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K、充电器、闪光灯、摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现IoT设备的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与显示屏194和摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI)、显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现IoT设备的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现IoT设备的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号接口,也可被配置为数据信号接口。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194、无线通信模块160、音频模块170和传感器模块180。GPIO接口还可以被配置为I2C接口、I2S接口、UART接口或MIPI接口。
USB接口130是符合USB标准规范的接口,例如可以是迷你(Mini)USB接口、微型(Micro)USB接口或C型USB(USB Type C)接口。USB接口130可以用于连接充电器为IoT设备充电,也可以用于IoT设备与外围设备之间传输数据,还可以用于连接耳机以通过耳机播放音频。USB接口130还可以用于连接其他电子设备,例如AR设备。
图2所示的各模块间的连接关系只是示意性说明,并不构成对IoT设备的各模块间的连接关系的限定。可选地,IoT设备的各模块也可以采用上述实施例中多种连接方式的组合。
充电管理模块140用于从充电器接收电力。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的电流。在一些无线充电的实施例中,充电管理模块140可以通过IoT设备的无线充电线圈接收电磁波(电流路径如虚线所示)。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量、电池循环次数和电池健康状态(例如,漏电、阻抗)等参数。可选地,电源管理模块141可以设置于处理器110中,或者,电源管理模块141和充电管理模块140可以设置于同一个器件中。
IoT设备的无线通信功能可以通过天线1、天线2、移动通信模块150、无线通信模块160、调制解调处理器以及基带处理器等器件实现。
天线1和天线2用于发射和接收电磁波信号。IoT设备中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在IoT设备上的无线通信的解决方案,例如下列方案中的至少一个:第二代(2nd generation,2G)移动通信解决方案、第三代(3rd generation,3G)移动通信解决方案、第四代(4th generation,4G)移动通信解决方案、第五代(5th generation,5G)移动通信解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波和放大等处理,随后传送至调制解调处理器进行解调。移动通信模块150还可以放大经调制解调处理器调制后的信号,放大后的该信号经天线1转变为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(例如,扬声器170A、受话器170B)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
与移动通信模块150类似,无线通信模块160也可以提供应用在IoT设备上的无线通信解决方案,例如下列方案中的至少一个:无线局域网(wireless local area networks,WLAN)、蓝牙(bluetooth,BT)、全球导航卫星系统(global navigation satellite system,GNSS)、调频(frequency modulation,FM)、近场通信(near field communication,NFC)、红外(infrared,IR)。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,并将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频和放大,该信号经天线2转变为电磁波辐射出去。
在一些实施例中,IoT设备的天线1和移动通信模块150耦合,IoT设备的天线2和无线通信模块160耦合。
IoT设备可以通过GPU、显示屏194以及应用处理器实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194可以用于显示图像或视频。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD)、有机发光二极管(organic light-emitting diode,OLED)、有源矩阵有机发光二极体(active-matrix organic light-emitting diode,AMOLED)、柔性发光二极管(flex light-emitting diode,FLED)、迷你发光二极管(mini light-emitting diode,Mini LED)、微型发光二极管(micro light-emitting diode,Micro LED)、微型OLED(Micro OLED)或量子点发光二极管(quantum dot light emitting diodes,QLED)。在一些实施例中,IoT设备可以包括1个或N个显示屏194,N为大于1的正整数。
IoT设备可以通过ISP、摄像头193、视频编解码器、GPU、显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP可以对图像的噪点、亮度和色彩进行算法优化,ISP还可以优化拍摄场景的曝光和色温等参数。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的红绿蓝(red green blue,RGB),YUV等格式的图像信号。在一些实施例中,IoT设备可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当IoT设备在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。IoT设备可以支持一种或多种视频编解码器。这样,IoT设备可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1、MPEG2、MPEG3和MPEG4。
NPU是一种借鉴生物神经网络结构的处理器,例如借鉴人脑神经元之间传递模式对输入信息快速处理,还可以不断地自学习。通过NPU可以实现IoT设备的智能认知等功能,例如:图像识别、人脸识别、语音识别和文本理解。
外部存储器接口120可以用于连接外部存储卡,例如安全数码(secure digital,SD)卡,实现扩展IoT设备的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能(例如,声音播放功能和图像播放功能)所需的应用程序。存储数据区可存储IoT设备使用过程中所创建的数据(例如,音频数据和电话本)。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如:至少一个磁盘存储器件、闪存器件和通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器121的指令和/或存储在设置于处理器中的存储器的指令,执行IoT设备的各种功能应用以及数据处理。
IoT设备可以通过音频模块170、扬声器170A、受话器170B、麦克风170C、耳机接口170D以及应用处理器等实现音频功能,例如,音乐播放和录音。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也可以用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170或者音频模块170的部分功能模块可以设置于处理器110中。
扬声器170A,也称为喇叭,用于将音频电信号转换为声音信号。IoT设备可以通过扬声器170A收听音乐或免提通话。
受话器170B,也称为听筒,用于将音频电信号转换成声音信号。当用户使用IoT设备接听电话或语音信息时,可以通过将受话器170B靠近耳朵接听语音。
麦克风170C,也称为话筒或传声器,用于将声音信号转换为电信号。当用户拨打电话或发送语音信息时,可以通过靠近麦克风170C发声将声音信号输入麦克风170C。IoT设备可以设置至少一个麦克风170C。在另一些实施例中,IoT设备可以设置两个麦克风170C,以实现降噪功能。在另一些实施例中,IoT设备还可以设置三个、四个或更多麦克风170C,以实现识别声音来源和定向录音等功能。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,例如可以是电阻式压力传感器、电感式压力传感器或电容式压力传感器。电容式压力传感器可以是包括至少两个具有导电材料的平行板,当力作用于压力传感器180A,电极之间的电容改变,IoT设备根据电容的变化确定压力的强度。当触摸操作作用于显示屏194时,IoT设备根据压力传感器180A检测所述触摸操作。IoT设备也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令;当触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定IoT设备的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定IoT设备围绕三个轴(即,x轴、y轴和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。例如,当快门被按下时,陀螺仪传感器180B检测IoT设备抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消IoT设备的抖动,实现防抖。陀螺仪传感器180B还可以用于导航和体感游戏等场景。
气压传感器180C用于测量气压。在一些实施例中,IoT设备通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。IoT设备可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当IoT设备是翻盖机时,IoT设备可以根据磁传感器180D检测翻盖的开合。IoT设备可以根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器180E可检测IoT设备在各个方向上(一般为x轴、y轴和z轴)加速度的大小。当IoT设备静止时可检测出重力的大小及方向。加速度传感器180E还可以用于识别IoT设备的姿态,作为横竖屏切换和计步器等应用的输入参数。
距离传感器180F用于测量距离。IoT设备可以通过红外或激光测量距离。在一些实施例中,例如在拍摄场景中,IoT设备可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(light-emitting diode,LED)和光检测器,例如,光电二极管。LED可以是红外LED。IoT设备通过LED向外发射红外光。IoT设备使用光电二极管检测来自附近物体的红外反射光。当检测到反射光时,IoT设备可以确定附近存在物体。当检测不到反射光时,IoT设备可以确定附近没有物体。IoT设备可以利用接近光传感器180G检测用户是否手持IoT设备贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式或口袋模式的自动解锁与自动锁屏。
环境光传感器180L用于感知环境光亮度。IoT设备可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测IoT设备是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。IoT设备可以利用采集的指纹特性实现解锁、访问应用锁、拍照和接听来电等功能。
温度传感器180J用于检测温度。在一些实施例中,IoT设备利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,IoT设备执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,IoT设备对电池142加热,以避免低温导致IoT设备异常关机。在其他一些实施例中,当温度低于又一阈值时,IoT设备对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称为触控器件。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,触摸屏也称为触控屏。触摸传感器180K用于检测作用于其上或其附近的触摸操作。触摸传感器180K可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于IoT设备的表面,并且与显示屏194设置于不同的位置。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键190包括开机键和音量键。按键190可以是机械按键,也可以是触摸式按键。IoT设备可以接收按键输入信号,实现与按键输入信号相关的功能。
马达191可以产生振动。马达191可以用于来电提示,也可以用于触摸反馈。马达191可以对作用于不同应用的触摸操作产生不同的振动反馈效果。对于作用于显示屏194的不同区域的触摸操作,马达191也可产生不同的振动反馈效果。不同的应用场景(例如,时间提醒、接收信息、闹钟和游戏)可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态和电量变化,也可以用于指示消息、未接来电和通知。
SIM卡接口195用于连接SIM卡。SIM卡可以插入SIM卡接口195实现与IoT设备的接触,也可以从SIM卡接口195拔出实现与IoT设备的分离。IoT设备可以支持1个或N个SIM卡接口,N为大于1的正整数。同一个SIM卡接口195可以同时插入多张卡,所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容外部存储卡。IoT设备通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,IoT设备采用嵌入式SIM(embedded-SIM,eSIM)卡,eSIM卡可以嵌在IoT设备中,不能和IoT设备分离。
上文详细描述了IoT设备的硬件系统,下面介绍本申请实施例提供的IoT设备的软件系统。IoT设备的软件系统可以采用分层架构、事件驱动架构、微核架构、微服务架构或云架构,本申请实施例以分层架构为例,示例性地描述IoT设备的软件系统。
如图3所示,分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将软件系统分为三层,从上至下分别为应用层、操作系统层和逻辑设备层。
应用层可以包括相机、图库、日历、通话、地图、导航、WLAN、蓝牙、音乐、视频、短信息等应用程序。在一些设备能力较弱的IoT设备中,应用层还可以以软件开发工具包(software development kit,SDK)的形式存在。
操作系统层为应用层的APP提供应用编程接口(application programming interface,API)和后台服务,后台服务例如是一些预先定义的函数。
当用户在触摸传感器180K上进行触摸操作时,相应的硬件中断被发送至操作系统层,操作系统层将触摸操作加工成原始输入事件,原始输入事件例如包括触摸坐标和触摸操作的时间戳等信息;随后,操作系统层识别出原始输入事件对应的控件,并通知该控件对应的APP。例如,上述触摸操作为单击操作,上述控件对应的APP为相机APP,则相机APP可以通过API调用后台服务,将控制指令传输至逻辑端口管理模块,通过逻辑端口管理模块控制摄像头193进行拍摄。
逻辑设备层包括逻辑设备端口管理模块、逻辑设备管理模块和逻辑设备用户界面(user interface,UI)模块三个主要模块。
逻辑端口管理模块用于管理各逻辑端口的路由,并实现逻辑端口的功能共享和功能引用,可以通过网络连接来引用远端IoT设备的端口。例如,手机102使用智能电视101(远端IoT设备)的摄像头时,智能电视101将摄像头功能的状态设置为可共享,手机102的逻辑端口管理模块通过网络连接引用智能电视101的摄像头功能;随后,手机102上的APP就可以使用智能电视101的摄像头进行视频聊天等操作。
逻辑设备管理模块的功能包括IoT设备的添加、删除和权限管理。
逻辑设备UI模块用于将逻辑设备列表以可视化的形式展示给用户,以便于用户管理IoT设备。
例如,当用户设置手机102使用本地的麦克风1时,逻辑设备UI模块获取的用户操作信息传输至逻辑设备管理模块,逻辑设备管理模块可以基于该用户操作信息激活函数/dev/mic1,将麦克风1添加至逻辑设备列表,逻辑端口管理模块即可利用麦克风1的端口进行拾音;当用户设置手机102使用智能电视101的麦克风2时,逻辑设备管理模块可以激活函数/dev/mic2,将麦克风2添加至逻辑设备列表,逻辑端口管理模块即可利用麦克风2的端口进行拾音。
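上述逻辑设备列表的维护过程可以用如下Python草图说明(假设性示意,LogicalDeviceManager等命名为本文虚构,/dev/micN仅沿用上文示例,并非实际实现):

```python
# 假设性示意:逻辑设备管理模块维护逻辑设备列表并激活对应端口
class LogicalDeviceManager:
    def __init__(self) -> None:
        # 逻辑端口名 -> 设备节点(本地或远端IoT设备的端口)
        self.ports: dict[str, str] = {}

    def activate(self, port: str, dev_node: str) -> None:
        # 相当于激活函数 /dev/micN,并将该端口加入逻辑设备列表
        self.ports[port] = dev_node

    def route(self, port: str) -> str:
        # 逻辑端口管理模块按路由取用端口进行拾音等操作
        return self.ports[port]


mgr = LogicalDeviceManager()
mgr.activate("mic1", "/dev/mic1")  # 手机102本地的麦克风1
mgr.activate("mic2", "/dev/mic2")  # 智能电视101的麦克风2(远端引用)
```

本地端口与远端端口在逻辑设备列表中以相同方式登记,这样上层APP不必区分端口位于哪个IoT设备。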
为了便于用户控制IoT设备,可以将IoT设备虚拟为逻辑设备。智能电视101、手机102和智能音箱103的逻辑设备的拓扑结构如图4所示。
智能电视101和手机102包含的模块中具有用户交互功能的模块通常是麦克风、扬声器、摄像头和屏幕,因此,智能电视101和手机102的逻辑设备可以包括上述模块对应的逻辑端口。
智能音箱103包含的模块中具有用户交互功能的模块通常是麦克风和扬声器,因此,智能音箱103的逻辑设备包括的逻辑端口可以是麦克风和扬声器。
图4所示的拓扑结构可以由手机102生成。
手机102可以向智能电视101和智能音箱103发送指示信息,指示智能电视101和智能音箱103上报各自的能力信息,该能力信息指示各个IoT设备所支持的功能。例如,智能电视101上报的能力信息指示智能电视101支持的功能包括麦克风、扬声器、摄像头和屏幕,智能音箱103上报的能力信息指示智能音箱支持的功能包括麦克风和扬声器。
手机102也可以向服务器发送查询请求,根据设备品牌和/或设备型号,从服务器获取智能电视101和智能音箱103的能力信息。
此外,当手机102与智能电视101和智能音箱103登录相同的管理账号时,手机102可以同步智能电视101和智能音箱103的能力信息。例如,智能电视101和智能音箱103可以周期性地向手机102发送能力信息,或者,手机102周期性地查询智能电视101和智能音箱103的能力,或者,智能电视101和智能音箱103在自己支持的功能发生变化时向手机102发送能力信息。
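能力信息的上报与同步可以示意如下(假设性草图,消息结构与函数名为本文虚构,并非实际协议):

```python
# 假设性示意:手机端维护各IoT设备上报的能力信息
capabilities: dict[str, set[str]] = {}


def report_capability(device_id: str, caps: set[str]) -> None:
    # 设备周期性上报,或在自身支持的功能发生变化时上报;
    # 手机端以最新记录覆盖旧记录,完成能力信息的同步
    capabilities[device_id] = set(caps)


# 智能电视101上报:麦克风、扬声器、摄像头、屏幕
report_capability("tv101", {"mic", "speaker", "camera", "screen"})
# 智能音箱103上报:麦克风、扬声器
report_capability("speaker103", {"mic", "speaker"})
```

周期上报、手机轮询、变化触发三种同步方式都可以落到同一条覆盖旧记录的路径上。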
在一些可能的实施方案中,设备间还需要同步设备状态(例如逻辑设备电量、逻辑设备是否熄屏、逻辑端口是否被占用等)。该同步的方式可选地参考能力信息的同步,本申请对此不做限定。
用户可以通过智能电视101或者智能音箱103进入逻辑设备显示界面,不管从哪个IoT设备进入逻辑设备显示界面,用户均可以看到各个IoT设备的状态,并且可以以相同的方式管理各个IoT设备。
下面,以智能电视101或者手机102为例介绍本申请提供的进入逻辑设备显示界面的方法。
图5示出了一种通过智能电视101进入逻辑设备显示界面的方法。
用户可以在智能电视101处于任意显示界面时做出双指收拢动作,该动作用于触发智能电视101进入逻辑设备显示界面。智能电视101可以通过摄像头捕捉双指收拢动作,也可以通过具有触控功能的屏幕捕捉双指收拢动作;即,用户可以在空中做出双指收拢动作,通过摄像头触发智能电视101进入逻辑设备显示界面,用户也可以在具有触控功能的屏幕上做出双指收拢动作,通过该屏幕触发智能电视101进入逻辑设备显示界面。
智能电视101的处理器检测到双指收拢动作后,可以缩小当前显示界面,将当前显示界面的小尺寸画面作为智能电视101的逻辑设备显示在屏幕上。
用户还可以通过声音或者遥控器或者其它动作触发智能电视101进入逻辑设备显示界面,本申请对触发智能电视101进入逻辑设备显示界面的具体方式不做限定。
图6示出了一种通过手机102进入逻辑设备显示界面的方法。
用户可以在手机102处于任意显示界面时单击或双击悬浮按键,单击或双击悬浮按键用于触发手机102进入逻辑设备显示界面,其中,悬浮按键可以设置为半透明状态,并且可以被拖动至手机102的屏幕的任意位置。
图7示出了另一种通过手机102进入逻辑设备显示界面的方法。
用户可以在手机102处于任意显示界面时长按屏幕进入逻辑设备显示界面,手指按压的位置可以是屏幕的任意位置。
智能电视101和手机102的虚拟设备的显示界面如图8所示。智能电视101和手机102的虚拟设备的下方均显示各自的逻辑端口,这些逻辑端口可以以2D图标的形式显示在屏幕上。智能电视101的虚拟设备下方显示的四个2D图标从左至右分别为麦克风、扬声器、摄像头和屏幕,手机102的虚拟设备下方显示的四个2D图标从左至右分别为麦克风、扬声器、摄像头和屏幕。
逻辑端口也可以以3D模型的形式显示在屏幕上,若用户当前正在使用AR设备,3D模型的逻辑端口还可以通过AR设备展示给用户。
在一些可能的实施方案中,本申请实施例中的音频和视频分别进行管理。例如,麦克风和扬声器主要用于音频的采集和播放,在进行跨设备传输数据时,可传输原始音频数据。又例如,摄像头和显示屏主要用于视频的采集和播放,在进行跨设备传输数据时,可通过视频的编解码实现视频数据的传输。
在一些可能的实施方案中,本申请实施例中的音频和视频需要同时传输,此时可选地使用封装后的投屏协议、音视频传输协议等。
智能电视101可以将各个IoT设备的实时状态同步显示在对应的虚拟设备上。如图8所示,用户正在使用智能电视101进行视频通话,则当前的视频通话内容可以显示在智能电视101的虚拟设备上;当手机102处于锁屏状态时,锁屏画面可以显示在手机102的虚拟设备上。
当用户需要退出逻辑设备显示界面时,可以单击逻辑设备显示界面的空白处退出逻辑设备显示界面,也可以单击虚拟设备退出逻辑设备显示界面,还可以点击虚拟返回键或者实体返回键退出逻辑设备显示界面,本申请对退出逻辑设备显示界面的具体方式不做限定。
上文详细说明了进入和退出逻辑设备显示界面的方法,下面,将介绍逻辑设备显示界面的操作方法。
视频通话是一种常见的应用场景,当用户使用智能电视101或者手机102进行视频通话时,用户能够通过屏幕看到对方的画面,还可以通过扬声器听到对方的声音,用户的声音和影像则可以通过摄像头和麦克风传输至对方。
智能电视101和手机102在视频通话中有不同的优点,例如,智能电视101的屏幕较大,摄像头视角较广,手机102具有灵活移动的特点。用户可以在不同的场景中以特定的方式进行视频通话,满足个性化需求。
图9示出了一种视频通话的设置方法。用户当前正在使用智能电视101进行视频通话,当用户与智能电视101的距离较远时,智能电视101的麦克风的拾音效果较差,则用户可以使用手机102的麦克风拾音。
用户可以在空中做出双指并拢动作,智能电视101的摄像头捕捉到该动作后,进入逻辑设备显示界面,显示智能电视101和手机102的虚拟设备。用户可以选中智能电视101的麦克风图标,并在空中做出拖动操作,将智能电视101的麦克风图标拖动至手机102的麦克风图标,或者,将智能电视101的麦克风图标拖动至手机102的虚拟设备图标(以下,简称为“虚拟设备”)。智能电视101检测到该拖动操作后,向手机102发送请求消息,请求使用手机102的麦克风,手机102接收到该请求消息后,启动拾音功能,获取用户的声音并将用户的声音传输至智能电视101;智能电视101从手机102获取音频数据后,可以将该音频数据与智能电视101获取的视频数据封装起来,发送至视频通话的对端。该实施例无需关闭当前的视频通话进行麦克风功能迁移设置,增强了用户的体验。
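上述"请求—应答"式的麦克风功能迁移流程可以示意如下(假设性草图,消息字段与函数名为本文虚构,并非实际协议):

```python
# 假设性示意:智能电视101请求使用手机102的麦克风
def handle_request(msg: dict) -> dict:
    # 手机102收到"use_port"请求后启动拾音功能,并回复确认
    if msg.get("type") == "use_port" and msg.get("port") == "mic":
        return {"type": "ack", "port": "mic", "status": "capturing"}
    # 其它端口的请求在本草图中一律拒绝
    return {"type": "nack"}


reply = handle_request({"type": "use_port", "port": "mic", "from": "tv101"})
```

确认应答返回后,智能电视101即可将手机102上行的音频数据与本地视频数据封装,发送至视频通话的对端。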
可选地,智能电视101在麦克风功能迁移完成后,在智能电视101的麦克风图标与手机102的麦克风图标之间添加连线,手机102也可以在屏幕上显示麦克风图标,分别提示用户智能电视101与手机102已完成麦克风功能迁移,增强用户的体验。
麦克风功能迁移完成后,用户可以点击遥控器的退出按键退出逻辑设备显示界面。
除了将智能电视101的麦克风功能迁移至手机102之外,智能电视101还可以使用与手机102连接的蓝牙耳机的麦克风和扬声器,以便于用户在双手不能持有手机102时进行视频通话。
如图10所示,用户可以在空中做出双指并拢动作,智能电视101的摄像头捕捉到该动作后,进入逻辑设备显示界面,显示智能电视101和手机102的虚拟设备。手机102与蓝牙耳机连接,手机102的虚拟设备包括蓝牙图标。用户可以选中智能电视101的麦克风图标,并在空中做出拖动操作,用户可以将智能电视101的麦克风图标和扬声器图标分别拖动至手机102的蓝牙图标,指示手机102将蓝牙耳机的麦克风和扬声器开放给智能电视101使用。用户也可以将智能电视101的麦克风图标和扬声器图标分别拖动至手机102的虚拟设备,由手机102决定是否将蓝牙耳机的麦克风和扬声器开放给智能电视101使用。
智能电视101检测到拖动麦克风图标的操作后,向手机102发送请求消息,请求使用手机102的麦克风,手机102接收到该请求消息后,启动拾音功能,获取用户的声音并将用户的声音传输至智能电视101;智能电视101从手机102获取音频数据后,可以将该音频数据与智能电视101获取的视频数据封装起来,发送至视频通话的对端。
智能电视101检测到拖动扬声器图标的操作后,再次向手机102发送请求消息,请求使用手机102的扬声器,手机102接收到该请求消息后,启动扬声器功能,播放从智能电视101获取的音频数据。
图10所示的实施例无需关闭当前的视频通话进行麦克风和扬声器功能迁移设置,增强了用户的体验。麦克风和扬声器功能迁移设置完成后,用户可以点击遥控器的退出按键退出逻辑设备显示界面。
图11示出了另一种视频通话的设置方法。用户当前正在使用手机102进行视频通话,当用户与智能电视101的距离较近时,可以使用智能电视101的屏幕观看视频通话的画面,以获取更好的视觉效果。
用户可以长按手机102的屏幕,手机102检测到该动作后,进入逻辑设备显示界面,显示智能电视101和手机102的虚拟设备。用户可以拖动手机102的屏幕图标至智能电视101的虚拟设备。手机102基于该拖动操作向智能电视101发送请求消息,请求将视频通话的画面投屏到智能电视101,智能电视101接收到该请求消息后,启动投屏功能,从手机102获取视频通话的视频数据并将视频通话的画面并显示在屏幕上,手机102继续处理该视频通话的音频数据。该实施例无需关闭当前的视频通话进行投屏设置,增强了用户的体验。
投屏设置完成后,用户可以点击逻辑设备显示界面的空白处退出逻辑设备显示界面。
图12示出了本申请提供的再一种视频通话的设置方法,该方法应用于三方视频通话场景。用户A当前正在使用手机102与用户C进行视频通话,用户B希望通过智能电视101加入该视频通话,其中,用户A和用户B处于相同的地理位置。
用户A可以长按手机102的屏幕,手机102检测到该动作后,进入逻辑设备显示界面,显示智能电视101和手机102的虚拟设备。用户A可以使用两指同时拖动智能电视101和手机102的虚拟设备。手机102检测到该拖动操作后向智能电视101发送请求消息,基于当前正在运行的视频通话APP请求智能电视101共享麦克风、摄像头和扬声器;智能电视101接收到该请求消息后,将用户B的媒体数据(如视频数据和音频数据)发送至手机102,手机102可以将用户B的媒体数据和用户A的媒体数据打包发送至用户C,并且将用户C的媒体数据和用户A的媒体数据打包发送至智能电视101,使得用户B加入用户A和用户C之间的视频通话。用户A也可以使用两指同时拖动智能电视101和手机102的摄像头图标,使得智能电视101和手机102单独共享摄像头。该实施例无需关闭当前的视频通话进行视频通话设置,增强了用户的体验。
视频通话设置完成后,用户可以点击逻辑设备显示界面的空白处退出逻辑设备显示界面。
与投屏类似,用户正在使用手机102进行视频通话时,可以使用智能电视101的摄像头以使对方能够看到视角更广的画面。使用智能电视101的摄像头的方法如图13所示。
用户可以长按手机102的屏幕,手机102检测到该动作后,进入逻辑设备显示界面,显示智能电视101和手机102的虚拟设备。用户可以拖动手机102的摄像头图标至智能电视101的虚拟设备。手机102基于该拖动操作向智能电视101发送请求消息,请求获取智能电视101的摄像头拍摄的画面,智能电视101接收到该请求消息后,启动摄像头进行拍摄,并将拍摄到的画面发送至手机102,手机102的本地视频画面(手机102的右上角显示的视频画面)与智能电视101显示的视频画面相同。该实施例无需关闭当前的视频通话进行摄像头设置,增强了用户的体验。
摄像头设置完成后,用户可以点击逻辑设备显示界面的空白处退出逻辑设备显示界面。
当智能电视101和手机102安装有相同的APP时,用户可以将该APP的状态从智能电视101迁移至手机102,或者,用户可以将该APP的状态从手机102迁移至智能电视101。
例如,手机102相比于智能电视101具有移动性强的优势,用户可以将正在进行的视频通话从智能电视101迁移至手机102以获得更好的移动性。
视频通话的迁移流程如图14所示。用户可以在智能电视101的屏幕上做出双指收拢动作,触发智能电视101进入逻辑设备显示界面;随后,用户可以点击选中智能电视101的虚拟设备,将智能电视101的虚拟设备拖动至手机102的虚拟设备;智能电视101检测到该拖动操作后,向手机102发送请求消息,请求将视频通话迁移至手机102;手机102接收到该请求消息后,执行视频通话的迁移流程;视频通话迁移完成后,手机102的虚拟设备显示视频通话界面,智能电视101的虚拟设备移除视频通话界面。该实施例无需关闭当前的视频通话进行视频通话的迁移设置,增强了用户的体验。
APP迁移完成后,用户可以点击逻辑设备显示界面的空白处退出逻辑设备显示界面。
此外,智能电视101相比于手机102具有大屏幕的优势,用户可以将正在进行的视频通话从手机102迁移至智能电视101以获得更好的视觉体验。
视频通话的迁移流程如图15所示。用户可以长按手机102的屏幕,触发手机102进入逻辑设备显示界面;随后,用户可以点击选中手机102的虚拟设备,将手机102的虚拟设备拖动至智能电视101的虚拟设备;手机102检测到该拖动操作后,向智能电视101发送请求消息,请求将视频通话迁移至智能电视101;智能电视101接收到该请求消息后,执行视频通话的迁移流程;视频通话迁移完成后,智能电视101的虚拟设备显示视频通话界面,手机102的虚拟设备移除视频通话界面。该实施例无需关闭当前的视频通话进行视频通话的迁移设置,增强了用户的体验。
APP迁移完成后,用户可以点击逻辑设备显示界面的空白处退出逻辑设备显示界面。
上文描述了视频通话过程中对逻辑设备显示界面的一些操作方法,在视频通话的准备阶段,也可以利用逻辑设备显示界面获得更好的用户体验。
图16示出了一种建立视频通话的方法。用户希望使用手机102与智能电视101建立视频通话,则可以按照如下所述的内容进行操作。
用户可以长按手机102的屏幕,手机102检测到该动作后,进入逻辑设备显示界面,显示智能电视101和手机102的虚拟设备。当手机102处于桌面显示状态时,用户可以拖动手机102的虚拟设备至智能电视101的虚拟设备。手机102基于该拖动操作向智能电视101发送请求消息,请求与智能电视101建立视频通话连接,智能电视101接收到该请求消息后可以在屏幕上显示视频通话建立请求对话框,以便于智能电视101的使用者(如用户的家人)选择接受或拒绝视频通话;智能电视101接收到该请求消息后也可以根据预先设置的信息直接建立视频通话,并将拍摄到的画面发送至手机102,以便于用户能够看到智能电视101所处的环境(如用户的家庭环境)。该实施例采用了直观的方式建立视频通话,增强了用户的体验。
视频通话建立后,用户可以点击逻辑设备显示界面的空白处退出逻辑设备显示界面。当用户需要退出视频通话时,可以再次进入逻辑设备显示界面,点击智能电视101的虚拟设备与手机102的虚拟设备之间的箭头,断开视频通话。
当用户有多个居住地时,并且该多个居住地均有智能电视时,用户可以利用一个居住地的智能电视与另外一个居住地的智能电视建立视频通话。
图17示出了另一种建立视频通话的方法。用户、智能电视101和手机102位于一个居住地,智能电视105位于另一个居住地,用户希望使用手机102在智能电视101与智能电视105之间建立视频通话,则可以按照如下所述的内容进行操作。
用户可以长按手机102的屏幕,手机102检测到该动作后,进入逻辑设备显示界面,显示智能电视101、智能电视105和手机102的虚拟设备。用户可以拖动智能电视101的虚拟设备至智能电视105的虚拟设备。手机102基于该拖动操作向智能电视101发送通知消息,通知智能电视101与智能电视105建立视频通话连接,智能电视101接收到该通知消息后向智能电视105发送视频通话建立请求,智能电视105接收到该请求消息后可以在屏幕上显示视频通话建立请求对话框,以便于智能电视105的使用者(如用户的家人)选择接受或拒绝视频通话;智能电视105接收到该请求消息后也可以根据预先设置的信息直接建立视频通话,并将拍摄到的画面发送至智能电视101,以便于用户能够看到智能电视105所处的环境。该实施例采用了直观的方式建立视频通话,增强了用户的体验。
视频通话建立后,用户可以点击手机102的虚拟设备或者逻辑设备显示界面的空白处退出逻辑设备显示界面。
上文介绍了在视频通话场景中的一些逻辑设备显示界面的使用方法,用户也可以利用逻辑设备显示界面进行其它操作,例如,一些智能电视的屏幕是非触摸屏,使用遥控器在智能电视上输入内容不方便,用户可以使用手机在智能电视上输入内容,操作方法如图18所示。
用户可以长按手机102的屏幕,手机102检测到该动作后,进入逻辑设备显示界面,显示智能电视101和手机102的虚拟设备。用户可以单击智能电视101的虚拟设备,手机102检测到该点击动作后,退出逻辑设备显示界面并在手机屏幕上显示智能电视101的画面,手机102还需要将控制事件映射给智能电视101,即,把手机102的触屏事件(TouchEvent)转换成对智能电视101的触屏事件,从而可以通过手机102对智能电视101进行点击操作或输入操作。在触屏事件转换过程中,手机102可以将手机102的触屏事件的坐标信息发送至智能电视101,智能电视101根据屏幕参数进行映射,确定该坐标信息在屏幕上的等效位置,进而生成与该等效位置对应的触屏事件。
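上述坐标映射可以按两侧屏幕分辨率等比换算,示意如下(假设性草图,map_touch为本文虚构的函数名,分辨率仅为示例取值):

```python
# 假设性示意:将手机102的触屏事件坐标映射到智能电视101的等效位置
def map_touch(x: int, y: int,
              src: tuple[int, int],
              dst: tuple[int, int]) -> tuple[int, int]:
    # 按屏幕参数做等比映射:目标坐标 = 源坐标 * 目标分辨率 / 源分辨率
    sw, sh = src
    dw, dh = dst
    return round(x * dw / sw), round(y * dh / sh)


# 示例:手机屏幕 1080x2340,智能电视屏幕 3840x2160,映射屏幕中心点
pos = map_touch(540, 1170, (1080, 2340), (3840, 2160))
```

智能电视101根据映射得到的等效位置生成自己的触屏事件,从而把手机102上的点击或输入转换成对智能电视101的操作。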
当用户需要终止手机102对智能电视101的控制时,可以再次长按手机102的屏幕进入逻辑设备显示界面,然后单击手机102的虚拟设备或者智能电视101的虚拟设备终止手机102对智能电视101的控制。
上文详细介绍了本申请提供的管理IoT设备的方法的示例。可以理解的是,相应的装置为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请可以根据上述方法示例对管理IoT设备的装置进行功能单元的划分,例如,可以将各个功能划分为各个功能单元,也可以将两个或两个以上的功能集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。需要说明的是,本申请中对单元的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
图19示出了本申请提供的一种管理IoT设备的电子设备的结构示意图。电子设备1900可用于实现上述方法实施例中描述的方法。
电子设备1900包括一个或多个处理器1901,该一个或多个处理器1901可支持电子设备1900实现方法实施例中的方法。处理器1901可以是通用处理器或者专用处理器,例如,处理器1901可以是中央处理器(central processing unit,CPU)。CPU可以用于对电子设备1900进行控制,执行软件程序,以实现管理IoT设备的功能。
处理器1901也可以是数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其它可编程逻辑器件,例如,分立门、晶体管逻辑器件或分立硬件组件。本申请对处理器的具体类型不做限定。
电子设备1900还可以包括通信模块1905和输入模块1906,其中,通信模块1905用以实现与IoT设备之间的信号的输入(接收)和/或输出(发送),输入模块1906用以实现用户输入功能。
例如,通信模块1905可以是电子设备1900的收发器或通信接口,电子设备1900通过收发器发送或接收无线信号,或者,电子设备1900通过通信接口发送或接收有线信号,该无线信号或该有线信号可以用于控制IoT设备;输入模块1906可以是电子设备1900的触控屏幕或者摄像头,电子设备1900可以通过触控屏幕或摄像头获取用户输入的触发信号。
电子设备1900中可以包括一个或多个存储器1902,其上存有程序1904,程序1904可被处理器1901运行,生成指令1903,使得处理器1901根据指令1903执行上述方法实施例描述的方法。
例如,输入模块1906用于:获取第一触发信号;
处理器1901用于:根据所述第一触发信号显示虚拟设备界面,所述虚拟设备界面包括至少两个物联网IoT设备的虚拟设备信息;
输入模块1906还用于:获取操作信号,所述操作信号为用户在所述虚拟设备界面上触发的控制所述至少两个IoT设备进行交互的信号;
处理器1901还用于:执行与所述操作信号对应的处理方法。
可选地,存储器1902中还可以存储有数据(如IoT设备的虚拟设备信息)。可选地,处理器1901还可以读取存储器1902中存储的数据,该数据可以与程序1904存储在相同的存储地址,该数据也可以与程序1904存储在不同的存储地址。
处理器1901和存储器1902可以单独设置,也可以集成在一起,例如,集成在系统级芯片(system on chip,SOC)上。
应理解,上述方法实施例的各步骤可以通过处理器1901中的硬件形式的逻辑电路或者软件形式的指令完成,电子设备1900执行管理IoT设备的方法的具体方式以及产生的有益效果可以参见方法实施例中的相关描述。
本申请还提供了一种计算机程序产品,该计算机程序产品被处理器1901执行时实现本申请中任一方法实施例所述的方法。
该计算机程序产品可以存储在存储器1902中,例如是程序1904,程序1904经过预处理、编译、汇编和链接等处理过程最终被转换为能够被处理器1901执行的可执行目标文件。
本申请还提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被计算机执行时实现本申请中任一方法实施例所述的方法。该计算机程序可以是高级语言程序,也可以是可执行目标程序。
该计算机可读存储介质例如是存储器1902。存储器1902可以是易失性存储器或非易失性存储器,或者,存储器1902可以同时包括易失性存储器和非易失性存储器。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。
本领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的装置和设备的具体工作过程以及产生的技术效果,可以参考前述方法实施例中对应的过程和技术效果,在此不再赘述。
在本申请所提供的几个实施例中,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的方法实施例的一些特征可以忽略,或不执行。以上所描述的装置实施例仅仅是示意性的,单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,多个单元或组件可以结合或者可以集成到另一个系统。另外,各单元之间的耦合或各个组件之间的耦合可以是直接耦合,也可以是间接耦合,上述耦合包括电的、机械的或其它形式的连接。
在本申请的各种实施例中,序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请的实施例的实施过程构成任何限定。
另外,本文中的术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
总之,以上所述仅为本申请技术方案的较佳实施例而已,并非用于限定本申请的保护范围。凡在本申请的原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。
Claims (21)
- 一种管理物联网设备的方法,其特征在于,包括:获取第一触发信号;根据所述第一触发信号显示虚拟设备界面,所述虚拟设备界面包括至少两个物联网IoT设备的虚拟设备信息;获取操作信号,所述操作信号为用户在所述虚拟设备界面上触发的控制所述至少两个IoT设备进行交互的信号;执行与所述操作信号对应的处理方法。
- 根据权利要求1所述的方法,其特征在于,所述至少两个IoT设备的虚拟设备信息包括:所述至少两个IoT设备的虚拟设备图标和逻辑端口图标。
- 根据权利要求2所述的方法,其特征在于,所述至少两个IoT设备包括第一IoT设备和第二IoT设备,所述操作信号包括:所述用户拖动所述第一IoT设备的逻辑端口图标至所述第二IoT设备的虚拟设备图标;所述执行与所述操作信号对应的处理方法,包括:将所述第一IoT设备的逻辑端口图标对应的功能迁移至所述第二IoT设备,其中,所述第二IoT设备具有所述第一IoT设备的逻辑端口图标对应的功能。
- 根据权利要求2所述的方法,其特征在于,所述至少两个IoT设备包括第一IoT设备和第二IoT设备,所述操作信号包括:所述用户拖动所述第一IoT设备的虚拟设备图标至所述第二IoT设备的虚拟设备图标;所述执行与所述操作信号对应的处理方法,包括:将所述第一IoT设备的目标应用的功能迁移至所述第二IoT设备,其中,所述目标应用为所述第一IoT设备正在运行的应用,并且,所述第二IoT设备安装有所述目标应用。
- 根据权利要求2所述的方法,其特征在于,所述至少两个IoT设备包括第一IoT设备和第二IoT设备,所述操作信号包括:所述用户拖动所述第一IoT设备的虚拟设备图标至所述第二IoT设备的虚拟设备图标;所述执行与所述操作信号对应的处理方法,包括:建立所述第一IoT设备的目标应用与所述第二IoT设备的所述目标应用之间的通信连接,其中,所述第一IoT设备在获取所述操作信号前未运行所述目标应用。
- 根据权利要求2所述的方法,其特征在于,所述至少两个IoT设备包括第一IoT设备和第二IoT设备,所述操作信号包括:所述用户通过双指拖动所述第一IoT设备的逻辑端口图标和所述第二IoT设备的逻辑端口图标进行合并;所述执行与所述操作信号对应的处理方法,包括:共享所述第一IoT设备的逻辑端口图标的功能和所述第二IoT设备的逻辑端口图标的功能。
- 根据权利要求2所述的方法,其特征在于,所述至少两个IoT设备包括第一IoT设备和第二IoT设备,所述操作信号包括:所述用户点击所述第二IoT设备的虚拟设备图标;所述执行与所述操作信号对应的处理方法,包括:建立所述第一IoT设备与所述第二IoT设备的控制事件映射关系,其中,所述第一IoT设备为预设的控制设备,所述第二IoT设备为被控制设备。
- 根据权利要求1至7中任一项所述的方法,其特征在于,所述获取第一触发信号,包括:通过触控屏幕获取所述第一触发信号,所述第一触发信号为所述用户在所述触控屏幕上执行预设动作生成的触发信号。
- 根据权利要求1至7中任一项所述的方法,其特征在于,所述获取第一触发信号,包括:通过摄像头获取所述第一触发信号,所述第一触发信号为所述用户在空中执行预设动作生成的触发信号。
- 根据权利要求1至9中任一项所述的方法,其特征在于,还包括:获取第二触发信号;根据所述第二触发信号退出所述虚拟设备界面。
- 一种管理物联网设备的电子设备,其特征在于,包括输入模块和处理器,所述输入模块用于:获取第一触发信号;所述处理器用于:根据所述第一触发信号显示虚拟设备界面,所述虚拟设备界面包括至少两个物联网IoT设备的虚拟设备信息;所述输入模块还用于:获取操作信号,所述操作信号为用户在所述虚拟设备界面上触发的控制所述至少两个IoT设备进行交互的信号;所述处理器还用于:执行与所述操作信号对应的处理方法。
- 根据权利要求11所述的电子设备,其特征在于,所述至少两个IoT设备的虚拟设备信息包括:所述至少两个IoT设备的虚拟设备图标和逻辑端口图标。
- 根据权利要求12所述的电子设备,其特征在于,所述至少两个IoT设备包括第一IoT设备和第二IoT设备,所述操作信号包括:所述用户拖动所述第一IoT设备的逻辑端口图标至所述第二IoT设备的虚拟设备图标;所述处理器具体用于:将所述第一IoT设备的逻辑端口图标对应的功能迁移至所述第二IoT设备,其中,所述第二IoT设备具有所述第一IoT设备的逻辑端口图标对应的功能。
- 根据权利要求12所述的电子设备,其特征在于,所述至少两个IoT设备包括第一IoT设备和第二IoT设备,所述操作信号包括:所述用户拖动所述第一IoT设备的虚拟设备图标至所述第二IoT设备的虚拟设备图标;所述处理器具体用于:将所述第一IoT设备的目标应用的功能迁移至所述第二IoT设备,其中,目标应用为所述第一IoT设备正在运行的应用,并且,所述第二IoT设备安装有所述目标应用。
- 根据权利要求12所述的电子设备,其特征在于,所述至少两个IoT设备包括第一IoT设备和第二IoT设备,所述操作信号包括:所述用户拖动所述第一IoT设备的虚拟设备图标至所述第二IoT设备的虚拟设备图标;所述处理器具体用于:建立所述第一IoT设备的目标应用与所述第二IoT设备的目标应用之间的通信连接,其中,所述第一IoT设备在获取所述操作信号前未运行所述目标应用。
- 根据权利要求12所述的电子设备,其特征在于,所述至少两个IoT设备包括第一IoT设备和第二IoT设备,所述操作信号包括:所述用户通过双指拖动所述第一IoT设备的逻辑端口图标和所述第二IoT设备的逻辑端口图标进行合并;所述处理器具体用于:共享所述第一IoT设备的逻辑端口图标的功能和所述第二IoT设备的逻辑端口图标的功能。
- 根据权利要求12所述的电子设备,其特征在于,所述至少两个IoT设备包括第一IoT设备和第二IoT设备,所述操作信号包括:所述用户点击所述第二IoT设备的虚拟设备图标;所述处理器具体用于:建立所述第一IoT设备与所述第二IoT设备的控制事件映射关系,其中,所述第一IoT设备为预设的控制设备,所述第二IoT设备为被控制设备。
- 根据权利要求11至17中任一项所述的电子设备,其特征在于,所述输入模块包括触控屏幕,所述输入模块具体用于:通过所述触控屏幕获取所述第一触发信号,所述第一触发信号为所述用户在所述触控屏幕上执行预设动作生成的触发信号。
- 根据权利要求11至17中任一项所述的电子设备,其特征在于,所述输入模块包括摄像头,所述输入模块具体用于:通过所述摄像头获取所述第一触发信号,所述第一触发信号为所述用户在空中执行预设动作生成的触发信号。
- 根据权利要求11至19中任一项所述的电子设备,其特征在于,所述处理器还用于:通过所述输入模块获取第二触发信号;根据所述第二触发信号退出所述虚拟设备界面。
- 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储了计算机程序,当所述计算机程序被处理器执行时,使得处理器执行权利要求1至10中任一项所述的方法。
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/041,779 US20230305693A1 (en) | 2020-08-18 | 2021-08-04 | Internet-of-things device management method and apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010846926.8 | 2020-08-18 | ||
CN202010846926.8A CN114153531A (zh) | 2020-08-18 | 2020-08-18 | 管理物联网设备的方法和装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022037412A1 true WO2022037412A1 (zh) | 2022-02-24 |
Family
ID=80323367
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/110623 WO2022037412A1 (zh) | 2020-08-18 | 2021-08-04 | 管理物联网设备的方法和装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20230305693A1 (zh) |
CN (1) | CN114153531A (zh) |
WO (1) | WO2022037412A1 (zh) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102460363A (zh) * | 2009-06-09 | 2012-05-16 | 三星电子株式会社 | 提供示出设备之间的连接关系和排列的图形用户界面的方法以及应用该方法的设备 |
CN102999251A (zh) * | 2012-10-31 | 2013-03-27 | 东莞宇龙通信科技有限公司 | 终端和设备连接管理方法 |
US20190392085A1 (en) * | 2018-06-26 | 2019-12-26 | International Business Machines Corporation | Search exploration using drag and drop |
CN111123723A (zh) * | 2019-12-30 | 2020-05-08 | 星络智能科技有限公司 | 编组交互方法、电子设备以及存储介质 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9367224B2 (en) * | 2011-04-29 | 2016-06-14 | Avaya Inc. | Method and apparatus for allowing drag-and-drop operations across the shared borders of adjacent touch screen-equipped devices |
KR102009928B1 (ko) * | 2012-08-20 | 2019-08-12 | 삼성전자 주식회사 | 협업 구현 방법 및 장치 |
WO2016052876A1 (en) * | 2014-09-30 | 2016-04-07 | Samsung Electronics Co., Ltd. | Display apparatus and controlling method thereof |
KR102254699B1 (ko) * | 2015-12-29 | 2021-05-24 | 삼성전자주식회사 | 사용자 단말 장치 및 그 제어 방법 |
KR20170088691A (ko) * | 2016-01-25 | 2017-08-02 | 엘지전자 주식회사 | 페어링된 장치, 알림 및 어플리케이션의 제어에 관한 한 손 조작 모드를 적용한 이동 통신 단말기 |
US20170311368A1 (en) * | 2016-04-25 | 2017-10-26 | Samsung Electronics Co., Ltd. | Methods and systems for managing inter device connectivity |
CN106161100B (zh) * | 2016-08-03 | 2019-09-27 | 青岛海信电器股份有限公司 | 一种物联网设备配置方法及物联网终端 |
-
2020
- 2020-08-18 CN CN202010846926.8A patent/CN114153531A/zh active Pending
-
2021
- 2021-08-04 US US18/041,779 patent/US20230305693A1/en active Pending
- 2021-08-04 WO PCT/CN2021/110623 patent/WO2022037412A1/zh active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102460363A (zh) * | 2009-06-09 | 2012-05-16 | 三星电子株式会社 | 提供示出设备之间的连接关系和排列的图形用户界面的方法以及应用该方法的设备 |
CN102999251A (zh) * | 2012-10-31 | 2013-03-27 | 东莞宇龙通信科技有限公司 | 终端和设备连接管理方法 |
US20190392085A1 (en) * | 2018-06-26 | 2019-12-26 | International Business Machines Corporation | Search exploration using drag and drop |
CN111123723A (zh) * | 2019-12-30 | 2020-05-08 | 星络智能科技有限公司 | 编组交互方法、电子设备以及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
US20230305693A1 (en) | 2023-09-28 |
CN114153531A (zh) | 2022-03-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21857507 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21857507 Country of ref document: EP Kind code of ref document: A1 |