WO2022222768A1 - Method and device for multi-device cooperation - Google Patents

Method and device for multi-device cooperation

Info

Publication number
WO2022222768A1
WO2022222768A1 (PCT/CN2022/085793)
Authority
WO
WIPO (PCT)
Prior art keywords
user
subsystem
information
electronic device
image
Prior art date
Application number
PCT/CN2022/085793
Other languages
English (en)
French (fr)
Inventor
王成录 (Wang Chenglu)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2022222768A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/45 Structures or tools for the administration of authentication
    • G06F 21/46 Structures or tools for the administration of authentication by designing passwords or checking the strength of passwords
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/448 Execution paradigms, e.g. implementations of programming paradigms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/448 Execution paradigms, e.g. implementations of programming paradigms
    • G06F 9/4482 Procedural
    • G06F 9/4484 Executing subprograms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network

Definitions

  • the present application relates to the technical field of the Internet of Things, and in particular, to a method and device for multi-device cooperation.
  • the present application provides a method and device for multi-device cooperation.
  • the scene-adaptive service solves the problem that communication between people in two different places is not natural and smooth.
  • a method for multi-device cooperation is provided, which is applied to a first master device in a virtual sharing system, where the virtual sharing system includes at least a first subsystem and a second subsystem, and the first master device belongs to the first subsystem.
  • the method includes: acquiring user information of a first user, the first user being a member of the virtual sharing system; identifying a user intent associated with the user information, where the user intent includes causing at least one electronic device in the second subsystem to perform a service operation; and sending, according to the user information and shared configuration information, a request message to the second master device in the second subsystem, where the request message is used to request the at least one electronic device in the second subsystem to perform the service operation.
  • the shared configuration information includes member information and device information corresponding to each subsystem in the virtual sharing system.
  • a virtual sharing system is formed by connecting multiple subsystems in different spaces through a network, and the electronic devices in the virtual sharing system work together according to the user's intention, so as to spontaneously provide the user with scene-adaptive services as needed.
  • this can bring a natural and smooth communication effect to people separated in two places, and improve the convenience of users' lives.
  • the user intent includes causing at least one electronic device in the second subsystem to perform a service operation; specifically, the user intent includes causing at least one electronic device in the second subsystem to perform a video call service operation.
  • the identifying of the user intent associated with the user information specifically includes: determining the current state of the first user according to the acquired user information; and determining the corresponding user intent of the first user according to the current state of the first user.
  • the master device and other subsystems can establish communication that meets the user's needs, provide the user with the most suitable service, and improve the user experience.
  • the current state of the first user includes at least one of the following: the first user enters a room; or the vital signs of the first user are abnormal; or the body posture of the first user is abnormal; or the distance between the first user and a destination is less than a first threshold.
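  • as a non-authoritative sketch (not the patent's literal text), the mapping from a detected current state to a user intent could look like the following; the state names, the intent string, and the single "initiate a video call" intent are illustrative assumptions drawn from the scenarios above.

```python
from enum import Enum, auto

class UserState(Enum):
    ENTERED_ROOM = auto()          # the first user enters a room
    VITAL_SIGNS_ABNORMAL = auto()  # e.g., abnormal heart rate reported by a wearable
    POSTURE_ABNORMAL = auto()      # e.g., a falling or curled-up posture
    NEAR_DESTINATION = auto()      # distance to the destination below the first threshold

def identify_user_intent(state: UserState) -> str:
    """Each listed state triggers a video call toward the second subsystem."""
    intents = {
        UserState.ENTERED_ROOM: "initiate_video_call",
        UserState.VITAL_SIGNS_ABNORMAL: "initiate_video_call",
        UserState.POSTURE_ABNORMAL: "initiate_video_call",
        UserState.NEAR_DESTINATION: "initiate_video_call",
    }
    return intents[state]
```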
  • the acquiring of the user information of the first user specifically includes: receiving the user information sent by at least one electronic device in the first subsystem, where the at least one electronic device in the first subsystem is different from the first master device.
  • the at least one electronic device in the first subsystem may refer to an electronic device different from the first master device, such as a slave device in the first subsystem, for example, an indoor camera, a cat-eye camera, and the like.
  • a subsystem may include a master device and at least one slave device connected to the master device, and the slave device may have user information collection capabilities, such as image collection capabilities, voice collection capabilities, etc.
  • the collected user information is sent to the master device, so that the master device can identify the user's intention according to the user information and provide the user with an appropriate adaptive service.
  • the method specifically includes: receiving a first image sent by a first indoor camera, where the first image includes an image of the first user, and the first indoor camera belongs to the first subsystem; and when it is determined according to the first image that the first user enters the room, initiating the video call to the second master device.
  • the indoor camera refers to a camera installed inside a room, which can be used to collect images in the room.
  • if the image collected by the indoor camera includes the image of the first user, it means that the first user is in the house.
  • the method specifically includes: receiving a second image sent by the first indoor camera, where the second image includes an image of the first user, and the first indoor camera belongs to the first subsystem; identifying the body posture of the first user according to the second image; and when it is determined according to the body posture of the first user that the first user's body posture is abnormal, initiating the video call to the second master device.
  • the abnormal body posture may include abnormal postures such as a falling posture and a curled-up posture. When the user's body posture is abnormal, it may indicate that the user has encountered an emergency.
  • the indoor camera in the first subsystem collects the user's image, and when the first master device recognizes from the user image that the user's body posture is abnormal, the first master device can determine that an emergency has occurred to the first user.
  • the first master device can then automatically initiate a video call to the second master device.
  • the method specifically includes: acquiring location information of the first user; and initiating a video call to the second master device when it is determined, according to the location information of the first user, that the distance between the first user and the destination is less than the first threshold.
  • the method further includes: performing identity authentication on the first user according to the user information and the shared configuration information; the subsequent operations are performed only when the first user is determined to be a member of the virtual sharing system.
  • the user is authenticated according to the user information, and the follow-up operations are performed only when it is determined that the user is a member of the virtual sharing system, which can ensure the security of the system and its users and prevent non-members from occupying system resources.
  • the shared configuration information further includes device usage rights corresponding to members in the virtual sharing system;
  • the sending of the request message to the second master device in the second subsystem specifically includes: determining, according to the user information and the shared configuration information, that the first user has the permission to use at least one second electronic device in the second subsystem; and sending the request message to the second master device in the second subsystem.
  • in this way, the security of the system and its users can be guaranteed, and users who have not obtained the relevant device usage permissions can be prevented from occupying system resources.
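  • a minimal sketch of the authentication and permission check described above, assuming the shared configuration information is held as an in-memory dictionary; the schema and all identifiers are illustrative assumptions, not the patent's literal data format.

```python
# Hypothetical shared configuration information (cf. Tables 3 and 4):
# members and devices of the whole virtual sharing system, with the
# device usage permissions granted to each member.
SHARED_CONFIG = {
    "members": {
        "grandfather": {"subsystem": 1, "device_permissions": {"large_screen_2"}},
    },
    "devices": {
        "large_screen_2": {"subsystem": 2, "capabilities": {"video_call", "display"}},
    },
}

def authenticate(user_id: str) -> bool:
    """Identity authentication: the user must be a member of the virtual sharing system."""
    return user_id in SHARED_CONFIG["members"]

def has_permission(user_id: str, device_id: str) -> bool:
    """Check that the user may use the target device in the second subsystem."""
    member = SHARED_CONFIG["members"].get(user_id)
    return member is not None and device_id in member["device_permissions"]

def maybe_send_request(user_id: str, device_id: str) -> bool:
    # The request message is sent only for authenticated members that hold
    # the relevant device usage permission.
    if authenticate(user_id) and has_permission(user_id, device_id):
        # send_request_to_second_master(device_id)  # hypothetical transport call
        return True
    return False
```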
  • a method for multi-device cooperation is provided, which is applied to a second master device in a virtual sharing system, where the virtual sharing system includes at least a first subsystem and a second subsystem, and the second master device belongs to the second subsystem.
  • the method includes: receiving a request message sent by the first master device in the first subsystem, where the request message is used to request at least one second electronic device in the second subsystem to execute a service operation; and in response to the request message, instructing the at least one second electronic device to perform the service operation.
  • a virtual sharing system is formed by connecting multiple subsystems in different spaces through a network, and the electronic devices in the virtual sharing system work together according to the user's intention, so as to spontaneously provide the user with scene-adaptive services as needed.
  • this can bring a natural and smooth communication effect to people separated in two places, and improve the convenience of users' lives.
  • the service operation includes: establishing a video call service operation with the first subsystem.
  • the instructing of the at least one second electronic device to perform the service operation in response to the request message specifically includes: determining, according to the request message, the capability required for the service operation; and instructing, according to the priorities of the electronic devices in the second subsystem that have the capability, the second electronic device to perform the service operation, where the second electronic device is the electronic device with the highest priority among the electronic devices in the second subsystem that have the capability.
  • in this way, the quality of the service operation can be ensured, so that the user obtains a better use experience.
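  • as an illustrative sketch of this dispatch step under assumed data structures (the device records, capability names, and priority values are hypothetical): determine the capability the service operation requires, then instruct the highest-priority device in the second subsystem that has it.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    capabilities: set
    priority: float  # e.g., a score computed with formula (1-1) below

# Hypothetical device registry of the second subsystem.
DEVICES = [
    Device("large_screen_2", {"video_call", "display", "audio"}, priority=18.0),
    Device("bluetooth_speaker_2", {"audio"}, priority=6.0),
    Device("indoor_camera_2", {"image_capture"}, priority=4.0),
]

def dispatch(required_capability: str) -> Device:
    """Pick the highest-priority device that has the required capability."""
    candidates = [d for d in DEVICES if required_capability in d.capabilities]
    if not candidates:
        raise LookupError(f"no device offers {required_capability!r}")
    return max(candidates, key=lambda d: d.priority)

# A video call request would be dispatched to large_screen_2:
# dispatch("video_call").name == "large_screen_2"
```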
  • a multi-device cooperation system including at least a first subsystem and a second subsystem, the first subsystem includes a first master device, the second subsystem includes a second master device, The first master device is configured to execute the method according to any one of the implementation manners of the foregoing first aspect, and the second master device is configured to execute the method described in any one of the foregoing implementation manners of the second aspect.
  • a computer-readable storage medium is provided, which stores computer instructions, and when the computer instructions are executed in a computer, the method according to any one of the implementation manners of the first aspect or the second aspect can be implemented.
  • a computer program product is provided, which stores computer instructions, and when the computer instructions are executed in a computer, the method according to any one of the implementation manners of the first aspect or the second aspect can be implemented.
  • a chip is provided, storing computer instructions, and when the computer instructions are executed in the chip, the method described in any one of the implementation manners of the first aspect or the second aspect can be implemented.
  • FIG. 1 is a schematic diagram of a system architecture of a multi-device cooperation provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of another system architecture of multi-device cooperation provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an electronic device corresponding to a master device provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an electronic device corresponding to another master device provided in an embodiment of the present application.
  • FIG. 5A to FIG. 5F are schematic diagrams of some graphical user interfaces provided by embodiments of the present application.
  • FIG. 6 is a schematic diagram of an application scenario of a method for multi-device cooperation provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an application scenario of another method for multi-device cooperation provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an application scenario of another method for multi-device cooperation provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an application scenario of another method for multi-device cooperation provided by an embodiment of the present application.
  • FIG. 10A and FIG. 10B are schematic flowcharts of some multi-device cooperation methods provided by embodiments of the present application.
  • FIG. 11A and FIG. 11B are schematic diagrams of refined structures of some electronic devices provided by embodiments of the present application.
  • FIG. 12 is a schematic diagram of a detailed structure of another electronic device provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a detailed structure of another electronic device provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a detailed structure of another electronic device provided by an embodiment of the present application.
  • the terms "first" and "second" are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • a feature defined with "first" or "second" may expressly or implicitly include one or more of that feature.
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically emphasized otherwise.
  • the terms "including", "containing", "having" and their variants mean "including but not limited to" unless specifically emphasized otherwise.
  • GSM: global system for mobile communication
  • CDMA: code division multiple access
  • WCDMA: wideband code division multiple access
  • GPRS: general packet radio service
  • LTE: long term evolution
  • FDD: frequency division duplex
  • TDD: time division duplex
  • UMTS: universal mobile telecommunications system
  • WiMAX: worldwide interoperability for microwave access
  • IoT: Internet of things
  • APP: application
  • Web: World Wide Web
  • the smart device in this situation is in a passive management mode. For example, when the room owner is not at home and someone else needs to open the door, that person must first initiate remote communication with the room owner; only after the owner confirms the person's identity can the owner remotely control the door lock to complete the door-opening action. IoT devices cannot understand user needs and spontaneously provide services according to them.
  • an embodiment of the present application provides a method for multi-device cooperation.
  • the method combines electronic devices located in multiple scenes in different spaces into a virtual sharing system, and the electronic devices cooperate with each other to spontaneously provide people with services across spaces, so that people in different areas can communicate as if they were in the same space, with services triggered on demand.
  • the system architecture provided by the embodiments of the present application is first introduced by taking a family scenario as an example.
  • the methods provided in the embodiments of the present application are not limited to family scenarios, but can also be applied to scenarios such as office spaces located in different places, specific public spaces (such as hospitals), and vehicles in travel, which is not limited in this application.
  • FIG. 1 is a schematic diagram of a system architecture of multi-device cooperation provided by an embodiment of the present application.
  • the system architecture includes at least two subsystems, such as subsystem 1 and subsystem 2.
  • the subsystem 1 may include, for example, multiple electronic devices in family 1, and the subsystem 2 may include, for example, multiple electronic devices in family 2.
  • subsystem 1 and subsystem 2 can be connected through a network (such as a wide area network (WAN), e.g., the Internet) to form a virtual sharing system.
  • the subsystem 1 and the subsystem 2 may be located in different spaces, but the present application does not limit the actual distance between the subsystems.
  • each subsystem may include multiple types of electronic devices.
  • a subsystem may include multiple electronic devices owned by a family.
  • subsystem 1 includes large-screen device 1, cat-eye camera 1, Bluetooth speaker 1, and indoor camera 1
  • subsystem 2 includes large-screen device 2, cat-eye camera 2, Bluetooth speaker 2, and indoor camera 2. The cat-eye camera refers to a camera installed at the door, which can collect images in a certain area around the door; the indoor camera refers to a camera installed inside the room, which can be used to collect images in the room.
  • the subsystem may also include tablet computers, personal computers (PCs), smart door locks, smart air conditioners, water heaters, and wearable devices worn by subsystem members, such as smart watches, smart bracelets, smart shoes, smart glasses, etc. This application does not limit the specific types of electronic devices.
  • the electronic devices in the subsystem can be divided into master devices (or rich devices) and slave devices (or light devices and thin devices).
  • the master device refers to a device with relatively complete functions and strong computing power, such as a smartphone, tablet computer, large-screen device (such as a smart screen), or personal computer (PC); the slave device refers to a device with specific functions and weak computing power, such as wearable devices like smart bracelets, smart watches, and smart shoes, as well as IoT devices like Bluetooth speakers and web cameras.
  • in this embodiment, the large-screen device 1 shown in FIG. 1 is used as the master device in subsystem 1 (referred to as master device 1), and the large-screen device 2 shown in FIG. 1 is used as the master device in subsystem 2 (referred to as master device 2) as an example for description, but in practical applications, the master device in a subsystem may also be another type of electronic device.
  • the master device in this embodiment of the present application may be one device, or may be a distributed master device including multiple devices, where the multiple devices respectively perform different master device functions, which is not limited in this application.
  • the master device has a radio frequency module, which can be connected to a public network, and communicates with master devices in other subsystems through the public network, so that subsystems in different spaces are associated to form a virtual shared system.
  • the large-screen device 1 and the large-screen device 2 can establish a communication connection through the Internet, so that the subsystem 1 and the subsystem 2 are associated as a virtual shared system.
  • the communication capability of the slave device is weak, and it may not be able to directly connect to the public network, so the slave device cannot directly communicate with devices in other subsystems; a slave device may not even be able to directly communicate with other slave devices in the same subsystem.
  • slave devices in the same subsystem can be connected to the master device of that subsystem (for example, in subsystem 1, slave devices such as cat-eye camera 1, indoor camera 1, and Bluetooth speaker 1 can be connected to large-screen device 1; in subsystem 2, slave devices such as cat-eye camera 2, indoor camera 2, and Bluetooth speaker 2 can be connected to large-screen device 2), and a slave device can use the communication capability of the master device to communicate with other devices.
  • for example, if Bluetooth speaker 2 in subsystem 2 requests to share a song playlist with Bluetooth speaker 1 in subsystem 1, Bluetooth speaker 2 needs to first initiate the request to large-screen device 2.
  • large-screen device 2 communicates with large-screen device 1 via the public network, large-screen device 1 then instructs Bluetooth speaker 1 to share its song playlist, and the playlist is shared to Bluetooth speaker 2 through the reverse path.
  • although Bluetooth speaker 2 may not be able to communicate directly with Bluetooth speaker 1, by using the master devices in the two subsystems as a communication bridge, Bluetooth speaker 2 and Bluetooth speaker 1 can still share song playlists across spaces; a sketch of this bridging path is shown below.
  • a slave device may be connected to the master device in its subsystem through a short-range connection such as a wireless local area network (WLAN), Bluetooth, wireless fidelity (WiFi), or zigbee.
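  • a toy sketch of the bridging path described above (slave device → local master → remote master → remote slave); the class names and message routing are simplified illustrative assumptions, not the patent's literal protocol.

```python
class MasterDevice:
    def __init__(self, name):
        self.name = name
        self.peer = None   # the master device of the other subsystem (public network)
        self.slaves = {}   # the slave devices connected in this subsystem

    def relay_to_peer(self, target_slave, payload):
        # Step 2: the local master forwards the request over the public network.
        self.peer.deliver(target_slave, payload)

    def deliver(self, target_slave, payload):
        # Step 3: the remote master instructs its own slave device.
        self.slaves[target_slave].receive(payload)

class SlaveDevice:
    def __init__(self, name, master):
        self.name = name
        self.master = master
        master.slaves[name] = self

    def send(self, target_slave, payload):
        # Step 1: a slave cannot reach the public network itself, so it
        # hands the request to the master device of its own subsystem.
        self.master.relay_to_peer(target_slave, payload)

    def receive(self, payload):
        print(f"{self.name} received: {payload}")

master1, master2 = MasterDevice("large_screen_1"), MasterDevice("large_screen_2")
master1.peer, master2.peer = master2, master1
SlaveDevice("bluetooth_speaker_1", master1)
speaker2 = SlaveDevice("bluetooth_speaker_2", master2)
speaker2.send("bluetooth_speaker_1", "request: share song playlist")
```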
  • the master device has strong computing capability, and it can perform task distribution based on the capabilities of the devices in the subsystem, for example, using its own computing capability to select appropriate auxiliary devices whose specific capabilities can coordinately complete the processing of an event.
  • for example, the master device assigns the smart unlocking task to the appropriate device (the smart door lock) according to the capabilities of each electronic device in its subsystem.
  • if the large-screen device 2 in subsystem 2 requests to establish a voice call with subsystem 1, then large-screen device 1 can select an appropriate electronic device to perform the voice call task according to the voice playback capability of each electronic device in subsystem 1.
  • auxiliary devices may include devices (including master devices and slave devices) in this subsystem.
  • the auxiliary device may also include devices in other subsystems in the virtual sharing system, such as electronic devices (such as smart watches, mobile phones, etc.) with independent communication functions in other subsystems.
  • in this way, multiple devices can work together to provide users with scene-adaptive services on demand.
  • the computing capability of the slave device is relatively weak, and it may only have specific capabilities in one or several aspects.
  • smart door locks have the ability to unlock intelligently
  • cat-eye cameras and indoor cameras have image (or video) acquisition capabilities
  • Bluetooth speakers have audio playback capabilities.
  • although the master device and the slave device differ in capabilities, master and slave are not absolute concepts: the master device may have stronger overall capabilities (such as communication and computing capabilities) than the slave devices, but a slave device may outperform the master device in a particular function.
  • for example, the playback quality of a Bluetooth speaker is higher than that of a large-screen device, so users will prefer to use the Bluetooth speaker to play music at home; the screen of a home smart screen is large and its video playback effect is better than that of a smartphone, so users will prefer to watch a movie on the smart screen.
  • multiple subsystems in different spaces can be connected to the network to form a virtual sharing system.
  • multiple types of electronic devices in the virtual sharing system can then work together to provide users with scene-adaptive services as needed, improving the convenience of users' lives.
  • each subsystem may collect device information and member information of the subsystem in advance.
  • for example, subsystem 1 is an elderly couple's family, whose members include a grandfather and a grandmother, and whose equipment includes large-screen device 1 (as the master device of subsystem 1), indoor camera 1, cat-eye camera 1, and smart devices worn by the elderly, such as smart watch 1 and smart shoes.
  • subsystem 2 is a family with children, whose members include a father, a mother, and a child, and whose devices can include large-screen device 2 (as the master device of subsystem 2), indoor camera 2, cat-eye camera 2, the child's smart watch 2, and the like; this configuration will be described as an example.
  • the members and devices listed in the present application are all exemplary examples, and in practical applications, the members and devices are not limited to the types listed in the embodiments of the present application.
  • the device information in the subsystem may include an identification (ID) of the electronic device, an access address (such as a media access control (MAC) address), capabilities, and the like.
  • the member information in the subsystem may include the members of the subsystem, member IDs, available device permissions, and the like. Exemplarily, the device information of the elderly family may be shown in Table 1, and the member information of the elderly family may be shown in Table 2.
  • the device information and member information of a subsystem can be collected by the master device of that subsystem, and the master device can share the collected device information and member information with the master devices in other subsystems, and can also obtain the device information and member information shared by the master devices in other subsystems.
  • the master device may form unified shared configuration information (hereinafter referred to as shared configuration information) of the virtual sharing system based on the device information and member information shared by the multiple subsystems.
  • the device information and member information in the shared configuration information of the virtual sharing system may be shown in Table 3 and Table 4, respectively.
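  • a minimal sketch of how a master device might merge the per-subsystem device and member information into the unified shared configuration information (cf. Tables 1 to 4); the schema, field names, and example values are illustrative assumptions, not the patent's literal format.

```python
# Per-subsystem information collected by each master device (illustrative schema).
subsystem1_config = {
    "devices": {"large_screen_1": {"mac": "AA:BB:CC:00:00:01", "capabilities": ["display", "audio"]}},
    "members": {"grandfather": {"device_permissions": ["large_screen_1", "large_screen_2"]}},
}
subsystem2_config = {
    "devices": {"large_screen_2": {"mac": "AA:BB:CC:00:00:02", "capabilities": ["display", "audio"]}},
    "members": {"father": {"device_permissions": ["large_screen_1", "large_screen_2"]}},
}

def merge_shared_config(*subsystem_configs):
    """Combine the configs shared by each master into one shared configuration."""
    shared = {"devices": {}, "members": {}}
    for subsystem_id, cfg in enumerate(subsystem_configs, start=1):
        for dev_id, info in cfg["devices"].items():
            shared["devices"][dev_id] = {**info, "subsystem": subsystem_id}
        for member_id, info in cfg["members"].items():
            shared["members"][member_id] = {**info, "subsystem": subsystem_id}
    return shared

# Each master can hold the merged result and answer queries from the
# security center and application center (cf. the storage center below).
shared_config = merge_shared_config(subsystem1_config, subsystem2_config)
```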
  • the master device in each subsystem can initiate adaptive communication and electronic device management on demand according to the member information in the shared configuration information (such as user identities and device usage permissions), so that the electronic devices in the subsystems cooperate to automatically provide the most appropriate service in the relevant scenario.
  • the services provided according to user needs mentioned in the embodiments of the present application refer to providing services that meet the natural needs of users in daily life.
  • natural needs can include: the natural need to greet family members and acquaintances; the natural need to confirm the identity of strangers when seeing them arrive in one's territory (such as a home or office); and the natural need of the elderly at home to urgently notify their family members or medical staff for treatment in an accident.
  • the method provided by the embodiments of the present application enables the virtual sharing system to select an optimal mode according to preset rules and automatically provide an adaptive service suited to the scene.
  • FIG. 1 introduces the system architecture of the embodiments of the present application at the device level; the following describes the structure of the multi-device cooperation system at the functional level with reference to FIG. 2.
  • FIG. 2 is a schematic diagram of another system architecture of multi-device cooperation provided by an embodiment of the present application.
  • the subsystem 1 in FIG. 2 may correspond to the subsystem 1 in FIG. 1
  • the subsystem 2 may correspond to the subsystem 2 in FIG. 1 .
  • each subsystem needs to include at least a device center, a security center, a perception center, an application center, a communication center, and a storage center.
  • the device center may be used to schedule all available electronic devices in this subsystem.
  • the available electronic device may refer to a device currently connected to the subsystem and capable of executing the to-be-processed event using the function supported by the electronic device itself.
  • different electronic devices may be better than other devices at a specific function.
  • each electronic device can exist as a component in a subsystem to implement at least one specific function, and multiple electronic devices cooperating with one another can provide users with scene-adaptive services, so that users can experience the adaptive services automatically provided by electronic devices in different scenarios.
  • Electronic devices can register their capabilities in the device center, and the device center can divide them into component sets of different capability categories according to their capabilities. Each component set can be automatically combined in real time to provide the ability to complete specific events (or provide specific services).
  • the component sets in the subsystem may include a visual component set, an auditory component set, an image acquisition component set, a control component set, a wearable component set, and the like.
  • the electronic devices in the visual component set can be used to provide image display or video playback capabilities, including electronic devices such as large-screen devices, projectors, and PCs; the electronic devices in the auditory component set are used to provide audio playback capabilities, including electronic devices such as large-screen devices and Bluetooth speakers; the electronic devices in the image acquisition component set are used to provide the ability to collect surrounding images in real time, including electronic devices such as cameras (including cat-eye cameras, indoor cameras, etc.); the electronic devices in the control component set are used to provide at least one smart home service capability, including electronic devices such as smart door locks, air conditioners, and smart water heaters; and the electronic devices in the wearable component set are used to provide capabilities related to information collected on the user's body, including electronic devices such as smart watches, smart bracelets, and smart shoes.
  • electronic devices in component sets of different capability categories have different priorities within the corresponding component set, according to their capabilities in the corresponding category.
  • the electronic devices in a component set can be sorted in order of priority, with the electronic device that can best provide the component set's corresponding capability ranked first. For example, the large-screen device with the strongest video display capability in the visual component set is ranked first.
  • the device center may invoke the electronic devices in the corresponding component set in priority order, preferring the electronic device with the higher priority to provide this type of capability.
  • the electronic device priority can be determined according to the following formula (1-1):
  • priority of electronic device = processing capability factor × processing efficiency factor × user experience factor × performance power consumption factor    (1-1)
  • the electronic device processing capability factor may refer to the capability related to the component set category of the electronic device.
  • for electronic devices in the visual component set, the processing capability factor may include parameters such as image resolution; for electronic devices in the auditory component set, the processing capability factor may include parameters such as the audio signal-to-noise ratio.
  • the processing efficiency factor can refer to the efficiency with which the electronic device performs the tasks to be processed, which can include, for example, the type of connected network (such as a cellular network, broadband, or WiFi) and the processor performance of the electronic device (such as image processor or audio processor performance).
  • the user experience factor may include the screen size of the electronic device, the size of the loudspeaker, and other device parameters that affect the user's audiovisual experience.
  • the electronic device performance power consumption factor may include parameters such as the battery life and memory size of the electronic device.
  • before the calculation, the relevant parameters corresponding to the electronic device may be processed. Taking the priority calculation of the large-screen device, tablet computer, and mobile phone in the visual component set as an example, the factors corresponding to the large-screen device, tablet computer, and mobile phone can be shown in Table 5.
  • corresponding preset values can be set according to the performance or capability of different electronic devices.
  • the network type connected to large-screen devices is wired broadband
  • the network type connected to tablet computers is WiFi
  • the network type connected to mobile phones is cellular network.
  • the network performance of wired broadband is better than that of WiFi.
  • the network performance of WiFi is better than that of cellular networks, so the default value of 3 can be used to represent the processing efficiency factor of large-screen devices, the default value of 2 can be used to represent the processing efficiency factor of tablet computers, and the default value of 1 can be used to represent the processing efficiency factor of mobile phones.
  • the default value of 2 can be used to represent the performance power consumption factor of large-screen devices, and the default value of 1 can be used to represent the performance power consumption of tablet computers and mobile phones, respectively. factor.
  • an optional way is to directly use the parameters corresponding to each factor to calculate the priority of the electronic device.
  • for example, the image resolution of 1080 for the above-mentioned large-screen device is directly substituted into the processing capability factor term of formula (1-1); another optional way is to process the parameters first, normalizing the parameters corresponding to the factors of different electronic devices to values of a unified dimension.
  • for example, taking the display screen size to represent the user experience factor, the display size of the large-screen device is 55 inches and the display size of the mobile phone is 6.1 inches; if the display size values are brought directly into formula (1-1), the priority result will be dominated by the user experience factor and cannot reflect the influence of the other terms on the result. Therefore, the data can be normalized according to the display screen sizes of the different electronic devices.
  • the user experience factor of a large-screen device can be represented by a value of 3
  • the user experience factor of a tablet computer can be represented by a value of 2
  • the user experience factor of a mobile phone can be represented by a value of 1.
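  • using the normalized example values above (processing efficiency 3/2/1, performance power consumption 2/1/1, and user experience 3/2/1 for the large-screen device, tablet computer, and mobile phone, respectively), formula (1-1) could be evaluated as in the following sketch; the processing capability factors passed as the first argument are assumed values for illustration only.

```python
def device_priority(capability, efficiency, experience, power):
    # Formula (1-1): priority = processing capability factor
    #   × processing efficiency factor × user experience factor
    #   × performance power consumption factor
    return capability * efficiency * experience * power

# (capability, efficiency, experience, power) per device; the first value
# in each call is an assumed normalized processing capability factor.
priorities = {
    "large_screen": device_priority(3, 3, 3, 2),  # wired broadband, 55-inch screen
    "tablet":       device_priority(2, 2, 2, 1),  # WiFi
    "mobile_phone": device_priority(1, 1, 1, 1),  # cellular network
}

best_device = max(priorities, key=priorities.get)  # -> "large_screen"
```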
  • the concept of a component set is introduced in this embodiment of the present application.
  • the devices in a component set are prioritized according to their capabilities, and a device is selected in priority order to realize the function.
  • the security center may be used to provide security verification functions such as encryption and authentication, to ensure that the operation, communication, and electronic device management of the virtual sharing system are secure and reliable.
  • the security center can perform authentication and authorization on the user's identity to confirm whether the user can use the system, or with what permissions the user can access and use the system.
  • the security center can be set on at least one device that can provide security capabilities, such as mobile phones, tablet computers, large-screen devices, PCs, etc., which can be used as security capability providers and become a component of the security center.
  • the storage center stores the shared configuration information of the virtual sharing system (such as the shared configuration information shown in Table 3 and Table 4), and the shared configuration information may include the information of all devices and members in the virtual sharing system. It is used for queries by the security center and application center in the subsystem to complete user identity authentication and the invocation of related applications.
  • the storage center may include, for example, the internal memory 121 in FIG. 3 and the memory located in the processor 110 and the like.
  • the perception center can comprehensively judge the user's intention according to preset judgment rules, user information, a general user behavior model, and the like.
  • the perception center can be set on devices that can provide perception service capabilities, such as mobile phones, tablet computers, large-screen devices, and PCs.
  • the application center may automatically select a corresponding application (or function) and actively initiate the application based on the perception center's perception of the current status of the subsystem.
  • for example, based on the shared configuration information of the system, the application center can choose which subsystem in the system to communicate with, and so on.
  • the initiated application can be verified by the security center and then communicate with other subsystems through the communication center.
  • the communications center may provide the ability for a subsystem to communicate wirelessly with at least one other subsystem.
  • the communication center may include, for example, the antenna 1 shown in FIG. 3 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • each center may be set on one master device, in which case the functions of each center are provided by the relevant components in the master device; alternatively, the centers can be distributed on different devices in the subsystem and combined into a distributed virtual master device.
  • for example, when the above-mentioned centers cannot all be set on one device, the functions of the different centers can be provided by multiple devices, that is, multiple devices can cooperate to complete the tasks of each center in the subsystem.
  • each center may have an independent interface to realize the communication between the various centers.
  • FIG. 3 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
  • the electronic device 100 may correspond to the master device (e.g., the large-screen device 1 or the large-screen device 2) in the subsystems shown in FIG. 1 and FIG. 2.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may contain multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193 and the like through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate with each other through the I2C bus interface, so as to realize the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • the processor 110 may contain multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication, to sample, quantize, and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus that converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 110 with the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 communicates with the camera 193 through a CSI interface, so as to realize the photographing function of the electronic device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the electronic device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones. This interface can also be used to connect other terminals, such as AR devices, etc.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the terminal through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • Display screen 194 is used to display images, videos, and the like.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • a digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy. Video codecs are used to compress or decompress digital video.
  • the NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, such as the transfer mode between neurons in the human brain, it can quickly process input information and can continuously learn by itself.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the gyro sensor 180B may be used to determine the motion attitude of the electronic device 100 .
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally along three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. It can also be used to identify the posture of the terminal, and can be applied in horizontal/vertical screen switching, pedometers, and other applications.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the temperature sensor 180J is used to detect the temperature.
  • the touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the electronic device 100 further includes an air pressure sensor 180C and a distance sensor 180F.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
  • the electronic device 100 can measure the distance through infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present invention takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
  • FIG. 4 is a schematic diagram of the software structure of the electronic device 100 corresponding to the main device according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime (Android runtime) and system layer, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as calendar, map, WLAN, music, notification, gallery, Bluetooth, video, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a call manager, a resource manager, and the like.
  • the communication center and the application center in the above-mentioned subsystems are also located in the application framework layer.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the call manager is used to provide the communication function of the electronic device 100 .
  • for example, the management of call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • the system layer may include multiple functional modules such as the device center, security center, perception center, and storage center in the aforementioned subsystems, as well as TCP/IP protocol stacks, Bluetooth/WiFi protocol stacks, and the like.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, sensor driver and codec.
  • FIGS. 5A to 5F provide some schematic diagrams of a graphical user interface (GUI).
  • the following takes, as an example for description, a case in which the main device in the subsystem is a large-screen device (such as a smart TV or a smart screen), and a user (such as a father) in the children's family (which may correspond to the above-mentioned subsystem 2) logs in to the main device through a specific application (application, APP) for device management and member management.
  • the specific application may be a third-party application or an application that comes with the large-screen device (such as the Smart Life APP).
  • the large-screen device may display the smart life system registration/login interface as shown in FIG. 5A .
  • if the user has registered the account and password of the smart life system, he or she can fill in the corresponding information in the account and password input boxes to log in to the APP.
  • the large-screen device can display a display interface as shown in FIG. 5B, which can be an opening interface of the smart life system and displays the subsystem where the large-screen device is currently located (such as the "My Family" (that is, the children's family) subsystem, corresponding to subsystem 2 in FIG. 1 and FIG. 2) and other subsystems connected to the large-screen device (such as the elderly's home subsystem, corresponding to subsystem 1 in FIG. 1 and FIG. 2).
  • the large-screen device can display an interface as shown in FIG. 5C, which includes a device management icon 503 and a member management icon 504 in the children's family.
  • the large-screen device may display an interface as shown in FIG. 5D, which is a device management interface for a child's family.
  • FIG. 5D is a device management interface for a child's family.
  • users can view the electronic devices included in the children's family where the current large-screen device is located (such as the large-screen device 2, the indoor camera 2, the smart door lock 2, the cat's eye camera 2, etc.) and the online status of each electronic device; the "connected" shown in FIG. 5D indicates that the corresponding electronic device is currently connected to the large-screen device and is in an available state.
  • the user can click the add control in the "Add New Device" column, and the large-screen device will display the corresponding device adding page (not shown in FIG. 5A to FIG. 5F).
  • the user can manually input the ID (such as name), access address (such as MAC address) and device capability of the electronic device to be added; or,
  • the large-screen device can scan the surrounding electronic devices in response to the user's operation of clicking to add a new device; when a new device establishes a Bluetooth connection or a wired connection with the large-screen device, the new device can be automatically added.
  • the interface shown in FIG. 5D may also include electronic devices in the elderly family (ie, subsystem 1).
  • the elderly's family currently includes 4 online devices, namely the large-screen device 1, the indoor camera 1, the smart door lock 1, and the cat's eye camera 1 in the elderly's home.
  • the control behind the large-screen device 1 shows the "connected" state, indicating that the large-screen device 1 is in communication connection with the large-screen device 2 in the children's family.
  • the two subsystems can share a user account, and the user can also manage the devices in the subsystem 1 on the large-screen device 2 of the subsystem 2. For example, the user can click the adding control in the "Add New Device" column corresponding to the elderly's home to instruct the large-screen device in the elderly's home to add a new device; the adding method can be similar to that in the above-mentioned subsystem 2, and details are not repeated here.
  • the user may also click the member management icon 504 on the interface shown in FIG. 5C .
  • the large-screen device can display the member management interface as shown in FIG. 5E, and the member management interface can include the member information of the children's family where the user is located and the member information of the elderly's family; for example, the "My Family" members can include dad, mom, and children.
  • the user can set the device permissions that members are allowed to use by clicking the permission management control corresponding to the member. For example, when the user clicks the permission management control corresponding to Dad, the large-screen device can display the device permission setting interface as shown in Figure 5F.
  • the device permission setting interface includes each electronic device included in the subsystem and the capabilities of each device.
  • the user can click and select the corresponding control to set the corresponding permission for the user.
  • when a function is selected, the corresponding selection control can display a check mark (for example, "√").
  • the user can also add device permissions for the user by clicking the add control in the Add Device Permission column.
  • the device functions displayed by the large-screen device 2 may be pre-registered on the large-screen device 2 by each electronic device. For example, after the electronic device establishes a connection with the large-screen device 2, the electronic device can perform capability registration on the large-screen device 2, and the large-screen device 2 can display corresponding functions on the device permission setting interface based on the corresponding capabilities of each electronic device.
  • the large-screen device 2 when each electronic device performs capability registration with the large-screen device 2 , information such as the ID (eg, name) and access address (eg, MAC address) of the electronic device may be sent to the large-screen device 2 at the same time.
  • the large-screen device 2 can establish the device configuration information of this subsystem based on the information (as shown in Table 1).
  • the large-screen device 2 when receiving the member information of the subsystem added by the user, the large-screen device 2 can also establish the member configuration information of the subsystem (as shown in Table 2).
  • the large-screen device 2 can send the device and member configuration information of this subsystem to the master device in the subsystem 1, and receive the device and member configuration information of the subsystem 1 sent by the master device of the subsystem 1.
  • the master device of each subsystem can generate shared configuration information (as shown in Table 3 and Table 4) based on the device and member configuration information of this subsystem and the device and member configuration information shared by other subsystems, and store the shared configuration information on the master device.
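For illustration only, the following Python sketch shows one possible shape for the per-subsystem device and member configuration and for the merged shared configuration described above. All names (DeviceEntry, access_address, permissions, and so on) are assumptions; the embodiment specifies this information only conceptually (Tables 1 to 4).

```python
# Hypothetical sketch of per-subsystem configuration and the merged
# shared configuration a master device might hold; field names assumed.
from dataclasses import dataclass, field

@dataclass
class DeviceEntry:
    device_id: str        # e.g. "smart door lock 2"
    access_address: str   # e.g. a MAC address
    capabilities: list    # e.g. ["unlock", "lock"]

@dataclass
class MemberEntry:
    name: str                                        # e.g. "dad"
    permissions: dict = field(default_factory=dict)  # device_id -> allowed capabilities

@dataclass
class SubsystemConfig:
    subsystem_id: str
    devices: list
    members: list

def merge_shared_config(local: SubsystemConfig, *remote: SubsystemConfig) -> dict:
    """Combine this subsystem's config with configs shared by other subsystems."""
    shared = {}
    for cfg in (local, *remote):
        shared[cfg.subsystem_id] = {
            "devices": {d.device_id: d for d in cfg.devices},
            "members": {m.name: m for m in cfg.members},
        }
    return shared
```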
  • the way for the master devices in different subsystems to establish a communication connection may be, for example: the user inputs the access address of the master device of the subsystem 2 on the master device of the subsystem 1, and the master device 1 establishes a communication connection with the master device 2 of the subsystem 2 through the radio frequency module.
  • the communication type between the master devices of the subsystem may be peer-to-peer (peer to peer, P2P) communication, and the specific process of establishing the P2P communication can refer to the prior art, which will not be described in detail here.
  • the large-screen device may have a touch-sensitive display screen, and the user may interact with the large-screen device in a touch manner.
  • the large-screen device can also receive interactive operations performed by the user in other ways, such as receiving information input by the user through a remote control. This application does not limit this.
  • the main device interfaces shown in FIG. 5A to FIG. 5F are only examples; in practical applications, related interfaces can also be displayed on other devices with display screens (such as tablet computers, mobile phones, etc.).
  • the specific content and presentation method of the APP can also take other forms; for example, when the user logs in to the APP, face recognition login, voice recognition login, etc. can also be used, which is not limited in this application.
  • the following describes the multi-device coordination method provided by the embodiments of the present application by taking some possible application scenarios as examples in conjunction with the system architecture shown in FIG. 1 and FIG. 2 and the electronic devices shown in FIG. 3 and FIG. 4 .
  • Scenario 1: the elderly check the child's after-school track.
  • FIG. 6 is a schematic diagram of scenario 1 provided by this embodiment of the present application.
  • the large-screen device 1 is used as the main device.
  • the large-screen device 1 may have a device center, a security center, a storage center, an application center, a communication center, and the like required by the subsystem 1 .
  • the large-screen device 1 is connected to the other electronic devices in the subsystem 1 (such as the cat's eye camera 1, the indoor camera 1, the smart door lock 1, etc.) through wired or wireless communication, such as a Bluetooth connection, a Wi-Fi connection, etc., which is not limited in this application.
  • the grandparents need to go back to their own home (the elderly's family) first, so as to use the electronic devices in the home to learn about the child's situation on the way home from school.
  • when the grandparents arrive at the door of the house (position 1 shown in FIG. 6), the cat's eye camera 1 at the door captures the images of the grandparents, and then sends the images to the indoor large-screen device 1 through wired or wireless means (ie, step S601).
  • after the large-screen device 1 obtains the images of the grandparents, it performs image recognition, determines their identity, and authenticates the grandparents' permission to use the device according to the shared configuration information (for example, the permission corresponding to the grandparents in Table 4 to allow the automatic unlocking of the smart door lock 1); if the authentication is passed, the smart door lock 1 is instructed to open (step S602), so that the elderly can enter the room without operating the door lock.
  • after the old man enters the room (position 2 shown in FIG. 6), the indoor camera 1 captures the image of the old man and sends it to the large-screen device 1 (step S603).
  • the large-screen device 1 learns that the old man has entered the room according to the image sent by the indoor camera 1, and then, according to other auxiliary information (such as the current time belonging to the preset time period when children leave school) and the shared configuration information, determines that the old man has the permission to obtain the location and trajectory information of the children's smart watch; the large-screen device 1 is then automatically matched to the children's smart watch worn by the child.
  • the large-screen device 1 can request location and trajectory information from the children's smart watch through the Internet (step S604), and the children's smart watch responds to the request and sends the large-screen device 1 the current location and the historical track within a specific historical time period (step S605). After acquiring the location and historical track of the children's smart watch, the large-screen device 1 automatically displays the corresponding information to the user (as shown by the track S606 displayed by the large-screen device in FIG. 6).
  • the large-screen device 1 can also predict the time of arriving home according to the current distance between the children's smart watch and the home and the child's speed, and display the relevant information (as shown in FIG. 6, the large-screen device displays "expected to arrive home in 10 minutes"), so that the elderly can understand the child's general situation on the way home from school.
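As a minimal sketch of the arrival-time prediction just mentioned (the function name and sample numbers are assumptions; the embodiment does not fix a formula), the estimate can be as simple as distance divided by speed:

```python
# Hypothetical helper: predicted minutes until arrival, from the current
# distance between the children's smart watch and home and the child's speed.
def estimate_arrival_minutes(distance_m: float, speed_m_per_min: float) -> float:
    if speed_m_per_min <= 0:
        raise ValueError("speed must be positive")
    return distance_m / speed_m_per_min

# e.g. 800 m at 80 m/min would yield the "10 minutes" shown in FIG. 6
print(estimate_arrival_minutes(800, 80))  # 10.0
```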
  • the subsystem of the elderly's family can also automatically match multiple corresponding children's smart watches, and obtain the location and trajectory information of these children's smart watches.
  • the elderly do not need to actively initiate device operations, and can enjoy a sensorless operation experience in which the door lock automatically opens and, after they enter the door, the large-screen device 1 automatically displays the child's after-school track. While meeting users' natural needs, this process does not require users to master specific device operation skills, and is especially suitable for people with low device operation capabilities such as the elderly and children.
  • Scenario 2: after the child arrives home, the children's family automatically establishes a video call with the elderly's family.
  • FIG. 7 is a schematic diagram of scenario 2 provided in this embodiment of the present application.
  • the second scenario is described as follows:
  • the cat's eye camera 2 at the door captures the child's figure.
  • the cat-eye camera 2 sends the image of the child to the large-screen device 2 (main device) in the child's family (ie, step S701 ).
  • after the large-screen device 2 obtains the image of the child, it performs identity authentication on the child and obtains the child's permission based on the shared configuration information (for example, the permission corresponding to the child in Table 4 to allow the automatic unlocking of the smart door lock).
  • the large-screen device 2 instructs the smart door lock 2 to unlock (ie, step S702).
  • the smart door lock 2 automatically unlocks in response to the instructions of the large-screen device, and automatically opens the door for children.
  • after the child enters the room, the indoor camera 2 captures the image of the child and sends it to the large-screen device 2 (ie, step S703).
  • the large-screen device 2 learns that the child has entered the room according to the image sent by the indoor camera 2; then, according to other auxiliary information (such as the current time belonging to the preset children's after-school time period) and the shared configuration information, it can automatically establish a video call with the large-screen device 1 in the elderly's family, send the child's image and audio information to the large-screen device 1 (ie, step S704), and receive the video image and audio information sent by the large-screen device 1 (ie, step S705), so that the two subsystems automatically establish a video call between the elderly and the children.
  • the elderly and children do not need to actively operate electronic devices, and can achieve a natural communication experience in which the child greets the elderly after returning home from school; the elderly and children can communicate naturally and on demand as if they were in the same space.
  • members separated by physical space can feel as if they were in the same virtual space, so that the members of the subsystems can obtain a natural and smooth communication effect triggered on demand, enhancing the understanding and care among members.
  • Scenario 3: an emergency call for help when an emergency occurs.
  • FIG. 8 is a schematic diagram of scenario 3 provided in this embodiment of the present application.
  • the device center can also be used to manage different wearable devices (such as smart watches, smart bracelets, smart shoes, smart glasses, etc.), and the sensors of the wearable devices can sense the user's physiological signs in real time to determine whether an abnormal event occurs to the user; if the user encounters an abnormal event and needs emergency help, the system will automatically initiate communication with other subsystems.
  • the physiological signs of the user may include, for example, pulse, respiration, heartbeat, blood pressure, pupils, and the like.
  • the third scenario is described as follows:
  • a possible scenario is: when the elderly (such as grandpa) encounters an emergency (such as a sudden illness) at home, the smart bracelet worn by the elderly can detect the abnormal physiological signs of the elderly and identify the sudden illness; the bracelet can then report the detected abnormal physiological sign data, together with the sudden illness event, to the large-screen device 1 (ie, step S801). After the perception center of the large-screen device 1 senses the abnormal change in the elderly's body according to the reported abnormal event, it automatically establishes a video call with the children's family according to the permission in the shared configuration information that allows the elderly to use automatic video calls (ie, step S802).
  • the large-screen device 2 can request the large-screen device 1 for the elderly's physiological signs data according to the shared configuration information (as shown in Table 4, allowing viewing of the elderly's physiological signs).
  • the large-screen device 1 can send the data of the sudden disease of the elderly and the abnormal physiological signs of the elderly to the large-screen device 2 .
  • the large-screen device 2 can display a reminder of an emergency at home for the elderly, for example, as shown in Figure 8, reminding "Grandpa's blood pressure has increased significantly, his heartbeat has accelerated, and he needs to seek medical treatment in time" and so on.
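A minimal sketch of the sign-monitoring logic in this scenario follows, assuming illustrative reference ranges and field names (the embodiment does not specify thresholds or data formats):

```python
# Illustrative only: flag physiological readings outside assumed normal
# ranges and build the report that would go to large-screen device 1 (S801).
NORMAL_RANGES = {                     # assumed reference ranges for demonstration
    "heart_rate_bpm": (50, 110),
    "systolic_mmHg": (90, 140),
}

def detect_abnormal(signs: dict) -> dict:
    """Return the subset of readings that fall outside their normal range."""
    abnormal = {}
    for key, value in signs.items():
        low, high = NORMAL_RANGES.get(key, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            abnormal[key] = value
    return abnormal

readings = {"heart_rate_bpm": 128, "systolic_mmHg": 168}
abnormal = detect_abnormal(readings)
if abnormal:
    # corresponds to step S801: report the data together with the detected event
    report = {"event": "sudden illness suspected", "data": abnormal}
    print(report)
```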
  • Another possible scenario is: when the old man accidentally slips and falls at home, the indoor camera 1 of the old man's home can obtain an image of the old man's body in a slipping posture, and send the slipping image to the large-screen device 1 .
  • the large-screen device 1 can recognize the abnormal body posture of the elderly based on the image information, that is, it can sense that an emergency has occurred to the elderly; after that, based on the device use permission corresponding to the elderly in the configuration information of the virtual sharing system (for example, in an emergency, the elderly is allowed to automatically establish a video call), the large-screen device 1 can automatically establish a video call with the children's family to make an emergency call.
  • the elderly family subsystem can be associated with the medical subsystem.
  • the device of the elderly's family reports the emergency to the large-screen device 1 after detecting the emergency.
  • the large-screen device 1 can automatically initiate communication with the medical system based on the device usage authority corresponding to the elderly in the virtual sharing system (for example, when an emergency occurs, allowing automatic emergency calls to the hospital), so that medical personnel can provide timely assistance.
  • the subsystem can sense the emergency, and promptly initiate an emergency call to members in other corresponding subsystems, so that distant family members or medical care personnel can understand the emergency.
  • Scenario 4: spontaneous communication during driving.
  • FIG. 9 is a schematic diagram of scenario 4 provided by an embodiment of the present application.
  • one subsystem in the virtual sharing system is the vehicle subsystem, and the other subsystem is the family subsystem (eg, the children's family subsystem).
  • the fourth scenario is described as follows:
  • when the main device in the vehicle detects that the vehicle is about to arrive at the preset destination, it can automatically initiate a video call with the subsystem corresponding to the destination.
  • the father can input the children's family as the destination on the on-board computer; during the driving of the vehicle, the positioning module of the on-board computer can obtain the vehicle's position in real time, and when the vehicle is less than a certain threshold (such as 1 km) away from the children's family, the on-board computer can automatically initiate a video call with the large-screen device 2 in the children's family (ie, step S901) to inform the family that it will arrive safely soon.
  • the camera in the vehicle can obtain images of the relevant events and transmit them to the on-board computer.
  • the on-board computer judges that an accident has occurred according to the acquired images; then, according to the device permissions that the driver is allowed to use in the shared configuration information (such as allowing the driver's vehicle to automatically establish communication with other subsystems when an accident occurs), the on-board computer can automatically initiate video communication with the children's family subsystem or the insurance rescue subsystem to inform the family or the insurance rescuer that the driver is currently in an abnormal situation, so that the relevant personnel can organize rescue.
  • the on-board subsystem can initiate communication with other subsystems as needed, so that the driver and family members, insurance rescuers, and other personnel can communicate naturally and on demand as if they were in the same space; this can not only improve the user experience, but also, in the event of an accident, notify the relevant personnel to rescue in time and ensure the safety of users.
  • FIG. 10A is a schematic flowchart of a multi-device cooperation method provided by an embodiment of the present application.
  • the steps in the flow may be performed by a first master device in a virtual sharing system, where the virtual sharing system includes at least a first subsystem and a second subsystem, and the first master device belongs to the master device of the first subsystem.
  • the process can include the following steps:
  • S110 Acquire user information of a first user, where the first user belongs to a member of the virtual sharing system.
  • the first master device may correspond to, for example, the master device 1 or master device 2 described above; the first user may, for example, correspond to the family members described above, such as the elderly and children.
  • acquiring the user information of the first user by the first master device may include: the first master device acquires the user information of the first user collected by the first master device itself, for example, when the first master device is a large-screen device with a camera, the first master device can collect user images through the camera; or, the first master device receives the user information sent by a first electronic device, where the first electronic device may belong to the first subsystem and may be any slave device with information collection capability in the first subsystem, such as a cat's eye camera, an indoor camera, a microphone, etc.
  • the user information may include, for example, a user image.
  • the user information may also include the user's voice, the user's biometric features (such as fingerprints), and the like.
  • the user information may be the image of the elderly collected by the cat's eye camera 1 in the embodiment of FIG. 6; or the child image collected by the cat's eye camera 2 in the embodiment of FIG. 7; or the vital sign information of the elderly collected by the smart bracelet in the embodiment of FIG. 8; or the location information of the user obtained by the vehicle-mounted computer in the embodiment of FIG. 9, etc.
  • after the first master device obtains the user information of the first user, it can perform identity authentication on the first user according to the user information and the shared configuration information; when the identity authentication is passed, it is determined that the first user is a member of the virtual sharing system.
  • each subsystem in this application includes registration information of at least one member and registration information of at least one device, so the shared configuration information may include the member information and device information corresponding to each subsystem in the virtual sharing system, where the shared configuration information can be as shown in Table 3 and Table 4.
  • identifying the user intent associated with the user information by the first master device may specifically include: the first master device first determines the current state of the first user according to the acquired user information of the first user, and then determines, according to the current state of the first user, the user intention of the first user corresponding to that state.
  • the user state may include: the first user enters a room; the first user's vital signs are abnormal; the first user's body posture is abnormal; the distance between the first user and the destination is less than a first threshold; etc.
  • the corresponding user intention is to establish a video call with the second subsystem. For example, in the embodiment of FIG. 6, the master device 1 first determines that the old man's state is outside the door according to the image sent by the cat's eye camera 1; after that, if the master device 1 receives the image of the old man sent by the indoor camera 1 in the room, the master device 1 can determine that the old man's state has changed from outside the door to entering the room; according to the user state that the old man has entered the room, the master device 1 can determine that the old man's corresponding intention is to establish communication with the children's watch in the second subsystem to learn about the child's after-school trajectory.
  • similarly, the main device 2 first determines that the current state of the child is entering the house according to the images of the child sent by the cat's eye camera 2 and the indoor camera 2 in the house; according to the current state of the child, the main device 2 can determine that the child's next intention is to establish a video call with the subsystem 1 to say hello to the grandparents.
  • the main device 1 can determine that the elderly is currently in a state of abnormal vital signs according to the abnormal vital signs data sent by the smart bracelet of the elderly; according to the current state, the main device 1 It can be judged that the intention of the old man at this time is to establish a video call with the subsystem 2 to seek help from his family.
  • when the main device (such as the on-board computer) in the on-board subsystem 3 determines, according to the user's location, the state that the user is about to arrive at the destination, it can determine that the user's intention is to establish a video call with the family at the destination (ie, subsystem 2) to inform the family in advance that they will be home soon.
  • the main device may also combine auxiliary information to more accurately determine the user's intention.
  • the auxiliary information may include, for example, date information, time information, and the like.
  • for example, in the embodiment of FIG. 6, the main device 1 first determines that the current user state of the elderly is entering the room according to the images of the elderly sent by the cat's eye camera 1 and the indoor camera 1; then, combined with auxiliary information (such as the current time belonging to the children's school dismissal period), it judges that the old man's intention is to communicate with the child's watch to learn the child's after-school trajectory.
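The perception step above can be pictured as a small rule table from (user, state, auxiliary information) to an intention. A sketch under assumed names and an assumed dismissal period follows; the embodiment leaves the concrete mapping open.

```python
# Illustrative rule table mapping user state plus auxiliary time information
# to a user intention; all names and the time window are assumptions.
from datetime import time
from typing import Optional

SCHOOL_DISMISSAL = (time(15, 0), time(17, 0))  # assumed preset after-school period

def infer_intent(user: str, state: str, now: time) -> Optional[str]:
    in_dismissal = SCHOOL_DISMISSAL[0] <= now <= SCHOOL_DISMISSAL[1]
    if user == "elderly" and state == "entered room" and in_dismissal:
        return "query the children's smart watch for the after-school trajectory"
    if user == "child" and state == "entered room":
        return "establish a video call with subsystem 1"
    return None

print(infer_intent("elderly", "entered room", time(16, 30)))
```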
  • the first master device may establish or share a general user behavior model, where the general user behavior model includes the correspondence between user information and user intent.
  • the general user behavior model may include, for example, a mapping relationship among user identification, time, location, user intention, and the like.
  • the first master device may determine the associated user intent according to the user information.
  • the time in the user information can be determined based on the time when the electronic device collects the user information, and the location in the user information can be determined according to the type of the electronic device that collects the user information; for example, if the user information is collected by a cat's eye camera, it is determined that the user is located outside the door, and if the user information is collected by an indoor camera, it is determined that the user is located in the house.
  • the first master device may record the correspondence between user information and user intent within a preset historical period; if the number of consecutive records of the correspondence between a certain type of user information and a user intent reaches a preset threshold, the user information and user intent can be added to the general user behavior model.
  • for example, if the main device repeatedly records that between 12:00 and 13:00 the elderly turn on the air conditioner and set it to 26°C, it can add the user ID (elderly), the time (12:00-13:00), and the user intention (air conditioner turned on and set to 26°C) to the general user behavior model; then, the next time the main device receives an image of the elderly collected by the indoor camera between 12:00 and 13:00, it can automatically instruct the air conditioner to turn on and set it to 26°C.
  • the mapping relationship shown in Table 7 is only an example; in practical applications, the general user behavior model may also include more items, which are not limited in this application.
  • electronic devices can automatically initiate services that match user expectations, improving the user's sensorless operating experience.
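A sketch of the recording rule just described, with assumed names and an assumed threshold; the patent only states that a correspondence recorded enough times is promoted into the model (for brevity this counts total rather than strictly consecutive observations):

```python
# Illustrative behavior-model learner; PROMOTION_THRESHOLD is an assumption
# standing in for the "preset threshold" named in the embodiment.
from collections import defaultdict

PROMOTION_THRESHOLD = 5

class BehaviorModel:
    def __init__(self):
        self.counts = defaultdict(int)  # observation counters
        self.model = {}                 # promoted (user, slot, location) -> intent

    def record(self, user, time_slot, location, intent):
        key = (user, time_slot, location, intent)
        self.counts[key] += 1
        if self.counts[key] >= PROMOTION_THRESHOLD:
            self.model[(user, time_slot, location)] = intent

    def lookup(self, user, time_slot, location):
        return self.model.get((user, time_slot, location))

m = BehaviorModel()
for _ in range(5):
    m.record("elderly", "12:00-13:00", "indoor", "air conditioner on, 26C")
print(m.lookup("elderly", "12:00-13:00", "indoor"))
```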
  • S130 Send a request message to the second master device in the second subsystem according to the user information and the shared configuration information, where the request message is used to request a service operation, and the shared configuration information includes member information corresponding to each subsystem in the virtual sharing system and Device Information.
  • the first master device may further verify the device use authority of the first user, and the verification process may include: the first master device determines, according to the user information and the shared configuration information, that the first user has Using the authority of at least one second electronic device in the second subsystem; the first master device sends a request message to the second master device in the second subsystem.
  • the second master device receives a request message sent by the first master device in the first subsystem; and in response to the request message, instructs at least one second electronic device to perform a service operation.
  • the second master device determines the type of capability required for the service operation according to the request message, and then, according to the priorities corresponding to the electronic devices with this type of capability in the second subsystem, instructs the second electronic device to perform the service operation, wherein the second electronic device has the highest priority among the electronic devices having this capability in the second subsystem.
  • within the same component set, the master device can prioritize the electronic devices according to the strength of the capability corresponding to the capability type of that component set.
  • the master device can then select the electronic device with the highest priority for that capability type to perform the service operation.
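The selection rule can be sketched as follows; the registry layout and the integer priority encoding are assumptions, since the embodiment only requires picking the highest-priority device that has the needed capability type:

```python
# Illustrative priority-based selection within a capability/component set.
def select_device(registry: dict, capability: str):
    """registry maps device_id -> {"capabilities": set, "priority": dict}.

    priority is assumed to be a per-capability integer, higher = stronger.
    """
    candidates = [
        (info["priority"].get(capability, 0), device_id)
        for device_id, info in registry.items()
        if capability in info["capabilities"]
    ]
    if not candidates:
        return None
    return max(candidates)[1]  # device with the highest priority wins

registry = {
    "large-screen device 1": {"capabilities": {"display", "audio"},
                              "priority": {"display": 9, "audio": 5}},
    "smart watch":           {"capabilities": {"display"},
                              "priority": {"display": 2}},
}
print(select_device(registry, "display"))  # -> large-screen device 1
```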
  • FIG. 10B is a schematic flowchart of a more specific multi-device cooperation method provided by an embodiment of the present application. This flowchart introduces the terminal-side implementation process when communication is established between subsystems, and includes the following steps:
  • an electronic device in the first subsystem acquires user information.
  • the first subsystem may still correspond to the subsystem 1 or subsystem 2 in the above content; the electronic device in this step may be a slave device in the first subsystem, such as a cat's eye camera, an indoor camera, a microphone, and the like.
  • the user information may include a user image, and the user image may specifically include: the user's face, the user's figure, and the like.
  • the user information may also include the user's voice, the user's biometric features (such as fingerprints), and the like.
  • the first subsystem may have one or more electronic devices to acquire user information at the same time. For example, when the user enters the room, the indoor camera can capture the image of the user, the microphone can capture the user's voice, and so on.
  • the first master device performs security verification on the first user according to the user information, and confirms that the first user can use the virtual sharing system.
  • the first master device may perform security verification on the user through the security center.
  • the security verification may include: the security center performs identity authentication on the first user according to the user information, where the identity authentication is used to verify whether the first user is a member of the virtual sharing system and thus to determine whether to allow the first user to use the virtual sharing system; after the identity authentication is passed, if it is subsequently determined that the user intends to obtain a certain service operation, the security center can also verify the device use permission of the first user according to the shared configuration information, for example, verify whether the first user has the right to use the electronic device that provides the service operation.
  • in this way, non-members of the virtual sharing system can be prevented from arbitrarily using the functions of the system, ensuring security when the electronic devices in the virtual sharing system provide service operations for users.
  • the user may register his own information on the main device in advance, such as inputting a face image, voice, biometric feature, and the like.
  • the master device can store user information, so that when it subsequently receives the user information collected by the subsystem, it can compare it with pre-stored corresponding information to determine the identity of the user.
  • the identity authentication may specifically include: the security center performs face recognition on the first user according to the user image to confirm the identity; or, the security center performs voiceprint recognition on the user according to the voice of the first user; or, the security center recognizes the user's biometric features such as fingerprints.
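A hedged sketch of the comparison step: the collected information is matched against pre-registered reference information and accepted when the similarity exceeds a preset threshold. The cosine similarity over placeholder feature vectors is an assumption; the embodiment does not prescribe an algorithm or threshold value.

```python
# Illustrative similarity-threshold authentication over assumed embeddings.
import math

SIMILARITY_THRESHOLD = 0.8  # assumed "preset threshold"

def cosine_similarity(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def authenticate(face_embedding: list, stored_references: dict):
    """Return the matched member name, or None if authentication fails."""
    for member, reference in stored_references.items():
        if cosine_similarity(face_embedding, reference) > SIMILARITY_THRESHOLD:
            return member
    return None

refs = {"grandpa": [0.9, 0.1, 0.3]}
print(authenticate([0.88, 0.12, 0.31], refs))  # -> "grandpa" with these toy vectors
```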
  • the first master device may preset the device use authority of the first user, and the setting method may be as shown in FIG. 5F , which is not limited in this application.
  • the security center can verify the device permissions available to the user to ensure that the invoked application or electronic device can be used by users.
  • the first master device identifies the associated user intent according to the user information.
  • the first master device can further determine the user's intention according to the user information through the perception center.
  • the perception center of the master device may determine the current user state of the user according to the user information, and further determine the user's intention according to the user state.
  • the main device 2 first determines that the state of the child is outside the door according to the child image sent by the cat's eye camera 2; after that, if the main device 2 receives the child image sent by the indoor camera 2 in the house again , it can be determined that the current state of the child is entering the house. According to the current state of the child, the main device 2 can determine that the child's next intention is to establish a video call with the grandparents to greet the grandparents.
  • the main device 1 when the main device 1 receives the abnormal vital sign data sent by the smart bracelet, it can determine that the elderly is currently in a state of sudden illness; according to the current state, the main device 1 can determine that the elderly The intention at this time is to establish a video call with the family members in subsystem 2 to seek help from the family members.
  • when the main device (such as the on-board computer) in the on-board subsystem receives the state that the user is about to arrive at the destination, it can determine that the user's intention is to establish a video call with the family at the destination to inform the family members in advance.
  • the perception center of the main device may also combine auxiliary information to more accurately determine the user's intention.
  • the auxiliary information may include, for example, date information, time information, and the like.
  • the master device 1 first determines that the old man's state is outside the door according to the image sent by the cat's eye camera 1; after that, if the master device 1 receives the image of the old man sent by the indoor camera 1 in the room, the master device 1 can determine that the old man's state has changed from outside the door to entering the room; according to the current state of the old man having entered the room, combined with the school dismissal time, the master device 1 can determine that the old man's intention is to learn the child's after-school trajectory.
  • the first master device determines, according to the user's intention, the electronic device in the second subsystem that is to perform the service operation.
  • the application center of the first master device may further determine the application to be invoked according to the user's intention, and determine the subsystem to which the application belongs according to the shared configuration information.
  • the application center learns, according to the perception result of the perception center, that the child's intention is to make a video call with the subsystem 1 to greet the grandparents; the application center determines, according to this intention, that the invoked application is a video call application, and determines, according to the shared configuration information, that the application belongs to the subsystem 1.
  • the first master device automatically initiates communication with the second master device in the second subsystem.
  • the application center may instruct the communication center of the first master device to establish communication with the communication center of the second master device.
  • the first master device when the first master device has established communication with the second master device in the second subsystem, the first master device may automatically initiate communication with the second master device in the second subsystem . Specifically, the first master device may send a request message to the second master device to request at least one second electronic device in the second subsystem to perform a service operation.
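The request message itself is not formatted by the embodiment; as one possible encoding, a length-prefixed JSON frame could be sent over the already-established connection (all field names here are assumptions for illustration):

```python
# Illustrative inter-subsystem request: first master device asks the second
# master device to have a suitable device perform a service operation.
import json
import socket

def send_service_request(sock: socket.socket, service: str, capability: str) -> None:
    """Ask the peer master device to perform `service` with a device that has `capability`."""
    request = {
        "type": "service_request",
        "service": service,        # e.g. "video_call"
        "capability": capability,  # capability type needed, e.g. "display"
    }
    payload = json.dumps(request).encode("utf-8")
    sock.sendall(len(payload).to_bytes(4, "big") + payload)  # length-prefixed frame
```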
  • the second master device may determine the type of capability required for the service operation according to the request message, and select an electronic device that can provide this type of capability to perform the service operation.
  • the application center in the children's family may send a request message to the master device 1 of the elderly's family through the communication center, where the request message requests, for example, establishing a video call between the children's family and the elderly's family.
  • when the main device of the elderly's family learns that the service operation is to establish a video call, it can automatically call the device in the visual component set for video display, and call the microphone and speaker of the best audio device in the auditory component set for the audio call.
  • according to the multi-device cooperation method provided by the embodiments of the present application, by forming subsystems in different regions into a large virtual sharing system and having the electronic devices in the subsystems adaptively provide services to users according to the scene, members separated by physical space can feel as if they were in the same virtual space, so that the members of the subsystems can obtain a natural and smooth communication effect triggered on demand, enhancing the understanding and care among members.
  • FIG. 11A is a schematic diagram of a refined structure of some devices provided in this embodiment of the present application.
  • the implementation process in this application scenario will be introduced in conjunction with FIG. 6 .
  • the cat's eye camera 1 includes an image acquisition module 1001 and a communication module 1002;
  • the large-screen device 1 (the main device in the subsystem 1) includes a communication center 1101, a security center 1102, a perception center 1103, an application center 1104, a storage center 1105, and a device center 1106;
  • the smart door lock 1 includes a communication module 1201 and a control module 1202.
  • the communication module 1002, the communication center 1101, and the communication module 1201 may be the same or similar in internal structure and function implementation.
  • when the old man arrives at the door, the image acquisition module 1001 of the cat's eye camera 1 can collect the user image of the old man; the image acquisition module 1001 can then convert the user image into an electrical signal and transmit it to the communication module 1002.
  • the communication module 1002 transmits the user's image electrical signal to the communication center 1101 of the large-screen device 1 in a wired or wireless manner.
  • the communication center 1101 transmits the user image information to the security center 1102 .
  • the security center 1102 authenticates the user's identity and the user's permission to use the virtual sharing system according to the user image. Specifically, the security center 1102 can query the pre-stored user reference information (such as a pre-recorded user face image) from the storage center 1105 and compare the acquired user image with it; when the similarity between the acquired user image and the pre-stored reference image is greater than the preset threshold, it is confirmed that the user is a member of the virtual sharing system and can use the electronic devices in the virtual sharing system.
  • the security center 1102 can send the user's image to the perception center 1103, and can also send the user's identity information, information about the user's permission to use the virtual sharing system, etc. to the perception center.
  • the perception center 1103 obtains the user's state in combination with the user image, and obtains the user's intention according to the user's state. For example, when the perception center 1103 learns from the user information that the current state of the user authenticated as the grandparents is outside the door, the perception center can obtain, according to the general user behavior model information (as shown in Table 7), the user's intention that the smart door lock 1 be automatically unlocked.
  • the perception center 1103 sends the user intent to the application center 1104 .
  • the application center 1104 may determine the invoked application or device based on the user's intent. For example, when the application center 1104 learns that the user intends to automatically unlock the smart door lock 1, it can determine to invoke the unlocking function of the smart door lock 1 according to the shared configuration information. After that, the application center 1104 can send the calling instruction to the device center 1106, and the device center 1106 selects the corresponding device for communication according to the calling instruction, and instructs it to execute the corresponding service.
  • the device center 1106 can invoke the corresponding electronic device according to the instruction, such as selecting the smart door lock 1 from the control component set. After that, the device center 1106 can send the unlock instruction message to the communication center 1101, and the communication center 1101 then sends the unlock instruction message to the communication module 1201 of the smart door lock 1.
  • the communication module 1201 of the smart door lock 1 transmits the unlock instruction message to the control module 1202 of the smart door lock 1, and the control module 1202 performs the unlock operation in response to the unlock instruction message, thereby realizing the process of automatic unlocking by the user.
  • the application center 1104 can also request the security center 1102 to verify the user's device use permission, for example, by sending a notification message for device use permission verification to the security center, where the message indicates the specific device the user is to use.
  • the security center 1102 can verify whether the user has the permission to use the specific device; for example, according to the shared configuration information in the storage center 1105, it can determine whether the grandparents have the permission to automatically unlock the smart door lock 1, and return the verification result to the application center 1104.
  • FIG. 11B is a schematic diagram of a refined structure of some devices provided in this embodiment of the present application.
  • taking as an example the case in which the electronic devices automatically match the children's smart watch for the old man and obtain the child's after-school track, the implementation process in this application scenario is still introduced with reference to FIG. 6.
  • after the smart door lock 1 performs the unlocking operation, an unlocking feedback message can be generated and sent to the communication center 1101 of the large-screen device 1 via the communication module 1201 of the smart door lock 1.
  • the communication center 1101 of the large-screen device 1 can receive the unlocking feedback message sent by the smart door lock 1, and transmit the unlocking feedback message to the perception center 1103.
  • the perception center 1103 can learn from the unlocking feedback message that the user's smart door lock 1 has been opened and that the user's current state is entering the room (in addition, the perception center 1103 can also determine that the user's state is entering the room according to the image collected by the indoor camera; the embodiments do not describe this in detail).
  • the perception center 1103 determines the user's intention (eg, wants to know the track of the child returning home from school) according to the new user status (the old man has entered the room) and other auxiliary information (eg time).
  • the perception center 1103 sends the user intent to the application center 1104 .
  • the application center 1104 determines the device to be called (the child's smart watch) according to the user's intention (wanting to know the grandchild's track of returning home from school) and the user's general behavior model, and sends the device calling instruction to the device center 1106 .
  • the device center 1106 selects a corresponding device for communication according to the device information. After that, the device center 1106 sends a request message to the communication center 1101, where the request message is used for requesting feedback of the trajectory information within the preset time period.
  • the communication center 1101 sends the device service indication to the communication module 1301 of the children's smart watch.
  • the communication module 1301 of the smart watch transmits the device service instruction to the management module 1302 of the smart watch; the management module 1302 can obtain the historical track information from the storage module 1303 of the smart watch according to the device service instruction and generate a child track feedback message, which is sent by the communication module 1301 of the smart watch to the communication center 1101 of the large-screen device; the communication center 1101 further sends the child track feedback message to the device center 1106, and the device center 1106 selects the available device with the highest priority in the visual component set to display the child's track information, thereby automatically providing the user with the child's track of returning home from school.
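On the watch side, the feedback step can be sketched as filtering the stored track by the requested time window; the message shape and field names below are assumptions for illustration only.

```python
# Illustrative watch-side handler: build the track feedback message from the
# history kept by the storage module, limited to the requested time window.
from datetime import datetime, timedelta

def build_track_feedback(history: list, window_minutes: int) -> dict:
    """history: list of (timestamp, latitude, longitude) tuples."""
    cutoff = datetime.now() - timedelta(minutes=window_minutes)
    recent = [point for point in history if point[0] >= cutoff]
    return {"type": "track_feedback", "points": recent}
```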
  • FIG. 12 is a schematic diagram of the refined structure of some devices provided in an embodiment of the present application.
  • taking the above scenario 2 (after the child arrives home, the child family automatically establishes a video call with the elderly family) as an example, the implementation process in this application scenario will be introduced in conjunction with FIG. 7.
  • in the application scenario shown in FIG. 7, when the child returns home from school, the smart door lock 2 can automatically unlock the door, and the child enters the room.
  • the implementation process of the automatic unlocking of the smart door lock 2 of the children's family is similar to the process described in the embodiment shown in FIG. 11A , and details are not repeated here.
  • the image acquisition module 2001 of the indoor camera 2 installed in the child family's room collects the user's image (that is, the child's image, such as the child's figure or the child's face image); the user image is then sent via the communication module 2002 of the indoor camera to the communication center 2101 of the large-screen device 2 (the master device in subsystem 2).
  • the communication center 2101 transmits the user image to the security center 2102, and the security center 2102 authenticates the user's identity.
  • the security center 2102 can query and obtain the pre-stored user reference information from the storage center 2105, and compare the user image information with the user reference information to confirm that the user is a child. After that, the security center 2102 can send the user identity information to the perception center 2103 .
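  • purely as an illustration of this comparison step, the sketch below matches a captured image's feature vector against pre-stored reference features and accepts a member only above a similarity threshold; the feature extraction itself is stubbed out, and all names and numbers are hypothetical.

```python
# Hypothetical sketch of the security center's identity check: compare the
# captured image's features with pre-stored reference information. A real
# system would extract the features with a face-recognition model.
import numpy as np

REFERENCE_FEATURES = {            # pre-enrolled members -> feature vectors
    "child":   np.array([0.12, 0.80, 0.55]),
    "grandpa": np.array([0.70, 0.10, 0.33]),
}
SIMILARITY_THRESHOLD = 0.90

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(image_features: np.ndarray):
    """Return the matched member id, or None if no one passes the threshold."""
    name, ref = max(REFERENCE_FEATURES.items(),
                    key=lambda kv: cosine(image_features, kv[1]))
    return name if cosine(image_features, ref) >= SIMILARITY_THRESHOLD else None

print(authenticate(np.array([0.11, 0.82, 0.50])))  # -> 'child'
```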
  • the perception center 2103 determines, according to the user identity information, that the user's state is that the child has returned home from school, and determines, according to that state, that the child's intention is to establish video communication with the elderly.
  • the perception center 2103 sends the user intent to the application center 2104 .
  • the application center 2104 determines, according to the user's intention, that the application to be called is the video call application in the elderly family.
  • the application center 2104 sends a calling instruction to the device center 2106, and the device center 2106 determines the master device information in the elderly home subsystem and generates a request message, where the request message is used to request a video call.
  • the device center 2106 sends a device request message to the communication center 2101; after that, the communication center 2101 sends the request message to the communication center 1101 of the large-screen device 1 in the subsystem 1 (the main device in the subsystem 1).
  • the communication center 1101 further transmits the request message to the device center 1106 in the home of the elderly.
  • the device center 1106 selects, according to the request message, the highest-priority available display device or component (the image display module 1111 shown in FIG. 12) and audio playback device or component (the audio playback module 1112 shown in FIG. 12) from the component sets in the elderly family to perform image display and audio collection and playback respectively, so as to automatically establish a video call between subsystem 1 and subsystem 2.
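  • for illustration, one way to picture the cross-subsystem request message and its handling on the receiving master device is sketched below; the message fields and class names are invented, since the disclosure does not specify a wire format.

```python
# Hypothetical sketch of a video-call request message and its dispatch.
import json

class DeviceCenter:
    """Stub standing in for the receiving subsystem's device center."""
    def invoke(self, capability: str, session: str) -> None:
        print(f"[{session}] delegating '{capability}' to the best-ranked device")

def build_request(src: str, dst: str, service: str) -> str:
    return json.dumps({
        "src": src, "dst": dst, "service": service,
        "capabilities": ["display", "audio_capture", "audio_playback"],
    })

def handle_request(raw: str, device_center: DeviceCenter) -> None:
    msg = json.loads(raw)
    # For each required capability, the device center picks the
    # highest-priority available device in the matching component set.
    for cap in msg["capabilities"]:
        device_center.invoke(cap, session=msg["service"])

handle_request(build_request("subsystem_2", "subsystem_1", "video_call"),
               DeviceCenter())
```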
  • the large-screen device 1 in subsystem 1 can also be connected to devices with a vital-sign monitoring function, such as a smart bracelet worn by the elderly, a smart blood pressure monitor, and smart shoes.
  • after the video call between subsystem 1 and subsystem 2 is automatically established, the large-screen device 1 can also collect the vital-sign data of the elderly and send it to the large-screen device 2 in subsystem 2 for display by a display device in subsystem 2, so that members in subsystem 2 (such as the child's parents) know the health status of the elderly.
  • the displayed information may be specific monitoring data of at least one vital sign of the elderly (e.g., a heart rate of 79), and/or the status of at least one vital sign of the elderly (e.g., heart rate normal), etc., which is not limited in this application.
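  • a minimal sketch of turning raw readings into this value-plus-status display is shown below; the normal ranges are illustrative placeholders only, not medical guidance.

```python
# Hypothetical sketch: summarize a vital sign as its value and a status.
NORMAL_RANGES = {
    "heart_rate": (60, 100),    # beats per minute
    "systolic_bp": (90, 140),   # mmHg
}

def summarize(sign: str, value: float) -> str:
    lo, hi = NORMAL_RANGES[sign]
    status = "normal" if lo <= value <= hi else "abnormal"
    return f"{sign}: {value} ({status})"

print(summarize("heart_rate", 79))    # -> heart_rate: 79 (normal)
print(summarize("systolic_bp", 165))  # -> systolic_bp: 165 (abnormal)
```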
  • according to the multi-device cooperation method provided in this embodiment of the present application, the subsystems in different spaces (or regions) are formed into one large virtual shared system, and the electronic devices in the subsystems adaptively provide services for users according to the scene (especially users, such as the elderly and children, who have difficulty operating smart devices); this can make members separated by physical space feel as if they were in one virtual space, so that the members of the subsystems obtain a natural, smooth, on-demand communication effect, enhancing the understanding and care among members.
  • FIG. 13 is a schematic diagram of the refined structure of some devices provided in an embodiment of the present application.
  • taking the above scenario 3 (an emergency call for help when an emergency occurs) as an example, the implementation process in this application scenario will be introduced with reference to FIG. 8.
  • in the application scenario shown in FIG. 8, when an emergency occurs to the elderly, the elderly family can automatically establish a video call with the children's family.
  • when an emergency occurs to the elderly (for example, the old man falls because of a rise in blood pressure), the image acquisition module 1301 of the indoor camera 1 in the elderly family can acquire an image of the user (an image of the elderly falling); the image acquisition module 1301 can send the collected user image to the communication module 1302, and the image is then sent to the communication center 1101 of the large-screen device 1 via the communication module 1302 in a wired or wireless manner.
  • at the same time, the physiological sign collection module 1401 in the smart bracelet worn by the elderly can collect the user's physiological sign information (such as blood pressure data); the physiological sign collection module 1401 can send the collected information to the communication module 1402, which sends it on to the communication center 1101 in the large-screen device 1.
  • the communication center 1101 may first send the image of the user to the security center 1102 .
  • the security center 1102 can authenticate the user's identity according to the user's image, and determine that the user is a member of the virtual sharing system. Specifically, the security center 1102 can query and obtain the pre-stored user reference information from the storage center 1105, and compare the user image information with the user reference information to confirm that the user is an elderly person.
  • when it is determined that the user is a member of the virtual sharing system, that is, the user is allowed to use the virtual sharing system, the security center 1102 may transmit the user image to the perception center 1103.
  • the perception center 1103 can also obtain the user's physiological sign information from the security center 1102 (or directly from the communication center 1101 ).
  • the perception center 1103 can comprehensively determine, based on the user's image and the user's physiological sign information combined with other auxiliary information (such as location and time), that the user's state is that an emergency has occurred to the elderly, and can determine that the device service the elderly expects is to establish video communication with the children's family.
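  • as an illustrative sketch only, this combined judgment could be expressed as below, flagging an emergency when an abnormal posture is detected in the image and at least one vital sign is out of range; the posture labels and ranges are invented.

```python
# Hypothetical sketch of the perception center's combined emergency check,
# assuming posture comes from the indoor camera and vitals from the bracelet.
def is_emergency(posture: str, vitals: dict, normal_ranges: dict) -> bool:
    abnormal_posture = posture in {"fallen", "curled_up"}
    abnormal_vitals = any(
        not (normal_ranges[k][0] <= v <= normal_ranges[k][1])
        for k, v in vitals.items() if k in normal_ranges
    )
    return abnormal_posture and abnormal_vitals

ranges = {"heart_rate": (60, 100), "systolic_bp": (90, 140)}
print(is_emergency("fallen", {"heart_rate": 118, "systolic_bp": 172}, ranges))
# -> True: the application center would now place the help video call
```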
  • the perception center 1103 sends the user intent to the application center 1104 .
  • the application center 1104 determines, according to the user's intention, to establish a video call with the devices in the child family subsystem, that is, to invoke the video call application in the child family subsystem.
  • the application center 1104 sends a calling instruction to the device center 1106, and the device center 1106 determines the master device information in the elderly home subsystem and generates a request message.
  • the device center 1106 sends a request message to the communication center 1101, and the communication center 1101 sends the request message to the communication center 2101 of the large-screen device 2 (the master device in the subsystem 2) in the children's family.
  • the communication center 2101 further transmits the request message to the device center 2106 in the child's home.
  • the device center 2106 selects, according to the request message, the highest-priority available visual display electronic device or component (the image display module 2111 shown in FIG. 13) and audio playback electronic device or component (the audio playback module 2112 shown in FIG. 13) from the component sets in the children's family to perform image display and audio collection and playback respectively, so as to automatically establish a video call between subsystem 1 and subsystem 2.
  • it should be understood that, through the above method, when an emergency occurs to the elderly, the comprehensive judgment of the subsystem's master device enables it to spontaneously initiate a video call for help with other subsystems, so that the elderly can be rescued in time.
  • according to the multi-device cooperation method provided in this embodiment of the present application, the subsystems in different regions are formed into one large virtual shared system, and the electronic devices in the subsystems adaptively provide services for users according to the scene (especially users, such as the elderly and children, who have difficulty operating smart devices); this can make members separated by physical space feel as if they were in one virtual space, so that the members of the subsystems obtain a natural, smooth, on-demand communication effect, enhancing the understanding and care among members.
  • FIG. 14 is a schematic diagram of the refined structure of some devices provided in an embodiment of the present application.
  • taking the above scenario 4 (adaptive communication while driving) as an example, the implementation process in this application scenario will be introduced in conjunction with FIG. 9.
  • when the driver starts driving the vehicle, or during driving, the in-vehicle camera may collect images of the user through the image acquisition module 3001.
  • the image acquisition module 3001 can transmit the user image to the communication module 3003 of the in-vehicle camera, and then transmit it to the in-vehicle computer (the main device of the in-vehicle subsystem) via the communication module 3003 .
  • the in-vehicle computer authenticates the user according to the user image, and determines that the user is a member of the virtual sharing system and can use the virtual sharing system.
  • the process of performing identity authentication on the user and device authority authentication by the in-vehicle device may refer to the introduction in the above related embodiments, and will not be repeated here.
  • some positioning devices in the vehicle may have a positioning module 3002, which can locate the user's position in real time and send the positioning information to the communication center 3101 of the in-vehicle computer through the communication module 3003.
  • when the perception center 3103 judges, according to the positioning information and the destination information input by the user, that the vehicle is about to arrive at the destination (for example, the distance to the destination is less than a certain threshold), it can perceive that the user's intention is to inform the members of the destination subsystem in advance, and it can then initiate a video call with subsystem 2.
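  • the arrival check itself can be pictured with the minimal sketch below, which triggers once the great-circle distance to the destination falls below a threshold; the 1 km threshold and the coordinates are example values only.

```python
# Hypothetical sketch of the "about to arrive" check in the in-vehicle
# subsystem, using the haversine great-circle distance.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def about_to_arrive(position, destination, threshold_km=1.0) -> bool:
    return haversine_km(*position, *destination) < threshold_km

# Roughly 0.9 km from the destination -> initiate the call with subsystem 2
print(about_to_arrive((39.9087, 116.3975), (39.9005, 116.3975)))  # -> True
```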
  • the process of initiating a video call is similar to the process described in the foregoing embodiment, and details are not repeated here.
  • according to the multi-device cooperation method provided in this embodiment of the present application, the subsystems in different regions are formed into one large virtual shared system, and the electronic devices in the subsystems adaptively provide services for users according to the scene (especially users, such as the elderly and children, who have difficulty operating smart devices); this can make members separated by physical space feel as if they were in one virtual space, so that the members of the subsystems obtain a natural, smooth, on-demand communication effect, enhancing the understanding and care among members.
  • An embodiment of the present application further provides a multi-device cooperation system including at least a first subsystem and a second subsystem, where the first subsystem includes a first master device, the second subsystem includes a second master device, and the first master device and the second master device are configured to execute the multi-device cooperation method provided by the embodiments of the present application.
  • the embodiments of the present application further provide a computer-readable storage medium storing computer instructions, and when the computer instructions are executed in a computer, the multi-device coordination method provided by the embodiments of the present application can be realized.
  • the embodiments of the present application further provide a computer product that stores computer instructions, and when the computer instructions are executed in the computer, the method for cooperating with multiple devices provided by the embodiments of the present application can be realized.
  • the embodiments of the present application further provide a chip that stores computer instructions, and when the computer instructions are executed in the chip, the multi-device cooperation method provided by the embodiments of the present application can be realized.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted over a computer-readable storage medium; the computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
  • the usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSDs)), and the like.
  • a person of ordinary skill in the art can understand that all or part of the processes of the above method embodiments can be completed by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and when the program is executed, the processes of the foregoing method embodiments may be included.
  • the aforementioned storage medium includes various media that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)

Abstract

本申请实施例提供了一种多设备配合的方法及设备,属于物联网技术领域。该方法通过将位于不同空间的子系统连接为一个虚拟共享系统,并根据共享配置信息使该虚拟共享系统中的多个电子设备协同工作为用户提供跨越空间的场景自适应服务,从而满足分隔两地的人们自然通畅沟通的需求。

Description

一种多设备配合的方法及设备
本申请要求于2021年04月20日提交国家知识产权局、申请号为202110425911.9、申请名称为“一种多设备配合的方法及设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及物联网技术领域,尤其涉及一种多设备配合的方法及设备。
背景技术
人类社会发展的历程,从某种角度上来讲,就是不断加强联系的过程。人与人之间的连接和沟通,关键是要打破人与人之间空间上的隔离和限制。为了加强联系,特别是为了加强空间上的联系,人类进行了各种各样的努力:交通的发展、通信的发展等,比如今天遍布全国的高铁网络,可以让相聚千里的人,只要几个小时就能见面。
随着网络技术的发展,人们现在已经可以通过手机、平板电脑等智能终端以语音或者视频的方式进行沟通交流。然而,对于相隔两地的人们来说,现有的这些方式只能让人们在人为启动通话的特定情形下才能进行沟通,智能终端无法为人们提供跨越空间的场景自适应服务,使人们难以获得如处于同一空间的自然顺畅的交流体验。
发明内容
本申请提供了一种多设备配合的方法及设备,通过将位于不同空间的子系统连接为一个虚拟共享系统,并根据共享配置信息使该虚拟共享系统中的多个电子设备为用户提供跨越空间的场景自适应服务,解决了分隔两地的人们沟通不自然通畅的问题。
第一方面,提供了一种多设备配合的方法,应用于虚拟共享系统中的第一主设备,所述虚拟共享系统至少包括第一子系统和第二子系统,所述第一主设备属于所述第一子系统,所述方法包括:获取第一用户的用户信息,所述第一用户属于所述虚拟共享系统中的成员;识别与所述用户信息关联的用户意图,所述用户意图包括使所述第二子系统中的至少一个电子设备执行服务操作;根据所述用户信息和共享配置信息,向所述第二子系统中的第二主设备发送请求消息,所述请求消息用于请求所述服务操作,所述共享配置信息包括所述虚拟共享系统中每个子系统对应的成员信息和设备信息。
根据本申请实施例提供的方法,通过将处于不同空间的多个子系统网络连接,组建一个虚拟共享系统,并使该虚拟共享系统中的电子设备根据用户意图协同工作,按需为用户自发提供场景自适应服务,能够为分隔两地的人们带来自然通畅的沟通效果,提升用户生活的便捷性。
结合第一方面,在第一方面的某些实现方式中,所述用户意图包括使所述第二子系统中的至少一个电子设备执行服务操作,具体包括:所述用户意图包括使所述第二子系统中的至少一个电子设备执行视频通话服务操作。
根据本申请实施例提供的方法,通过根据用户意图与其他子系统建立视频通话,能够使位于不同空间子系统中的人们如同处于一个虚拟空间那样,随需沟通,提升用户生活的便捷性。
结合第一方面,在第一方面的某些实现方式中,所述识别与所述用户信息关联的用 户意图,具体包括:根据获取的所述用户信息,确定所述第一用户当前的状态;根据所述第一用户当前的状态,确定对应的所述第一用户的用户意图。
根据本申请实施例提供的方法,通过根据用户信息判断用户状态,再根据用户状态确定用户意图,能够使主设备与其他子系统建立符合用户需求的通信,为用户提供最合适的服务,提升用户的体验。
结合第一方面,在第一方面的某些实现方式中,所述第一用户当前的状态,包括以下至少一项:所述第一用户进入房间内;或者,所述第一用户生命体征异常;或者,所述第一用户身体姿势异常;或者,所述第一用户与目的地之间的距离小于第一阈值。
结合第一方面,在第一方面的某些实现方式中,所述获取第一用户的用户信息,具体包括:接收所述第一子系统中的至少一个电子设备发送的所述用户信息,所述第一子系统中的至少一个电子设备与所述第一主设备不同。
其中,这里的第一子系统中的至少一个电子设备可以指与第一主设备不同的电子设备,如第一子系统中的从设备,如屋内摄像头、猫眼摄像头等。
应理解,一个子系统可以包括主设备以及与该主设备连接的至少一个从设备,从设备可以具有用户信息采集能力,如图像采集能力、语音采集能力等,从设备可以将采集到的用户信息发送至主设备,以便主设备根据用户信息识别用户意图,并为用户提供合适的自适应服务。
结合第一方面,在第一方面的某些实现方式中,当所述用户信息为用户图像时,所述方法具体包括:接收第一屋内摄像头发送的第一图像,所述第一图像包括所述第一用户的图像,所述第一屋内摄像头属于所述第一子系统;当根据所述第一图像确定所述第一用户进入房间时,向所述第二主设备发起所述视频通话。
其中,屋内摄像头是指安装于房间内部的摄像头,可以用于采集房间内的图像。当屋内摄像头采集到的图像中包括第一用户的图像时,表示第一用户位于屋内。
结合第一方面,在第一方面的某些实现方式中,当所述用户信息为用户图像时,所述方法具体包括:接收第一屋内摄像头发送的第二图像,所述第二图像包括所述第一用户的图像,所述第一屋内摄像头属于第一子系统;根据所述第二图像信息识别所述第一用户的身体姿势;当根据所述第一用户的身体姿势确定所述第一用户身体姿势异常时,向所述第二主设备发起所述视频通话。
其中,身体姿势异常可以包括身体姿势呈摔倒姿势、蜷缩姿势等非正常姿势。当用户的身体姿势异常时,可以表示该用户发生紧急事件。
根据上述方法,第一子系统中的屋内摄像头采集用户图像,再由第一主设备根据用户图像识别到用户的身体姿势异常时,第一主设备可以确定第一用户发生紧急事件,此时第一主设备可以向第二主设备自动发起视频通话。通过虚拟共享系统中各个电子设备之间的协同配合,能够在用户发生紧急事件时,自发及时地为用户向家人呼救。
结合第一方面,在第一方面的某些实现方式中,当所述第一子系统为车载子系统时,所述方法具体包括:获取所述第一用户的位置信息;当根据所述第一用户的位置信息,确定所述第一用户与目的地之间的距离小于第一阈值时,向所述第二主设备发起视频通话。
结合第一方面,在第一方面的某些实现方式中,所述方法还包括:根据所述用户信息和所述共享配置信息,对所述第一用户进行身份认证;当所述身份认证通过时,确定 所述第一用户为所述虚拟共享系统中的成员。
根据本申请实施例提供的方法,通过根据用户信息对用户进行身份认证,确定用户为虚拟共享系统的成员时,才进行后续操作,可以保障系统和用户安全,避免非虚拟共享系统成员占用系统资源。
结合第一方面,在第一方面的某些实现方式中,所述共享配置信息还包括所述虚拟共享系统中的成员对应的设备使用权限;所述根据用户信息和共享配置信息,向所述第二子系统中的第二主设备发送请求消息,具体包括:所述根据用户信息和所述共享配置信息确定所述第一用户具有使用所述第二子系统中的至少一个第二电子设备的权限;向所述第二子系统中的第二主设备发送所述请求消息。
根据本申请实施例提供的方法,通过根据用户信息和共享配置信息对用户的设备使用权限进行认证,可以保障系统和用户安全,避免为获得相关设备使用权限的人员占用系统资源。
第二方面,提供了一种多设备配合的方法,应用于虚拟共享系统中的第二主设备,所述虚拟共享系统至少包括第一子系统和第二子系统,所述第二主设备属于所述第二子系统,所述方法包括:接收所述第一子系统中的第一主设备发送的请求消息,所述请求消息用于请求所述第二子系统中的至少一个电子设备执行服务操作;响应于所述请求消息,指示所述至少一个第二电子设备执行所述服务操作。
根据本申请实施例提供的方法,通过将处于不同空间的多个子系统网络连接,组建一个虚拟共享系统,并使该虚拟共享系统中的电子设备根据用户意图协同工作,按需为用户自发提供场景自适应服务,能够为分隔两地的人们带来自然通畅的沟通效果,提升用户生活的便捷性。
结合第二方面,在第二方面的某些实现方式中,所述服务操作包括:与所述第一子系统建立视频通话服务操作。
根据本申请实施例提供的方法,通过根据用户意图与其他子系统建立视频通话,能够使位于不同空间子系统中的人们如同处于一个虚拟空间那样,随需沟通,提升用户生活的便捷性。
结合第二方面,在第二方面的某些实现方式中,所述响应于所述请求消息,指示所述至少一个第二电子设备执行所述服务操作,具体包括:根据所述请求消息确定所述服务操作所需的能力;根据所述第二子系统中具有所述能力的电子设备对应的优先级,指示第二电子设备执行所述服务操作,所述第二电子设备为所述第二子系统中具有所述能力的电子设备中优先级最高的电子设备。
根据本申请实施例提供的方法,通过根据电子设备的能力优先级确定执行服务操作的设备,能够保证服务操作的完成效果,使用户获得更好的使用体验。
第三方面,提供了一种多设备配合的系统,至少包括第一子系统和第二子系统,所述第一子系统包括第一主设备,所述第二子系统包括第二主设备,所述第一主设备用于执行如上述第一方面中任一项实现方式所述的方法,所述第二主设备用于执行如上述第二方面中任一项实现方式所述的方法。
第四方面,提供了一种计算机可读存储介质,存储有计算机指令,当所述计算机指令在计算机中执行时,使得如上述第一方面或第二方面中任一实现方式所述的方法得以实现。
第五方面,提供了一种计算机产品,存储有计算机指令,当所述计算机指令在计算机中执行时,使得如上述第一方面或第二方面中任一实现方式所述的方法得以实现。
第六方面,提供了一种芯片,存储有计算机指令,当所述计算机指令在芯片中执行时,使得如上述第一方面或第二方面中任一实现方式所述的方法得以实现。
附图说明
图1是本申请实施例提供的一种多设备配合的系统架构示意图。
图2是本申请实施例提供的另一种多设备配合的系统架构示意图。
图3是本申请实施例提供的一种主设备对应的电子设备的结构示意图。
图4是本申请实施例提供的另一种主设备对应的电子设备的结构示意图。
图5A至图5F是本申请实施例提供的一些图形用户界面示意图。
图6是本申请实施例提供的一种多设备配合的方法的应用场景示意图。
图7是本申请实施例提供的另一种多设备配合的方法的应用场景示意图。
图8是本申请实施例提供的又一种多设备配合的方法的应用场景示意图。
图9是本申请实施例提供的又一种多设备配合的方法的应用场景示意图。
图10A和图10B是本申请实施例提供的一些多设备配合方法的示意性流程图。
图11A和图11B是本申请实施例提供的一些电子设备的细化结构示意图。
图12是本申请实施例提供的另一种电子设备的细化结构示意图。
图13是本申请实施例提供的又一种电子设备的细化结构示意图。
图14是本申请实施例提供的又一种电子设备的细化结构示意图。
具体实施方式
下面结合本申请实施例中的附图对本申请实施例进行描述。
需要说明的是,本申请实施例的实施方式部分使用的术语仅用于对本申请的具体实施例进行解释,而非旨在限定本申请。在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本文中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,在本申请实施例的描述中,除非另有说明,“多个”是指两个或多于两个,“至少一个”、“一个或多个”是指一个、两个或两个以上。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”特征可以明示或者隐含地包括一个或者更多个该特征。
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
本申请实施例的技术方案可以应用于各种通信系统,尤其定位于物联网系统。例如: 全球移动通讯(global system of mobile communication,GSM)系统、码分多址(code division multiple access,CDMA)系统、宽带码分多址(wideband code division multiple access,WCDMA)系统、通用分组无线业务(general packet radio service,GPRS)、长期演进(long term evolution,LTE)系统、LTE频分双工(frequency division duplex,FDD)系统、LTE时分双工(time division duplex,TDD)、通用移动通信系统(universal mobile telecommunication system,UMTS)、全球互联微波接入(worldwide interoperability for microwave access,WiMAX)通信系统、未来的第五代(5th generation,5G)系统或新无线(new radio,NR)等。
随着物联网的发展,目前已经可以通过应用程序(application,APP)、全球广域网(World Wide Web,Web)等方式远程控制家中的物联网(internet of things,IoT)设备,如在办公室或回家途中提前打开家中的热水器等,从而实现对物联网设备的远程管理功能。但此类情形下的智能设备为被动管理模式,例如房间主人不在家时,若有其他人员需要开门,则该人员需首先向房间主人发起远程通信,主人确认身份后,再进行远程开门控制,完成开门动作。物联网设备无法做到理解用户需求,自发地按用户需求提供服务。此外,考虑到在实际生活中,人们由于工作等的需要无法方便地与家人、朋友等见面,人与人之间的沟通不能像大家处在一个房间那样自然和便利。因而,如果能利用不同空间中的电子设备,自发地为人们提供满足自然需求的场景自适应服务,尤其跨越空间提供这些服务,就会让由于物理空间隔离开的人们如同处于同一个虚拟的空间,实现自然便捷的沟通,大大增加人们生活的便利性。
为了实现上述目的,本申请实施例提供了一种多设备配合的方法。该方法通过将位于不同空间的多个场景中的电子设备组成一个虚拟共享系统,电子设备之间互相协同配合,自发地为人们提供跨越空间的服务,从而实现位于不同区域人们可以如同处于同一空间那样顺其自然、随需触发交流的效果。
为更好地理解本申请实施例提供的多设备配合的方法,首先以家庭场景为例对本申请实施例提供的系统架构进行介绍。但应理解,本申请实施例提供的方法不仅限应用于家庭场景,例如还可以应用于位于不同空间的办公场地、特定的公共场地(如医院等)、出行中的车辆等场景,本申请对此不作限定。
示例性的,如图1所示,为本申请实施例提供的一种多设备配合的系统架构示意图。
该系统架构至少包括两个子系统,如子系统1和子系统2。其中,子系统1例如可以包括家庭1中的多个电子设备,子系统2例如可以包括家庭2中的多个电子设备。子系统1和子系统2可以通过网络(例如广域网(wide area network,WAN)(如互联网)等)连接组成一个虚拟共享系统。子系统1和子系统2可以位于不同空间,然而本申请对子系统之间的实际距离不做限定。
在一些实施例中,每个子系统可以包括多种类型的电子设备。例如,一个子系统可以包括一个家庭拥有的多个电子设备,如子系统1包括大屏设备1、猫眼摄像头1、蓝牙音箱1、屋内摄像头1;子系统2包括大屏设备2、猫眼摄像头2、蓝牙音箱2、屋内摄像头2,其中,猫眼摄像头可以指安装于门口的摄像头,可以采集门口一定区域内的图像;屋内摄像头可以指安装于房间内部的摄像头,可以用于采集房间内的图像。此外,子系统还可以包括平板电脑、个人电脑(personal computer,PC)、智能门锁、智能空调、热水器,及子系统成员佩戴的可穿戴设备,例如智能手表、智能手环、智能鞋、智能眼 镜等。本申请对电子设备的具体类型不做限定。
按照计算能力来划分,子系统中的电子设备可以分为主设备(或称富设备)和从设备(或称轻设备、瘦设备)。其中,主设备是指功能较为完备的设备,其计算能力较强,如智能手机、平板电脑、大屏设备(如智慧屏)和个人电脑(personal computer,PC)等;从设备是指可完成特定功能的设备,其计算能力较弱,如智能手环、智能手表、智能鞋等穿戴设备,以及蓝牙音箱、网络摄像头(camera)等IoT设备。为方便描述,在本申请以下实施例中以图1所示的大屏设备1作为子系统1中的主设备(记为主设备1),以图1所示的大屏设备2作为子系统2中的主设备(记为主设备2)为例进行说明,但在实际应用时,子系统中的主设备还可以是其他类型的电子设备。示例性的,本申请实施例中的主设备可以是一个设备,也可以是包括多个设备的分布式主设备,其中,该多个设备分别执行不同的主设备功能,本申请对此不作限定。
在一些实施例中,主设备具有射频模块,可以连接至公网,并通过公网与其他子系统中的主设备进行通信连接,从而使不同空间的子系统关联起来,组成一个虚拟共享系统。例如,大屏设备1和大屏设备2可以通互联网建立通信连接,从而使子系统1和子系统2关联为一个虚拟共享系统。
在一些实施例中,从设备的通信能力较弱,其可能无法直接连接至公网,因而从设备无法直接与其他子系统中的设备进行通信连接,甚至从设备也无法直接与同一子系统中的其他从设备进行通信连接。但同一子系统中的从设备可以连接至本子系统中的主设备上(如在子系统1中,从设备猫眼摄像头1、屋内摄像头1、蓝牙音箱1等可以连接至大屏设备1;在子系统2中,从设备猫眼摄像头2、屋内摄像头2、蓝牙音箱2等可以连接至大屏设备2上),从设备可以借由主设备的通信能力实现与其他设备之间的通信。例如,在一种可能的共享听歌场景中,子系统2中的蓝牙音箱2请求与子系统1中的蓝牙音箱1共享歌曲播放列表时,蓝牙音箱2需要先向大屏设备2发起请求,由大屏设备2与大屏设备1经由公网通信,再由大屏设备1指示蓝牙音箱1共享歌曲播放列表,之后通过反向路径向蓝牙音箱2共享该歌曲播放列表。虽然蓝牙音箱2可能无法直接与蓝牙音箱1通信,但通过两个子系统中的主设备作为通信桥梁,蓝牙音箱2和蓝牙音箱1也可以实现跨空间共享歌曲列表。
应理解,同一子系统中从设备和主设备的通信连接方式可以有多种,例如通过有线局域网、无线局域网(wireless local area network,WLAN)(如蓝牙(bluetooth)、无线保真(Wireless Fidelity,WiFi)、紫蜂(zigbee)等),本申请对此不作限定。
在一些实施例中,主设备具有较强的计算能力,其可以基于子系统中设备的能力进行任务分发,如利用自身的计算能力选择合适的辅助设备利用自身的特定能力协同完成事件处理。例如,在一种可能的智能开锁场景中,当子系统1中的猫眼摄像头1捕捉到门口的用户图像时,可以将用户图像发送给大屏设备1,大屏设备1基于一定规则判断后,指示智能门锁(图1未示出)开锁,也即主设备根据本子系统中各个电子设备的能力,将智能开锁任务分配给合适的设备(智能门锁)完成。又例如,在一种可能的语音通话场景中,子系统2中的大屏设备2请求与子系统1建立语音通话,则大屏设备1可以根据子系统1中的各个电子设备的语音播放能力和音频采集能力等选择合适的电子设备执行该语音通话任务。
应理解,上述所说的辅助设备可以包括本子系统中的设备(包括主设备、从设备)。 可选地,该辅助设备也可以包括虚拟共享系统中其他子系统中的设备,如其他子系统中具有独立通信功能的电子设备(如智能手表、手机等)。通过选择合适的辅助设备,可以使多个设备协同工作,为用户按需提供场景自适应服务。
还应理解,相对于主设备来说,从设备的计算能力较弱,其可能只具有某一个或几个方面的特定能力。例如,智能门锁具有智能开锁的能力,猫眼摄像头、屋内摄像头具有图像(或视频)采集能力,蓝牙音箱具有音频播放能力等。然而,虽然主设备和从设备具有能力上的差异,但主设备和从设备不是一个绝对的概念,主设备相对于从设备而言,某些能力可以更强(如通信能力、计算能力等),但就某一特定功能而言,从设备具备的能力可能会超过主设备。例如,蓝牙音箱的放音功能高过大屏设备,因此用户在家里会更喜欢用蓝牙音箱播放音乐;家庭智慧屏的屏幕很大,其视频播放效果好过智能手机,用户居家时更喜欢在智慧屏上看电影。
根据上述系统架构,处于不同空间的多个子系统能够网络连接,组建一个虚拟共享系统,该虚拟共享系统中的多种类型的电子设备进而可以协同工作,按需为用户自发提供场景自适应服务,提升用户生活的便捷性。
在本申请实施例提供的多设备配合的方法中,每个子系统可以预先收集本子系统的设备信息和成员信息。为便于理解,本申请实施例以子系统1是老人家庭,成员包括爷爷、奶奶,设备包括大屏设备1(作为子系统1的主设备)、屋内摄像头1、猫眼摄像头1、老人佩戴的智能手表1、智能鞋等;子系统2是儿童家庭,成员包括爸爸、妈妈、儿童,设备可以包括大屏设备2(作为子系统2的主设备)、屋内摄像头2、猫眼摄像头2、儿童的智能手表2等为例进行说明。其中,本申请列举的成员和设备均为示例性举例,在实际应用中,成员和设备不限于本申请实施例所列举的类型。
在一些实施例中,子系统中的设备信息可以包括电子设备的标识(identificaton,ID)、访问地址(如媒体控制存取位址(media access control address,MAC))、能力等。子系统中的成员信息可以包括子系统的成员身份、成员ID、可使用的设备权限等。示例性的,老人家庭的设备信息可以如表1所示,老人家庭中的成员信息可以如表2所示。
表1:
（表格以图像形式提供）
表2:
（表格以图像形式提供）
在一些实施例中,子系统的设备信息和成员信息可以由本子系统的主设备进行收集,该主设备可以将本子系统的设备信息和成员信息分享给其他子系统中的主设备,同时也可以获取其他子系统中主设备分享的设备信息和成员信息。主设备可以基于多个子系统共享的设备信息和成员信息,形成虚拟共享系统的统一共享配置(profile)信息(下称共享配置信息)。示例性的,虚拟共享系统的共享配置信息中的设备信息和成员信息可以分别如表3和表4所示。
表3:
（表格以图像形式提供）
表4:
（表格以图像形式提供）
各子系统中的主设备可以根据共享设置信息中的成员信息(如用户身份、设备使用权限等)按需发起自适应通信和电子设备管理,使子系统中的电子设备协同配合,自动按用户需求提供相关场景下最恰当的服务。
应理解,本申请实施例所说的按用户需求提供的服务是指提供符合用户日常生活中自然需要的服务。自然需要例如可以包括:见到家人、熟人要打招呼的自然需要;见到陌生人到达自己的领地(如家、办公室),要确认陌生人身份的自然需要;家中老人关心孙子放学回家路上情况的自然需要;家中老人发生意外需紧急通知家人或医护人员进行救治的自然需要等。针对这些自然需要,本申请实施例提供的方法,能使虚拟共享系统根据预设规则选择最优方式并自动提供合乎场景的适配服务。
图1从设备层面介绍了本申请实施例的系统架构,以下结合图2从功能层面介绍多设备配合系统的构成。
如图2所示,是本申请实施例提供的另一种多设备配合的系统架构示意图。图2中的子系统1可以与图1中的子系统1对应,子系统2可以与图1中的子系统2对应。
应理解,从功能层面来说,为保证子系统的正常运行,每个子系统需要至少包括设备中心、安全中心、感知中心、应用中心、通信中心和存储中心。
在一些实施例中,设备中心可以用于调度本子系统中所有可用的电子设备。其中,可用的电子设备可以是指当前连接在子系统中,能够利用电子设备自身支持的功能执行待处理事件的设备。不同的电子设备在某一特定功能上可能具备优于其他设备的能力,每个电子设备可以作为子系统中的一个部件存在,用于实现至少一项特定的功能,多个电子设备之间协同配合能够为用户提供场景自适应服务,使得用户在不同场景下体验到电子设备自动提供的适配服务。电子设备可以在设备中心注册自己的能力,设备中心可以按照能力将它们划分归属为不同能力类别的部件集,各个部件集可以实时自动组合,提供完成特定事件(或提供特定服务)的能力。
示例性的,根据能力类别划分,子系统中的部件集可以包括视觉类部件集、听觉类部件集、图像采集类部件集、控制类部件集、穿戴类部件集等。其中,视觉类部件集中的电子设备可以用于提供图像显示的能力或视频播放能力,包括的电子设备如大屏设备、投影仪、PC等;听觉类部件集中的电子设备用于提供音频播放的能力,包括的电子设备如大屏设备、蓝牙音箱等;图像采集类部件集中的电子设备用于提供实时采集周边图像 的能力,包括的电子设备如摄像头(包括猫眼摄像头、屋内摄像头等)等;控制类部件集中的电子设备用于提供至少一种智能家居服务能力,包括的电子设备如智能门锁、空调、智能热水器等;穿戴类部件集中的电子设备用于提供采集用户的身体体征数据的能力,包括的电子设备如智能手表、智能手环、智能鞋等。
应理解,本申请实施例上述对部件集的划分仅为示例,在实际应用时,还可以根据需要进行更细或更多的能力类别划分,获取更多的部件集,本申请对此不作限定。
不同能力类别部件集中的电子设备,按照对应类别能力的不同,在相应的部件集中处于不同的优先级。部件集中的电子设备可以按照优先级顺序进行排序,最能提供部件集对应能力的电子设备排在最优的位置,例如视觉类部件集中视频显示能力最强的大屏设备排在最优的位置。当需要调用某一类别能力完成待处理事件时,设备中心可以按照优先级顺序在对应的部件集优先调用优先级高的电子设备提供此类别的能力。示例性的,电子设备优先级可以按照以下公式(1-1)确定:
电子设备的优先级=电子设备处理能力因子×电子设备处理效率因子×电子设备用户体验因子×电子设备性能功耗因子(1-1)
其中,电子设备处理能力因子可以指该电子设备具有的与部件集类别相关的能力,比如对于视觉类部件集中的电子设备,该处理能力因子可以包括图像分辨率等参数;对于听觉类部件集中的电子设备,该处理能力可以包括音频的信噪比等参数。处理效率因子可以指电子设备执行待处理任务时的效率,可以包括如连接网络的类型(如蜂窝网、宽带、WiFi)、电子设备的处理器性能(如图像处理器、音频处理器性能)等。用户体验因子可以包括电子设备的屏幕尺寸、扩音器尺寸等影响用户视听体验的设备参数。电子设备性能功耗因子可以包括电池续航能力、电子设备内存大小参数等。
在一些实施例中,在不同场景中通过公式(1-1)计算电子设备的优先级时,可以对电子设备对应的相关参数进行处理。以视觉类部件集中的大屏设备、平板电脑、手机这三者优先级的计算为例,大屏设备、平板电脑、手机对应的各项因子可以如表5所示。
表5:
（表格以图像形式提供）
对于无法直接用参数数值表示的因子,可以根据不同电子设备的性能或能力等设置相应的预设值(如1、2、3等数值)。例如,对于处理效率因子,大屏设备连接的网络类型为有线带宽、平板电脑连接的网络类型是WiFi、手机连接的网络类型为蜂窝网络,由于一般来说,有线带宽的网络性能好于WiFi,WiFi的网络性能好于蜂窝网络,因而可以采用预设值3表示大屏设备的处理效率因子,采用预设值2表示平板电脑的处理效率因子,采用预设值1表示手机的处理效率因子。类似地,对于上述电子设备的性能功耗 因子中的连接电源的类型,可以采用预设值2表示大屏设备的性能功耗因子,采用预设值1分别表示平板电脑和手机的性能功耗因子。
此外,对于能够用参数数值表示因子的优先级相关项,一种可选的方式为直接采用各项因子对应的参数计算电子设备的优先级,例如将上述大屏设备的图像分辨率1080直接带入公式(1-1)对应的处理能力因子项中;另一种可选的方式为对参数进行处理,将不同电子设备的因子对应的参数归一化为统一量纲的数值,例如对于用来表示用户体验因子的显示屏尺寸来说,由于大屏设备的显示屏尺寸(55英寸)与平板电脑的显示屏尺寸(10英寸)、手机的显示屏尺寸(6.1英寸)数值相差较大,若直接将显示屏尺寸数值带入公式(1-1)则优先级结果会由用户体验因子主导,无法体现其他项对优先级结果的影响,因而,可以根据不同电子设备的显示屏大小进行数据处理,如大屏设备的用户体验因子可以采用数值3表示,平板电脑的用户体验因子可以采用数值2表示,手机的用户体验因子可以采用数值1表示。
示例性的,对电子设备的不同因子经过上述处理后,对应的数值可以如表6所示。
表6:
（表格以图像形式提供）
则根据公式(1-1)计算的电子设备优先级结果分别为:
大屏设备的优先级=1×3×3×2×1=18;
平板电脑的优先级=1×2×2×1×2=8;
手机的优先级=1×1×1×1×3=3;
因此,对于上述三种电子设备,其在视觉类部件集中的优先级排序为:大屏设备>平板电脑>手机,当需要调用视觉类部件集中的电子设备时,可以优先调用大屏设备。
应理解,上述对电子设备各类因子设置的预设值可以灵活设置,本申请对此不作限定。针对不同场景所需的能力可能不同,上述公式(1-1)可以允许某些因子不存在。
还应理解,本申请实施例之所以引入部件集的概念,是因为能够提供相同或相似能力的电子设备可能有多个,为保证某一功能得以顺利实现,将具有该功能的电子设备划分在同一类别的部件集中。为了使得某一功能的实现呈现最优的效果,避免功能实现过程中的执行错误,将部件集中的设备按照能力进行优先级排序,并按照优先级顺序选择一个设备实现该功能。
在一些实施例中,安全中心可以用于提供加密鉴权等安全校验功能,保障虚拟共享系统应用的操作、通信以及对电子设备管理等的安全可靠性。例如,安全中心可以对用户身份进行认证鉴权,以确认用户是否可以使用系统,或以何种权限接入和使用系统。安全中心可以设置于至少一个能提供安全能力的设备上,如手机、平板电脑、大屏设备、 PC等都可以作为安全能力提供者而成为安全中心的一个部件。
在一些实施例中,存储中心存储有虚拟共享系统的共享配置信息(如表3和表4所示的共享配置信息),该共享配置信息可以包括虚拟共享系统中所有设备和成员的信息,用于供子系统中的安全中心、应用中心等查询,以完成用户身份认证鉴权以及相关应用的调用。示例性的,存储中心例如可以包括图3中的内部存储器121以及位于处理器110中的存储器等。
在一些实施例中,感知中心可以根据预设判断规则和用户信息、用户通用行为模型等,综合判断用户的意图。感知中心可以设置于能提供感知服务能力的设备上,如手机、平板电脑、大屏设备、PC等。
在一些实施例中,应用中心可以基于感知中心对子系统当前状况的感知,自动选择相应的应用(或功能)并主动发起应用。应用选择时,可以基于系统的共享配置信息,以选择与系统中的哪个子系统进行通信等。发起的应用可以通过安全中心校验,通过后经由通信中心与其他子系统进行通信。
在一些实施例中,通信中心可以提供子系统与其他至少一个子系统进行无线通信的能力。示例性的,通信中心例如可以包括图3所示的天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等。
应理解,上述功能中心可以设置于一个主设备上,或者也可以分布式主设备上。例如,各个中心的功能均由主设备中的相关部件提供;或者,各个中心可以分布在子系统中不同的设备上,各个中心可以联合为一个分布式的虚拟主设备,例如当上述各个中心无法集成在子系统中单独一个电子设备上时,可以由多个设备提供不同中心的功能,即多个设备协同完成子系统中各个中心的任务。其中,各个中心可以具有独立的接口,以实现各个中心之间的通信。
示例性的,如图3所示,为本申请实施例提供的一种电子设备的结构示意图。该电子设备100可以是上述图1、图2所示子系统中的主设备(如大屏设备1或大屏设备2)对应的电子结构示意图。
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit, GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现电子设备100的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他终端,例如AR设备等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为终端供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B 等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。显示屏194用于显示图像,视频等。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。视频编解码器用于对数字视频压缩或解压缩。NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。陀螺仪传感器180B可以用于确定电子设备100的运动姿态。磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。加速度传感器180E可检测电子设 备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别终端姿态,应用于横竖屏切换,计步器等应用。接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。指纹传感器180H用于采集指纹。温度传感器180J用于检测温度。触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。骨传导传感器180M可以获取振动信号。
此外,电子设备100还包括气压传感器180C和距离传感器180F。其中,气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
图4是本申请实施例的主设备对应的电子设备100的软件结构示意图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)系统层以及内核层。
应用程序层可以包括一系列应用程序包。
如图4所示,应用程序包可以包括日历、地图、WLAN、音乐、通知、图库、蓝牙、视频等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图4所示,应用程序框架层可以包括窗口管理器,内容提供器,通话管理器,资源管理器等。此外,上述所说的子系统中的通信中心、应用中心也位于应用程序框架层。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
通话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框 架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统层可以包括上述所说的子系统中的设备中心、安全中心、感知中心、存储中心等多个功能模块以及TCP/IP协议栈、蓝牙/WiFi协议栈等。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动以及编解码等。
示例性的,图5A至图5F提供了一些图形用户界面(graphical user interface,GUI)示意图。这里以子系统中的主设备为大屏设备,且以儿童家庭(可对应上述子系统2)中的用户(如爸爸)登录主设备进行设备管理和成员管理为例进行说明。
在一些实施例中,大屏设备(如智能电视、智慧屏等)可以安装有用于管理多设备配合的特定应用程序(application,APP),该特定应用程序为第三方应用程序或者为大屏设备自带的应用程序(如智慧生活APP)。示例性的,用户在大屏设备上点击智慧生活APP图标之后,响应于用户的点击操作,大屏设备可以显示如图5A所示的智慧生活系统注册/登录界面。当用户已经注册有智慧生活系统的账号及密码时,可在相应的账号和密码输入框填写相应的信息,以登录该APP。
响应于用户输入的账号和密码,大屏设备可以显示如图5B所示的显示界面,该界面可以为智慧生活系统开启界面,用以显示目前该大屏设备所在的子系统(如“我的家庭”(也即儿童家庭)子系统,对应于图1和图2中的子系统2),以及大屏设备连接的其他子系统(如老人的家子系统,对应于图1和图2中的子系统1)。当用户点击“我的家庭”图标501时,大屏设备可以显示如图5C所示的界面,该界面包括儿童家庭中的设备管理图标503以及成员管理图标504。
示例性的,当用户点击设备管理图标503时,大屏设备可以显示如图5D所示的界面,该界面为儿童家庭的设备管理界面。通过该界面用户可以查看当前大屏设备所在的儿童家庭中包括的电子设备(如大屏设备2、屋内摄像头2、智能门锁2、猫眼摄像头2等)以及各个电子设备的在线状态,如图5D所示的“已连接”用于表示对应的电子设备当前与大屏设备连接,处于可用状态。若需要在系统中添加儿童家庭中新的电子设备,则用户可以点击“添加新设备”一栏中的添加控件,大屏设备会显示相应的设备添加页面(图5A至图5F未示出)。添加新设备的方式可以有多种,例如在一种可选的方式下,用户可以手动输入需要添加的电子设备的ID(如名称)、访问地址(如MAC地址)以及设备的能力;或者,在另一种可选地方式下,大屏设备响应于用户点击添加新设备的操作,可以扫描周围的电子设备,若发现可用的新设备(如和大屏设备连接在同一WiFi、与大屏设备建立有蓝牙连接或者与大屏设备有线连接等的新设备),则可以自动添加该新设备。
此外,在图5D所示的界面还可以包括老人家庭(即子系统1)中的电子设备。继续参考图5D,老人家庭中当前在线设备包括4个,分别为老人家中的大屏设备1、屋内摄像头1、智能门锁1、猫眼摄像头1,其中,大屏设备1后的控件显示“已连接”状态,表示该大屏设备1与儿童家庭中的大屏设备2处于通信连接中。可选地,两个子系统可以共享一个用户账号,用户在子系统2的大屏设备2上也可以对子系统1中的设备进行管理,例如用户通过点击老人的家所对应的“添加新设备”一栏中的添加控件,可以指示老人家中的大屏设备添加新的设备,其中,添加方式可以与上述所说的子系统2中的 添加方式类似,此处不再赘述。
在一些实施例中,用户还可以在图5C所示的界面点击成员管理图标504。响应于用户点击成员管理图标的操作,大屏设备可以显示如图5E所示的成员管理界面,该成员管理界面可以包括用户所在的儿童家庭中的成员信息以及老人家庭中的成员信息,例如,“我的家庭”成员可以包括爸爸、妈妈和儿童。用户可以通过点击成员对应的权限管理控件,对允许成员使用的设备权限进行设置,例如,当用户点击爸爸对应的权限管理控件时,大屏设备可以显示如图5F所示的设备权限设置界面。该设备权限设置界面包括子系统包括的各个电子设备以及各个设备具有的能力,用户可以通过点击选择对应的控件,为用户设置对应的权限,其中,选择对应功能后,该功能对应的选择控件例如可以显示“√”。此外,用户还可以通过点击添加设备权限栏的添加控件为用户新增设备权限。
应理解,大屏设备2所显示的设备功能(如图5F所示)可以由各个电子设备预先在大屏设备2上注册。例如,电子设备与大屏设备2建立连接后,电子设备可以在大屏设备2上进行能力注册,大屏设备2可以基于各个电子设备对应的能力在设备权限设置界面显示对应的功能。
在一些实施例中,各个电子设备向大屏设备2进行能力注册时,可以同时向大屏设备2发送电子设备的ID(如名称)、访问地址(如MAC地址)等信息。大屏设备2可以基于这些信息建立本子系统的设备配置信息(如表1所示)。类似地,大屏设备2在接收到用户添加的本子系统的成员信息时,也可以建立本子系统的成员配置信息(如表2所示)。
在一些实施例中,当本子系统中的主设备与其他子系统(如子系统1)中的主设备建立通信连接后,可以向子系统1中的主设备发送本子系统的设备和成员配置信息,并且接收子系统1的主设备发送的该子系统1中的设备和成员配置信息。之后,各个子系统的主设备可以基于本子系统的设备和成员配置信息以及其他子系统分享的设备和成员配置信息生成共享配置信息(如表3和表4所示),并将该共享配置信息存储于主设备。
在一些实施例中,不同子系统中的主设备建立通信连接的方式例如可以是:用户在子系统1的主设备上输入子系统2的主设备的访问地址,由主设备1通过射频模块与主设备2建立通信连接。示例性的,子系统主设备之间的通信类型可以是点对点(peer to peer,P2P)通信,该P2P通信建立的具体流程可以参见现有技术,此处不再详述。
应理解,在图5A至图5F所示的实施例中,大屏设备可以具有触敏显示屏,用户可以通过触摸方式与大屏设备交互。在实际应用时,大屏设备也可以接收用户通过其他方式进行的交互操作,例如接收用户通过遥控器输入的信息等。本申请对此不作限定。
还应理解,图5A至图5F所示的主设备界面仅为示例,在实际应用时,相关界面还可以在其他具有显示屏的设备上显示(如平板电脑、手机等),且相关界面呈现的具体内容以及呈现方式也可以采用其他形式,例如,当用户登录APP时,还可以采用人脸识别登录、语音识别登录等方式,本申请对此不作限定。
以下结合图1和图2所示的系统架构以及图3和图4所示的电子设备,以一些可能的应用场景为例对本申请实施例提供的多设备配合的方法进行介绍。
场景一:老人查看儿童放学轨迹。
如图6所示,为本申请实施例提供的场景一的示意图。
在一些实施例中,在图6所示的老人家庭(即上述子系统1)中,大屏设备1作为 主设备。该大屏设备1可以具有子系统1所需要的设备中心、安全中心、存储中心、应用中心、通信中心等。该大屏设备1与子系统1中的其他电子设备(如猫眼摄像头1、屋内摄像头1、智能门锁1等)通过有线或无线方式通信连接,其中,无线通信连接方式例如可以包括通过蓝牙连接、Wi-Fi连接等方式,本申请对此不作限定。
当儿童快放学时,爷爷奶奶需要先回到自己的家(老人家庭),以便借助家庭中的电子设备了解儿童放学回家情况。当爷爷奶奶到达家门口时(如图6所示位置1),门口的猫眼摄像头1会捕捉到爷爷奶奶的图像,之后猫眼摄像头1通过有线或无线的方式将爷爷奶奶的图像发送至房间内的大屏设备1(即步骤S601)。大屏设备1获取爷爷奶奶的图像后,进行图像识别,确定身份,并根据共享配置信息对爷爷奶奶进行设备使用的权限认证(如表4中爷爷奶奶对应的允许智能门锁1自动开锁的权限);认证通过,则指示智能门锁1开启(步骤S602),使得老人无需操作门锁即可进入房间内。
老人进入房间内后(如图6所示的位置2),屋内摄像头1捕捉到老人的图像,并将老人图像发送至大屏设备1(步骤S603)。大屏设备1根据屋内摄像头1发送的老人图像获知老人已经进入房间内,之后根据其他辅助信息(如当前时间属于预设的儿童放学的时间段)以及共享配置信息获取老人的权限为获取儿童的儿童智能手表的位置和轨迹信息,则大屏设备1自动匹配至儿童佩戴的儿童智能手表。大屏设备1可以通过互联网向儿童智能手表请求位置和轨迹信息(步骤S604),儿童智能手表响应于该请求,向大屏设备1发送此时的位置以及特定历史时间段内的历史轨迹(步骤S605)。大屏设备1获取儿童智能手表的位置及历史轨迹后,自动向用户显示对应的信息(如图6大屏设备显示的轨迹S606)。可选地,大屏设备1还可以根据儿童智能手表当前与家庭的距离以及儿童的速度预测到家时间,并显示相关信息(如图6大屏设备显示的“预计还有10分钟到家”),以便老人可以了解孩子在放学路上的大致情况。
在一些实施例中,如果有多个儿童,每个儿童分别佩戴儿童智能手表,则老人家庭的子系统可以自动匹配至多个对应的儿童智能手表,并分别获取这些儿童智能手表的位置和轨迹信息。
应理解,通过上述过程,老人无需自动发起设备管理,即可体验门锁自动开门,进门后大屏设备1自动显示儿童放学轨迹的无感操作体验。该过程在满足用户自然需求的同时,不需要用户掌握具体的设备操作技能,尤其适用于老人、儿童等设备操作能力较低的人群。
场景二:儿童到家后,儿童家庭自动建立与老人家庭的视频通话。
示例性的,如图7所示,为本申请实施例提供的场景二的示意图。该场景二描述如下:
当儿童到达家(儿童家庭)门口时,门口的猫眼摄像头2捕捉到儿童的身影。猫眼摄像头2将儿童图像发送至儿童家庭中的大屏设备2(主设备)(即步骤S701)。大屏设备2获取儿童图像后,对儿童进行身份认证,并基于共享配置信息获取儿童的权限(如表4中儿童对应的允许智能门锁自动开锁的权限)。之后大屏设备2指示智能门锁2开锁(即步骤S702)。智能门锁2响应于大屏设备的指示自动开锁,为儿童自动开门。
儿童进入房间后,屋内摄像头2捕捉到儿童图像,并将儿童图像发送至大屏设备2(即步骤S703)。大屏设备2根据屋内摄像头2发送的儿童图像获知儿童已经进入房间内,之后可以根据其他辅助信息(如当前时间属于预设的儿童放学时间段)以及共享配 置信息自动与老人家庭中的大屏设备1建立视频通话,将儿童图像和音频信息发送至老人家庭中的大屏设备1(即步骤S704),并接收老人家庭的大屏设备1发送的视频图像和音频信息(即步骤S705),实现两个子系统自动为老人和儿童建立视频通话。
应理解,通过上述过程,老人和儿童无需自主操作电子设备,即可实现在儿童放学回家后与老人打招呼的自然交流体验,老人、儿童可以如同处于同一空间自然、随需地交流。
根据本申请实施例提供的方法,通过将不同地域子系统组建为一个大的虚拟共享系统,并使子系统中的电子设备按照场景自适应为用户提供服务,能使得由于物理空间隔开的成员如同处于一个虚拟空间,从而使子系统成员之间获得自然流畅、随需触发的交流效果,增进成员之间的了解和关爱。
场景三:发生紧急事件时的紧急呼救。
示例性的,如图8所示,为本申请实施例提供的场景三的示意图。在该场景下,设备中心还可以用于管理不同穿戴设备(如智能手表、智能手环、智能鞋、智能眼镜等),该穿戴类设备的各个传感器可以对用户生理体征进行实时感知,以判断用户是否发生异常事件。若用户发生异常事件需要紧急求助,则通过系统自动发起与其他子系统的通信。示例性的,用户生理体征例如可以包括:脉搏、呼吸、心跳、血压、瞳孔等。该场景三的描述如下:
一种可能的场景为:当老人(如爷爷)在家遇到紧急事件(如突发疾病)时,老人佩戴的智能手环可以检测到老人生理体征数据异常,并识别到突发疾病;智能手环可以将检测到的异常的生理体征数据连同突发疾病事件一起上报给大屏设备1(即S801)。大屏设备1的感知中心根据突发异常事件感知到老人身体异常变化后,通过共享配置信息中允许老人使用自动视频通话的权限与儿童家庭自动建立视频通话(即步骤S802)。当视频通话自动建立后,大屏设备2可以根据共享配置信息(如表4所示的允许查看老人生理体征)向大屏设备1请求老人的生理体征数据。大屏设备1响应于请求可以将老人发生的突发疾病以及老人的异常生理体征数据发送给大屏设备2。大屏设备2可以显示老人家庭紧急事件提醒,例如如图8所示,提醒“爷爷血压明显升高、心跳加速,需及时就医”等。
另一种可能的场景为:当老人在家不慎滑倒时,老人家庭的屋内摄像头1可以获取到老人的身体呈滑倒姿势的图像,并将该滑倒图像发送至大屏设备1。大屏设备1可以基于图像信息识别到老人的身体姿势异常,也即感知到老人发生了紧急事件;之后,大屏设备1可以根据虚拟共享系统的配置信息中老人对应的设备使用权限(如发生紧急事件时,允许自动建立视频通话),与儿童家庭自动建立视频通话进行紧急呼救。
又一种可能的场景为:老人家庭子系统可以与医疗子系统关联。当老人发生紧急事件时,老人家庭的设备检测到该紧急事件后,上报给大屏设备1。该大屏设备1可以基于虚拟共享系统中老人对应的设备使用权限(如发生紧急事件时,允许自动呼叫医院急诊)自动发起与医疗系统之间的通信,从而使医疗人员及时实施救助。
应理解,通过上述方法,当子系统成员发生紧急事件后,子系统可以感知该紧急事件,并且及时对其他相应子系统中的成员发起紧急呼救,从而使远处的家人或医疗看护人员能了解老人当前状况,及时组织救助。
场景四:行车过程中的自发通信。
示例性的,如图9所示,为本申请实施例提供的一种场景四的示意图。在该场景四中,虚拟共享系统中的一个子系统为车载子系统,另一个子系统为家庭子系统(如儿童家庭子系统)。该场景四的描述如下:
在一个可能情形下:车内的主设备(如车载电脑)监测到车辆即将到达预设的目的地时,可以自动发起与对应目的地子系统之间的视频通话。例如,车辆行驶前,爸爸可以在车载电脑输入目的地为儿童家庭;在车辆行驶过程中,车载电脑定位模块可以实时获取车辆位置,当车辆距离儿童家庭小于某一阈值(如1Km)时,车载电脑可以自动发起与儿童家庭中大屏设备2的视频通话(即步骤S901),告知家人马上安全到达。
在另一个可能的情形下:当在行驶过程中发生意外事件(如驾驶员突发疾病、发生车祸等)时,车辆中的摄像头可以获取相关事件的图像,并传输至车载电脑。车载电脑根据获取的图像判断发生意外事件,之后车载电脑可以根据共享配置信息中允许驾驶员使用的设备权限(如当发生意外事件时,允许驾驶员所在车辆自动与其他子系统建立通信)向儿童家庭子系统或者保险救助子系统自动发起视频通信,告知家人或保险救助人当前驾驶员处于异常情况,以便相关人员组织救助。
通过上述过程,车辆远行时,车载子系统可以按需向其他子系统发起通信,使得驾驶员与家人或保险救助人等人员如同处于同一空间自然、按需发起交流,不仅可以提升用户体验,更能够在发生意外事件时,通知相关人员及时施救,保障用户安全。
示例性的,如图10A所示,为本申请实施例提供的一种多设备配合的方法的示意性流程图。该流程中的步骤可以由虚拟共享系统中的第一主设备来执行,该虚拟共享系统至少包括第一子系统和第二子系统,该第一主设备属于第一子系统的主设备。该流程可以包括以下步骤:
S110,获取第一用户的用户信息,该第一用户属于所述虚拟共享系统中的成员。
其中,第一主设备例如可以对应上述介绍的主设备1或主设备2;第一用户例如可以对应上述介绍的家庭成员,如老人、儿童等。
在一些实施例中,第一主设备获取第一用户的用户信息可以包括:第一主设备获取该第一主设备自身采集到的第一用户的用户信息,例如,当第一主设备为带有摄像头的大屏设备时,第一主设备可以通过摄像头采集用户图像;或者,第一主设备接收第一电子设备发送的用户信息,该第一电子设备可以属于第一子系统,可以是第一子系统中具备信息采集能力的任一从设备,如猫眼摄像头、屋内摄像头、麦克风等。
示例性的,用户信息例如可以包括用户图像。可选地,用户信息也可以包括用户的语音、用户的生物特征(如指纹)等。
具体地,该用户信息可以是图6实施例中由猫眼摄像头1采集的老人图像;或者,可以是图7实施例中,由猫眼摄像头2采集到的儿童图像;或者,可以是图8实施例中,由智能手环采集到的老人的生命体征信息;或者,可以是图9实施例中,由车载电脑获取到的用户的位置信息等。
在一些实施中,当第一主设备获取第一用户的用户信息后,可以根据用户信息和共享配置信息,对第一用户进行身份认证;当该身份认证通过时,确定第一用户为虚拟共享系统中的成员。应理解,本申请中的每个子系统包括至少一个成员的注册信息和至少一个设备的注册信息,因而共享配置信息可以包括虚拟共享系统中每个子系统对应的成员信息和设备信息,其中,共享配置信息可以如表3和表4所示。
S120,识别与用户信息关联的用户意图,该用户意图包括使第二子系统中的至少一个第二电子设备执行服务操作。
在一些实施例中,第一主设备识别与用户信息关联的用户意图,可以具体包括:第一主设备首先根据获取的第一用户的用户信息,确定第一用户当前的状态;然后,可以根据第一用户当前的状态,确定与该状态对应的第一用户的用户意图。
在一些实施例中,用户状态可以包括:所述第一用户进入房间内;所述第一用户生命体征异常;所述第一用户身体姿势异常;所述第一用户与目的地之间的距离小于第一阈值等。
在一些实施例中,用户状态可以与用户意图之间具有对应关系。例如,当第一用户当前的用户状态满足上述任意一种时,对应的用户意图为与第二子系统建立视频通话。比如,在图6实施例中,主设备1首先根据猫眼摄像头1发送的老人图像确定老人的状态为位于门外;之后,若主设备1又接收到房间内的屋内摄像头1发送的老人图像,则主设备1可以确定老人的状态变为由门外进入房间内;根据老人当前已进入房间内的用户状态,主设备1可以确定对应的老人的意图为与第二子系统中的儿童手表建立通信,以获知儿童的放学轨迹。
再比如,在图7对应的实施例中,主设备2首先根据猫眼摄像头2和屋内的屋内摄像头2发送的儿童图像,确定儿童当前的状态为进入屋内。根据儿童当前的状态,主设备2可以判断儿童接下来的意图为与子系统1建立视频通话,以向爷爷奶奶打招呼。
又比如,在图8对应的实施例中,主设备1根据接收到的老人的智能手环发送的生命体征异常数据,可以确定老人当前处于生命体征异常状态;根据该当前的状态,主设备1可以判断老人此时的意图为与子系统2建立视频通话,以向家人求助。
又比如,在图9对应的实施例中,当车载子系统3的主设备(如车载电脑)根据用户位置确定用户即将到达目的地的状态后,可以判断用户的意图为与目的地的家人家庭(即子系统2)建立视频通话,以提前告知家人马上到家。
在一些实施例中,主设备根据用户信息确定用户状态后,还可以结合辅助信息,以更准确地确定用户的意图。其中,辅助信息例如可以包括:日期信息、时间信息等。
比如,在图6实施例中,主设备1首先根据猫眼摄像头1和屋内摄像头1发送的老人图像,确定老人当前的用户状态为进入房间内;同时,结合此时为放学时间,主设备1可以判断老人的意图为想要与儿童手表通信,以获知儿童的放学轨迹。
在一些实施例中,第一主设备可以建立或共享用户通用行为模型,该用户通用行为模型包括用户信息与用户意图之间的对应关系。示例性的,如表7所示,该用户通用行为模型例如可以包括用户标识、时间、地点、用户意图等之间的映对应关系。第一主设备可以根据用户信息确定关联的用户意图。其中,用户信息中的时间可以基于电子设备采集用户信息时的时间确定,用户信息中的地点可以根据采集用户信息的电子设备的类型来确定,例如,如果用户信息是由猫眼摄像头采集的,则确定用户位于门外;如果用户信息是由屋内摄像头采集的,则确定用户位于屋内。
表7:
（表格以图像形式提供）
在一些实施例中,第一主设备可以记录预设历史时段内的用户信息和用户意图之间的对应关系,若某一类型的用户信息和用户意图的对应关系的连续记录次数达到预设阈值,则可以该用户信息和用户意图添加至用户通用行为模型中。例如,主设备连续10天在12:00~13:00记录到屋内摄像头采集的老人图像与空调开启并设置为26℃,则主设备可以将用户标识(老人)、时间(12:00~13:00)与用户意图(空调开启且设置为26℃)之间的对应关系添加至用户通用模型,则下一次主设备再在12:00~13:00接收到屋内摄像头采集的老人图像后,可以自动指示空调开启且设置为26℃。
应理解,表7所示的映射关系仅为一个示例,在实际应用中,该用户通用行为模型还可以包括更多其他项,本申请对此不作限定。通过基于用户信息与常用的设备服务的通用模型,电子设备可以自动发起与用户期望匹配的服务,提升用户的无感操作体验。
S130,根据用户信息和共享配置信息,向第二子系统中的第二主设备发送请求消息,该请求消息用于请求服务操作,共享配置信息包括虚拟共享系统中每个子系统对应的成员信息和设备信息。
在一些实施例中,第一主设备确定用户意图后,还可以对第一用户的设备使用权限进行验证,该验证过程可以包括:第一主设备根据用户信息和共享配置信息确定第一用户具有使用第二子系统中的至少一个第二电子设备的权限;第一主设备向第二子系统中的第二主设备发送请求消息。
在一些实施例中,第二主设备接收第一子系统中的第一主设备发送的请求消息;并响应于该请求消息,指示至少一个第二电子设备执行服务操作。
在一些实施例中,第二主设备根据请求消息确定服务操作所需的能力的类型;然后,根据第二子系统中具有的该类型能力的电子设备对应的优先级,指示第二电子设备执行服务操作,其中,该第二电子设备为第二子系统中具有该能力的电子设备中优先级最高的。
应理解,子系统中具有同一类型能力的电子设备可以划分到同一个部件集中,如上述内容介绍的视觉类部件集、听觉类部件集、图像采集类部件集、控制类部件集、穿戴类部件集等、主设备可以根据同一部件集中的电子设备具有的与部件集能力类型对应的能力强弱,对电子设备进行优先级排序。当需要电子设备的某一类型的能力执行服务操作时,主设备可以根据该类型能力部件集中电子设备的优先级,选择优先级最高的电子设备执行服务操作。
示例性的,如图10B所示,为本申请实施例提供的一种更为具体的多设备配合的方法的示意性流程图。该流程图用于介绍子系统之间建立通信时的端侧实现过程,包括以下步骤:
S1101,第一子系统中的某一电子设备获取用户信息。
其中,第一子系统仍然可以对应于上述内容中的子系统1或子系统2;该步骤中的电子设备可以是第一子系统中的从设备,例如猫眼摄像头、屋内摄像头、麦克风等。
在一些实施例中,用户信息可以包括用户图像,该用户图像可以具体包括:用户的 人脸、用户身影等。可选地,用户信息也可以包括用户的语音、用户的生物特征(如指纹)等。
在一些实施例中,第一子系统可以有一个或多个电子设备同时获取用户信息。例如,当用户进入房间内时,屋内摄像头可以捕捉到用户的图像,麦克风可以采集到用户的声音等。
S1102,第一主设备根据用户信息对第一用户进行安全校验,确认第一用户可以使用虚拟共享系统。
其中,第一主设备可以通过安全中心对用户进行安全校验。该安全校验可以包括:安全中心根据用户信息对第一用户进行身份认证,该身份认证用于验证第一用户是否为虚拟共享系统中的成员,进而确定是否允许第一用户使用该虚拟共享系统;当身份认证通过后,若后续确定用户意图为获取某项服务操作时,安全中心还可以根据共享配置信息,对该第一用户具有的设备使用权限进行校验,如校验第一用户是否具有使用能提供某项服务操作的电子设备的权限。
应理解,通过安全中心对用户进行身份认证、设备使用权限认证,可以避免非虚拟共享系统成员随意使用该系统的功能,保证虚拟共享系统中的电子设备为用户提供服务操作过程中的安全性。
针对上述身份认证,在一些实施例中,用户可以预先在主设备上注册自己的信息,如录入人脸图像、语音、生物特征等。主设备可以存储用户信息,以便后续接收到子系统采集的用户信息时,能够与预存的相应信息进行比对,从而确定用户的身份。示例性的,身份认证具体可以包括:安全中心根据用户图像对第一用户进行人脸识别,确认身份;或者,安全中心根据第一用户的语音对用户进行声纹识别;或者,安全中心可以进行指纹等用户生物特征的识别。
针对上述设备使用权限的校验,在一些实施例中,第一主设备可以预先设置第一用户的设备使用权限,设置方式可以如图5F所示,本申请对此不作限定。当后续第一主设备根据用户意图调用其他子系统中的某一应用或者电子设备向用户提供服务时,安全中心可以对用户可使用的设备权限进行校验,以保证所调用的应用或者电子设备能够被用户使用。
S1103,第一主设备根据用户信息识别关联的用户意图。
在一些实施例中,安全中心对用户身份认证通过后,若确认允许该第一用户使用虚拟共享系统,第一主设备可以通过感知中心进一步根据用户信息判断用户意图。
在一些实施例中,主设备的感知中心可以根据用户信息确定用户当前所处的用户状态,根据用户状态进一步判断用户意图。
比如,在图7对应的实施例中,主设备2首先根据猫眼摄像头2发送的儿童图像确定儿童的状态为位于门外;之后,若主设备2又接收到屋内的屋内摄像头2发送的儿童图像,则可以确定儿童当前的状态为进入屋内。根据儿童当前的状态,主设备2可以判断儿童接下来的意图为与爷爷奶奶建立视频通话,以向爷爷奶奶打招呼。
再比如,在图8对应的实施例中,主设备1接收到智能手环发送的生命体征异常数据时,可以确定老人当前处于突发疾病状态;根据该当前的状态,主设备1可以判断老人此时的意图为与子系统2中的家人建立视频通话,以向家人求助。
又比如,在图9对应的实施例中,当车辆内的子系统3的主设备(如车载电脑)接 收到用户即将到达目的地的状态后,可以判断用户的意图为与目的地的家人建立视频通话,以提前告知家人。
在一些实施例中,主设备的感知中心根据用户信息确定用户状态后,还可以结合辅助信息,以更准确地确定用户的意图。其中,辅助信息例如可以包括:日期信息、时间信息等。
比如,在图6实施例中,主设备1首先根据猫眼摄像头1发送的老人图像确定老人的状态为位于门外;之后,若主设备1又接收到房间内的屋内摄像头1发送的老人图像,则主设备1可以确定老人的状态变为由门外进入房间内;根据老人当前已进入房间内的状态,结合此时为放学时间,则主设备1可以判断老人的意图为想要获知儿童的放学轨迹。
S1104,第一主设备根据用户意图确定由第二子系统中的电子设备执行服务操作。
在一些实施例中,第一主设备的应用中心还可以根据用户意图确定需要调用的应用,并根据共享配置信息,确定该应用所归属的子系统。
例如,在图7实施例中,应用中心根据感知中心的感知结果获知儿童的意图为与子系统1视频通话,以与爷爷奶奶打招呼;应用中心根据该意图确定调用的应用为视频通话应用,且根据共享配置确定该应用归属于子系统1。
S1105,第一主设备自动发起与第二子系统中的第二主设备之间的通信。
在一些实施例中,应用中心可以指示第一主设备的通信中心与第二主设备的通信中心建立通信。
在一些实施例中,当第一主设备与第二子系统中的第二主设备已经建立有通信时,第一主设备可以自动发起与第二子系统中的第二主设备之间的通信。具体地,第一主设备可以向第二主设备发送请求消息,请求第二子系统中的至少一个第二电子设备执行服务操作。
在一些实施例中,第二主设备可以根据请求消息确定服务操作所需的能力的类型,并选择可以提供该类型能力的电子设备执行服务操作。
例如,在图7实施例中,儿童家庭中的应用中心可以通过通信中心向老人家庭主设备1发送请求消息,该请求消息例如为儿童家庭和老人家庭建立视频通话。老人家庭的主设备获知服务操作为建立视频通话后,可以自动调取视觉类部件集中的设备进行视频通信显示,调取听觉部件集中最优的音频设备的麦克风和喇叭等进行音频通话。
根据本申请实施例提供的多设备配合的方法,通过将不同地域子系统组建为一个大的虚拟共享系统,并使子系统中的电子设备按照场景自适应为用户提供服务,能使得由于物理空间隔开的成员如同处于一个虚拟空间,从而使子系统成员之间获得自然流畅、随需触发的交流效果,增进成员之间的了解和关爱。
以上结合附图对本申请实施例提供的多设备配合的方法的一些可能的应用场景以及交互流程进行介绍,为更好的理解本申请实施例提供的多设备配合的方法,以下从内部实现层面进行进一步的介绍。
示例性的,如图11A所示,为本申请实施例提供的一些设备的细化结构示意图。以上述场景一中的智能门锁为老人自动开锁阶段为例,结合图6对该应用场景下的实现过程进行介绍。
在图11A所示的实施例中,猫眼摄像头1包括图像采集模块1001、通信模块1002; 大屏设备1(子系统1中的主设备)包括通信中心1101、安全中心1102、感知中心1103、应用中心1104、存储中心1105、设备中心1106;智能门锁1包括通信模块1201、控制模块1202。其中,通信模块1002、通信中心1101、通信模块1201在内部结构和功能实现上可以是相同或者类似的。
在图6所示的场景下,当老人到达门口猫眼摄像头1的图像采集区域(门口)后,猫眼摄像头1的图像采集模块1001可以采集老人的用户图像,之后图像采集模块1001可以将用户图像转换为电信号传输至通信模块1002。通信模块1002将用户图像电信号通过有线方式或无线方式传输至大屏设备1的通信中心1101。
通信中心1101将用户图像信息传输至安全中心1102。安全中心1102根据用户图像对用户身份以及用户的虚拟共享系统使用权限进行认证。具体地,安全中心1102可以由存储中心1105查询预先存储的用户参考信息(如预先录入的用户人脸图像),并将获取的用户图像与预存的用户参考图像进行比对,当用户图像与用户参考图像相似度大于预设阈值时,则确认出用户的身份为虚拟共享系统的成员,可以使用虚拟共享系统中的电子设备。
当确认用户身份后,安全中心1102可以将用户图像发送至感知中心1103,同时还可以将用户身份信息、用户具有虚拟共享系统使用权限信息等也发送至感知中心。感知中心1103结合用户图像获知用户状态,并根据用户状态获取用户意图。例如,感知中心1103由用户信息获知认证身份为爷爷奶奶的用户当前的状态为在门外,则感知中心可以根据用户通用模型信息(如表7所示)获取用户意图为智能门锁1自动开锁。
感知中心1103将用户意图发送至应用中心1104。应用中心1104可以基于用户意图确定调用的应用或设备。例如,应用中心1104获知用户意图为智能门锁1自动开锁,则可以根据共享配置信息确定调用智能门锁1的开锁功能。之后,应用中心1104可以将调用指令发送至设备中心1106,由设备中心1106根据调用指令选择对应设备进行通信,并指示其执行对应的服务。
具体地,设备中心1106接收到调用指令后,可以根据其指示的需要调用的对应的电子设备,如在控制部件集选取智能门锁1。之后,设备中心1106可以向通信中心1101发送开锁指示消息,再由设备中心1106将开锁指示消息发送至智能门锁1的通信模块1201。智能门锁1的通信模块1201将开锁指示消息传输至该智能门锁1的控制模块1202,控制模块1202响应于该开锁指示消息执行开锁操作,从而实现为用户自动开锁的过程。
可选地,应用中心1104确定为用户调用智能门锁1的开锁功能后,还可以向安全中心1102进行用户的设备使用权限校验,如向安全中心发送设备使用权限校验的通知消息,并在该消息指示用户所使用的特定设备。安全中心1102可以对用户是否具有该特定设备的使用权限进行校验,如根据存储中心1105中的共享配置信息判断爷爷奶奶是否具有智能门锁1自动开锁的权限,当设备使用权限通过后,指示应用中心1104可以进行该操作。
示例性的,如图11B所示,为本申请实施例提供的一些设备的细化结构示意图。以上述场景一中老人进入房间后,电子设备自动为老人匹配儿童智能手表,获取儿童放学轨迹阶段为例,仍然结合图6对该应用场景下的实现过程进行介绍。
参见图11A所示,当智能门锁1的控制模块1202执行开锁操作后,可以生成开锁反馈消息,并经由智能门锁1的通信模块1201发送至大屏设备1的通信中心1101。
参见图11B所示,大屏设备1的通信中心1101可以接收智能门锁1发送的开锁反 馈消息,并将该开锁反馈消息传输至感知中心1103。感知中心1103可以根据该开锁反馈消息获知用户智能门锁1已经打开,用户此时的状态为进入房间内(此外,感知中心1103也可以根据屋内摄像头采集的图像确定用户状态为进入房间内,本实施例对此不作详述)。之后,感知中心1103根据新的用户状态(老人已进入房间内)以及其他辅助信息(如时间)确定用户意图(如想要获知儿童的放学回家轨迹)。感知中心1103向应用中心1104发送用户意图。应用中心1104根据用户意图(想要获知孙子的放学回家轨迹)以及用户通用行为模型确定调用的设备(儿童的智能手表),并将设备调用指令发送至设备中心1106。设备中心1106根据设备信息选择对应设备进行通信。之后,设备中心1106向通信中心1101发送请求消息,该请求消息用于请求反馈预设时间段内的轨迹信息。通信中心1101将设备服务指示发送至儿童智能手表的通信模块1301。智能手表的通信模块1301将设备服务指示传输至该智能手表的管理模块1302;管理模块1302根据设备服务指示可以由智能手表的存储模块1303获取历史轨迹信息,并生成儿童轨迹反馈消息,由智能手表的通信模块1301发送至大屏设备的通信中心1101,通信中心1101再将儿童轨迹反馈消息进一步发送至设备中心1106,由设备中心1106在视觉类部件集中选取可用的最高优先级的设备显示儿童的轨迹信息,从而实现自动为用户提供儿童的放学回家轨迹。
示例性的,如图12所示,为本申请实施例提供的一些设备的细化结构示意图。以上述场景二(儿童到家后,儿童家庭自动建立与老人家庭的视频通话)为例,结合图7对该应用场景下的实现过程进行介绍。
在图7所示的应用场景下,当儿童放学回家时,智能门锁2可以自动开启门锁,儿童进入房间内。其中,儿童家庭的智能门锁2自动开锁的实现过程与图11A所示实施例介绍的过程类似,此处不再赘述。
当儿童进入房间内后,安装于儿童家庭房间内的屋内摄像头2的图像采集模块2001采集用户图像(即儿童图像,如儿童的身影、儿童的面部图像等);之后将用户图像经由屋内摄像头的通信模块2002发送至大屏设备2(子系统2中的主设备)中的通信中心2101。通信中心2101将用户图像传输至安全中心2102,由安全中心2102对用户身份进行认证。具体地,安全中心2102可以由存储中心2105查询并获取预存储的用户参考信息,并将用户图像信息与用户参考信息进行比对,以确认用户身份为儿童。之后,安全中心2102可以将用户身份信息发送至感知中心2103。
感知中心2103根据用户身份信息确定用户状态为儿童放学回到家中,并且根据用户状态确定儿童的意图为与老人建立视频通信。
感知中心2103向应用中心2104发送用户意图。应用中心2104根据用户意图确定调用的应用为老人家庭中的视频通话应用。
应用中心2104向设备中心2106发送调用指示,由设备中心2016确定老人家庭子系统中的主设备信息,并生成请求消息,请求消息用于请求视频通话。设备中心2106向通信中心2101发送设备请求消息;之后,通信中心2101向子系统1中的大屏设备1(子系统1中的主设备)的通信中心1101发送该请求消息。
通信中心1101将请求消息进一步传输至老人家庭中的设备中心1106。设备中心1106根据请求消息由老人家庭中的视觉类部件集中选择可用的最高优先级的显示设备或部件(如图12所示的图像显示模块1111)和音频播放设备或部件(如图12所示的音频播放 模块1112)分别进行图像显示以及音频采集和播放,实现子系统1和子系统2自动建立视频通话。
在一些实施例中,子系统1中的大屏设备1还可以与老人的佩戴的智能手环、智能血压检测仪、智能鞋等具有生命体征监测功能的设备连接。当子系统1与子系统2自动建立视频通话后,大屏设备1还可以收集老人的生命体征数据,并将其发送至子系统2中的大屏设备2,由子系统2中显示设备进行显示,以便使子系统2中的成员(如儿童的爸爸妈妈)获知老人的健康状况。可选地,显示信息可以是老人至少一项生命体征的具体监测数据(如心率79),和/或老人至少一项生命体征的状态(如心率正常状态)等,本申请对此不作限定。
根据本申请实施例提供的多设备配合的方法,通过将不同空间(或地域)的子系统组建为一个大的虚拟共享系统,并使子系统中的电子设备按照场景自适应为用户(尤其是老人、儿童等对智能设备有操作障碍的用户)提供服务,能使得由于物理空间隔开的成员如同处于一个虚拟空间,从而使子系统成员之间获得自然流畅、随需触发的交流效果,增进成员之间的了解和关爱。
示例性的,如图13所示,为本申请实施例提供的一些设备的细化结构示意图。以上述场景三(发生紧急事件时的紧急呼救)为例,结合图8对该应用场景下的实现过程进行介绍。
在图8所示的应用场景下,当老人发生紧急事件时,老人家庭可以和儿童家庭自动建立视频通话。
示例性的,当老人发生紧急事件(例如由于血压升高而摔倒)时,老人家庭中的屋内摄像头1的图像采集模块1301可以采集到用户图像(老人摔倒的图像),该图像采集模块1301可以将采集到的用户图像发送至通信模块1302,并经由通信模块1302通过有线或无线方式发送至大屏设备1的通信中心1101。同时,老人佩戴的智能手环中的生理体征采集模块1401可以采集用户的生理体征信息(如血压数据),该生理体征采集模块1401可以将采集到的用户生理体征信息发送至通信模块1402,并经由通信模块1402发送至大屏设备1中的通信中心1101。
在一些实施例中,通信中心1101可以首先将用户的图像发送至安全中心1102。安全中心1102可以根据用户图像对用户身份进行认证,确定该用户为虚拟共享系统成员。具体地,安全中心1102可以由存储中心1105查询并获取预存储的用户参考信息,并将用户图像信息与用户参考信息进行比对,以确认用户身份为老人。
在一些实施例中,当确定该用户为虚拟共享系统成员,也即用户可以使用虚拟共享系统后,安全中心1101可以将用户图像传输至感知中心1103。感知中心1103还可以由安全中心1102(或直接由通信中心1101)获取用户生理体征信息。感知中心1103可以基于用户图像以及用户生理体征信息,结合其他辅助信息(如地点、时间等)综合判断用户状态为老人发生紧急事件,并且确定老人期望的设备服务为与儿童家庭建立视频通信。
感知中心1103向应用中心1104发送用户意图。应用中心1104根据用户意图确定与儿童家庭子系统中设备建立视频通话,即调用儿童家庭子系统中的视频通话应用。
应用中心1104向设备中心1106发送调用指示,由设备中心1106确定老人家庭子系统中的主设备信息,并生成请求消息。设备中心1106向通信中心1101发送请求消息, 通信中心1101向儿童家庭中的大屏设备2(子系统2中的主设备)的通信中心2101发送该请求消息。
通信中心2101将请求消息进一步传输至儿童家庭中的设备中心2106。设备中心2106根据请求消息由老人家庭中的视觉类部件集中选择可用的最高优先级的视觉显示类电子设备或部件(如图13所示的图像显示模块2111)和音频播放类电子设备或部件(如图13所示的音频播放模块2112)分别进行图像显示以及音频采集和播放,实现子系统1和子系统2自动建立视频通话。
应理解,通过上述方法,当老人发生紧急事件时,通过子系统的主设备的综合判断,能自发地与其他子系统发起视频通话进行求助,从而使得老人得到及时救助。
根据本申请实施例提供的多设备配合的方法,通过将不同地域子系统组建为一个大的虚拟共享系统,并使子系统中的电子设备按照场景自适应为用户(尤其是老人、儿童等对智能设备有操作障碍的用户)提供服务,能使得由于物理空间隔开的成员如同处于一个虚拟空间,从而使子系统成员之间获得自然流畅、随需触发的交流效果,增进成员之间的了解和关爱。
示例性的,如图14所示,为本申请实施例提供的一些设备的细化结构示意图。以上述场景四(行车中的自适应通信。)为例,结合图9对该应用场景下的实现过程进行介绍。
示例性的,当驾驶员驾驶车辆时或者驾驶车辆过程中,车内摄像头可以通过图像采集模块3001采集用户的图像。图像采集模块3001可以将用户图像传输至该车内摄像头的通信模块3003,经由通信模块3003传输至车载电脑(车载子系统的主设备)。车载电脑根据用户图像对用户进行身份认证,确定用户可以为虚拟共享系统的成员,可以使用该虚拟共享系统。其中,车载设备对用户进行身份认证以及设备权限认证的过程可以参见以上相关实施例中的介绍,从此处不再赘述。
在一些实施例,车辆中的一些定位装置可以具有定位模块3002,能够对用户的位置进行实时定位,并通过通信模块3003将定位信息发送至车载电脑的通信中心3101。当感知中心3103根据定位信息以及用户输入的目的地信息判断车辆即将到达目的地时,如与目的地之间的距离小于一定阈值时,则可以感知到用户意图为要提前告知目的地子系统成员,则可以发起与子系统2之间的视频通话。其中,发起视频通话的过程与上述实施例介绍的流程类似,此处不再赘述。
应理解,通过上述方法,当用户驾车出行时,通过子系统设备的综合判断,能自发地与其他子系统发起视频通话进行沟通。尤其在发生意外事件时,则可以及时向其他子系统成员进行呼救,从而使得用户得到及时救助。
根据本申请实施例提供的多设备配合的方法,通过将不同地域子系统组建为一个大的虚拟共享系统,并使子系统中的电子设备按照场景自适应为用户(尤其是老人、儿童等对智能设备有操作障碍的用户)提供服务,能使得由于物理空间隔开的成员如同处于一个虚拟空间,从而使子系统成员之间获得自然流畅、随需触发的交流效果,增进成员之间的了解和关爱。
本申请实施例还提供了一种多设备配合的系统,至少包括第一子系统和第二子系统,所述第一子系统包括第一主设备,所述第二子系统包括第二主设备,所述第一主设备和所述第二主设备用于执行本申请实施例提供的多设备配合的方法。
本申请实施例还提供了一种计算机可读存储介质,存储有计算机指令,当所述计算机指令在计算机中执行时,使得本申请实施例提供的多设备配合的方法得以实现。
本申请实施例还提供了一种计算机产品,存储有计算机指令,当所述计算机指令在计算机中执行时,使得本申请实施例提供的多设备配合的方法得以实现。
本申请实施例还提供了一种芯片,存储有计算机指令,当所述计算机指令在芯片中执行时,使得本申请实施例提供的多设备配合的方法得以实现。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者通过所述计算机可读存储介质进行传输。所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如,固态硬盘(solid state disk,SSD))等。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,该流程可以由计算机程序来指令相关的硬件完成,该程序可存储于计算机可读取存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。而前述的存储介质包括:ROM或随机存储记忆体RAM、磁碟或者光盘等各种可存储程序代码的介质。
以上所述,仅为本申请实施例的具体实施方式,但本申请实施例的保护范围并不局限于此,任何在本申请实施例揭露的技术范围内的变化或替换,都应涵盖在本申请实施例的保护范围之内。因此,本申请实施例的保护范围应以所述权利要求的保护范围为准。

Claims (16)

  1. 一种多设备配合的方法,其特征在于,应用于虚拟共享系统中的第一主设备,所述虚拟共享系统至少包括第一子系统和第二子系统,所述第一主设备属于所述第一子系统,所述方法包括:
    获取第一用户的用户信息,所述第一用户属于所述虚拟共享系统中的成员;
    识别与所述用户信息关联的用户意图,所述用户意图包括使所述第二子系统中的至少一个电子设备执行服务操作;
    根据所述用户信息和共享配置信息,向所述第二子系统中的第二主设备发送请求消息,所述请求消息用于请求所述服务操作,所述共享配置信息包括所述虚拟共享系统中每个子系统对应的成员信息和设备信息。
  2. 根据权利要求1所述的方法,其特征在于,所述用户意图包括使所述第二子系统中的至少一个电子设备执行服务操作,具体包括:
    所述用户意图包括使所述第二子系统中的至少一个电子设备执行视频通话服务操作。
  3. 根据权利要求1所述的方法,其特征在于,所述识别与所述用户信息关联的用户意图,具体包括:
    根据获取的所述用户信息,确定所述第一用户当前的状态;
    根据所述第一用户当前的状态,确定对应的所述第一用户的用户意图。
  4. 根据权利要求3所述的方法,其特征在于,所述第一用户当前的状态,包括以下至少一项:
    所述第一用户进入房间内;或者,
    所述第一用户生命体征异常;或者,
    所述第一用户身体姿势异常;或者,
    所述第一用户与目的地之间的距离小于第一阈值。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,所述获取第一用户的用户信息,具体包括:
    接收所述第一子系统中的至少一个电子设备发送的所述用户信息,所述第一子系统中的至少一个电子设备与所述第一主设备不同。
  6. 根据权利要求5所述的方法,其特征在于,当所述用户信息为用户图像时,所述方法具体包括:
    接收第一屋内摄像头发送的第一图像,所述第一图像包括所述第一用户的图像,所述第一屋内摄像头属于所述第一子系统;
    当根据所述第一图像确定所述第一用户进入房间时,向所述第二主设备发起所述视频通话。
  7. 根据权利要求5所述的方法,其特征在于,当所述用户信息为用户图像时,所述方法具体包括:
    接收第一屋内摄像头发送的第二图像,所述第二图像包括所述第一用户的图像,所述第一屋内摄像头属于第一子系统;
    根据所述第二图像信息识别所述第一用户的身体姿势;
    当根据所述第一用户的身体姿势确定所述第一用户身体姿势异常时,向所述第二主设备发起所述视频通话。
  8. 根据权利要求2-5中任一项所述的方法,其特征在于,当所述第一子系统为车载子系统时,所述方法具体包括:
    获取所述第一用户的位置信息;
    当根据所述第一用户的位置信息,确定所述第一用户与目的地之间的距离小于第一阈值时,向所述第二主设备发起视频通话。
  9. 根据权利要求1-8中任一项所述的方法,其特征在于,所述方法还包括:
    根据所述用户信息和所述共享配置信息,对所述第一用户进行身份认证;
    当所述身份认证通过时,确定所述第一用户为所述虚拟共享系统中的成员。
  10. 根据权利要求3所述的方法,其特征在于,所述共享配置信息还包括所述虚拟共享系统中的成员对应的设备使用权限;
    所述根据用户信息和共享配置信息,向所述第二子系统中的第二主设备发送请求消息,具体包括:
    所述根据用户信息和所述共享配置信息确定所述第一用户具有使用所述第二子系统中的至少一个第二电子设备的权限;
    向所述第二子系统中的第二主设备发送所述请求消息。
  11. 一种多设备配合的方法,其特征在于,应用于虚拟共享系统中的第二主设备,所述虚拟共享系统至少包括第一子系统和第二子系统,所述第二主设备属于所述第二子系统,所述方法包括:
    接收所述第一子系统中的第一主设备发送的请求消息,所述请求消息用于请求所述第二子系统中的至少一个电子设备执行服务操作;
    响应于所述请求消息,指示所述至少一个第二电子设备执行所述服务操作。
  12. 根据权利要求11所述的方法,其特征在于,所述服务操作包括:
    与所述第一子系统建立视频通话服务操作。
  13. 根据权利要求11或12所述的方法,其特征在于,所述响应于所述请求消息,指示所述至少一个第二电子设备执行所述服务操作,具体包括:
    根据所述请求消息确定所述服务操作所需的能力;
    根据所述第二子系统中具有所述能力的电子设备对应的优先级,指示第二电子设备执行所述服务操作,所述第二电子设备为所述第二子系统中具有所述能力的电子设备中优先级最高的电子设备。
  14. 一种多设备配合的系统,其特征在于,至少包括第一子系统和第二子系统,所述第一子系统包括第一主设备,所述第二子系统包括第二主设备,所述第一主设备用于执行如权利要求1至10中任一项所述的方法,所述第二主设备用于执行如权利要求11-13中任一项所述的方法。
  15. 一种计算机可读存储介质,其特征在于,存储有计算机指令,当所述计算机指令在计算机中执行时,使得如权利要求1-13中任一项所述的方法得以实现。
  16. 一种计算机产品,其特征在于,存储有计算机指令,当所述计算机指令在计算机中执行时,使得如权利要求1-13中任一项所述的方法得以实现。
PCT/CN2022/085793 2021-04-20 2022-04-08 一种多设备配合的方法及设备 WO2022222768A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110425911.9 2021-04-20
CN202110425911.9A CN115309478A (zh) 2021-04-20 2021-04-20 一种多设备配合的方法及设备

Publications (1)

Publication Number Publication Date
WO2022222768A1 true WO2022222768A1 (zh) 2022-10-27

Family

ID=83723683

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/085793 WO2022222768A1 (zh) 2021-04-20 2022-04-08 一种多设备配合的方法及设备

Country Status (2)

Country Link
CN (1) CN115309478A (zh)
WO (1) WO2022222768A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105282490A (zh) * 2014-06-25 2016-01-27 北京聚安威视觉信息技术有限公司 一种新型空巢老人的智能家庭交互系统及方法
CN105657638A (zh) * 2014-11-28 2016-06-08 三星电子株式会社 用于电子装置之间的功能共享的方法和装置
US20160373165A1 (en) * 2015-06-17 2016-12-22 Samsung Eletrônica da Amazônia Ltda. Method for communication between electronic devices through interaction of users with objects
CN110045621A (zh) * 2019-04-12 2019-07-23 深圳康佳电子科技有限公司 智能场景处理方法、系统、智能家居设备及存储介质
CN111083419A (zh) * 2018-10-19 2020-04-28 蒙柳 一种可以自动连接的远程互动系统及连接方法
CN113676689A (zh) * 2021-08-18 2021-11-19 百度在线网络技术(北京)有限公司 一种视频通话方法、装置及电视

Also Published As

Publication number Publication date
CN115309478A (zh) 2022-11-08

Similar Documents

Publication Publication Date Title
WO2021000808A1 (zh) 设备控制方法和设备
WO2020192714A1 (zh) 显示设备控制页面的方法、相关装置及系统
WO2021052263A1 (zh) 语音助手显示方法及装置
WO2021063343A1 (zh) 语音交互方法及装置
WO2020177622A1 (zh) Ui组件显示的方法及电子设备
WO2020238728A1 (zh) 智能终端的登录方法及电子设备
WO2020173375A1 (zh) 一种多智能设备联动控制的方法、设备以及系统
WO2021233079A1 (zh) 一种跨设备的内容投射方法及电子设备
WO2021253975A1 (zh) 应用程序的权限管理方法、装置和电子设备
WO2020150917A1 (zh) 一种应用权限的管理方法及电子设备
CN113496426A (zh) 一种推荐服务的方法、电子设备和系统
WO2022037407A1 (zh) 一种回复消息的方法、电子设备和系统
US20240095329A1 (en) Cross-Device Authentication Method and Electronic Device
CN113689171A (zh) 一种家庭日程融合的方法及装置
WO2022135214A1 (zh) 分布式实现方法、分布式系统、可读介质及电子设备
WO2022127130A1 (zh) 一种添加操作序列的方法、电子设备和系统
CN114629993B (zh) 一种跨设备认证方法及相关装置
WO2022222768A1 (zh) 一种多设备配合的方法及设备
WO2023071940A1 (zh) 跨设备的导航任务的同步方法、装置、设备及存储介质
EP4177777A1 (en) Flexibly authorized access control method, and related apparatus and system
WO2023083026A1 (zh) 一种数据采集方法、系统以及相关装置
WO2022206637A1 (zh) 一种携物提醒方法、相关设备及系统
WO2021147483A1 (zh) 数据分享的方法和装置
EP4213461A1 (en) Content pushing method and apparatus, storage medium, and chip system
WO2022143273A1 (zh) 信息处理方法和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22790877; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22790877; Country of ref document: EP; Kind code of ref document: A1)