WO2021147483A1 - Method and apparatus for data sharing - Google Patents

Method and apparatus for data sharing

Info

Publication number
WO2021147483A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
user
level
output level
account
Application number
PCT/CN2020/128996
Other languages
English (en)
French (fr)
Inventor
阙鑫地
林嵩晧
林于超
张舒博
郑理文
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2021147483A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/45 Structures or tools for the administration of authentication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 Home automation networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Definitions

  • This application relates to the field of information processing, and more specifically, to methods and devices for data sharing.
  • in a multi-smart-device scenario such as a home, there are both private devices (for example, mobile phones or watches) and shared household devices (for example, TVs, vehicles, or speakers). Existing systems cannot provide a differentiated, personalized experience based on which user is currently using a smart device.
  • This application provides a method and device for data sharing, which can provide users with a differentiated, personalized experience according to the user currently using a device.
  • a method for determining data sharing includes: a first device obtains registration information of a first user from a second device, where the registration information of the first user includes the first user's account or the raw data of the first user's biological characteristics, and the first device determines the output level of its data according to that registration information.
  • the first device then obtains a first data request message of the first user from the second device, where the first data request message is used to request sharing of the first user's first data; when the first device determines that the first data belongs to a data type corresponding to the output level of the first device's data, it sends the first data to the second device.
  • the aforementioned account may be a mobile phone number, a user name set by the user, an email address, and so on.
  • the above-mentioned registration information of the first user is the registration information that the first user inputs into the second device; that is, the first user uses the second device by means of this registration information.
  • the above-mentioned second device may be a device with a biometric recognition function, for example, a mobile phone, a vehicle, or a tablet; or, the second device may be a device that can collect biometric data but cannot itself recognize it, for example, a watch, a speaker, or a TV.
  • the first device is a device, other than the second device, in the same network as the second device; or, the first device is a device selected according to the capabilities of all devices in that network, for example, a device with a biometric recognition function.
  • the devices in the aforementioned network may be mutually trusted devices.
  • the network may be a home network, and the devices in the home network are mutually trusted devices.
  • the network may also be a working network, and the devices in the working network are mutually trusted devices.
  • the devices in the above-mentioned network are not only devices connected to the network, but may also be devices that join the network by scanning a QR code (identification code), and the QR code may be preset.
  • the first device may also be a device, other than the second device, in the same group of the same network as the second device; or, the first device may be a device selected according to the capabilities of all devices in that group, for example, a device with a biometric recognition function.
  • multiple groups may be preset in the above-mentioned network, and the devices in each of the multiple groups may be mutually trusted devices.
  • the network may be a home network, and a home group and a visitor group may be preset in the home network.
  • the home group includes the aforementioned first device and second device, and the devices in the home group trust each other; the devices in the visitor group and the devices in the home group do not trust each other, but they can still exchange non-private information with each other.
  • the devices in the home group may not only be devices connected to the home network, but also devices that join the home network by scanning a QR code, and the devices in the visitor group are only devices connected to the home network.
  • the above-mentioned first data may be any data.
  • the above-mentioned first data may be the user's real-time location data, data about places where the user likes to go for entertainment, captured photo data, recorded video data, watched-video data, historical playlist data, and so on.
  • specifically, the first device obtains the registration information of the first user from the second device, where the registration information includes the first user's account or the raw data of the first user's biological characteristics, and the first device determines the output level of its data according to that registration information. Different output levels of the first device's data correspond to different data types, and data of different types carry different maximum risks. The first device then obtains the first user's first data request message from the second device, which requests sharing of the first user's first data; finally, when the first device determines that the first data belongs to a data type corresponding to the output level of the first device's data, it sends the first data to the second device.
  • in the embodiments of this application, the first device can determine the output level of its data according to the form of the first user's registration information. Only when the first data belongs to a data type corresponding to that output level is the first data, generated by the first user on the first device, shared with the second device, thereby providing the first user with a differentiated, personalized experience.
  • in the case that the registration information of the first user includes the raw data of the first user's biological characteristics, determining the output level of the first device's data according to the registration information includes: the first device recognizes the first user's raw biometric data and determines whether an account corresponding to that raw data can be obtained. If the first device does not obtain a corresponding account, the output level of the first device's data is the fourth level. If the first device does obtain a corresponding account, it then determines whether that account exists among the accounts stored on the second device: if it does, the output level of the first device's data is the second level; if it does not, the output level is the third level.
  • optionally, the first device determines, according to the first user's registration information, whether an account corresponding to the first user's raw biometric data can be obtained. If no corresponding account is obtained, the output level of the first device's data is the fourth level. If a corresponding account is obtained, the first device sends seventh information to the second device, where the seventh information indicates the account obtained by the first device; the first device then receives eighth information from the second device, where the eighth information indicates whether the second device stores that account.
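The level decision in the biometric path described above can be sketched as follows; the function and argument names are illustrative and not part of the application.

```python
def output_level_from_biometric(matched_account, second_device_accounts):
    """Return the output level of the first device's data (sketch).

    matched_account: the account the first device resolved from the raw
    biometric data, or None if recognition found no matching account.
    second_device_accounts: the set of accounts stored on the second device.
    """
    if matched_account is None:
        return 4  # fourth level: no account matches the biometric data
    if matched_account in second_device_accounts:
        return 2  # second level: the same account exists on both devices
    return 3      # third level: account known only to the first device
```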
  • optionally, the method further includes: when the first user's registration information is the first user's 3D face, fingerprint, iris, or DNA, the output level of the first device's data is the first sub-level of the third level; when the registration information is the first user's 2D face or vein, the output level is the second sub-level of the third level; and when the registration information is the first user's voice or signature, the output level is the third sub-level of the third level.
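The sub-level assignment by biometric modality can be sketched as a lookup table (modality names are illustrative assumptions):

```python
# Hypothetical mapping from the biometric modality used at registration
# to the sub-level within the third level.
THIRD_LEVEL_SUBLEVEL = {
    "3d_face": 1, "fingerprint": 1, "iris": 1, "dna": 1,  # first sub-level
    "2d_face": 2, "vein": 2,                              # second sub-level
    "voice": 3, "signature": 3,                           # third sub-level
}

def sublevel_for(modality):
    """Return the sub-level within the third level for a given modality."""
    return THIRD_LEVEL_SUBLEVEL[modality]
```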
  • the output level of the first device's data is subdivided within the third level, so as to provide the user with a differentiated, personalized experience according to which biometric feature the user presents.
  • in the case that the first user's registration information includes the first user's account, determining the output level of the first device's data includes: the first device determines whether it stores the first user's account; if it does, the output level of the first device's data is the second level; if it does not, the output level is the fourth level.
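The account path is simpler and can be sketched in the same style (illustrative names):

```python
def output_level_from_account(account, first_device_accounts):
    """Level decision for the account path (sketch, assumed names).

    account: the account the first user registered on the second device.
    first_device_accounts: the set of accounts stored on the first device.
    """
    if account in first_device_accounts:
        return 2  # second level: the first device knows this account
    return 4      # fourth level: unknown account
```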
  • the above-mentioned output level of the data of the first device can be understood as the level at which the data on the first device is shared with other devices.
  • the output level of the data of the first device is relative to the setting of the device requesting the data. For different devices requesting data, the output level of the data of the first device may be different or may be the same.
  • for the first device itself, that is, when the first user accesses data locally on the first device, the output level of the first device's data is the first level.
  • the output level of the data of the first device is determined by what form of registration information the device requesting the data uses to access the data on the first device.
  • if the device requesting data accesses the first device with the same account as the one used on the first device, then for that requesting device the output level of the first device's data is the second level; if the device requesting data accesses the first device with raw biometric data corresponding to the account used on the first device, the output level of the first device's data is the third level; and if the device requesting data accesses the first device with neither the same account nor raw biometric data corresponding to it, the output level of the first device's data is the fourth level.
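The four cases above, seen from the requesting device, reduce to an ordered check; the boolean arguments are illustrative simplifications, not terms from the application.

```python
def level_for_requester(is_first_device, same_account, biometric_of_account):
    """Output level of the first device's data as seen by a requester.

    is_first_device: the requester is the first device itself (local access).
    same_account: the requester uses the same account as the first device.
    biometric_of_account: the requester uses raw biometric data that
        corresponds to the account used on the first device.
    """
    if is_first_device:
        return 1  # first level
    if same_account:
        return 2  # second level
    if biometric_of_account:
        return 3  # third level
    return 4      # fourth level
```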
  • the device requesting the data may be the second device, and the device from which the data is requested may be the first device.
  • the output level of the data of the first device is divided, so as to provide the user with a differentiated personalized experience.
  • the data type corresponding to the second level is the second type, and the data corresponding to the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or, the data type corresponding to the third level is the third type, and the data corresponding to the third type includes video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or, the data type corresponding to the fourth level is the fourth type, and the data corresponding to the fourth type includes device capability data and/or device status data.
  • the risk of the data corresponding to the second type, the third type, and the fourth type decreases in that order.
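The level-to-type correspondence can be sketched as a table plus a gate; the type names are illustrative assumptions. Note that the sets are nested, which matches the statement that risk decreases from the second type to the fourth type.

```python
# Illustrative data-type sets per output level (names are assumptions).
ALLOWED_TYPES = {
    2: {"general_location", "video", "logistics", "schedule",
        "preference", "device_capability", "device_status"},
    3: {"video", "logistics", "schedule", "preference",
        "device_capability", "device_status"},
    4: {"device_capability", "device_status"},
}

def may_share(data_type, output_level):
    """True if data of this type may be sent at the given output level."""
    return data_type in ALLOWED_TYPES[output_level]
```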
  • general location data may be medium-impact personal data
  • video data, logistics data, schedule data, and preference data may be low-impact personal data
  • device capability data and/or device status data are non-personal data.
  • the data type corresponding to the first sub-level is the first sub-type, and the data corresponding to the first sub-type includes photo data, recorded video data, device capability data, and/or device status data; and/or, the data type corresponding to the second sub-level is the second sub-type, and the data corresponding to the second sub-type includes logistics data, schedule data, device capability data, and/or device status data; and/or, the data type corresponding to the third sub-level is the third sub-type, and the data corresponding to the third sub-type includes preference data, watched-video data, device capability data, and/or device status data.
  • the risk of the data corresponding to the first sub-type, the second sub-type, and the third sub-type decreases in that order.
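The sub-type table can be sketched the same way (type names are assumptions). Unlike the main levels, these sets are not nested: each sub-level exposes its own personal data, and only the non-personal device data is common to all three.

```python
# Illustrative data-type sets for the three sub-levels of the third level.
SUBLEVEL_TYPES = {
    1: {"photo", "recorded_video", "device_capability", "device_status"},
    2: {"logistics", "schedule", "device_capability", "device_status"},
    3: {"preference", "watched_video", "device_capability", "device_status"},
}
```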
  • different output levels of the first device's data correspond to different data types, so that a differentiated, personalized experience can be provided to the user.
  • the method further includes: the first device sending the output level of the data of the first device to the second device.
  • the biological characteristics include one or more of the following: physical biological characteristics, soft biological characteristics, and behavioral biological characteristics.
  • the physical biological characteristics include: face, fingerprint, iris, retina, deoxyribonucleic acid (DNA), skin, hand shape, or vein;
  • the behavioral biological characteristics include: voice, signature, or gait;
  • the soft biological characteristics include: gender, age, height, or weight.
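The taxonomy just enumerated can be captured directly (feature names are illustrative identifiers):

```python
# Taxonomy of biological characteristics as enumerated above.
BIOMETRIC_CATEGORIES = {
    "physical":   {"face", "fingerprint", "iris", "retina", "dna",
                   "skin", "hand_shape", "vein"},
    "behavioral": {"voice", "signature", "gait"},
    "soft":       {"gender", "age", "height", "weight"},
}

def category_of(feature):
    """Return the category of a biometric feature, or None if unknown."""
    for category, members in BIOMETRIC_CATEGORIES.items():
        if feature in members:
            return category
    return None
```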
  • a method for acquiring data includes: a second device obtains registration information of a first user input by the first user, where the registration information includes the first user's account or the raw data of the first user's biological characteristics; the second device sends the registration information to the first device; the second device obtains a first data request message of the first user, where the first data request message is used to request sharing of the first user's first data; the second device sends the first data request message and receives the first data sent by the first device.
  • the second device may recognize the first user's request for the first data through a voice recognition function; or, the second device may also obtain the first user's request for the first data through the first user's input.
  • optionally, the method further includes: the second device recognizes the first user's raw biometric data and determines whether an account corresponding to that raw data is obtained. If the second device obtains the corresponding account, it sends fifth information to the first device, where the fifth information indicates the account obtained by the second device; if the second device does not obtain a corresponding account, it sends sixth information to the first device, where the sixth information indicates the raw data of the first user's biological characteristics.
  • optionally, the method further includes: the second device receives the output level of the first device's data sent by the first device, where different output levels of the first device's data correspond to different data types, and data of different types carry different maximum risks.
  • a method for acquiring data includes: a second device obtains registration information of a first user input by the first user, where the registration information includes the raw data of the first user's biological characteristics; the second device sends the registration information to the first device; the second device receives first information sent by the first device, where the first information indicates the account corresponding to the first user's raw biometric data as determined by the first device; the second device obtains a first data request message of the first user, which requests sharing of the first user's first data; the second device determines that the first data belongs to a data type corresponding to the output level of the first device's data, where data of different types carry different maximum risks; the second device sends the first data request message and receives the first data sent by the first device.
  • the second device may recognize the first user's request for the first data through a voice recognition function; or, the second device may also obtain the first user's request for the first data through the first user's input.
  • optionally, the second device determines, according to the first information, whether it stores the account corresponding to the first user's raw biometric data as determined by the first device. If the second device stores that account, the output level of the first device's data is the second level; if it does not, the output level is the third level.
  • optionally, the method further includes: when the first user's registration information is the first user's 3D face, fingerprint, iris, or DNA, the output level of the first device's data is the first sub-level of the third level; when the registration information is the first user's 2D face or vein, the output level is the second sub-level of the third level; and when the registration information is the first user's voice or signature, the output level is the third sub-level of the third level.
  • optionally, the data type corresponding to the second level is the second type, and the data corresponding to the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data.
  • the data type corresponding to the first sub-level is the first sub-type, and the data corresponding to the first sub-type includes photo data, recorded video data, device capability data, and/or device status data; and/or, the data type corresponding to the second sub-level is the second sub-type, and the data corresponding to the second sub-type includes logistics data, schedule data, device capability data, and/or device status data; and/or, the data type corresponding to the third sub-level is the third sub-type, and the data corresponding to the third sub-type includes preference data, watched-video data, device capability data, and/or device status data.
  • the method further includes: the second device sending the output level of the data of the first device to the first device.
  • a data sharing method includes: a first device receives registration information of a first user sent by a second device, where the registration information includes the raw data of the first user's biological characteristics; the first device recognizes the first user's biometric data and determines whether an account corresponding to that raw data is obtained; in the case that the first device obtains the corresponding account, the first device sends first information to the second device, where the first information indicates the account corresponding to the first user's raw biometric data as determined by the first device; the first device obtains the first user's first data request message from the second device, which requests sharing of the first user's first data; and the first device sends the first data to the second device.
  • the method further includes: in a case where the first device determines that the account corresponding to the original biometric data of the first user is not obtained, The first device sends a first instruction to the second device, where the first instruction is used to indicate that the first device has not obtained an account corresponding to the original biometric data of the first user.
  • optionally, the method further includes: the first device receives the output level of the first device's data sent by the second device, where different output levels of the first device's data correspond to different data types, and data of different types carry different maximum risks.
  • a method for determining the output level of data includes: a second device obtains registration information of a first user input by the first user, where the registration information includes the first user's account; the second device sends the registration information to the first device; the second device receives a second instruction sent by the first device, where the second instruction indicates whether the first user's registration information is stored on the first device; and the second device determines the output level of the first device's data according to the second instruction.
  • optionally, the second device determining the output level of the first device's data according to the second instruction includes: in the case that the first device stores the first user's account, the output level of the first device's data is the second level; in the case that the first device does not store the first user's account, the output level is the fourth level.
  • optionally, the data type corresponding to the second level is the second type, and the data corresponding to the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data.
  • the method further includes: the second device sending the output level of the data of the first device to the first device.
  • the method further includes: the second device obtains a first data request message of the first user, and the first data request message is used to request Sharing the first data of the first user; the second device sends the first data request message.
  • optionally, before the second device sends the first data request message, the method further includes: the second device determines that the first data belongs to a data type corresponding to the output level of the first device's data.
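This pre-check on the second device can be sketched as a default-deny gate; the type table and names are illustrative assumptions mirroring the levels described earlier.

```python
# Illustrative table of data types permitted at each output level.
PERMITTED = {
    2: {"general_location", "video", "logistics", "schedule",
        "preference", "device_capability", "device_status"},
    3: {"video", "logistics", "schedule", "preference",
        "device_capability", "device_status"},
    4: {"device_capability", "device_status"},
}

def should_send_request(data_type, first_device_output_level):
    """Second device forwards the request only if the type is allowed;
    unknown levels are denied by default."""
    return data_type in PERMITTED.get(first_device_output_level, set())
```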
  • a method for determining the output level of data includes: a first device receives registration information of a first user sent by a second device, where the registration information includes the first user's account; the first device determines whether it stores the first user's account; and the first device sends a second instruction to the second device, where the second instruction indicates whether the first user's account is stored on the first device.
  • the method further includes: the first device receiving the output level of the data of the first device sent by the second device.
  • optionally, the method further includes: the first device receives the first user's first data request message sent by the second device, where the first data request message is used to request sharing of the first user's first data; the first device determines that it stores the first data, and the first device shares the first data with the second device.
  • optionally, the method further includes: the first device determines that the first data belongs to a data type corresponding to the output level of the first device's data.
  • a method for acquiring data includes: a second device obtains registration information of a first user, where the registration information includes the raw data of the first user's biological characteristics; the second device recognizes the raw biometric data and determines whether it can obtain an account corresponding to that data; in the case that the second device obtains the corresponding account, the second device sends second information to the first device, where the second information indicates the account obtained by the second device; the second device receives third information sent by the first device, where the third information indicates whether the first device stores the account obtained by the second device; the second device determines the output level of the first device's data according to the third information; and the second device obtains the first user's data request message, which requests the first device to share the first data.
  • optionally, the second device determining the output level of the first device's data according to the third information includes: in the case that the first device stores the account obtained by the second device, the output level of the first device's data is the second level; in the case that the first device does not store that account, the output level is the fourth level.
  • optionally, the method further includes: in the case that the second device does not obtain an account corresponding to the first user's raw biometric data, the second device sends the first user's registration information to the first device; the second device receives a third instruction sent by the first device, where the third instruction indicates that the first device has not obtained a corresponding account either; and the second device determines, according to the third instruction, that the output level of the first device's data is the fourth level.
  • optionally, the method further includes: in the case that the second device does not obtain an account corresponding to the first user's raw biometric data, the second device sends the first user's registration information to the first device; the second device receives fourth information sent by the first device, where the fourth information indicates the account corresponding to the first user's raw biometric data as determined by the first device; and the second device determines, according to the fourth information, that the output level of the first device's data is the third level.
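The negotiation described in the preceding bullets (local recognition on the second device first, then fallback to the first device) can be sketched as a single function; all names are illustrative, and the recognizers are modeled as functions that map raw biometric data to an account or `None`.

```python
def negotiate_output_level(raw_biometric, second_device_resolve,
                           first_device_accounts, first_device_resolve):
    """Sketch of the two-device output-level negotiation (assumed names).

    second_device_resolve / first_device_resolve: recognizers mapping raw
    biometric data to an account, or None when no account matches.
    first_device_accounts: the set of accounts stored on the first device.
    """
    account = second_device_resolve(raw_biometric)
    if account is not None:
        # Second device resolved the account; the second/third information
        # exchange asks whether the first device stores it.
        return 2 if account in first_device_accounts else 4
    # Fallback: forward the raw registration information to the first device.
    account = first_device_resolve(raw_biometric)
    if account is None:
        return 4  # third instruction: no device recognizes the user
    return 3      # fourth information: only the first device knows the user
```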
  • optionally, the method further includes: when the first user's registration information is the first user's 3D face, fingerprint, iris, or DNA, the output level of the first device's data is the first sub-level of the third level; when the registration information is the first user's 2D face or vein, the output level is the second sub-level of the third level; and when the registration information is the first user's voice or signature, the output level is the third sub-level of the third level.
  • the data type corresponding to the second level is the second type, and the data corresponding to the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or, the data type corresponding to the third level is the third type, and the data corresponding to the third type includes video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or, the data type corresponding to the fourth level is the fourth type, and the data corresponding to the fourth type includes device capability data and/or device status data.
• The data type corresponding to the first sub-level is a first sub-type, and the data corresponding to the first sub-type includes photo data, recorded video data, device capability data, and/or device status data; and/or, the data type corresponding to the second sub-level is a second sub-type, and the data corresponding to the second sub-type includes logistics data, schedule data, device capability data, and/or device status data; and/or, the data type corresponding to the third sub-level is a third sub-type, and the data corresponding to the third sub-type includes preference data, watched-video data, device capability data, and/or device status data.
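The mapping from output level to shareable data types described above can be sketched as a lookup table. The level and data-type names are illustrative labels, not identifiers from the application itself; note that each lower level shares a subset of the types allowed by the level above it:

```python
# Sketch of the output-level -> shareable-data-type mapping described above.
# All names here are hypothetical labels chosen for illustration.
SHAREABLE_DATA = {
    "second_level": {"general_location", "video", "logistics", "schedule",
                     "preference", "device_capability", "device_status"},
    "third_level": {"video", "logistics", "schedule", "preference",
                    "device_capability", "device_status"},
    "fourth_level": {"device_capability", "device_status"},
}

def shareable(level: str) -> set:
    """Return the data types a device at the given output level may share."""
    return SHAREABLE_DATA[level]
```

Because each lower level is a strict subset of the one above, a requesting device never gains data types by being assigned a lower level.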
  • the method further includes: the second device sending the output level of the data of the first device to the first device.
• A method for data sharing, including: the first device receives second information sent by the second device, where the second information is used to indicate that the second device has recognized the original biometric data of the first user and obtained the corresponding account.
• The method further includes: the first device receives the registration information of the first user sent by the second device, where the registration information of the first user includes the account of the first user and/or the original biometric data of the first user.
• If the first device does not obtain the account corresponding to the original biometric data of the first user, the first device sends a third instruction to the second device, where the third instruction is used to indicate that the first device has not obtained the account corresponding to the original biometric data of the first user; if the first device obtains the account corresponding to the original biometric data of the first user, the first device sends fourth information to the second device, where the fourth information is used to indicate the account, determined by the first device, that corresponds to the original biometric data of the first user.
  • the method further includes: the first device receiving the output level of the data of the first device sent by the second device.
• A data sharing device, including: a processor coupled with a memory; the memory is used to store a computer program; and the processor is used to execute the computer program stored in the memory, so that the device executes the method described in the first aspect and certain implementations of the first aspect, the method described in the fourth aspect and certain implementations of the fourth aspect, and the method described in the sixth aspect and certain implementations of the sixth aspect.
• A data sharing device, including: a processor coupled with a memory; the memory is used to store a computer program; and the processor is used to execute the computer program stored in the memory, so that the device executes the method described in the second aspect and certain implementations of the second aspect, the method described in the third aspect and certain implementations of the third aspect, and the method described in the fifth aspect and certain implementations of the fifth aspect.
• A computer-readable medium, including a computer program, which, when run on a computer, causes the computer to execute the methods described in the first aspect to the eighth aspect and in certain implementations of the first aspect to the eighth aspect.
• In a twelfth aspect, a system chip is provided, which includes an input/output interface and at least one processor.
• The at least one processor is used to call instructions in a memory to perform the operations of the methods in the first aspect to the eighth aspect and in certain implementations of the first aspect to the eighth aspect.
• The system chip may further include at least one memory and a bus, and the at least one memory is used to store the instructions executed by the processor.
  • Fig. 1 is an example diagram of an application scenario in which the method and device of the embodiment of the present application can be applied.
  • FIG. 2 is a schematic flowchart of a data sharing method 200 provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of the permission level of the device to access the database and the shareable data corresponding to each permission level provided by the embodiment of the application.
  • FIG. 4 is a specific schematic flowchart of step 220 in the method 200 provided in an embodiment of the present application.
• FIG. 5 is another specific schematic flowchart of step 220 in the method 200 provided in an embodiment of the present application.
• FIG. 6 is another specific schematic flowchart of step 220 in the method 200 provided by an embodiment of the present application.
• FIG. 7 is another specific schematic flowchart of step 220 in the method 200 provided by an embodiment of the present application.
  • FIG. 8 is a specific schematic flowchart of step 240 in the method 200 provided in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an example of data sharing between multiple devices provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another example of data sharing between multiple devices according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another example of data sharing between multiple devices according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram of another example of data sharing between multiple devices according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of the software structure of an electronic device provided by an embodiment of the present application.
  • Fig. 1 is an example diagram of an application scenario in which the method and device of the embodiment of the present application can be applied.
  • the scene shown in Figure 1 includes mobile phone 101, vehicle 102, tablet computer (pad) 103, watch 104, mobile phone 111, mobile phone 121, watch 122, audio 123, TV 131, mobile phone 132, tablet computer 133, watch 134, audio 135 and vehicle 136.
• Account B is registered on the mobile phone 101, the vehicle 102, the tablet computer 103, and the watch 104; only user 1's account A is registered on the mobile phone 111, and/or user 1's biometrics are present on the mobile phone 111; account C is registered on the mobile phone 121, the watch 122, and the audio 123, and the original biometric data of the same user exists on the mobile phone 121, the watch 122, and the audio 123; no account is registered on the TV 131, the mobile phone 132, the tablet 133, the watch 134, the audio 135, or the vehicle 136, and no raw data of the same user's biometrics exists on them; that is, the TV 131, the mobile phone 132, the tablet 133, the watch 134, the audio 135, and the vehicle 136 may each be used by a single user or by multiple users.
  • the device shown in FIG. 1 is only an example, and more or fewer devices may be included in the system.
  • it may only include a TV 131, a mobile phone 121, a mobile phone 111, a tablet computer 103, a watch 104, a stereo 123, and a vehicle 136.
  • the mobile phone 101 can perform face recognition
  • the mobile phone 121 can perform voiceprint recognition.
  • the mobile phone 101 and the mobile phone 121 can also recognize the same biological characteristics.
  • both the mobile phone 101 and the mobile phone 121 can perform face recognition; for another example, both the mobile phone 101 and the mobile phone 121 can perform voiceprint recognition.
• The terminal equipment in the embodiments of the present application may be a mobile phone, a tablet computer, a computer with wireless transmission and reception functions, a virtual reality (VR) terminal, an augmented reality (AR) terminal, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and so on.
  • the TV 131, audio 123, and audio 135 in Figure 1 can represent devices that can collect biometrics but do not have the biometric recognition function.
• For example, the TV 131 can collect facial images and human voices, but it does not have the facial recognition or voiceprint recognition function.
  • the biological characteristics in the embodiments of the present application may include one or more of the following: physical biological characteristics, behavioral biological characteristics, and soft biological characteristics.
  • Physical biological characteristics may include: human face, fingerprints, iris, retina, deoxyribonucleic acid (DNA), skin, hand shape, and veins.
  • Behavioral biological characteristics can include: voiceprints, signatures, and gait.
  • Soft biological characteristics can include: gender, age, height, and weight.
  • each device can communicate through the network.
  • the foregoing network includes a wireless fidelity (WI-FI) network or a Bluetooth network.
  • the foregoing network may also include wireless communication networks, such as 2G, 3G, 4G, and 5G communication networks.
  • the aforementioned network may specifically be a work network or a home network.
• After the TV 131 collects the user's facial image and voice, it saves the user's facial image and voice, as well as the data generated while the user uses the TV 131, and the facial image can be transmitted through the aforementioned network.
• FIG. 2 shows a data sharing method 200 provided by an embodiment of the present application. It should be understood that FIG. 2 shows the steps or operations of the method, but these steps or operations are only examples; the technical solution proposed in this application may also perform other operations or variations of the operations in FIG. 2.
  • the first device and the second device may be terminal devices, and the terminal device may be any of the devices shown in FIG. 1.
  • the first user may be any user who uses the first device and the second device.
  • Step 210 The second device obtains the registration information of the first user input by the first user.
  • the registration information of the first user may include the original data of the first user's account and/or the first user's biological characteristics.
  • the aforementioned account may be registered by the first user; or, the aforementioned account may not be registered by the first user, but the first user uses the second device through the account.
  • the aforementioned account may be a mobile phone number, a user name set by the user, an email address, and so on.
  • the above-mentioned raw data of biological characteristics can be understood as unprocessed data of biological characteristics.
  • the above-mentioned first user uses the second device through the registration information of the first user input by the first user.
• For example, the first user uses the second device through the account of the first user, for example, the user uses account 1 to use the second device; or, the first user uses the second device through the first user's original biometric data, for example, the user uses the user's face image to use the second device.
  • the second device may be a device with a biometric recognition function.
• For example, the second device may be the mobile phone 111, the mobile phone 101, the vehicle 102, the tablet computer 103, the mobile phone 121, the mobile phone 132, the tablet computer 133, or the vehicle 136; or, the second device may be a device that can collect biological characteristics but does not have a biometric recognition function; for example, the second device may be the watch 104, the watch 122, the audio 123, the TV 131, the watch 134, or the audio 135.
  • Biometric identification technology refers to a technology that uses human biological characteristics for identity authentication.
• That is, biometric identification technology uses computers, together with high-tech methods such as optics, acoustics, biosensors, and biostatistics, to identify personal identity based on the inherent physiological and behavioral characteristics of the human body.
• If the second device is a device with a biometric recognition function, the second device will save the recognition result after recognizing the original data of the user's biological characteristics (the recognition result identifies the user).
• The above-mentioned biometric identification result can be understood as the identity obtained through biometric recognition.
• For example, before the mobile phone 101 identifies a user, the mobile phone 101 first collects the face image of the owner 1 of the mobile phone 101, converts the face image of the owner 1 into digital codes, and combines these digital codes to obtain the face feature template of the owner 1. When the mobile phone 101 identifies the user using the mobile phone 101, it collects the original data of the face image of that user and compares it with the face feature template of the owner 1 stored in the database of the mobile phone 101. When the original data of the face image of the user using the mobile phone 101 matches the face feature template of the owner 1 stored in the database of the mobile phone 101, the user using the mobile phone 101 is determined to be the owner 1. For another example, before the speaker 123 identifies the user who uses the speaker 123, the speaker 123 first collects the voice of the owner 2 of the speaker 123, converts the voice of the owner 2 into digital codes, and combines these digital codes to obtain the voiceprint feature template of the owner 2. When the speaker 123 identifies the user who uses the speaker 123, it collects the raw data of that user's voice and compares it with the voiceprint feature template of the owner 2 stored in the database of the speaker 123. If the original data of the user's voice matches the voiceprint feature template of the owner 2, the user who uses the speaker 123 is determined to be the owner 2. The determination that the user of the mobile phone 101 is the owner 1 and that the user of the speaker 123 is the owner 2 is the biometric identification result, and the mobile phone 101 and the speaker 123 will save this result.
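The template matching described above (comparing collected raw data against a stored feature template) can be illustrated as a similarity comparison over feature vectors. Everything in this sketch is an assumption for illustration; the feature vectors, the cosine metric, and the threshold are stand-ins for the far more elaborate extraction and matching used by real biometric systems:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_template(raw_features, templates, threshold=0.9):
    """Return the enrolled identity whose stored template best matches the
    collected raw features, or None if nothing clears the threshold."""
    best_id, best_score = None, threshold
    for identity, template in templates.items():
        score = cosine_similarity(raw_features, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```

Returning `None` corresponds to the "no account obtained" branch of the method, while a matched identity corresponds to resolving the raw data to an owner and, from there, to an account.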
• The user's registration information and the data generated when the user uses the device after entering that registration information are saved in a one-to-one correspondence.
  • the registration information may be the original data of the account number or the biological characteristics.
  • the aforementioned account may be a mobile phone number, a user name set by the user, an email address, and the like.
• When the user's registration information is an account, the data generated on the device while the user uses the device is stored in the memory of the device according to the account used by the user; that is, the data generated by the user using the device through the user's account is stored in the memory under that account. When multiple accounts use the device, the data generated under each account during the use of the device is likewise stored according to the account.
  • Each account can correspond to a storage engine, and the data stored in the corresponding account can be accessed through the corresponding storage engine.
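The per-account storage-engine idea can be sketched as follows. `DeviceStore` and its methods are hypothetical names introduced only for illustration, and a plain dictionary stands in for a real storage engine:

```python
class DeviceStore:
    """Minimal sketch: data generated on a device is partitioned by account,
    with one storage engine (here, a dict) per account."""

    def __init__(self):
        self._engines = {}  # account -> per-account storage engine

    def engine(self, account):
        """Return (creating if needed) the storage engine for an account."""
        return self._engines.setdefault(account, {})

    def put(self, account, key, value):
        self.engine(account)[key] = value

    def get(self, account, key):
        # Data stored under one account is not visible through another.
        return self.engine(account).get(key)
```

Accessing data only through the engine keyed by the account mirrors the text: each account's data can be reached only via that account's storage engine.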
• When the user's registration information is the user's original biometric data, the user's data stored on the device through the user's biometric raw data is also stored in the database according to the account corresponding to the user's original biometric data. The original biometric data and the account can have a one-to-one correspondence, for example, one face corresponds to one account; or, the original biometric data and the account can have a many-to-one relationship, for example, multiple fingerprints correspond to one account, or one face and one fingerprint correspond to one account.
• For example, the biometrics entered while account A is in use, such as fingerprints, iris, and face, are bound to account A, and the biometrics entered while account B is in use are bound to account B.
  • the data stored on the mobile phone 111 by the user 1 through the account A is stored in the database according to the account A.
• The data stored by user 1 on the mobile phone 111 through account A may include the photos taken by user 1 on the mobile phone 111 through account A, the historical songs and playlists saved by user 1 on the mobile phone 111 through account A, the historical location data of user 1 stored on the mobile phone 111 through account A, and so on.
  • the data stored by the user 2 on the mobile phone 111 through the account B is stored in the database according to the account B.
• For example, the data stored by the user 3 on the mobile phone 121 through the face image of the user 3 is stored in the database according to the account C. The data stored by the user 3 on the mobile phone 121 through the face image of the user 3 may include the historical video viewing records stored by the user 3 on the mobile phone 121 through the face image of the user 3, and other data stored by the user 3 on the mobile phone 121 through the face image of the user 3.
  • step 220 is further included.
  • Step 220 Determine the output level of the data of the first device.
• The first device is a device other than the second device in the same network as the second device; or, the first device is a device selected based on the functions of all devices in the same network as the second device; for example, the first device is a device with a biometric identification function.
  • the devices in the aforementioned network may be mutually trusted devices.
  • the network may be a home network, and the devices in the home network are mutually trusted devices.
  • the network may also be a working network, and the devices in the working network are mutually trusted devices.
  • the devices in the above-mentioned network are not only devices connected to the network, but also devices that join the network by scanning a two-dimensional code (identification code), and the two-dimensional code may be preset.
• Alternatively, the first device is a device other than the second device in the same group in the same network as the second device; or, the first device is a device selected based on the functions of all devices in the same group in the same network as the second device; for example, the first device is a device with a biometric identification function.
  • multiple groups may be preset in the above-mentioned network, and the devices in each of the multiple groups may be mutually trusted devices.
  • the network may be a home network, and a home group and a visitor group may be preset in the home network.
• The home group includes the aforementioned first device and the aforementioned second device, and the first device and the second device in the home group are devices that trust each other; the devices in the visitor group and the devices in the home group are devices that do not trust each other, but the devices in the visitor group and the devices in the home group can exchange non-private information with each other.
• The devices in the home group may be not only devices connected to the home network but also devices that join the home network by scanning a QR code, while the devices in the visitor group are only devices connected to the home network.
  • the output level of the data of the first device can be understood as the level of sharing the data on the first device with other devices.
• The output level of the data of the first device is defined relative to the device requesting the data; for different requesting devices, the output level of the data of the first device may be different or the same.
• For the first device itself, that is, for programs running on the first device, the output level of the data of the first device is the first level.
• For other devices, the output level of the data of the first device is determined by what form of registration information the device requesting the data (for example, the second device) uses to access the data on the first device. If the device requesting the data uses the same account as the first device to access the first device, then for the device requesting the data, the output level of the first device's data is the second level; if the device requesting the data uses the original biometric data corresponding to the account used on the first device to access the first device, then for the device requesting the data, the output level of the first device's data is the third level; if the device requesting the data uses neither the same account nor the original biometric data of the first device to access the first device, then for the device requesting the data, the output level of the data of the first device is the fourth level.
• For the device itself, the output level of the device's data is the first level.
  • user A logs in to device 1 through account A or face (face-bound account A), and the data generated during the use of device 1 is associated with account A.
  • the program in the device 1 can use or access all data associated with account A and data unrelated to any account.
• The output level of the data of the first device can be divided into at least the following levels: the second level, the third level, and the fourth level.
• When the user uses the second device through the same account as the first device, for the second device, the output level of the first device's data is the second level. When the user uses the second device through the same first biometric raw data as on the first device, for the second device, the output level of the data of the first device is the third level, where the first biometric raw data is the raw data of any one of the user's biological characteristics. When the user uses the second device with neither the same account nor the same original biometric data as the first device, for the second device, the output level of the first device's data is the fourth level.
• The output levels of the device's data, in order from high to low, are the first level, the second level, the third level, and the fourth level.
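The level rules summarized above can be condensed into one small decision function. This is an illustrative sketch (the function name and the boolean inputs are assumptions, not identifiers from the application); the first level is reserved for the device's own programs:

```python
def output_level(same_account: bool, same_biometric_raw: bool) -> int:
    """Output level of a device's data relative to a requesting device.

    Levels, high to low: 1 (the device itself), 2 (same account),
    3 (same raw biometric data), 4 (neither).
    """
    if same_account:
        return 2
    if same_biometric_raw:
        return 3
    return 4
```

Checking the account first mirrors the ordering of the levels: a shared account grants the higher (second) level even if the same biometrics are also present.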
  • the user only uses the mobile phone 111 through account A, and the data stored by the user on the mobile phone 111 through account A is not allowed to be exported, that is, the data stored by the user on the mobile phone 111 through account A will not be shared with other devices.
• The user uses the mobile phone 101, the vehicle 102, the tablet computer 103, and the watch 104 through the account B. The output level of the data stored by the user on the mobile phone 101 through the account B is the second level; the output level of the data stored by the user on the vehicle 102 through the account B is the second level; the output level of the data stored on the tablet 103 by the user through the account B is the second level; and the output level of the data stored on the watch 104 by the user through the account B is the second level.
  • the user uses the mobile phone 121, the watch 122 and the speaker 123 through the user's first biometric original data, and the output level of the data stored on the mobile phone 121 by the user through the user's first biometric original data is the third level;
  • the output level of the data stored on the watch 122 by the user through the user's first biometric raw data is the third level;
• The output level of the data stored on the audio 123 by the user through the user's first biometric raw data is the third level.
• The user uses the TV 131, the mobile phone 132, the tablet 133, the watch 134, the audio 135, and the vehicle 136 without using any account and without using any user's original biometric data.
• In this case, the output level of the data of each of the TV 131, the mobile phone 132, the tablet 133, the watch 134, the audio 135, and the vehicle 136 is the fourth level.
• The above-mentioned third level can be further divided into at least the following sub-levels: the first sub-level, the second sub-level, and the third sub-level.
  • the output level of the data of the device is the first sub-level, the second sub-level, and the third sub-level in order from high to low.
• When a user uses multiple devices through the user's 3D face, fingerprint, iris, or DNA, that is, when the same user uses the same 3D face, fingerprint, iris, or DNA on multiple devices, the output level of the data of each such device is determined as the first sub-level. For example, when user A uses user A's fingerprint to use device A and also uses user A's fingerprint to use another device (for example, device C), the output level of the data stored on device A by user A through user A's fingerprint is the first sub-level, and the output level of the data stored on device C by user A through user A's fingerprint is also the first sub-level.
• When a user uses multiple devices through the user's 2D face or vein, the output level of the data of each such device is determined as the second sub-level. For example, when user A uses user A's 2D face to use device A and also uses user A's 2D face to use another device (for example, device C), the output level of the data stored on device A by user A through user A's 2D face is the second sub-level, and the output level of the data stored on device C by user A through user A's 2D face is also the second sub-level.
• When a user uses multiple devices through the user's voice or signature, the output level of the data of each such device is determined as the third sub-level. For example, when user A uses user A's voice to use device A and also uses user A's voice to use another device (for example, device C), the output level of the data stored on device A by user A through user A's voice is the third sub-level, and the output level of the data stored on device C by user A through user A's voice is also the third sub-level.
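The grouping of biometric modalities into sub-levels described above can be written as a simple lookup. The modality labels are hypothetical strings chosen for this sketch:

```python
# Illustrative mapping of biometric modality to a sub-level of the third
# level, following the grouping above (1 is the highest sub-level).
THIRD_SUB_LEVEL = {
    "3d_face": 1, "fingerprint": 1, "iris": 1, "dna": 1,
    "2d_face": 2, "vein": 2,
    "voice": 3, "signature": 3,
}

def third_sub_level(modality: str) -> int:
    """Return the sub-level of the third level for a biometric modality."""
    return THIRD_SUB_LEVEL[modality]
```

The ordering reflects how hard each modality is to spoof: 3D face, fingerprint, iris, and DNA rank above 2D face and vein, which in turn rank above voice and signature.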
• When the first user uses the second device and needs to obtain the data in the first device, the output level of the data of the first device needs to be determined; at this time, the output level of the data of the first device is defined relative to the second device. In the following, two cases are used to describe in detail how to determine the output level of the data of the first device, taking as an example that the first device and the second device belong to the same home network, which can communicate via Wi-Fi or Bluetooth.
• Case 1: The second device determines the output level of the data of the first device.
• When the second device is a device that can collect biometrics but does not have the biometric recognition function, the second device can only collect biometrics and needs to rely on other devices to complete the biometric identification.
  • the specific step 220 may include step 220a to step 223a.
  • Step 220a The second device sends the original biometric data of the first user to the first device.
  • Step 221a The first device recognizes the original biometric data of the first user, and determines whether an account corresponding to the biometric raw data of the first user can be obtained.
• If the account cannot be obtained, step 222a is executed.
  • Step 222a The first device sends a first instruction to the second device, where the first instruction is used to indicate that the first device has not obtained an account corresponding to the original biometric data of the first user. After receiving the first instruction, the second device determines that the output level of the data of the first device is the fourth level.
• For example, when the TV 131 (the TV 131 is an example of the second device) uses the camera of the TV 131 to collect the face image of the user who uses the TV 131 and needs to identify that user, the TV 131 sends the collected face image to the mobile phone 132 (the mobile phone 132 is an example of the first device), and the mobile phone 132 compares the original data of the face image of the user using the TV 131, sent by the TV 131, with the feature templates stored in the database of the mobile phone 132. If the original data of the face image of the user using the TV 131 does not match the feature template of the owner 3 stored in the database of the mobile phone 132, the mobile phone 132 does not obtain an account corresponding to the face image of the user using the TV 131, and the mobile phone 132 sends the above-mentioned first instruction to the TV 131. After the TV 131 receives the first instruction, the TV 131 can determine that the output level of the data of the mobile phone 132 is the fourth level.
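The message exchange of Case 1 (steps 220a through 223a) can be sketched with one function per device side. All names here are assumptions made for illustration, and a dictionary lookup stands in for real biometric recognition:

```python
def first_device_reply(raw_biometric, templates):
    """First-device side (steps 221a/222a/222a'): try to resolve the raw
    biometric data to an account; a dict lookup stands in for recognition."""
    account = templates.get(raw_biometric)
    if account is None:
        return {"type": "first_instruction"}            # no account obtained
    return {"type": "first_information", "account": account}

def second_device_output_level(reply, local_accounts):
    """Second-device side (steps 222a/223a): derive the first device's
    data output level (2, 3, or 4) from the reply."""
    if reply["type"] == "first_instruction":
        return 4                                        # fourth level
    return 2 if reply["account"] in local_accounts else 3
```

In the TV 131 / mobile phone 132 example above, the face image fails to match, so the TV receives the first instruction and assigns the fourth level.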
• If the first device can obtain an identification result by recognizing the original biometric data of the first user, the first device can obtain the account corresponding to the original biometric data of the first user.
• If the first device determines that the account corresponding to the original biometric data of the first user is obtained, step 222a' and step 223a are executed.
• Step 222a': The first device sends first information to the second device, where the first information is used to indicate the account corresponding to the original biometric data of the first user determined by the first device.
  • Step 223a The second device determines the output level of the data of the first device according to the first information sent by the first device.
• In the case that the second device determines that it stores the account, determined by the first device, corresponding to the original biometric data of the first user, the second device determines that the output level of the data of the first device is the second level. In the case that the second device determines that it does not store that account, the second device determines that the output level of the data of the first device is the third level.
• Further, in the case that the output level of the data of the first device is the third level, the second device may also determine, according to the specific form of the registration information of the first user input by the first user, which sub-level of the third level the output level of the data of the first device is. If the registration information of the first user input by the first user is the 3D face, fingerprint, iris, or DNA of the first user, the second device determines that the output level of the data of the first device is the first sub-level; if the registration information of the first user input by the first user is the 2D face or vein of the first user, the second device determines that the output level of the data of the first device is the second sub-level; if the registration information of the first user input by the first user is the voiceprint or signature of the first user, the second device determines that the output level of the data of the first device is the third sub-level.
• For example, when the speaker 123 (the speaker 123 is an example of the second device) collects, through the microphone of the speaker 123, the voice of the user who uses the speaker 123 and needs to identify that user, the speaker 123 sends the collected voice to the mobile phone 121 (the mobile phone 121 is an example of the first device), and the mobile phone 121 compares the original data of the voice of the user of the speaker 123, sent by the speaker 123, with the feature templates stored in the database of the mobile phone 121. If the original data matches a stored feature template, the mobile phone 121 obtains the account C corresponding to the voice of the user using the speaker 123, and the mobile phone 121 sends the account C to the speaker 123. If the speaker 123 determines that the account C is stored on itself, the speaker 123 determines that the output level of the data of the mobile phone 121 is the second level.
  • the audio 123 determines that it does not store the account C
  • the audio 123 determines that the output level of the data of the mobile phone 101 is the third level.
  • the speaker 123 can also determine that the output level of the data of the mobile phone 101 is the third sub-level in the third level.
  • In a possible case, the second device can complete the biometric identification function.
  • In this case, the specific step 220 may include step 220b to step 224b, or step 221c to step 224c.
  • In step 220b, the second device recognizes the original biometric data of the first user and determines whether the second device can obtain an account corresponding to the original biometric data of the first user.
  • In the case that the second device obtains the account corresponding to the original biometric data of the first user, step 221b to step 224b are executed.
  • In the case that the second device does not obtain such an account, step 221c to step 224c are executed.
  • Step 221b: The second device sends second information to the first device.
  • The second information is used to indicate that the second device has identified the original biometric data of the first user and obtained the account corresponding to the original biometric data of the first user.
  • In step 222b, the first device searches whether the account indicated by the second information is stored on the first device, and then executes step 223b.
  • Step 223b The first device sends third information to the second device, where the third information is used to indicate whether the account indicated by the second information is stored on the first device.
  • Step 224b The second device determines the output level of the data of the first device according to the third information.
  • In the case that the third information indicates that the first device stores the account indicated by the second information, the second device determines that the output level of the data of the first device is the second level; in the case that the third information indicates that the first device does not store the account indicated by the second information, the second device determines that the output level of the data of the first device is the fourth level.
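The decision in steps 221b to 224b reduces to a membership check. The following sketch assumes the second device has already recognized the user's account and the first device reports whether it stores that account (the "third information"); all names are illustrative.

```python
def determine_level_manner_b(second_device_account: str,
                             first_device_accounts: set) -> int:
    """Sketch of steps 222b-224b: the first device checks whether it stores
    the account indicated by the second information, and the second device
    maps the reply to an output level (second or fourth)."""
    stored = second_device_account in first_device_accounts  # step 222b
    return 2 if stored else 4                                # step 224b
```

In the watch/speaker example that follows, the watch 122 storing account C yields the second level, while a device without account C yields the fourth level.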
  • For example, the mobile phone 121 (the mobile phone 121 is an example of the second device) can identify the fingerprint of the user using the mobile phone 121 and obtain the account C corresponding to that fingerprint.
  • The mobile phone 121 sends the account C to the watch 122 (the watch 122 is an example of the first device).
  • The watch 122 determines that the account C is stored in the watch 122,
  • and the watch 122 sends to the mobile phone 121 the information that the account C is stored in the watch 122.
  • The mobile phone 121 then determines that the output level of the data of the watch 122 is the second level.
  • In contrast, the speaker 135 determines that the account C is not stored in the speaker 135, and the speaker 135 sends to the mobile phone 121 the information that the account C is not stored in the speaker 135; the mobile phone 121 determines that the output level of the data of the speaker 135 is the fourth level.
  • Step 221c: The second device sends the original biometric data of the first user to the first device.
  • In step 222c, the first device recognizes the original biometric data of the first user and determines whether the account corresponding to the original biometric data of the first user can be obtained.
  • In the case that the first device does not obtain the account corresponding to the original biometric data of the first user, the first device sends a third instruction to the second device.
  • The third instruction is used to indicate that the first device has not obtained the account corresponding to the original biometric data of the first user, and the second device determines, according to the third instruction, that the output level of the data of the first device is the fourth level. In the case that the first device obtains the account corresponding to the original biometric data of the first user, step 223c to step 224c are executed.
  • Step 223c The first device sends fourth information to the second device, where the fourth information is used to indicate the account corresponding to the original biometric data of the first user determined by the first device.
  • Step 224c: The second device determines, according to the fourth information, that the output level of the data of the first device is the third level.
  • Optionally, the second device may also determine, according to the fourth information and the specific form of the registration information of the first user input by the first user, which sub-level of the third level the output level of the data of the first device is. Specifically, when the registration information of the first user input by the first user is the 3D face, fingerprint, iris, or DNA of the first user, the second device determines that the output level of the data of the first device is the first sub-level; when the registration information of the first user input by the first user is the 2D face or vein of the first user, the second device determines that the output level of the data of the first device is the second sub-level; when the registration information of the first user input by the first user is the voiceprint or signature of the first user, the second device determines that the output level of the data of the first device is the third sub-level.
  • the second device sends the account of the first user to the first device;
  • the first device determines whether the account of the first user is stored, and the first device sends a second instruction to the second device.
  • the second instruction is used to indicate whether the account of the first user is stored on the first device.
  • When the account of the first user is stored on the first device, the second device determines that the output level of the data of the first device is the second level; when the account of the first user is not stored on the first device, the second device determines that the output level of the data of the first device is the fourth level.
  • the second device may send the output level of the data of the first device determined by the second device to the first device.
  • Case 2: The first device determines the output level of the data of the first device.
  • In case 2, the registration information input by the first user includes the original data of the first user's biometrics,
  • and the second device is a device that can collect biometrics but does not have a biometric identification function.
  • That is, the second device can only collect biometrics, and the second device needs to use other devices to complete the biometric identification function.
  • the specific step 220 may also include step 220d to step 222d.
  • step 220d the second device sends the original biometric data of the first user and all accounts stored in the second device to the first device.
  • step 221d the first device recognizes the original biometric data of the first user, and determines whether an account corresponding to the original biometric data of the first user can be obtained.
  • Step 222d The first device determines the output level of the data of the first device.
  • In the case that the first device obtains the account corresponding to the original biometric data of the first user and that account is among the accounts sent by the second device, the first device determines that the output level of the data of the first device is the second level;
  • in the case that the first device obtains the account corresponding to the original biometric data of the first user but that account is not among the accounts sent by the second device, the first device determines that the output level of the data of the first device is the third level;
  • in the case that the first device does not obtain the account corresponding to the original biometric data of the first user, the first device determines that the output level of the data of the first device is the fourth level.
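A minimal sketch of how the first device might map the recognition result to an output level in step 222d. The assumption that the comparison is against the account list sent by the second device in step 220d comes from the surrounding description; the function and parameter names are illustrative.

```python
def first_device_output_level(matched_account, second_device_accounts) -> int:
    """Step 222d sketch: the first device has tried to recognize the
    biometric sample forwarded by the second device; `matched_account` is
    the recognized account, or None if recognition failed."""
    if matched_account is None:
        return 4  # no account obtained -> fourth level
    if matched_account in second_device_accounts:
        return 2  # account also stored on the second device -> second level
    return 3      # account recognized locally only -> third level
```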
  • Optionally, the first device may also determine, according to the specific form of the registration information of the first user sent by the second device, which sub-level of the third level the output level of the data of the first device is.
  • When the registration information of the first user sent by the second device is the 3D face, fingerprint, iris, or DNA of the first user, the first device determines that the output level of the data of the first device is the first sub-level;
  • when the registration information is the 2D face or vein of the first user, the first device determines that the output level of the data of the first device is the second sub-level;
  • when the registration information of the first user is the voiceprint or signature of the first user, the first device determines that the output level of the data of the first device is the third sub-level.
  • In another possible case, the second device can complete the biometric identification function.
  • In this case, the specific step 220 may also include step 220e to step 222e, or step 221f and step 222f.
  • step 220e the second device recognizes the original biometric data of the first user, and determines whether an account corresponding to the original biometric data of the first user can be obtained.
  • In the case that the second device obtains the account corresponding to the original biometric data of the first user, the above method 200 further includes step 221e and step 222e. In the case that the second device does not obtain the account corresponding to the original biometric data of the first user, the above method 200 further includes step 221f and step 222f.
  • Step 221e The second device sends fifth information to the first device, where the fifth information is used to instruct the second device to obtain an account corresponding to the original biometric data of the first user.
  • Step 222e The first device determines the output level of the data of the first device according to the fifth information.
  • In the case that the first device determines that the first device stores the account corresponding to the original biometric data of the first user sent by the second device, the first device determines that the output level of the data of the first device is the second level; in the case that the first device determines that the first device does not store that account, the first device determines that the output level of the data of the first device is the fourth level.
  • Step 221f The second device sends sixth information to the first device, where the sixth information is used to indicate the original data of the biological characteristics of the first user.
  • step 222f the first device determines, according to the sixth information, whether to obtain the account corresponding to the original biometric data of the first user.
  • Step 223f: The first device determines the output level of the data of the first device.
  • In the case that the first device does not obtain the account corresponding to the original biometric data of the first user, the first device determines that the output level of the data of the first device is the fourth level.
  • In the case that the first device obtains the account corresponding to the original biometric data of the first user, step 224f to step 226f are executed.
  • Step 224f The first device sends seventh information to the second device, where the seventh information is used to indicate the account corresponding to the original biometric data of the first user determined by the first device.
  • Step 225f The second device determines whether there is an account corresponding to the original biometric data of the first user determined by the first device on the second device.
  • step 226f the second device sends eighth information to the first device.
  • the eighth information is used to indicate whether an account corresponding to the original biometric data of the first user determined by the first device is stored on the second device.
  • In the case that the eighth information indicates that the second device stores the account corresponding to the original biometric data of the first user determined by the first device, the first device determines that the output level of the data of the first device is the second level; in the case that the eighth information indicates that the second device does not store that account, the first device determines that the output level of the data of the first device is the third level.
  • Optionally, the first device may also determine, according to the specific form of the registration information of the first user sent by the second device, which sub-level of the third level the output level of the data of the first device is.
  • When the registration information of the first user sent by the second device is the 3D face, fingerprint, iris, or DNA of the first user, the first device determines that the output level of the data of the first device is the first sub-level;
  • when the registration information is the 2D face or vein of the first user, the first device determines that the output level of the data of the first device is the second sub-level;
  • when the registration information of the first user is the voiceprint or signature of the first user, the first device determines that the output level of the data of the first device is the third sub-level.
  • the second device sends the account of the first user to the first device;
  • the first device determines whether the account of the first user is stored. In the case that the account of the first user is stored in the first device, the first device determines that the output level of the data of the first device is the second level; in the case that the account of the first user is not stored in the first device, the first device determines that the output level of the data of the first device is the fourth level.
  • the first device may send the output level of the data of the first device determined by the first device to the second device.
  • the foregoing method may be used to determine the output level of the first device's data.
  • When the second device sends the original biometric data to the first device, it can send it to all devices in the home network except the second device; in this case, all devices in the home network except the second device are first devices. Alternatively, the second device selects a device with the biometric identification function according to the performance of the devices in the home network and sends the original biometric data to that device. After the account corresponding to the original biometric data is obtained, it is then confirmed whether the other devices in the home network except the second device store the account, and the determination of the output level of the data of the other devices in the home network except the second device (that is, the first devices) is completed.
  • the second device may also perform the following step 230.
  • Step 230 The second device obtains a first data request message of the first user, where the first data request message is used to request to share the first data of the first user.
  • the second device may recognize the first user's request for the first data through a voice recognition function; or, the second device may also obtain the first user's request for the first data through the first user's input.
  • the registration information entered by the user on the device and the data generated by the use of the device can be divided into high-impact personal data, medium-impact personal data, low-impact personal data, and non-personal data according to the degree of risk of the data.
  • high-impact personal data can include accurate location data and/or health data, where accurate location data can be understood as latitude and longitude coordinates or trajectories.
  • the accurate location data may be real-time accurate location data of the user when the user uses the device.
  • Medium-impact personal data may include general location data and/or video data.
  • General location data can be understood as the cell identity (CELL ID) of the cell where the terminal device is located or the basic service set identifier (BSSID) of the wireless fidelity (Wi-Fi) network to which the device is connected.
  • General location data cannot directly locate the latitude and longitude coordinates, but can roughly identify information about the user's location.
  • General location data may be the user's historical location data when the user uses the device.
  • General location data can reflect a place of interest to the user, for example, a place where the user likes to eat or a place where the user likes to entertain.
  • Low-impact data may include logistics data, schedule data, and/or preference data; non-personal data may include equipment capability data and/or equipment status data.
  • Among them, high-impact personal data can be understood as the part of personal data that has the highest risk impact on users, that is, the risk of this part of the data is the highest; medium-impact personal data can be understood as the part of personal data whose impact on users is relatively high, that is, the risk of this part of the data is relatively high; low-impact personal data can be understood as the part of personal data whose risk for users is low, that is, the risk of this part of the data is low; non-personal data can be understood as data that has nothing to do with the user and is data of the device itself.
  • The degree of risk in the embodiments of the present application can also be replaced by the degree of privacy, and the risk level can also be replaced by the privacy level.
  • When the user uses the device through the registration information entered by the user and data is generated on the device, the device will label the data on the device according to the degree of risk of the data. For example, precise location data is labeled as high-impact personal data; general location data is labeled as medium-impact personal data; user preference data is labeled as low-impact personal data; device capability data is labeled as non-personal data.
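The labeling step can be pictured as a lookup table keyed by data kind. The kinds and label strings below are assumptions chosen to match the examples in the text; they are not an API defined by the method itself.

```python
# Risk label attached to each kind of data when it is generated on the device.
RISK_LABELS = {
    "precise_location": "high-impact personal data",
    "health": "high-impact personal data",
    "general_location": "medium-impact personal data",
    "video": "medium-impact personal data",
    "logistics": "low-impact personal data",
    "schedule": "low-impact personal data",
    "preference": "low-impact personal data",
    "device_capability": "non-personal data",
    "device_status": "non-personal data",
}

def risk_label(data_kind: str) -> str:
    """Return the risk label the device would store alongside the data."""
    return RISK_LABELS[data_kind]
```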
  • The higher the output level of the data of the device whose data is requested, the higher the risk level of the data that the device requesting the data can access.
  • When the output level of the data of the device whose data is requested is the second level, the data type that the device requesting the data can access is the second type, and the highest-risk data it can access is medium-impact personal data.
  • The second type of data can include medium-impact personal data, low-impact personal data, and non-personal data.
  • When the output level of the data of the device whose data is requested is the third level, the data type that the device requesting the data can access is the third type, and the highest-risk data it can access is low-impact personal data.
  • The third type of data can include low-impact personal data and non-personal data.
  • When the output level of the data of the device whose data is requested is the fourth level, the data type that the device requesting the data can access is the fourth type, which may include non-personal data. Understandably, when the device requesting the data is itself the device whose data is requested, all data types can be accessed, namely the first type, and the highest-risk data that can be accessed is high-impact personal data.
  • The first type of data includes high-impact personal data, medium-impact personal data, low-impact personal data, and non-personal data.
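The level-to-type mapping above amounts to a nested table: each output level admits a set of risk categories. A minimal sketch (risk categories abbreviated; the dict shape is an illustration, not part of the method):

```python
# Risk categories that a requesting device may access at each output level.
ACCESSIBLE_RISKS = {
    1: {"high", "medium", "low", "non-personal"},  # first type: everything
    2: {"medium", "low", "non-personal"},          # second type
    3: {"low", "non-personal"},                    # third type
    4: {"non-personal"},                           # fourth type
}

def may_request(output_level: int, data_risk: str) -> bool:
    """Whether data carrying this risk label may be shared at this level."""
    return data_risk in ACCESSIBLE_RISKS[output_level]
```

This check is the one the second device (Manner 1) or the first device (Manner 2) performs before a data request is forwarded or answered.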
  • As mentioned above, the third level can be further divided into at least two of the following: the first sub-level, the second sub-level, and the third sub-level, and the data types corresponding to the data output sub-levels of the device whose data is requested can be different.
  • When the output level of the data of the device whose data is requested is the first sub-level, the data types that the device requesting the data can access include photo data, recorded video data, device capability data, and/or device status data, for example, photos taken by the user or videos recorded by the user.
  • When the output level is the second sub-level, the data types that the device requesting the data can access include logistics data, schedule data, device capability data, and/or device status data, for example, the user's express shipping data.
  • When the output level is the third sub-level, the data types that the device requesting the data can access include preference data, watched-video data, device capability data, and/or device status data, for example, the types of songs the user likes to listen to or the singers the user likes; for another example, the user's sports preferences; for another example, the user's video watching records.
  • The foregoing second device may be the device requesting the data,
  • and the foregoing first device may be the device whose data is requested.
  • step 240 may be further included.
  • Step 240: Determine whether the first device shares the first data.
  • Step 240 is specifically described below in two manners.
  • Manner 1: The second device determines whether the first device shares the first data.
  • the specific step 240 may include step 241a to step 244a.
  • Step 241a: The second device determines whether the first data belongs to data of the data type corresponding to the output level of the data of the first device. When the first data belongs to data of the data type corresponding to the output level of the data of the first device, the second device executes step 242a; in the case that the first data does not belong to data of that data type, the second device does not send the first data request message to the first device.
  • Step 242a The second device sends a first data request message to the first device.
  • the second device may determine to send the foregoing first data request message to at least one first device among the multiple first devices according to a preset rule.
  • For example, the preset rule may be to select, among the plurality of first devices, the first devices whose distance from the second device is less than a first threshold; or, the preset rule may be to select the first devices from which the second device requests data with a frequency greater than a second threshold; or, the preset rule may be to select the first devices whose confidence level is greater than a third threshold.
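The preset rules can be sketched as a filter over candidate first devices. The text lists the three rules as alternatives; the any-of combination below and all field names are assumptions for illustration.

```python
def select_first_devices(devices, max_distance, min_request_freq, min_confidence):
    """Select first devices to receive the first data request message:
    keep a device if it satisfies any one of the three preset rules."""
    return [
        d["name"] for d in devices
        if d["distance"] < max_distance          # rule 1: distance threshold
        or d["request_freq"] > min_request_freq  # rule 2: request frequency
        or d["confidence"] > min_confidence      # rule 3: confidence level
    ]
```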
  • Step 243a: The first device searches whether the first device stores the first data.
  • In the case that the first device stores the first data, step 244a is executed.
  • Whether the first data is stored on the first device specifically refers to whether there is first data associated with the account of the first user on the first device.
  • Step 244a: The first device shares the first data with the second device.
  • Optionally, the second device may further obtain a second data request message of the first user, where the second data request message is used to request to share second data.
  • The second data and the first data both belong to data of the data type corresponding to the output level of the data of the first device.
  • In this case, the second device directly sends the second data request message of the first user to the first device, and in the case that the second data is stored in the first device, the first device shares the second data with the second device. This can effectively provide users with a differentiated, personalized experience.
  • Manner 2: The first device determines whether the first device shares the first data.
  • the specific step 240 may include step 241b to step 244b.
  • Step 241b: The second device sends a first data request message to the first device.
  • the second device may determine to send the first data request message to at least one first device among the multiple first devices according to a preset rule.
  • For example, the preset rule may be to select, among the plurality of first devices, the first devices whose distance from the second device is less than a first threshold; or, the preset rule may be to select the first devices from which the second device requests data with a frequency greater than a second threshold; or, the preset rule may be to select the first devices whose confidence level is greater than a third threshold.
  • Step 242b: The first device determines, according to the first data request message, whether the first data belongs to data of the data type corresponding to the output level of the data of the first device.
  • In the case that the first data belongs to data of the data type corresponding to the output level of the data of the first device, step 243b is further included.
  • Step 243b: The first device searches whether the first device stores the first data.
  • In the case that the first device stores the first data, step 244b is also executed.
  • Whether the first data is stored on the first device specifically refers to whether there is first data associated with the account of the first user on the first device.
  • Step 244b: The first device shares the first data with the second device.
  • Optionally, the second device may further obtain a second data request message of the first user, where the second data request message is used to request to share second data.
  • The second data and the first data both belong to data of the data type corresponding to the output level of the data of the first device.
  • In this case, the second device sends the second data request message of the first user to the first device,
  • and in the case that the first device stores the second data, the first device shares the second data with the second device.
  • For example, when the user 1 uses the vehicle 136 through account A (the vehicle 136 is an example of the second device), the vehicle 136 sends the user 1's account A to one or more devices through the network.
  • The one or more devices may be devices connected to the same network as the vehicle 136.
  • The one or more devices may be devices as shown in FIG. 1.
  • Here, the mobile phone 111 (the mobile phone 111 is an example of the first device) and the mobile phone 101 (the mobile phone 101 is another example of the first device) are described as examples.
  • When the mobile phone 111 receives the account A of the user 1, the mobile phone 111 determines that the account A of the user 1 is stored in the mobile phone 111, and the mobile phone 111 determines that the output level of the data of the mobile phone 111 is the second level. The output level of the data of the mobile phone 111 is relative to the vehicle 136, and the mobile phone 111 sends the output level of the data of the mobile phone 111 to the vehicle 136.
  • The vehicle 136 obtains the data request message of the user 1, where the data request message of the user 1 is used to request to share the places that the user 1 likes to entertain. Because the places that the user 1 likes to entertain belong to data of the data type corresponding to the output level of the data of the mobile phone 111,
  • the vehicle 136 sends the user 1's data request message to the mobile phone 111. After the mobile phone 111 receives the user 1's data request message, if the places that the user 1 likes to entertain are stored on the mobile phone 111, the mobile phone 111 shares the places that the user 1 likes to entertain with the vehicle 136.
  • When the mobile phone 101 receives the account A of the user 1, the mobile phone 101 determines that the account A of the user 1 is not stored in the mobile phone 101, and the mobile phone 101 determines that the output level of the data of the mobile phone 101 is the fourth level.
  • The output level is relative to the vehicle 136, and the mobile phone 101 sends the output level of the data of the mobile phone 101 to the vehicle 136.
  • The vehicle 136 obtains the data request message of the user 1, where the request message of the user 1 is used to request to share the places that the user 1 likes to entertain. Because the places that the user 1 likes to entertain do not belong to data of the data type corresponding to the output level of the data of the mobile phone 101, the vehicle 136 does not send the user 1's data request message to the mobile phone 101. That is, the vehicle 136 can only get the places that the user 1 likes to entertain from the mobile phone 111, so the driver of the vehicle 136 can follow the places that the user 1 likes to entertain obtained from the mobile phone 111, thereby driving the vehicle 136 to the destination.
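The vehicle example can be put together end to end: the vehicle derives each phone's output level from whether the phone stores account A, then forwards the request only where that level admits low-impact preference data such as the places the user likes to entertain. All values and names below are illustrative stand-ins, not data from the patent.

```python
# Risk categories accessible at the two levels that arise in this example.
ACCESSIBLE_RISKS = {2: {"medium", "low", "non-personal"}, 4: {"non-personal"}}

PHONES = {
    "mobile_111": {"accounts": {"A"},
                   "data": {("A", "entertainment_places"): ["cinema", "bowling"]}},
    "mobile_101": {"accounts": set(), "data": {}},
}

def request_from_vehicle(account, data_kind, data_risk):
    """Forward a data request only to phones whose output level admits it."""
    shared = {}
    for name, phone in PHONES.items():
        level = 2 if account in phone["accounts"] else 4  # step 220 outcome
        if data_risk in ACCESSIBLE_RISKS[level]:          # vehicle-side check
            shared[name] = phone["data"].get((account, data_kind))
    return shared
```

Here mobile_111 (second level) answers with the stored places, while mobile_101 (fourth level) is never asked.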
  • For another example, when the user 2 uses the TV 131 through account B (the TV 131 is an example of the second device), the TV 131 sends the user 2's account B to one or more devices through the network.
  • one or more devices may be devices that are connected to the same home network as the TV 131.
  • The one or more devices may be devices as shown in FIG. 1, where the tablet computer 103 (the tablet computer 103 is an example of the first device) and the speaker 123 (the speaker 123 is another example of the first device) are described as examples.
  • When the tablet 103 receives the account B of the user 2, the tablet 103 determines that the account B of the user 2 is stored in the tablet 103, and the tablet 103 sends to the TV 131 the information that the account B of the user 2 is stored on the tablet 103. The TV 131 then determines that the output level of the data of the tablet computer 103 is the second level. The output level of the data of the tablet computer 103 is relative to the TV 131, and the TV 131 sends the output level of the data of the tablet computer 103 to the tablet computer 103.
  • the TV 131 obtains user 2’s data request message.
  • the user 2’s data request message is used to request to share user 2’s historical playlist data.
  • The TV 131 sends the user 2's data request message to the tablet 103 because the aforementioned historical playlist data of the user 2 belongs to data of the data type corresponding to the output level of the data of the tablet computer 103. If the historical playlist data of the user 2 is stored on the tablet 103, the tablet 103 shares the historical playlist data of the user 2 with the TV 131.
  • When the speaker 123 determines that the user 2's account B is not stored in the speaker 123, the speaker 123 sends to the TV 131 an indication that the speaker 123 does not store the user 2's account B. The TV 131 then determines that the output level of the data of the speaker 123 is the fourth level. The output level of the data of the speaker 123 is relative to the TV 131, and the TV 131 sends the output level of the data of the speaker 123 to the speaker 123.
  • the TV 131 obtains user 2’s data request message.
  • the user 2’s data request message is used to request to share user 2’s historical playlist data.
  • The TV 131 does not send the user 2's data request message to the speaker 123, because the aforementioned historical playlist data of the user 2 does not belong to data of the data type corresponding to the output level of the data of the speaker 123, and the speaker 123 will not share the historical playlist data of the user 2 with the TV 131.
  • For another example, when the user 3 uses the TV 131 by voice and there is no account corresponding to the user 3's voiceprint on the TV 131, the TV 131 cannot recognize the user 3's voiceprint, and the TV 131 sends the user 3's voice to one or more devices through the network.
  • one or more devices may be devices connected to the same home network as the TV 131.
  • The one or more devices may be devices as shown in FIG. 1, where the tablet computer 103 (the tablet computer 103 is an example of the first device) and the speaker 123 (the speaker 123 is another example of the first device) are described as examples.
  • When the tablet computer 103 receives the voice of the user 3, the tablet computer 103 does not recognize the voiceprint of the user 3 and determines that there is no account corresponding to the voiceprint of the user 3 in the tablet computer 103. The tablet computer 103 then determines that the output level of the data of the tablet computer 103 is the fourth level.
  • The output level of the data of the tablet computer 103 is relative to the TV 131.
  • The tablet computer 103 sends the output level of the data of the tablet computer 103 to the TV 131, and the TV 131 obtains the user 3's data request message.
  • the user 3’s data request message is used to request to share user 3’s historical playlist data.
  • The TV 131 determines that the user 3's historical playlist data does not belong to data of the data type corresponding to the output level of the data of the tablet 103, so the TV 131 does not send the above-mentioned user 3's data request message to the tablet computer 103.
  • When the speaker 123 receives the user 3's voice,
  • the speaker 123 recognizes the user 3's voiceprint and determines that the account corresponding to the user 3's voiceprint is stored in the speaker 123.
  • The speaker 123 determines that the output level of the data of the speaker 123 is the third level, and since the user 3 uses the TV 131 through the voiceprint, the output level of the data of the speaker 123 is the third sub-level.
  • The output level of the data of the speaker 123 is relative to the TV 131.
  • The speaker 123 sends the output level of the data of the speaker 123 to the TV 131, and the TV 131 obtains the data request message of the user 3.
  • The data request message of the user 3 is used to request to share the historical playlist data of the user 3. The TV 131 determines that the historical playlist data of the user 3 belongs to data of the data type corresponding to the output level of the data of the speaker 123, so the TV 131 sends the user 3's data request message to the speaker 123. If the speaker 123 stores the user 3's historical playlist data, the speaker 123 shares the historical playlist data of the user 3 with the TV 131. In this way, when the user 2 uses the TV 131 through account B, the TV 131 receives the historical playlist data stored on the tablet 103 by the user 2 through account B; when the user 3 uses the TV 131 by voice, the TV 131 can access the user 3's historical playlist data stored on the speaker 123.
  • user 3 can use the mobile phone 121 through user 3’s face image, user 3’s fingerprint, or user 3’s voice; user 3 can use the watch 122 through user 3’s face image and user 3’s voice; user 3 can use the speaker 123 through user 3’s voice.
  • the data stored by user 3 in the mobile phone 121 through user 3’s original biometric data is stored in accordance with the account C used by user 3; the data stored by user 3 in the watch 122 through user 3’s original biometric data is stored in accordance with the account C used by user 3; the data stored by user 3 in the speaker 123 through user 3’s original biometric data is stored in accordance with the account C used by user 3.
  • one or more devices may be devices that are connected to the same network as the vehicle 102.
  • the one or more devices may be devices as shown in FIG. 1, where one device, the speaker 123 (the speaker 123 is an example of the first device), is described as an example.
  • when the speaker 123 receives the voice of user 3, the speaker 123 determines that the account corresponding to the voiceprint of user 3 stored in the speaker 123 is account B, and the speaker 123 determines that the output level of the data of the speaker 123 is the second level; the output level of the data of the speaker 123 is relative to the vehicle 102.
  • the speaker 123 sends the output level of the data of the speaker 123 to the vehicle 102, and the vehicle 102 obtains the data request message of user 3.
  • the data request message is used to request the sharing of user 3’s historical playlist data.
  • the vehicle 102 determines that user 3’s historical playlist data belongs to the data type corresponding to the output level of the data of the speaker 123, so the vehicle 102 sends the aforementioned data request message of user 3 to the speaker 123. If the historical playlist data of user 3 is stored on the speaker 123, the speaker 123 shares the historical playlist data of user 3 with the vehicle 102.
  • the vehicle 102 will recognize the fingerprint of user 3, the account corresponding to the fingerprint of user 3 is obtained as account B, and the vehicle 102 sends account B to one or more devices through the network.
  • one or more devices may be devices connected to the same network as the vehicle 102.
  • the one or more devices may be devices as shown in Fig. 1, where one device, the mobile phone 101 (the mobile phone 101 is another example of the first device), is taken as an example for description.
  • the mobile phone 101 receives the account B
  • the mobile phone 101 determines that the account B is stored in the mobile phone 101
  • the mobile phone 101 determines that the output level of the data of the mobile phone 101 is the second level, and the output level of the data of the mobile phone 101 is relative to the vehicle 102.
  • the mobile phone 101 sends the output level of the data of the mobile phone 101 to the vehicle 102
  • the vehicle 102 obtains the data request message of the user 3
  • the request message is used to request to share user 3’s favorite fitness place.
  • the vehicle 102 determines that user 3’s favorite fitness place belongs to the data type corresponding to the output level of the data of the mobile phone 101, so the vehicle 102 sends the data request message of user 3 to the mobile phone 101.
  • the mobile phone 101 stores user 3’s favorite fitness place.
  • the mobile phone 101 shares user 3’s favorite fitness place with the vehicle 102.
  • the vehicle 102 will not only receive the historical playlist data stored by user 3 on the speaker 123, but will also receive user 3’s favorite fitness place stored on the mobile phone 101, so that the vehicle 102 can play user 3’s favorite songs according to user 3’s historical playlist; the vehicle 102 can also obtain, according to user 3’s original fingerprint, user 3’s favorite fitness place stored in the mobile phone 101, and drive the vehicle 102 to the place where user 3 likes to exercise.
  • when user 3 uses the tablet computer 103 with the original voice of user 3 (the tablet computer 103 is an example of the second device), the tablet computer 103 performs voiceprint recognition on the original voice of user 3, and the account corresponding to the voiceprint of user 3 is obtained as account B; the tablet 103 then sends account B to one or more devices through the network, where the one or more devices may be devices connected to the same home network as the tablet 103.
  • the one or more devices may be devices as shown in FIG. 1.
  • one device is a watch 122 (the watch 122 is an example of the first device) as an example for description.
  • the watch 122 determines that account B is not stored in the watch 122, and the watch 122 determines that the output level of the data of the watch 122 is the fourth level; the tablet computer 103 sends the above-mentioned user 3 data request message to the watch 122. Since there is no data associated with user 3’s account B on the watch 122, the watch 122 will not share data with the tablet 103.
  • the tablet 103 will recognize the original 2D face of user 3, and the account corresponding to the original 2D face of user 3 is account B; the tablet 103 then sends account B to one or more devices through the network.
  • One or more devices may be devices connected to the same home network as the tablet computer 103.
  • the one or more devices may be as shown in Figure 1.
  • a device is a mobile phone 121 (the mobile phone 121 is another example of the first device) as an example for description.
  • the mobile phone 121 determines that account B is not stored in the mobile phone 121, the mobile phone 121 determines that the output level of the data of the mobile phone 121 is the second level, and the mobile phone 121 sends the output level of the data of the mobile phone 121 to the tablet computer 103.
  • the tablet computer 103 obtains the data request message of the user 3
  • the data request message of the user 3 is used to request to share the schedule data of the user 3
  • the tablet computer 103 determines that the schedule data belongs to the data type corresponding to the output level of the data of the mobile phone 121.
  • the tablet computer 103 sends the aforementioned user 3 data request message to the mobile phone 121.
  • the mobile phone 121 stores user 3’s schedule data
  • the mobile phone 121 shares user 3’s schedule data with the tablet computer 103.
  • when one or more users use the TV 131, mobile phone 132, tablet 133, watch 134, speaker 135, or vehicle 136, they are all in the guest state; that is, the one or more users do not use the TV 131, mobile phone 132, tablet 133, watch 134, speaker 135, or vehicle 136 through any account or any original biometric data. In that case, the TV 131, mobile phone 132, tablet 133, watch 134, speaker 135, or vehicle 136 does not store the personal data of the one or more users (for example, historical video viewing records), and the TV 131, mobile phone 132, tablet 133, watch 134, speaker 135, or vehicle 136 will only use non-personal data for data sharing.
  • only the non-personal data of each device can be shared, that is, the device capability data and/or device status data of the device. For example, the data generated by the one or more users on the TV 131 is not stored separately for each user; only the non-personal data generated by all users who use the TV 131 is saved, and the TV 131 will only share the device capability data or device status data of the TV 131 with other devices.
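The guest-state rule above amounts to filtering what a device offers for sharing down to non-personal data. A minimal sketch, with invented key names (the data keys are illustrative, not from this application):

```python
# Illustrative set of non-personal data keys: device capability and
# device status data, per the guest-state rule described above.
NON_PERSONAL_KEYS = {"device_capability", "device_status"}


def shareable_data(device_data: dict, guest_state: bool) -> dict:
    """Return the subset of device data that may be shared.

    In the guest state only non-personal data (device capability and
    status) is shared; otherwise all of the device's data is eligible.
    """
    if guest_state:
        return {k: v for k, v in device_data.items() if k in NON_PERSONAL_KEYS}
    return dict(device_data)
```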
  • the above method 200 may further include step 250.
  • the second device saves the first data shared by the first device.
  • FIG. 13 shows a schematic structural diagram of an electronic device 1300 provided by an embodiment of the present application.
  • the electronic device 1300 may be the first device in the above method 200, and the electronic device 1300 may execute the steps performed by the first device in the above method 200.
  • the electronic device 1300 may be the second device in the above method 200, and the electronic device 1300 may execute the steps performed by the second device in the above method 200.
  • the electronic device 1300 can be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, an in-vehicle device, a smart home device, and/or a smart city device; the embodiment of this application does not impose any special restrictions on the specific type of the electronic device.
  • the electronic device 1300 may include a processor 1310, an external memory interface 1320, an internal memory 1321, a universal serial bus (USB) interface 1330, a charging management module 1340, a power management module 1341, a battery 1342, an antenna 1, and an antenna 2.
  • a mobile communication module 1350, a wireless communication module 1360, an audio module 1370, a speaker 1370A, a receiver 1370B, a microphone 1370C, an earphone jack 1370D, a sensor module 1380, buttons 1390, a motor 1391, an indicator 1392, a camera 1393, a display 1394, a subscriber identification module (SIM) card interface 1395, etc.
  • the sensor module 1380 can include a pressure sensor 1380A, a gyroscope sensor 1380B, an air pressure sensor 1380C, a magnetic sensor 1380D, an acceleration sensor 1380E, a distance sensor 1380F, a proximity light sensor 1380G, a fingerprint sensor 1380H, a temperature sensor 1380J, a touch sensor 1380K, an ambient light sensor 1380L, a bone conduction sensor 1380M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 1300.
  • the electronic device 1300 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 1310 may include one or more processing units.
  • the processor 1310 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
  • a memory may also be provided in the processor 1310 to store instructions and data.
  • the memory in the processor 1310 is a cache memory.
  • the memory can store instructions or data that have just been used or recycled by the processor 1310. If the processor 1310 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided, the waiting time of the processor 1310 is reduced, and the efficiency of the system is improved.
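The caching behaviour described above, keeping recently used instructions or data close to the processor so repeated accesses avoid slower memory, can be illustrated with a small least-recently-used cache. LRU is one common replacement policy; the embodiment does not specify which policy the processor cache uses:

```python
from collections import OrderedDict


class LRUCache:
    """Tiny least-recently-used cache: recently used entries stay
    resident, and the oldest entry is evicted when capacity is
    exceeded, mimicking how a processor cache avoids repeated slow
    memory accesses."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None  # miss: the processor would fall back to memory
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
```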
  • the processor 1310 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 1310 may include multiple sets of I2C buses.
  • the processor 1310 may be coupled to the touch sensor 1380K, charger, flash, camera 1393, etc., through different I2C bus interfaces.
  • the processor 1310 may couple the touch sensor 1380K through an I2C interface, so that the processor 1310 and the touch sensor 1380K communicate through an I2C bus interface to realize the touch function of the electronic device 1300.
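As a concrete illustration of the I2C bus just described: every transfer on the SDA line begins with a byte formed from the 7-bit device address plus a read/write bit. A minimal sketch of that framing (pure illustration, no actual bus access):

```python
def i2c_address_byte(addr7: int, read: bool) -> int:
    """Form the first byte of an I2C transfer: the 7-bit device address
    shifted left by one, with the least significant bit set for a read
    and clear for a write, per the I2C bus convention."""
    if not 0 <= addr7 <= 0x7F:
        raise ValueError("I2C addresses are 7 bits")
    return (addr7 << 1) | (1 if read else 0)
```

For example, writing to a device at address 0x48 puts 0x90 on the bus, and reading from it puts 0x91.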
  • the I2S interface can be used for audio communication.
  • the processor 1310 may include multiple sets of I2S buses.
  • the processor 1310 may be coupled with the audio module 1370 through an I2S bus to implement communication between the processor 1310 and the audio module 1370.
  • the audio module 1370 can transmit audio signals to the wireless communication module 1360 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 1370 and the wireless communication module 1360 may be coupled through a PCM bus interface.
  • the audio module 1370 may also transmit audio signals to the wireless communication module 1360 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 1310 and the wireless communication module 1360.
  • the processor 1310 communicates with the Bluetooth module in the wireless communication module 1360 through the UART interface to realize the Bluetooth function.
  • the audio module 1370 may transmit audio signals to the wireless communication module 1360 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 1310 with the display 1394, camera 1393 and other peripheral devices.
  • the MIPI interface includes a camera serial interface (camera serial interface, CSI), a display serial interface (display serial interface, DSI), and so on.
  • the processor 1310 and the camera 1393 communicate through a CSI interface to implement the shooting function of the electronic device 1300.
  • the processor 1310 and the display screen 1394 communicate through the DSI interface to realize the display function of the electronic device 1300.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 1310 with the camera 1393, the display screen 1394, the wireless communication module 1360, the audio module 1370, the sensor module 1380, and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 1330 is an interface that complies with the USB standard specifications, and specifically can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 1330 can be used to connect a charger to charge the electronic device 1300, and can also be used to transfer data between the electronic device 1300 and peripheral devices. It can also be used to connect earphones and play audio through earphones. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is merely a schematic illustration, and does not constitute a structural limitation of the electronic device 1300.
  • the electronic device 1300 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 1340 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 1340 may receive the charging input of the wired charger through the USB interface 1330.
  • the charging management module 1340 may receive the wireless charging input through the wireless charging coil of the electronic device 1300. While the charging management module 1340 charges the battery 1342, it can also supply power to the electronic device through the power management module 1341.
  • the power management module 1341 is used to connect the battery 1342, the charging management module 1340 and the processor 1310.
  • the power management module 1341 receives input from the battery 1342 and/or the charging management module 1340, and supplies power to the processor 1310, internal memory 1321, display screen 1394, camera 1393, and wireless communication module 1360.
  • the power management module 1341 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 1341 may also be provided in the processor 1310.
  • the power management module 1341 and the charging management module 1340 may also be provided in the same device.
  • the wireless communication function of the electronic device 1300 can be implemented by the antenna 1, the antenna 2, the mobile communication module 1350, the wireless communication module 1360, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 1300 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 1350 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 1300.
  • the mobile communication module 1350 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 1350 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 1350 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 1350 may be provided in the processor 1310.
  • at least part of the functional modules of the mobile communication module 1350 and at least part of the modules of the processor 1310 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 1370A, a receiver 1370B, etc.), or displays an image or video through a display screen 1394.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 1310 and be provided in the same device as the mobile communication module 1350 or other functional modules.
  • the wireless communication module 1360 can provide wireless communication solutions applied to the electronic device 1300, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other wireless communication solutions.
  • the wireless communication module 1360 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 1360 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 1310.
  • the wireless communication module 1360 may also receive a signal to be sent from the processor 1310, perform frequency modulation, amplify, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the antenna 1 of the electronic device 1300 is coupled with the mobile communication module 1350, and the antenna 2 is coupled with the wireless communication module 1360, so that the electronic device 1300 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the electronic device 1300 implements a display function through a GPU, a display screen 1394, and an application processor.
  • GPU is a microprocessor for image processing, which connects the display 1394 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations and is used for graphics rendering.
  • the processor 1310 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display 1394 is used to display images, videos, etc.
  • the display screen 1394 includes a display panel.
  • the display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 1300 may include one or N display screens 1394, and N is a positive integer greater than one.
  • the electronic device 1300 can realize a shooting function through an ISP, a camera 1393, a video codec, a GPU, a display screen 1394, and an application processor.
  • the ISP is used to process the data fed back from the camera 1393. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 1393.
  • the camera 1393 is used to capture still images or videos.
  • the object generates an optical image through the lens and is projected to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 1300 may include 1 or N cameras 1393, and N is a positive integer greater than 1.
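The RGB/YUV format conversion the DSP performs can be sketched with the standard BT.601 full-range equations; the coefficients below are the usual published ones, not values taken from this application:

```python
def _clamp(x: float) -> int:
    """Clamp a channel value into the 8-bit range [0, 255]."""
    return max(0, min(255, round(x)))


def yuv_to_rgb(y: float, u: float, v: float) -> tuple:
    """Convert one 8-bit YUV sample (BT.601, full range, with U and V
    centered at 128) to an RGB triple, the kind of format conversion a
    DSP performs on a digital image signal."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    return _clamp(r), _clamp(g), _clamp(b)
```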
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 1300 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 1300 may support one or more video codecs. In this way, the electronic device 1300 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • through the NPU, applications such as intelligent cognition of the electronic device 1300 can be realized, such as image recognition, face recognition, speech recognition, text understanding, and so on.
  • the external memory interface 1320 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 1300.
  • the external memory card communicates with the processor 1310 through the external memory interface 1320 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 1321 may be used to store computer executable program code, where the executable program code includes instructions.
  • the internal memory 1321 may include a program storage area and a data storage area.
  • the storage program area can store an operating system, an application program (such as a sound playback function, an image playback function, etc.) required by at least one function, and the like.
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 1300.
  • the internal memory 1321 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the processor 1310 executes various functional applications and data processing of the electronic device 1300 by running instructions stored in the internal memory 1321 and/or instructions stored in a memory provided in the processor.
  • the electronic device 1300 can implement audio functions through an audio module 1370, a speaker 1370A, a receiver 1370B, a microphone 1370C, a headphone interface 1370D, and an application processor. For example, music playback, recording, etc.
  • the audio module 1370 is used to convert digital audio information into an analog audio signal for output, and also used to convert an analog audio input into a digital audio signal.
  • the audio module 1370 can also be used to encode and decode audio signals.
  • the audio module 1370 may be provided in the processor 1310, or part of the functional modules of the audio module 1370 may be provided in the processor 1310.
  • the speaker 1370A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 1300 can listen to music through the speaker 1370A, or listen to a hands-free call.
  • the receiver 1370B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the electronic device 1300 answers a call or voice message, it can receive the voice by bringing the receiver 1370B close to the human ear.
  • the microphone 1370C, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can input a sound signal into the microphone 1370C by speaking close to the microphone 1370C.
  • the electronic device 1300 may be provided with at least one microphone 1370C. In other embodiments, the electronic device 1300 may be provided with two microphones 1370C, which can implement noise reduction functions in addition to collecting sound signals. In some other embodiments, the electronic device 1300 may also be provided with three, four or more microphones 1370C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 1370D is used to connect wired earphones.
  • the earphone interface 1370D may be a USB interface 1330, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 1380A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 1380A may be disposed on the display screen 1394.
  • the capacitive pressure sensor may include at least two parallel plates with conductive materials. When a force is applied to the pressure sensor 1380A, the capacitance between the electrodes changes.
  • the electronic device 1300 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 1394, the electronic device 1300 detects the intensity of the touch operation according to the pressure sensor 1380A.
  • the electronic device 1300 may also calculate the touched position according to the detection signal of the pressure sensor 1380A.
  • touch operations that act on the same touch position but have different touch operation strengths may correspond to different operation instructions. For example: when a touch operation whose intensity of the touch operation is less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
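The threshold behavior in the bullet above can be sketched as a simple dispatch on the detected intensity. The normalized 0–1 pressure scale, the threshold value, and the function/action names are illustrative assumptions, not values from this document:

```python
# Sketch of pressure-dependent dispatch on the short message application icon.
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed threshold on a normalized [0, 1] scale

def dispatch_touch_on_sms_icon(pressure: float) -> str:
    """Map the detected touch intensity to an operation instruction."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"        # light press: view the message
    return "create_new_short_message"      # press at/above threshold: compose
```

The same touch position thus yields different instructions purely from the intensity reported by the pressure sensor.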
  • the gyroscope sensor 1380B may be used to determine the movement posture of the electronic device 1300.
  • the angular velocity of the electronic device 1300 around three axes ie, x, y, and z axes
  • the gyro sensor 1380B can be used for image stabilization.
  • the gyro sensor 1380B detects the shake angle of the electronic device 1300, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the electronic device 1300 through reverse movement to achieve anti-shake.
  • the gyro sensor 1380B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 1380C is used to measure air pressure.
  • The electronic device 1300 calculates the altitude based on the air pressure value measured by the air pressure sensor 1380C to assist positioning and navigation.
  • the magnetic sensor 1380D includes a Hall sensor.
  • The electronic device 1300 may use the magnetic sensor 1380D to detect the opening and closing of a flip holster.
  • The electronic device 1300 can detect the opening and closing of a flip cover according to the magnetic sensor 1380D.
  • Based on the detected state, features such as automatic unlocking of the flip cover can be set.
  • The acceleration sensor 1380E can detect the magnitude of acceleration of the electronic device 1300 in various directions (generally along three axes). When the electronic device 1300 is stationary, it can detect the magnitude and direction of gravity. It can also be used to identify the posture of the electronic device and applies to scenarios such as landscape/portrait switching and pedometers.
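The landscape/portrait switching mentioned in the bullet above can be derived from the gravity components a stationary accelerometer reports. The sketch below is an illustrative assumption about one way to do this (the axis convention — x along the screen's short edge, y along its long edge — and the function name are not from the document):

```python
# Sketch: infer screen orientation from the gravity projections reported by a
# three-axis accelerometer while the device is stationary (illustrative only).
def orientation(ax: float, ay: float) -> str:
    """Return 'portrait' when gravity lies mostly along the long (y) screen
    axis, and 'landscape' when it lies mostly along the short (x) axis."""
    return "portrait" if abs(ay) >= abs(ax) else "landscape"
```

A device held upright sees nearly all of gravity (about 9.8 m/s²) on its y axis, so the comparison of absolute projections is enough for a coarse two-state switch.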
  • The distance sensor 1380F is used to measure distance. The electronic device 1300 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 1300 may use the distance sensor 1380F to measure distance to achieve fast focusing.
  • the proximity light sensor 1380G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 1300 emits infrared light to the outside through the light emitting diode.
  • the electronic device 1300 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 1300. When insufficient reflected light is detected, the electronic device 1300 may determine that there is no object near the electronic device 1300.
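The reflected-light test described in the bullet above reduces to a threshold comparison, and the screen-off behavior described next combines it with the call state. The threshold value, the ADC-unit scale, and the function names below are illustrative assumptions, not details from this document:

```python
# Sketch of IR proximity detection: enough reflected light at the photodiode
# means an object is near the device (threshold and units are assumed).
REFLECTANCE_THRESHOLD = 100  # assumed photodiode reading in ADC units

def object_nearby(reflected_light: int) -> bool:
    """True when sufficient emitted infrared light bounces back."""
    return reflected_light >= REFLECTANCE_THRESHOLD

def should_turn_off_screen(in_call: bool, reflected_light: int) -> bool:
    """Turn the screen off to save power when the user holds the device to
    the ear during a call (i.e. an object is detected nearby)."""
    return in_call and object_nearby(reflected_light)
```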
  • the electronic device 1300 can use the proximity light sensor 1380G to detect that the user holds the electronic device 1300 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor 1380G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 1380L is used to sense the brightness of the ambient light.
  • the electronic device 1300 can adaptively adjust the brightness of the display screen 1394 according to the perceived brightness of the ambient light.
  • the ambient light sensor 1380L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 1380L can also cooperate with the proximity light sensor 1380G to detect whether the electronic device 1300 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 1380H is used to collect fingerprints.
  • the electronic device 1300 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 1380J is used to detect temperature.
  • The electronic device 1300 uses the temperature detected by the temperature sensor 1380J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 1380J exceeds a threshold, the electronic device 1300 reduces the performance of a processor located near the temperature sensor 1380J, so as to reduce power consumption and implement thermal protection.
  • In other embodiments, when the temperature is below another threshold, the electronic device 1300 heats the battery 1342 to avoid an abnormal shutdown caused by low temperature.
  • In some other embodiments, when the temperature is below yet another threshold, the electronic device 1300 boosts the output voltage of the battery 1342 to avoid an abnormal shutdown caused by low temperature.
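The three temperature-processing bullets above amount to comparing the sensed temperature against a set of thresholds and picking the matching mitigations. A minimal sketch, assuming illustrative Celsius thresholds and action names (none of these values appear in the document):

```python
# Sketch of the temperature processing strategy: throttle when hot, heat the
# battery when cold, boost battery voltage when very cold (thresholds assumed).
THROTTLE_ABOVE = 45.0        # deg C: reduce performance of the nearby processor
HEAT_BATTERY_BELOW = 0.0     # deg C: heat the battery to avoid shutdown
BOOST_VOLTAGE_BELOW = -10.0  # deg C: boost the battery's output voltage

def thermal_actions(temp_c: float) -> list:
    """Return the list of mitigations to apply at the given temperature."""
    actions = []
    if temp_c > THROTTLE_ABOVE:
        actions.append("throttle_cpu")
    if temp_c < HEAT_BATTERY_BELOW:
        actions.append("heat_battery")
    if temp_c < BOOST_VOLTAGE_BELOW:
        actions.append("boost_battery_voltage")
    return actions
```

Note the cold-side checks are cumulative: at -20 °C both the heating and the voltage-boost mitigations apply.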
  • The touch sensor 1380K is also called a "touch panel".
  • the touch sensor 1380K can be set on the display screen 1394, and the touch screen is composed of the touch sensor 1380K and the display screen 1394, which is also called a "touch screen”.
  • the touch sensor 1380K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 1394.
  • the touch sensor 1380K may also be disposed on the surface of the electronic device 1300, which is different from the position of the display screen 1394.
  • the bone conduction sensor 1380M can acquire vibration signals.
  • the bone conduction sensor 1380M can obtain the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 1380M can also contact the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 1380M may also be provided in the earphone, combined with the bone conduction earphone.
  • the audio module 1370 can parse the voice signal based on the vibration signal of the vibrating bone block of the voice obtained by the bone conduction sensor 1380M, and realize the voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor 1380M, and realize the heart rate detection function.
  • the button 1390 includes a power button, a volume button, and so on.
  • The button 1390 may be a mechanical button or a touch button.
  • the electronic device 1300 may receive key input, and generate key signal input related to user settings and function control of the electronic device 1300.
  • the motor 1391 can generate vibration prompts.
  • the motor 1391 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations applied to different applications can correspond to different vibration feedback effects.
  • For touch operations acting on different areas of the display screen 1394, the motor 1391 can also produce different vibration feedback effects.
  • Different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 1392 can be an indicator light, which can be used to indicate the charging status, power change, and can also be used to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 1395 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 1395 or pulled out from the SIM card interface 1395 to achieve contact and separation with the electronic device 1300.
  • the electronic device 1300 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 1395 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • Multiple cards can be inserted into the same SIM card interface 1395 at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 1395 can also be compatible with different types of SIM cards.
  • the SIM card interface 1395 can also be compatible with external memory cards.
  • the electronic device 1300 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 1300 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 1300 and cannot be separated from the electronic device 1300.
  • the software system of the electronic device 1300 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present invention takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 1300.
  • FIG. 14 is a schematic diagram of the software structure of an electronic device 1300 provided by an embodiment of the present invention.
  • The layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, a content provider, a view system, a phone manager, a resource manager, and a notification manager.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include videos, images, audios, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 1300. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, and so on.
  • The notification manager can also present notifications in the status bar at the top of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or present notifications on the screen in the form of a dialog window. For example, text messages are prompted in the status bar, a prompt sound is played, the electronic device vibrates, or an indicator light flashes.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • When the touch sensor 1380K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into original input events (including touch coordinates, time stamps of touch operations, etc.).
  • the original input events are stored in the kernel layer.
  • The application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a tap and the control corresponding to the tap is the camera application icon: the camera application is started through the interface of the application framework layer, and the camera driver is then started by calling the kernel layer.
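The input path just described — kernel layer packages the touch into a raw input event, framework layer resolves the touched control and launches the matching application — can be sketched as below. The hit table, coordinates, and action names are illustrative assumptions; this is not the Android implementation itself:

```python
# Sketch of the described input-event path (kernel layer -> framework layer).
import time

def make_raw_event(x: int, y: int) -> dict:
    # Kernel layer: wrap the touch into a raw input event
    # (touch coordinates plus a timestamp, as in the text above).
    return {"x": x, "y": y, "timestamp": time.time()}

# Assumed hit table mapping a touch position to the control at that position.
CONTROLS = {(40, 80): "camera_app_icon"}

def handle_event(event: dict) -> str:
    # Framework layer: identify the control the event landed on, then start
    # the corresponding application (which would start the camera driver).
    control = CONTROLS.get((event["x"], event["y"]))
    if control == "camera_app_icon":
        return "start_camera_app"
    return "ignore"
```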
  • the camera 1393 captures still images or video.
  • the embodiment of the present application also provides a computer-readable medium on which a computer program is stored, and when the computer program is executed by a computer, the method in any of the foregoing method embodiments is implemented.
  • the embodiments of the present application also provide a computer program product, which implements the method in any of the foregoing method embodiments when the computer program product is executed by a computer.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • The division of the units is only a logical functional division; in actual implementation there may be other divisions. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • If the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • The technical solution of this application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to enable a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Collating Specific Patterns (AREA)

Abstract

This application provides a method and apparatus for data sharing. The method includes: a device whose data is requested obtains registration information of a first user from a device requesting data, where the registration information of the first user includes an account of the first user or raw data of a biometric feature of the first user; the device whose data is requested determines, according to the registration information of the first user, an outbound level of the data of the device whose data is requested; and the device whose data is requested obtains a first data request message of the first user from the device requesting data, where the first data request message is used to request sharing of first data of the first user. The device whose data is requested can thus determine, according to the different forms of the registration information of the first user, whether the first data belongs to the outbound level of its data, implementing sharing of the first user's data between the device requesting data and the device whose data is requested and providing the first user with a differentiated, personalized experience.

Description

Method and apparatus for data sharing
This application claims priority to Chinese Patent Application No. 202010076673.0, filed with the Chinese Patent Office on January 23, 2020 and entitled "Method and apparatus for data sharing", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of information processing, and more specifically, to a method and apparatus for data sharing.
Background
With the rapid development of smart devices and the Internet of things (IoT), the collaborative integration of multiple smart devices has become an industry consensus. To achieve collaboration among multiple smart devices, user data and device data need to be able to flow and be shared among multiple smart devices or multiple accounts. A multi-device scenario, such as a home scenario, includes private devices (for example, a mobile phone or a watch) and shared household devices (for example, a television, a vehicle, or a speaker). At present, it is impossible to provide users with a differentiated, personalized experience based on which user is using a smart device.
Summary
This application provides a method and apparatus for data sharing, which can provide users with a differentiated, personalized experience according to the user who is using a device.
According to a first aspect, a method for data sharing is provided, including: a first device obtains registration information of a first user from a second device, where the registration information of the first user includes an account of the first user or raw data of a biometric feature of the first user; the first device determines an outbound level of data of the first device according to the registration information of the first user, where outbound levels of the data of the first device correspond to different data types, and data of the different data types has different maximum risk levels; the first device obtains a first data request message of the first user from the second device, where the first data request message is used to request sharing of first data of the first user; and the first device determines that the first data belongs to the data type corresponding to the outbound level of the data of the first device, and sends the first data to the second device.
Optionally, there may be one or more accounts of the first user.
Exemplarily, the account may be a mobile phone number, a user name set by the user, an e-mail address, or the like.
The registration information of the first user is the registration information that the first user enters into the second device; that is, the first user uses the second device by means of this registration information.
The second device may be a device with a biometric recognition function, for example, a mobile phone, a vehicle, or a tablet computer; alternatively, the second device may be a device that can collect biometric features but has no biometric recognition function, for example, a watch, a speaker, or a television.
The first device is a device, other than the second device, in the same network as the second device; alternatively, the first device is a device selected according to the functions of all devices in the same network as the second device, for example, a device with a biometric recognition function.
Optionally, the devices in the network may be mutually trusted devices. For example, the network may be a home network in which the devices trust one another; as another example, the network may be a work network in which the devices trust one another. The devices in the network are not merely devices connected to the network, but devices that joined the network by scanning a two-dimensional (identification) code, and the two-dimensional code may be preset.
The first device may also be a device, other than the second device, in the same group of the same network as the second device; alternatively, the first device is a device selected according to the functions of all devices in that same group, for example, a device with a biometric recognition function.
Optionally, multiple groups may be preset in the network, and the devices within each group may be mutually trusted devices. For example, the network may be a home network in which a family group and a guest group are preset. The family group includes the first device and the second device, which trust each other; devices in the guest group and devices in the family group do not trust one another, but non-private information may be exchanged between them. Devices in the family group are not only connected to the home network but also joined it by scanning a two-dimensional code, whereas devices in the guest group are merely connected to the home network.
Optionally, the first data may be any data. Exemplarily, the first data may be the user's real-time location data, data on places where the user likes to seek entertainment, photo data, recorded video data, watched video data, music playback history, and so on.
First, the first device obtains the registration information of the first user from the second device, where the registration information includes the account of the first user or raw data of a biometric feature of the first user, and the first device can determine the outbound level of the data of the first device according to this registration information. The outbound levels of the data of the first device correspond to different data types, and data of the different data types has different maximum risk levels. Second, the first device obtains the first data request message of the first user from the second device, where the first data request message is used to request sharing of the first data of the first user. Finally, when determining that the first data belongs to the data type corresponding to the outbound level of the data of the first device, the first device sends the first data to the second device. The first device can thus determine the outbound level of its data according to the form of the registration information of the first user and, when the first data belongs to the corresponding data type, share the first data generated by the first user on the first device with the second device, providing the first user with a differentiated, personalized experience.
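The level decision that follows from the form of the registration information can be sketched as below. This is a simplified stand-in under stated assumptions, not the patented implementation: the account stores, the biometric-to-account index, and the function signature are all illustrative, and the numbering follows the four-level scheme the implementations describe (level 1 for the first device itself; level 2 when the same account is shared; level 3 when only a biometric resolves to an account the requesting device does not hold; level 4 otherwise):

```python
# Sketch of determining the first device's outbound data level from the form
# of the first user's registration information (illustrative stand-ins only).
def outbound_level(registration: dict, first_device_accounts: set,
                   second_device_accounts: set, biometric_index: dict) -> int:
    """Return a level 1-4; lower levels permit sharing higher-risk data types."""
    if registration.get("is_first_device_itself"):
        return 1  # the requester is the first device itself
    account = registration.get("account")
    if account is not None:
        # Account form: level 2 only if the first device also holds it.
        return 2 if account in first_device_accounts else 4
    # Biometric form: try to resolve the raw biometric data to an account.
    matched = biometric_index.get(registration.get("biometric"))
    if matched is None:
        return 4  # no corresponding account found
    # Level 2 if the requesting (second) device also stores that account,
    # otherwise level 3 (biometric-only access).
    return 2 if matched in second_device_accounts else 3
```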
结合第一方面,在第一方面的某些实现方式中,在所述第一用户的注册信息包括所述第一用户的生物特征的原始数据的情况下,所述第一设备根据所述第一用户的注册信息,确定第一设备的数据的出端等级包括:所述第一设备对所述第一用户的生物特征的原始数据进行识别,确定是否得到与所述第一用户的生物特征的原始数据相对应的账号;在所述第一设备确定未得到与所述第一用户的生物特征的原始数据相对应的账号的情况下,确定所述第一设备的数据的出端等级为所述第四等级;在所述第一设备得到与所述第一用户的生物特征的原始数据相对应的账号的情况下,确定所述第二设备中存储的所有账号中是否存在所述第一设备得到的所述第一用户的生物特征的原始数据相对应的账号;在所述第二设备中存在所述第一设备得到的所述第一用户的生物特征的原始数据相对应的账号的情况下,确定所述第一设备的数据的出端等级为所述第二等级;在所述第二设备中不存在所述第一设备得到的所述第一用户的生物特征的原始数据相对应的账号的情况下,确定所述 第一设备的数据的出端等级为所述第三等级。
结合第一方面,在第一方面的某些实现方式中,在所述第一用户的注册信息包括所述第一用户的生物特征的原始数据的情况下,所述第一设备根据所述第一用户的注册信息,确定第一设备的数据的出端等级包括:所述第一设备根据所述第一用户的注册信息,确定是否得到与所述第一用户的生物特征的原始数据对应的账号;在所述第一设备上未得到与所述第一用户的生物特征的原始数据对应的账号的情况下,确定所述第一设备的数据的出端等级为所述第四等级;在所述第一设备得到与所述第一用户的生物特征的原始数据对应的账号的情况下,所述第一设备向所述第二设备发送第七信息,所述第七信息用于指示所述第一设备得到与所述第一用户的生物特征的原始数据对应的账号;以及所述第一设备接收所述第二设备发送的第八信息,所述第八信息用于指示所述第二设备是否存有所述第一设备确定的与所述第一用户的生物特征的原始数据对应的账号;在所述第二设备存有所述第一设备确定的与所述第一用户的生物特征的原始数据对应的账号的情况下,确定所述第一设备的数据的出端等级为所述第二等级;在所述第二设备未存有所述第一设备确定的与所述第一用户的生物特征的原始数据对应的账号的情况下,确定所述第一设备的数据的出端等级为所述第三等级。
结合第一方面,在第一方面的某些实现方式中,在所述确定所述第一设备的数据的出端等级为所述第三等级之后,所述方法还包括:在所述第一用户的注册信息是所述第一用户的3D人脸、指纹、虹膜或DNA的情况下,确定所述第一设备的数据的出端等级为第三等级中的第一子等级;在所述第一用户的注册信息是所述第一用户的2D人脸或静脉的情况下,确定所述第一设备的数据的出端等级为第三等级中的第二子等级;或在所述第一用户的注册信息是所述第一用户的声音或签名的情况下,确定所述第一设备的数据的出端等级为第三等级中的第三子等级。
根据用户使用的具体的生物特征,将上述第一设备的数据的出端等级为第三等级再进行细分,从而可以根据用户使用的不同的生物特征为用户提供差异个性化体验。
结合第一方面,在第一方面的某些实现方式中,在所述第一用户的注册信息包括所述第一用户的账号的情况下,所述第一设备根据所述第一用户的注册信息,确定第一设备的数据的出端等级包括:所述第一设备确定是否存有所述第一用户的账号;在所述第一设备存有所述第一用户的账号的情况下,确定所述第一设备的数据的出端等级为第二等级;在所述第一设备未存有所述第一用户的账号的情况下,确定所述第一设备的数据的出端等级为第四等级。
上述第一设备的数据的出端等级可以理解为该第一设备上的数据分享给其他设备的等级。第一设备的数据的出端等级相对于请求数据的设备设置,不同的请求数据的设备,第一设备的数据的出端等级可能不同,可能相同。请求数据的设备是第一设备本身时,第一设备的数据的出端等级是第一等级。请求数据的设备不是第一设备本身时,第一设备的数据的出端等级由请求数据的设备以什么形式的注册信息来访问第一设备上的数据决定。如果请求数据的设备采用第一设备中使用的账号来访问第一设备时,对于请求数据的设备而言,第一设备的数据的出端等级为第二等级;如果请求数据的设备采用与第一设备中使用的账号对应的生物特征的原始数据来访问第一设备时,对于请求数据的设备而言,第一设备的数据的出端等级为第三等级;如果请求数据的设备没有使用与第一设备相同的账号 或生物特征的原始数据来访问第一设备时,对于请求数据的设备而言,第一设备的数据的出端等级为第四等级。
在请求数据的设备不是第一设备本身的情况下,上述请求数据的设备可以是上述第二设备,上是被请求数据的设备可以是上述第一设备。
根据用户使用的不同形式的注册信息,划分第一设备的数据的出端等级,从而可以为用户提供差异个性化体验。
结合第一方面,在第一方面的某些实现方式中,所述第二等级对应的数据类型为第二类型,所述第二类型对应的数据包括一般位置数据、视频数据、物流数据、日程计划数据、喜好数据、设备能力数据和/或设备状态数据;和/或,所述第三等级对应的数据类型为第三类型,所述第三类型对应的数据包括视频数据、物流数据、日程计划数据、喜好数据、设备能力数据和/或设备状态数据;和/或,所述第四等级对应的数据类型为第四类型,所述第四类型对应的数据包括设备能力数据和/或设备状态数据。
其中,所述第二类型对应的数据的风险性、所述第三类型对应的数据的风险性和所述第四类型对应的数据的风险性依次降低。
其中,一般位置数据可以是中影响的个人数据;视频数据、物流数据、日程计划数据、喜好数据可以是低影响的个人数据;设备能力数据和/或设备状态数据是非个人数据。
结合第一方面,在第一方面的某些实现方式中,所述第一子等级对应的数据类型为第一子类型,所述第一子类型对应的数据包括照片数据、录制的视频数据、设备能力数据和/或设备状态数据;和/或,所述第二子等级对应的数据类型确定为第二子类型,所述第二子类型对应的数据包括物流数据、日程计划数据、设备能力数据和/或设备状态数据;和/或,所述第三子等级对应的数据类型确定为第三子类型,所述第三子类型对应的数据包括喜好数据、观看的视频数据、设备能力数据和/或设备状态数据。
其中,所述第一子类型对应的数据的风险性、所述第二子类型对应的数据的风险性和所述第三子类型对应的数据的风险性依次降低。
第一设备的数据的出端等级不同,该第一设备的数据的出端等级对应的数据类型也不同,从而可以为用户提供差异个性化体验。
结合第一方面,在第一方面的某些实现方式中,所述方法还包括:所述第一设备向所述第二设备发送所述第一设备的数据的出端等级。
结合第一方面,在第一方面的某些实现方式中,所述生物特征包括以下一种或多种:物理生物特征、软性生物特征、行为生物特征。
结合第一方面,在第一方面的某些实现方式中,所述物理生物特征包括:人脸、指纹、虹膜、视网膜、脱氧核糖核酸DNA、皮肤、手形或静脉;所述行为生物特征包括:声音、签名或步态;所述软性生物特征包括:性别、年龄、身高或体重。
第二方面,提供了一种获取数据的方法,包括:第二设备获取第一用户输入的第一用户的注册信息,所述第一用户的注册信息包括所述第一用户的账号或所述第一用户的生物特征的原始数据;所述第二设备向第一设备发送所述第一用户的注册信息;所述第二设备获取所述第一用户的第一数据请求消息,所述第一数据请求消息用于请求分享所述第一用户的第一数据;所述第二设备发送所述第一数据请求消息,接收所述第一设备发送的所述第一数据。
可选地,第二设备可以通过语音识别功能识别到第一用户对第一数据的请求;或者,第二设备还可以通过第一用户的输入获取第一用户对第一数据的请求。
结合第二方面,在第二方面的某些实现方式中,在所述第一用户的输入的注册信息包括第一用户的生物特征的原始数据的情况下,所述方法还包括:所述第二设备对所述第一用户的生物特征的原始数据进行识别,并确定是否得到与第一用户的生物特征的原始数据对应的账号;在所述第二设备得到与第一用户的生物特征的原始数据对应的账号的情况下,所述第二设备向所述第一设备发送第五信息,所述第五信息用于指示第二设备得到与第一用户的生物特征的原始数据对应的账号;在所述第二设备未得到与第一用户的生物特征的原始数据对应的账号的情况下,所述第二设备向所述第一设备发送第六信息,所述第六信息用于指示所述第一用户的生物特征的原始数据。
结合第二方面,在第二方面的某些实现方式中,所述方法还包括:所述第二设备接收所述第一设备发送的所述第一设备的数据的出端等级,所述第一设备的数据的出端等级对应不同的数据类型,所述不同的数据类型的数据具有不同的最高风险性。
第三方面,提供了一种获取数据的方法,包括:第二设备获取第一用户输入第一用户的注册信息,所述第一用户的注册信息包括所述第一用户的生物特征的原始数据;所述第二设备向第一设备发送所述第一用户的注册信息;所述第二设备接收所述第一设备发送的第一信息,所述第一信息用于指示所述第一设备确定的所述第一用户的生物特征的原始数据对应的账号;所述第二设备根据所述第一信息,确定所述第一设备的数据的出端等级;所述第二设备获取所述第一用户的第一数据请求消息,所述第一数据请求消息用于请求分享所述第一用户的第一数据;所述第二设备确定所述第一数据属于所述第一设备的数据的出端等级对应的数据类型的数据;不同的所述数据类型的数据具有不同的最高风险性;所述第二设备发送所述第一数据请求消息,接收所述第一设备发送的所述第一数据。
可选地,第二设备可以通过语音识别功能识别到第一用户对第一数据的请求;或者,第二设备还可以通过第一用户的输入获取第一用户对第一数据的请求。
结合第三方面,在第三方面的某些实现方式中,在所述第一用户的注册信息包括所述第一用户的生物特征的原始数据的情况下,所述第二设备根据所述第一信息,确定所述第一设备的数据的出端等级包括:所述第二设备确定所述第二设备是否存储有所述第一设备确定的所述第一用户的生物特征的原始数据对应的账号;在所述第二设备存储有所述第一设备确定的所述第一用户的生物特征的原始数据对应的账号的情况下,确定所述第一设备的数据的出端等级为所述第二等级;在所述第二设备未存储所述第一设备确定的所述第一用户的生物特征的原始数据对应的账号的情况下,确定所述第一设备的数据的出端等级为所述第三等级。
结合第三方面,在第三方面的某些实现方式中,在所述确定所述第一设备的数据的出端等级为所述第三等级之后,所述方法还包括:在所述第一用户的注册信息是所述第一用户的3D人脸、指纹、虹膜或DNA的情况下,确定所述第一设备的数据的出端等级为第三等级中的第一子等级;在所述第一用户的注册信息是所述第一用户的2D人脸或静脉的情况下,确定所述第一设备的数据的出端等级为第三等级中的第二子等级;或在所述第一用户的注册信息是所述第一用户的声音或签名的情况下,确定所述第一设备的数据的出端等级为第三等级中的第三子等级。
结合第三方面,在第三方面的某些实现方式中,所述第二等级对应的数据类型为第二类型,所述第二类型对应的数据包括一般位置数据、视频数据、物流数据、日程计划数据、喜好数据、设备能力数据和/或设备状态数据;和/或,所述第三等级对应的数据类型为第三类型,所述第三类型对应的数据包括视频数据、物流数据、日程计划数据、喜好数据、设备能力数据和/或设备状态数据。
结合第三方面,在第三方面的某些实现方式中,所述第一子等级对应的数据类型为第一子类型,所述第一子类型对应的数据包括照片数据、录制的视频数据、设备能力数据和/或设备状态数据;和/或,所述第二子等级对应的数据类型确定为第二子类型,所述第二子类型对应的数据包括物流数据、日程计划数据、设备能力数据和/或设备状态数据;和/或,所述第三子等级对应的数据类型确定为第三子类型,所述第三子类型对应的数据包括喜好数据、观看的视频数据、设备能力数据和/或设备状态数据。
结合第三方面,在第三方面的某些实现方式中,所述方法还包括:所述第二设备向所述第一设备发送所述第一设备的数据的出端等级。
第四方面,提供了一种数据分享的方法,包括:第一设备接收第二设备发送的第一用户的注册信息,所述第一用户的注册信息包括所述第一用户的生物特征的原始数据;所述第一设备对所述第一用户的生物特征进行识别,确定是否得到与所述第一用户的生物特征的原始数据对应的账号;在所述第一设备确定得到与所述第一用户的生物特征的原始数据对应的账号的情况下,所述第一设备向所述第二设备发送第一信息,所述第一信息用于指示第一设备确定的第一用户的生物特征的原始数据对应的账号;所述第一设备获取来自所述第二设备的所述第一用户的第一数据请求消息,所述第一数据请求消息用于请求分享所述第一用户的第一数据;所述第一设备发送所述第一数据给所述第二设备。
结合第四方面,在第四方面的某些实现方式中,所述方法还包括:在所述第一设备确定未得到与所述第一用户的生物特征的原始数据对应的账号的情况下,所述第一设备向所述第二设备发送第一指令,所述第一指令用于指示第一设备未得到与该第一用户的生物特征的原始数据对应的账号。
结合第四方面,在第四方面的某些实现方式中,所述方法还包括:所述第一设备接收所述第二设备发送的所述第一设备的数据的出端等级,所述第一设备的数据的出端等级对应不同的数据类型,所述不同的数据类型的数据具有不同的最高风险性。
第五方面,提供了一种确定数据的出端等级的方法,包括:第二设备获取第一用户输入的第一用户的注册信息,所述第一用户的注册信息包括所述第一用户的账号;所述第二设备向第一设备发送所述第一用户的注册信息;所述第二设备接收所述第一设备发送的第二指令,所述第二指令用于指示所述第一设备上是否存有所述第一用户的注册信息;所述第二设备根据所述第二指令,确定所述第一设备的数据的出端等级。
结合第五方面,在第五方面的某些实现方式中,所述第二设备根据所述第二指令,确定所述第一设备的数据的出端等级包括:在所述第一设备存有所述第一用户的生物特征的原始数据对应的账号的情况下,确定所述第一设备的数据的出端等级为第二等级;在所述第一设备未存有所述第一用户的生物特征的原始数据对应的账号的情况下,确定所述第一设备的数据的出端等级为第四等级。
结合第五方面,在第五方面的某些实现方式中,所述第二等级对应的数据类型为第二 类型,所述第二类型对应的数据包括一般位置数据、视频数据、物流数据、日程计划数据、喜好数据、设备能力数据和/或设备状态数据;和/或,所述第四等级对应的数据类型为第四类型,所述第四类型对应的数据包括设备能力数据和/或设备状态数据。
结合第五方面,在第五方面的某些实现方式中,所述方法还包括:所述第二设备向所述第一设备发送所述第一设备的数据的出端等级。
结合第五方面,在第五方面的某些实现方式中,所述方法还包括:所述第二设备获取所述第一用户的第一数据请求消息,所述第一数据请求消息用于请求分享所述第一用户的第一数据;所述第二设备发送所述第一数据请求消息。
结合第五方面,在第五方面的某些实现方式中,所述第二设备发送所述第一数据请求消息之前,所述方法还包括:所述第二设备确定所述第一数据属于所述第一设备的数据的出端等级对应的数据类型的数据。
第六方面,提供了一种确定数据的出端等级的方法,包括:第一设备接收第二设备发送的第一用户的注册信息,所述第一用户的注册信息包括所述第一用户的账号;所述第一设备确定是否存有所述第一用户的账号;所述第一设备向所述第二设备发送第二指令,所述第二指令用于指示第一设备上是否存有第一用户的账号。
结合第六方面,在第六方面的某些实现方式中,所述方法还包括:所述第一设备接收所述第二设备发送的所述第一设备的数据的出端等级。
结合第六方面,在第六方面的某些实现方式中,所述方法还包括:所述第一设备接收所述第二设备发送的所述第一用户的第一数据请求消息,所述第一数据请求消息用于请求分享所述第一用户的第一数据;所述第一设备确定所述第一设备存有所述第一数据,所述第一设备将所述第一数据分享给所述第二设备。
结合第六方面,在第六方面的某些实现方式中,在所述第一设备接收所述第二设备发送的所述第一用户的第一数据请求消息之后,所述方法还包括:所述第一设备确定所述第一数据属于所述第一设备的数据的出端等级对应的数据类型的数据。
第七方面,提供了一种获取数据的方法,包括:第二设备获取第一用户的注册信息,所述第一用户的注册信息包括所述第一用户的生物特征的原始数据;所述第二设备对所述第一用户的生物特征的原始数据进行识别,并确定所述第二设备是否可以得到与所述第一用户的生物特征的原始数据对应的账号;在所述第二设备得到与所述第一用户的生物特征的原始数据对应的账号的情况下,所述第二设备向所述第一设备发送第二信息,所述第二信息用于指示所述第二设备得到与所述第一用户的生物特征的原始数据对应的账号;以及所述第二设备接收所述第一设备发送的第三信息,所述第三信息用于指示所述第一设备是否存有所述第二设备得到与所述第一用户的生物特征的原始数据对应的账号;所述第二设备根据所述第三信息,确定所述第一设备的数据的出端等级;所述第二设备获取所述第一用户的数据请求消息,所述数据请求消息用于请求所述第一设备分享所述第一用户在所述第一设备上存的第一数据;所述第二设备确定所述第一数据属于所述第一设备的数据的出端等级对应的数据类型的数据,不同的所述数据类型的数据具有不同的最高风险性;所述第二设备向所述第一设备发送所述第一用户的数据请求消息,接收所述第一设备发送的第一数据。
结合第七方面,在第七方面的某些实现方式中,所述第二设备根据所述第三信息,确 定所述第一设备的数据的出端等级包括:在所述第一设备存有所述第二设备得到与所述第一用户的生物特征的原始数据对应的账号的情况下,确定所述第一设备的数据的出端等级为第二等级;在所述第一设备未存有所述第二设备得到与所述第一用户的生物特征的原始数据对应的账号的情况下,确定所述第一设备的数据的出端等级为第四等级。
结合第七方面,在第七方面的某些实现方式中,所述方法还包括:在所述第二设备未得到与所述第一用户的生物特征的原始数据对应的账号的情况下,所述第二设备将所述第一用户的注册信息发送给所述第一设备;所述第二设备接收所述第一设备发送的第三指令,所述第三指令用于指示所述第一设备未得到第一用户的生物特征的原始数据对应的账号;所述第二设备根据所述第三指令,确定所述第一设备的数据的出端等级是第四等级。
结合第七方面,在第七方面的某些实现方式中,所述方法还包括:在所述第二设备未得到与所述第一用户的生物特征的原始数据对应的账号的情况下,所述第二设备将所述第一用户的注册信息发送给所述第一设备;所述第二设备接收所述第一设备发送的第四信息,所述第四信息用于指示所述第一设备确定的所述第一用户的生物特征的原始数据对应的账号;所述第二设备根据所述第四信息,确定所述第一设备的数据的出端等级为第三等级。
结合第七方面,在第七方面的某些实现方式中,在所述确定所述第一设备的数据的出端等级为所述第三等级之后,所述方法还包括:在所述第一用户的注册信息是所述第一用户的3D人脸、指纹、虹膜或DNA的情况下,确定所述第一设备的数据的出端等级为第三等级中的第一子等级;在所述第一用户的注册信息是所述第一用户的2D人脸或静脉的情况下,确定所述第一设备的数据的出端等级为第三等级中的第二子等级;或在所述第一用户的注册信息是所述第一用户的声音或签名的情况下,确定所述第一设备的数据的出端等级为第三等级中的第三子等级。
结合第七方面,在第七方面的某些实现方式中,所述第二等级对应的数据类型为第二类型,所述第二类型对应的数据包括一般位置数据、视频数据、物流数据、日程计划数据、喜好数据、设备能力数据和/或设备状态数据;和/或,所述第三等级对应的数据类型为第三类型,所述第三类型对应的数据包括视频数据、物流数据、日程计划数据、喜好数据、设备能力数据和/或设备状态数据;和/或,所述第四等级对应的数据类型为第四类型,所述第四类型对应的数据包括设备能力数据和/或设备状态数据。
结合第七方面,在第七方面的某些实现方式中,所述第一子等级对应的数据类型为第一子类型,所述第一子类型对应的数据包括照片数据、录制的视频数据、设备能力数据和/或设备状态数据;和/或,所述第二子等级对应的数据类型确定为第二子类型,所述第二子类型对应的数据包括物流数据、日程计划数据、设备能力数据和/或设备状态数据;和/或,所述第三子等级对应的数据类型确定为第三子类型,所述第三子类型对应的数据包括喜好数据、观看的视频数据、设备能力数据和/或设备状态数据。
结合第七方面,在第七方面的某些实现方式中,所述方法还包括:所述第二设备向所述第一设备发送所述第一设备的数据的出端等级。
第八方面,提供了一种数据分享的方法,包括:所述第一设备接收所述第二设备发送的第二信息,该第二信息用于指示第二设备对第一用户的生物特征的原始数据进行识别得到的与该第一用户的生物特征的原始数据对应的账号;所述第一设备查找第一设备上是否 存有第二信息指示的账号;第一设备向第二设备发送第三信息,该第三信息用于指示第一设备上是否存有第二信息指示的账号;所述第一设备获取来自所述第二设备的所述第一用户的第一数据请求消息,所述第一数据请求消息用于请求分享所述第一用户的第一数据;所述第一设备发送所述第一数据给所述第二设备。
结合第八方面,在第八方面的某些实现方式中,所述方法还包括:第一设备接收第二设备发送的第一用户的注册信息,所述第一用户的注册信息包括所述第一用户的生物特征的原始数据;所述第一设备对所述第一用户的生物特征的原始数据进行识别,并确定是否得到所述第一用户的生物特征的原始数据对应的账号;在所述第一设备未得到所述第一用户的生物特征的原始数据对应的账号的情况下,所述第一设备向所述第二设备发送第三指令,所述第三指令用于指示所述第一设备未得到所述第一用户的生物特征的原始数据对应的账号;在所述第一设备得到所述第一用户的生物特征的原始数据对应的账号的情况下,所述第一设备向所述第二设备发送第四信息,所述第四信息用于指示所述第一设备确定的所述第一用户的生物特征的原始数据对应的账号。
结合第八方面,在第八方面的某些实现方式中,所述方法还包括:所述第一设备接收所述第二设备发送的所述第一设备的数据的出端等级。
第九方面,提供了一种数据分享的装置,包括:处理器,所述处理器与存储器耦合;所述存储器用于存储计算机程序;所述处理器用于执行所述存储器中存储的计算机程序,以使得所述装置执行上述第一方面以及第一方面的某些实现方式中所述的方法、上述第四方面以及第四方面的某些实现方式中所述的方法、上述第六方面以及第六方面的某些实现方式中所述的方法、上述第八方面以及第八方面的某些实现方式中所述的方法。
第十方面,提供了一种数据分享的装置,包括:处理器,所述处理器与存储器耦合;所述存储器用于存储计算机程序;所述处理器用于执行所述存储器中存储的计算机程序,以使得所述装置执行上述第二方面以及第二方面的某些实现方式中所述的方法、上述第三方面以及第三方面的某些实现方式中所述的方法、上述第五方面以及第五方面的某些实现方式中所述的方法、上述第七方面以及第七方面的某些实现方式中所述的方法。
第十一方面,提供了一种计算机可读介质,包括计算机程序,当所述计算机程序在计算机上运行时,使得所述计算机执行上述第一方面至第八方面以及第一方面的某些实现方式至第八方面的某些实现方式中所述的方法。
第十二方面,提供了一种系统芯片,该系统芯片包括输入输出接口和至少一个处理器,该至少一个处理器用于调用存储器中的指令,以执行上述第一方面至第八方面以及第一方面的某些实现方式至第八方面的某些实现方式中的方法的操作。
可选地,该系统芯片还可以包括至少一个存储器和总线,该至少一个存储器用于存储处理器执行的指令。
Brief Description of the Drawings
FIG. 1 is a diagram of an example application scenario to which the method and apparatus of the embodiments of this application can be applied.
FIG. 2 is a schematic flowchart of a data sharing method 200 provided by an embodiment of this application.
FIG. 3 is a schematic diagram, provided by an embodiment of this application, of the permission levels with which devices access a database and of the sharable data corresponding to each permission level.
FIG. 4 is a specific schematic flowchart of step 220 in the method 200 provided by an embodiment of this application.
FIG. 5 is another specific schematic flowchart of step 220 in the method 200 provided by an embodiment of this application.
FIG. 6 is yet another specific schematic flowchart of step 220 in the method 200 provided by an embodiment of this application.
FIG. 7 is yet another specific schematic flowchart of step 220 in the method 200 provided by an embodiment of this application.
FIG. 8 is a specific schematic flowchart of step 240 in the method 200 provided by an embodiment of this application.
FIG. 9 is a schematic diagram of an example of data sharing among multiple devices provided by an embodiment of this application.
FIG. 10 is a schematic diagram of another example of data sharing among multiple devices provided by an embodiment of this application.
FIG. 11 is a schematic diagram of another example of data sharing among multiple devices provided by an embodiment of this application.
FIG. 12 is a schematic diagram of another example of data sharing among multiple devices provided by an embodiment of this application.
FIG. 13 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application.
FIG. 14 is a schematic diagram of the software structure of an electronic device provided by an embodiment of this application.
Detailed Description
The technical solutions in this application are described below with reference to the accompanying drawings.
图1是可以应用本申请实施例的方法和装置的一种应用场景示例图。图1所示的场景中包括手机101、车辆102、平板电脑(pad)103、手表104、手机111、手机121、手表122、音响123、电视131、手机132、平板电脑133、手表134、音响135和车辆136。其中,手机101、车辆102、平板电脑103和手表104上分别注册了账号B;手机111上只注册了用户1的账号A,和/或,手机111上存在用户1的生物特征;手机121、手表122和音响123上注册了账号C,且在手机121、手表122和音响123都存在同一用户的生物特征的原始数据;电视131、手机132、平板电脑133、手表134、音响135和车辆136上没有任何账号的注册,且电视131、手机132、平板电脑133、手表134、音响135和车辆136上也不存在同一用户的生物特征的原始数据,即电视131、手机132、平板电脑133、手表134、音响135和车辆136可以是单用户使用,也可以是多用户使用。
应理解,图1中所示的设备仅是一种示例,该系统中可以包括更多或更少的设备。例如,可以仅包括电视131、手机121、手机111、平板电脑103、手表104、音响123和车辆136。
图1中的手机101、手机111、手机121、手机132、平板电脑103、平板电脑133、手表104、手表122、手表134、车辆102和车辆136都可以是代表具备生物特征识别功能的终端设备,例如,手机101可以进行人脸识别,手机121可以进行声纹识别。
当然,手机101和手机121也可以识别相同的生物特征。例如,手机101和手机121均可以进行人脸识别;又例如,手机101和手机121均可以进行声纹识别。
本申请实施例中的终端设备可以是手机(mobile phone)、平板电脑、带无线收发功能的电脑、虚拟现实(virtual reality,VR)终端、增强现实(augmented reality,AR)终端、工业控制(industrial control)中的无线终端、无人驾驶(self driving)中的无线终端、远程医疗(remote medical)中的无线终端、智能电网(smart grid)中的无线终端、运输安全(transportation safety)中的无线终端、智慧城市(smart city)中的无线终端、智慧家庭(smart home)中的无线终端等。
图1中的电视131、音响123和音响135可代表可以采集生物特征但不具备生物特征识别功能的设备,例如,电视131可以采集人脸图像以及采集人的声音,但是没有人脸识别功能和声纹识别功能;又例如,音响123和音响135可以采集人脸图像以及采集人的声音,但是没有人脸识别功能和声纹识别功能。
本申请实施例中的生物特征可以包括以下一种或多种:物理生物特征、行为生物特征、软性生物特征。物理生物特征可以包括:人脸、指纹、虹膜、视网膜、脱氧核糖核酸(deoxyribonucleic acid,DNA)、皮肤、手形、静脉。行为生物特征可以包括:声纹、签名、步态。软性生物特征可以包括:性别、年龄、身高、体重。
如图1中所示的每个设备之间都可以通过网络进行通信。可选地,上述网络包括无线保真(wireless fidelity,WI-FI)网络或蓝牙网络。可以理解地,上述网络也可以包括无线通信网络,如2G、3G、4G、5G通信网络。上述网络可以具体是工作网络或家庭网络。例如,电视131采集其使用者的人脸图像和声音后,会将该使用者的人脸图像和声音,以及使用者使用电视131上产生的数据进行保存,并可以通过上述网络将人脸图像发送给手机101,将声音信息发送给手机121和音响135。
随着智能设备和IoT领域的快速发展,多智能设备协同一体化已然成为业界的共识。为了实现多个智能设备的协同,就需要用户数据及设备数据能够在多个智能设备或多账号间流动及分享。在多智能设备的场景下,例如家庭场景下,既有私人设备(例如,手机或手表),也有家庭公用设备(例如,电视、车或音箱)。家庭公用设备上会有多人使用的情况,而目前家庭公用设备无法识别个人,因此无法依据使用人提供差异个性化体验。
因此,亟需提供一种跨设备的数据分享的方法,根据用户使用设备的不同形式的注册信息,为用户提供差异个性化体验,实现多智能设备之间的数据分享。
如图2所示,是本申请实施例提供的一种数据分享的方法200。应理解,图2示出了该方法的步骤或操作,但这些步骤或操作仅是示例,本申请提出的技术方案还可以执行其他操作或者图2中的各个操作的变形。
以下,第一设备和第二设备可以是终端设备,该终端设备可以是图1中所示的任意一个设备。第一用户可以是任何一个使用第一设备和第二设备的用户。第一设备可以为多个,在第一设备为多个的情况下,多个第一设备中的每个第一设备都可以执行以下方法中第一设备执行的步骤。
步骤210,第二设备获取第一用户输入的第一用户的注册信息。
其中,第一用户的注册信息可以包括第一用户的账号和/或第一用户的生物特征的原始数据。
上述账号可以是该第一用户注册的;或者,上述账号也可以不是该第一用户注册的,只是该第一用户通过该账号来使用第二设备。
示例性地,上述账号可以是手机号码、用户设置的用户名、邮箱等等。
上述生物特征的原始数据可以理解为未处理的生物特征的数据。
上述第一用户通过第一用户输入的第一用户的注册信息来使用第二设备。示例性地,第一用户通过第一用户的账号来使用第二设备。例如,用户通过账号1来使用第二设备;或者,第一用户通过第一用户的生物特征的原始数据来使用第二设备;例如,用户通过用户的人脸图像来使用第二设备。
其中,第二设备可以是具备生物识别功能的设备,例如,如图1所示,该第二设备可以是手机111、手机101、车辆102、平板电脑103、手机121、手机132、平板电脑133或车辆136;或者,第二设备可以是可以采集生物特征但不具备生物识别功能的设备,例如,如图1所示,该第二设备可以是手表104、手表122、音响123、电视131、手表134或音响135。
生物特征识别技术(biometric identification technology)是指利用人体生物特征进行身份认证的一种技术。更具体一点,生物特征识别技术就是通过计算机与光学、声学、生物传感器和生物统计学原理等高科技手段密切结合,利用人体固有的生理特性和行为特征来进行个人身份的鉴定。
在第二设备是具备生物识别功能的设备的情况下,第二设备会将对用户的生物特征的原始数据识别后的识别结果(该识别结果为该用户)保存起来。
上述生物特征识别结果可以理解为根据生物特征识别得到的生物身份。例如,手机101在对使用手机101的用户进行身份识别之前,手机101会先对手机101的机主1的人脸图像进行采集,将机主1的人脸图像转化成数字代码,并将这些数字代码组合得到机主1的人脸特征模版。手机101在对使用手机101的用户进行身份识别时,会采集使用手机101的用户的人脸图像的原始数据,并将采集到的使用手机101的用户的人脸图像的原始数据与手机101的数据库中存储的机主1的人脸特征模版进行比对,在使用手机101的用户的人脸图像的原始数据与手机101的数据库中存储的机主1的人脸特征模版匹配的情况下,确定使用手机101的用户为机主1。又例如,音响123在对使用音响123的用户进行身份识别之前,音响123会先对音响123的机主2的声音进行采集,将机主2的声音转化成数字代码,并将这些数字代码组合得到机主2的声纹特征模版。音响123在对使用音响123的用户进行身份识别时,会采集使用音响123的用户的声音的原始数据,并将采集到的使用音响123的用户的声音的原始数据与音响123的数据库中存储的机主2的声纹特征模版进行比对,在使用音响123的用户的声音的原始数据与音响123的数据库中存储的机主2的声纹特征模版匹配的情况下,确定使用音响123的用户为机主2。其中,使用手机101的用户的身份是机主1和使用音响123的用户的身份是机主2即为生物特征识别结果,手机101和音响123会将生物识别结果保存下来。
当用户通过用户输入的该用户的注册信息使用设备时,该用户的注册信息与用户通过输入的该用户的注册信息使用该设备时产生的数据会一一对应保存下来。其中,注册信息可以是账号或生物特征的原始数据。示例性地,上述账号可以是手机号码、用户设置的用户名、邮箱等。
无论用户是通过账号还是用户的生物特征的原始数据来使用设备,用户使用设备从而在设备上产生的数据,这些数据在设备的存储器中都是按照用户使用的账号进行存储的,即在用户的注册信息是用户的账号的情况下,用户通过用户的账号使用设备产生的数据在存储器中是按照用户使用的账号进行存储的;当有多个账号使用设备时,每个账号使用过程中在设备上产生的数据也是按照账号存储的,每个账号可以对应一个存储引擎,通过相应的存储引擎来访问相应的账号存储的数据。在用户的注册信息是用户的生物特征的原始数据的情况下,该设备上存在与该用户的生物特征的原始数据对应的账号,用户通过用户的生物特征的原始数据在设备上存的数据在数据库中也是按照与该用户的生物特征的原始数据对应的账号进行存储的,其中,生物特征的原始数据与账号可以是一一对应的关系,例如,一个人脸对应一个账号;或者,生物特征的原始数据与账号可以是多对一的关系,例如,多个指纹对应一个账号,或者一个人脸和一个指纹对应一个账号。比如一个设备中账号A在使用时录入的生物特征,比如指纹、虹膜、人脸等会和账号A绑定,账号B在使用时录入的生物特征则和账号B绑定。
例如,如图1所示,当用户1通过账号A来使用手机111时,用户1通过账号A在手机111上存的数据在数据库中是按照账号A进行存储的。其中,用户1通过账号A在手机111上存的数据可以包括用户1通过账号A在手机111上拍摄的照片、用户1通过账号A在手机111上存的历史听歌歌单、用户1通过账号A在手机111上的存的用户1的历史位置数据等。当用户2通过账号B来使用手机111时,用户2通过账号B在手机111上存的数据在数据库中是按照账号B进行存储的。
又例如,如图1所示,当用户3通过用户3的人脸图像来使用手机121之前,用户3已经通过账号C使用过手机121,则用户3通过用户3的人脸图像在手机121上存的数据在数据库中是按照账号C进行存储的。其中,用户3通过用户3的人脸图像在手机121上存的数据可以包括用户3通过用户3的人脸图像在手机121上存的历史视频观看记录、用户3通过用户3的人脸图像在手机121上存的用户3的历史运动情况记录等。
第二设备在获取到第一用户输入的注册信息之后,第一用户需要将第一用户通过第一用户输入的注册信息在至少一个第一设备上存的数据同步到第二设备上。在第一用户通过第一用户输入的注册信息在至少一个第一设备上存的数据同步到第二设备之前,还包括步骤220。
步骤220,确定第一设备的数据的出端等级。
其中,该第一设备是与第二设备同处一个网络中的除第二设备以外的设备;或者,该第一设备是根据与第二设备同处一个网络中的所有设备的功能选出的设备,例如,该第一设备是具备生物识别功能的设备。
可选地,上述网络中的设备可以是相互信任的设备。例如,该网络可以是家庭网络,该家庭网络中的设备是相互信任的设备。又例如,该网络还可以是工作网络,该工作网络中的设备是相互信任的设备。其中,上述网络中的设备不仅仅是连接上该网络的设备,还都是通过扫描二维码(识别码)加入该网络中的设备,该二维码可以是预先设置的。
其中,该第一设备是与第二设备同处于一个网络中同一个群组中的除第二设备以外的设备;或者,该第一设备是根据与第二设备同处于一个网络中同一个群组中的所有设备的功能选出的设备,例如,该第一设备是具备生物特征识别功能的设备。
可选地,在上述网络中可以预先设置多个群组,该多个群组中的每个群组中的设备可以是相互信任的设备。例如,该网络可以是家庭网络,该家庭网络中可以预先设置家庭群组和访客群组,该家庭群组中包括上述第一设备和上述第二设备,该家庭群组中的第一设备和第二设备是相互信任的设备;该访客群组中的设备与家庭群组中的设备之间是不相互信任的设备,但是访客群组中的设备与家庭群组中的设备之间可以进行非隐私信息的交互。其中,家庭群组中的设备不仅可以是连接上家庭网络的设备,还是通过扫描二维码加入到该家庭网络中的设备,访客群组中的设备仅仅是连接上家庭网络的设备。
其中,第一设备的数据的出端等级可以理解为该第一设备上的数据分享给其他设备的等级。第一设备的数据的出端等级相对于请求数据的设备设置,对于不同的请求数据的设备,第一设备的数据的出端等级可能不同,也可能相同。请求数据的设备是第一设备本身时,第一设备的数据的出端等级是第一等级。请求数据的设备不是第一设备本身时,第一设备的数据的出端等级由请求数据的设备(例如,第二设备)以什么形式的注册信息来访问第一设备上的数据决定。如果请求数据的设备采用第一设备中使用的账号来访问第一设备时,对于请求数据的设备而言,第一设备的数据的出端等级为第二等级;如果请求数据的设备采用与第一设备中使用的账号对应的生物特征的原始数据来访问第一设备时,对于请求数据的设备而言,第一设备的数据的出端等级为第三等级;如果请求数据的设备没有使用与第一设备相同的账号或生物特征的原始数据来访问第一设备时,对于请求数据的设备而言,第一设备的数据的出端等级为第四等级。
示例性地,如图3中左边正三角图所示,当一个用户使用一个设备时,在该设备中产生的数据都可以被该设备使用,对于该设备本身,该设备的数据的出端等级为第一等级,比如用户A通过账号A或者人脸(人脸绑定账号A)登录设备1,在使用设备1的过程中产生的数据与账号A关联,当用户A再次使用账号A或人脸登录设备1时,设备1中的程序可以使用或访问与账号A相关联的所有数据以及与任何账号无关的数据。当用户通过第二设备来获取第一设备中与用户有关的数据时,该第一设备的数据的出端等级可以分为以下至少两个等级:第二等级、第三等级和第四等级。具体地,在用户通过与第一设备相同的账号使用第二设备时,对于该第二设备而言,该第一设备的数据的出端等级为第二等级;在用户通过与第一设备相同的第一生物特征的原始数据使用第二设备时,对于该第二设备而言,该第一设备的数据的出端等级为第三等级,其中,第一生物特征的原始数据是该用户的任意一种生物特征的原始数据;在用户未通过与第一设备相同的账号和生物特征的原始数据使用第二设备时,对于该第二设备而言,该第一设备的数据的出端等级为第四等级。
其中,设备的数据的出端等级从高到低依次为第一等级、第二等级、第三等级、第四等级。
例如,如图1所示,用户通过账号A只使用了手机111,则用户通过账号A在手机111存的数据不允许出端,即用户通过账号A在手机111上存的数据不会分享给其他设备。用户通过账号B分别使用了手机101、车辆102、平板电脑103和手表104,则用户通过账号B在手机101上存的数据的出端等级为第二等级;用户通过账号B在车辆102上存的数据的出端等级为第二等级;用户通过账号B在平板电脑103上存的数据的出端等级为第二等级;用户通过账号B在手表104上存的数据的出端等级为第二等级。用户通过用户的第一生物特征的原始数据使用了手机121、手表122和音响123,则用户通过用户的第一生物特征的原始数据在手机121上存的数据的出端等级为第三等级;用户通过用户的第一生物特征的原始数据在手表122上存的数据的出端等级为第三等级;用户通过用户的第一生物特征的原始数据在音响123上存的数据的出端等级为第三等级。用户未使用任何账号,且未使用任何用户的生物特征的原始数据使用了电视131、手机132、平板电脑133、手表134、音响135和车辆136,则电视131的数据的出端等级为第四等级;手机132的数据的出端等级为第四等级;平板电脑133的数据的出端等级为第四等级;手表134的数据的出端等级为第四等级;音响135的数据的出端等级为第四等级;车辆136的数据的出端等级为第四等级。
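上述出端等级的确定逻辑可以用如下Python代码草图示意。其中的等级常量、函数名与参数名均为说明用的假设,并非对实施例具体实现方式的限定:

```python
# 示意性草图:根据请求数据的设备所使用的注册信息,确定第一设备(被请求数据的设备)
# 的数据的出端等级。等级常量与数据结构均为本示例的假设。
LEVEL_1, LEVEL_2, LEVEL_3, LEVEL_4 = 1, 2, 3, 4

def determine_output_level(first_device_id, requester_id,
                           requester_account, biometric_account,
                           first_device_accounts):
    """requester_account: 请求数据的设备上使用的账号(无则为 None);
    biometric_account: 对生物特征原始数据识别后得到的账号(无则为 None);
    first_device_accounts: 第一设备上存储的账号集合。"""
    if requester_id == first_device_id:  # 请求数据的设备是第一设备本身
        return LEVEL_1
    if requester_account is not None and requester_account in first_device_accounts:
        return LEVEL_2                   # 使用与第一设备相同的账号
    if biometric_account is not None and biometric_account in first_device_accounts:
        return LEVEL_3                   # 生物特征对应的账号存在于第一设备
    return LEVEL_4                       # 无相同账号或生物特征
```

例如,第二设备以账号A访问存有账号A的第一设备时,得到的出端等级为第二等级。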
进一步地,当用户使用不同的生物特征的原始数据来使用多个设备时,根据识别的生物特征的原始数据的准确率或误闯率,可以将上述第三等级细化为以下至少两种:第一子等级、第二子等级、第三子等级。其中,设备的数据的出端等级从高到低依次为第一子等级、第二子等级、第三子等级。
在用户通过该用户的3D人脸、指纹、虹膜或DNA来使用多个设备的情况下,即同一个用户在多个设备上使用同一个3D人脸、指纹、虹膜或DNA时,将该设备的数据的出端等级确定为第一子等级;例如,用户A通过用户A的指纹来使用设备A时,也通过用户A的指纹来使用其他设备(例如,设备C),则用户A通过用户A的指纹在设备A上存的数据的出端等级为第一子等级;用户A通过用户A的指纹在设备C上存的数据的出端等级也为第一子等级。
在用户通过该用户的2D人脸或静脉来使用多个设备的情况下,即同一个用户在多个设备上使用同一个2D人脸或静脉时,将该设备的数据的出端等级确定为第二子等级;例如,用户A通过用户A的2D人脸来使用设备A时,也通过用户A的2D人脸来使用其他设备(例如,设备C),则用户A通过用户A的2D人脸在设备A上存的数据的出端等级为第二子等级;用户A通过用户A的2D人脸在设备C上存的数据的出端等级也为第二子等级。
在用户通过该用户的声音或签名来使用多个设备的情况下,即同一个用户在多个设备上使用同一个声音或签名时,将该设备的数据的出端等级确定为第三子等级。例如,用户A通过用户A的声音来使用设备A时,也通过用户A的声音来使用其他设备(例如,设备C),则用户A通过用户A的声音在设备A上存的数据的出端等级为第三子等级;用户A通过用户A的声音在设备C上存的数据的出端等级为第三子等级。
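上述按生物特征类型细分子等级的规则,可以用如下Python代码草图示意。生物特征类型的英文标识为本示例的假设:

```python
# 示意性草图:按生物特征类型将第三等级细分为三个子等级。
SUB_LEVEL_1, SUB_LEVEL_2, SUB_LEVEL_3 = 1, 2, 3

_SUB_LEVEL_BY_BIOMETRIC = {
    # 第一子等级:3D人脸、指纹、虹膜、DNA
    "3d_face": SUB_LEVEL_1, "fingerprint": SUB_LEVEL_1,
    "iris": SUB_LEVEL_1, "dna": SUB_LEVEL_1,
    # 第二子等级:2D人脸、静脉
    "2d_face": SUB_LEVEL_2, "vein": SUB_LEVEL_2,
    # 第三子等级:声音、签名
    "voice": SUB_LEVEL_3, "signature": SUB_LEVEL_3,
}

def sub_level_of(biometric_type):
    """返回给定生物特征类型对应的子等级。"""
    return _SUB_LEVEL_BY_BIOMETRIC[biometric_type]
```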
当第一用户使用第二设备时,第一用户需要获取第一设备中的数据,则需要确定第一设备的数据的出端等级,此时,第一设备的数据的出端等级是相对于第二设备而言。以下,以两种情况详细描述如何确定第一设备的数据的出端等级。以下以第一设备和第二设备属于同一家庭网络为例,家庭网络可以通过Wi-Fi方式或蓝牙方式通信。
情况1:第二设备确定第一设备的数据的出端等级。
(1)在第一用户输入的注册信息包括第一用户的生物特征的原始数据,且第二设备是可以采集生物特征但不具备生物识别功能的设备的情况下,该第二设备只能采集生物特征,第二设备需要借助其他设备来完成生物识别功能。
如图4所示,具体的步骤220可以包括步骤220a至步骤223a。
步骤220a,第二设备将第一用户的生物特征的原始数据发送给第一设备。
步骤221a,第一设备对该第一用户的生物特征的原始数据进行识别,确定是否可以得到与该第一用户的生物特征的原始数据对应的账号。
当第一设备确定未得到与该第一用户的生物特征的原始数据对应的账号的情况下,执行步骤222a。
步骤222a,第一设备向第二设备发送第一指令,该第一指令用于指示第一设备未得到与该第一用户的生物特征的原始数据对应的账号。在第二设备接收到该第一指令后,确定第一设备的数据的出端等级为第四等级。
例如,如图1所示,当电视131(该电视131是第二设备的一例)通过电视131的摄像头采集使用电视131的用户的人脸图像,对使用电视131的用户进行身份识别时,电视131将采集的电视131的使用者的人脸图像发送给手机132(该手机132是第一设备的一例),手机132根据电视131发送的使用电视131的用户的人脸图像的原始数据与手机132的数据库中存储的特征模版进行比对,使用电视131的用户的人脸图像的原始数据与手机132的数据库中存储的机主3的特征模版不匹配的情况下,手机132不会得到与使用电视131的用户的人脸图像对应的账号,手机132向电视131发送上述第一指令,电视131接收到第一指令后,电视131可以确定手机132的数据的出端等级为第四等级。
当第一设备对该第一用户的生物特征的原始数据进行识别可以得到识别结果的情况下,则第一设备可以得到第一用户的生物特征的原始数据对应的账号。在第一设备确定得到了与该第一用户的生物特征的原始数据对应的账号的情况下,执行步骤222a'和步骤223a。
步骤222a',第一设备向第二设备发送第一信息,第一信息用于指示第一设备确定的第一用户的生物特征的原始数据对应的账号。
步骤223a,第二设备根据第一设备发送的第一信息,确定第一设备的数据的出端等级。
具体地,第二设备确定自身存储有第一设备确定的第一用户的生物特征的原始数据对应的账号的情况下,第二设备确定第一设备的数据的出端等级为第二等级。在第二设备确定自身没有存储第一设备确定的第一用户的生物特征的原始数据对应的账号的情况下,第二设备确定第一设备的数据的出端等级为第三等级。
进一步地,在确定第一设备的数据的出端等级为第三等级的情况下,第二设备还可以根据第一用户输入的第一用户的注册信息的具体形式,确定第一设备的数据的出端等级为第三等级中的哪个子等级。具体地,当第一用户输入的第一用户的注册信息是第一用户的3D人脸、指纹、虹膜或DNA时,第二设备确定第一设备的数据的出端等级为第一子等级;当第一用户输入的第一用户的注册信息是第一用户的2D人脸或静脉时,第二设备确定第一设备的数据的出端等级为第二子等级;当第一用户输入的第一用户的注册信息是第一用户的声音或签名时,第二设备确定第一设备的数据的出端等级为第三子等级。
例如,如图1所示,当音响123(该音响123是第二设备的一例)通过音响123的麦克风采集使用音响123的用户的声音,对使用音响123的用户进行身份识别时,音响123将采集的音响123的使用者的声音发送给手机121(该手机121是第一设备的一例),手机121根据音响123发送的使用音响123的用户的声音的原始数据与手机121的数据库中存储的特征模版进行比对,使用音响123的用户的声音的原始数据与手机121的数据库中存储的账号C的特征模版匹配的情况下,手机121会得到与使用音响123的用户的声音对应的账号C,手机121向音响123发送上述账号C,音响123确定自身存储有账号C,则音响123确定手机121的数据的出端等级为第二等级。当音响123确定自身没有存储账号C时,则音响123确定手机121的数据的出端等级为第三等级。更进一步地,音响123还可以确定手机121的数据的出端等级为第三等级中的第三子等级。
(2)在第一用户输入的注册信息包括第一用户的生物特征的原始数据,且第二设备是具备生物识别功能的设备的情况下,该第二设备可以完成生物识别功能。
如图5所示,具体的步骤220可以包括步骤220b至步骤224b、步骤221c至步骤224c。
步骤220b,第二设备对该第一用户的生物特征的原始数据进行识别,并确定第二设备是否可以得到与该第一用户的生物特征的原始数据对应的账号。
具体地,在第二设备得到与该第一用户的生物特征的原始数据对应的账号的情况下,执行步骤221b至步骤224b。在第二设备未得到与该第一用户的生物特征的原始数据对应的账号的情况下,执行步骤221c至步骤224c。
步骤221b,第二设备向第一设备发送第二信息,该第二信息用于指示第二设备对第一用户的生物特征的原始数据进行识别得到的与该第一用户的生物特征的原始数据对应的账号。
步骤222b,第一设备查找第一设备上是否存有第二信息指示的账号,并执行步骤223b。
步骤223b,第一设备向第二设备发送第三信息,该第三信息用于指示第一设备上是否存有第二信息指示的账号。
步骤224b,第二设备根据第三信息,确定第一设备的数据的出端等级。
具体地,在第三信息指示第一设备上存有第二信息指示的账号的情况下,第二设备确定第一设备的数据的出端等级为第二等级;在第三信息指示第一设备上未存有第二信息指示的账号的情况下,第二设备确定第一设备的数据的出端等级为第四等级。
例如,如图1所示,手机121(该手机121是第二设备的一例)可以对使用手机121的用户的指纹进行识别,并得到与该手机121的用户的指纹相对应的账号C。当手机121将账号C发送给手表122(该手表122是第一设备的一例)时,手表122确定手表122中存有账号C,手表122向手机121发送手表122中存有账号C的信息,则手机121确定手表122的数据的出端等级为第二等级。当手机121将账号C发送给音响135(该音响135是第一设备的另一例)时,音响135确定音响135中未存有账号C,音响135向手机121发送音响135中未存账号C的信息,则手机121确定音响135的数据的出端等级为第四等级。
步骤221c,第二设备将第一用户的生物特征的原始数据发送给第一设备。
步骤222c,第一设备对该第一用户的生物特征的原始数据进行识别,并确定是否可以得到第一用户的生物特征的原始数据对应的账号。
在第一设备未得到第一用户的生物特征的原始数据对应的账号的情况下,第一设备向第二设备发送第三指令,该第三指令用于指示第一设备未得到第一用户的生物特征的原始数据对应的账号,则第二设备根据第三指令,确定第一设备的数据的出端等级是第四等级。在第一设备得到第一用户的生物特征的原始数据对应的账号的情况下,执行步骤223c至步骤224c。
步骤223c,第一设备向第二设备发送第四信息,所述第四信息用于指示第一设备确定的第一用户的生物特征的原始数据对应的账号。
步骤224c,第二设备根据第四信息,确定第一设备的数据的出端等级是第三等级。
进一步地,第二设备还可以根据第四信息和第一用户输入的第一用户的注册信息的具体形式,确定第一设备的数据的出端等级为第三等级中的哪个子等级。具体地,当第一用户输入的第一用户的注册信息是第一用户的3D人脸、指纹、虹膜或DNA时,第二设备确定第一设备的数据的出端等级为第一子等级;当第一用户输入的第一用户的注册信息是第一用户的2D人脸或静脉时,第二设备确定第一设备的数据的出端等级为第二子等级;当第一用户输入的第一用户的注册信息是第一用户的声音或签名时,第二设备确定第一设备的数据的出端等级为第三子等级。
(3)在第一用户输入的注册信息是第一用户的账号的情况下,第二设备中注册过第一用户的账号,第二设备将第一用户的账号发送给第一设备;第一设备确定是否存有第一用户的账号,第一设备向第二设备发送第二指令,该第二指令用于指示第一设备上是否存有第一用户的账号。在第一设备上存有第一用户的账号的情况下,第二设备确定第一设备的数据的出端等级为第二等级;在第一设备上未存有第一用户的账号的情况下,第二设备确定第一设备的数据的出端等级为第四等级。
在上述情况1下,第二设备可以将第二设备确定的第一设备的数据的出端等级发送给第一设备。
情况2:第一设备确定第一设备的数据的出端等级。
(1)在第一用户输入的注册信息包括第一用户的生物特征的原始数据,且第二设备是可以采集生物特征但不具备生物识别功能的设备的情况下,该第二设备只能采集生物特征,第二设备需要借助其他设备来完成生物特征识别功能。
可选地,上述第一用户的账号可以是一个或多个。
如图6所示,具体的步骤220还可以包括步骤220d至步骤222d。
步骤220d,第二设备将第一用户的生物特征的原始数据和第二设备中存储的所有账号发送给第一设备。
步骤221d,第一设备对该第一用户的生物特征的原始数据进行识别,并确定是否可以得到与该第一用户的生物特征的原始数据相对应的账号。
步骤222d,第一设备确定第一设备的数据的出端等级。
具体地,在第一设备确定第二设备中存储的所有账号中存在第一用户的生物特征的原始数据相对应的账号时,第一设备确定第一设备的数据的出端等级为第二等级;在第一设备确定第二设备中存储的所有账号中不存在第一用户的生物特征的原始数据相对应的账号时,第一设备确定第一设备的数据的出端等级为第三等级。在第一设备未得到与该第一用户的生物特征的原始数据相对应的账号的情况下,该第一设备确定第一设备的数据的出端等级为第四等级。
进一步地,确定第一设备的数据的出端等级为第三等级的情况下,第一设备还可以根据第二设备发送的第一用户的注册信息的具体形式,确定第一设备的数据的出端等级为第三等级中的哪个子等级。具体地,当第一用户的注册信息是第一用户的3D人脸、指纹、虹膜或DNA时,第一设备确定第一设备的数据的出端等级为第一子等级;当第一用户的注册信息是第一用户的2D人脸或静脉时,第一设备确定第一设备的数据的出端等级为第二子等级;当第一用户的注册信息是第一用户的声音或签名时,第一设备确定第一设备的数据的出端等级为第三子等级。
(2)在第一用户输入的注册信息包括第一用户的生物特征的原始数据,且第二设备是具备生物识别功能的设备的情况下,该第二设备可以完成生物识别功能。
如图7所示,具体的步骤220还可以包括步骤220e至步骤222e、步骤221f至步骤226f。
步骤220e,第二设备对第一用户的生物特征的原始数据进行识别,并确定是否可以得到与第一用户的生物特征的原始数据对应的账号。
在第二设备得到与第一用户的生物特征的原始数据对应的账号的情况下,上述方法200还包括步骤221e和步骤222e。在第二设备未得到与第一用户的生物特征的原始数据对应的账号的情况下,上述方法200还包括步骤221f和步骤222f。
步骤221e,第二设备向第一设备发送第五信息,该第五信息用于指示第二设备得到与第一用户的生物特征的原始数据对应的账号。
步骤222e,第一设备根据第五信息,确定第一设备的数据的出端等级。
具体地,在第一设备确定第一设备存有第二设备发送的第一用户的生物特征原始数据对应的账号的情况下,第一设备确定第一设备的数据的出端等级为第二等级;在第一设备确定第一设备未存有第二设备发送的第一用户的生物特征的原始数据对应的账号的情况下,第一设备确定第一设备的数据的出端等级为第四等级。
步骤221f,第二设备向第一设备发送第六信息,所述第六信息用于指示所述第一用户的生物特征的原始数据。
步骤222f,第一设备根据第六信息,确定是否得到第一用户的生物特征的原始数据对应的账号。
步骤223f,确定第一设备的数据的出端等级。
具体地,在第一设备未得到第一用户的生物特征的原始数据对应的账号的情况下,第一设备确定该第一设备的数据的出端等级为第四等级。在第一设备得到第一用户的生物特征的原始数据对应的账号的情况下,执行步骤224f至步骤226f。
步骤224f,第一设备向第二设备发送第七信息,所述第七信息用于指示第一设备确定的第一用户的生物特征的原始数据对应的账号。
步骤225f,第二设备确定第二设备上是否存有第一设备确定的第一用户的生物特征的原始数据对应的账号。
步骤226f,第二设备向第一设备发送第八信息,该第八信息用于指示第二设备上是否存有第一设备确定的第一用户的生物特征的原始数据对应的账号。
在第二设备上存有第一设备确定的第一用户的生物特征的原始数据对应的账号的情况下,第一设备确定第一设备的数据的出端等级为第二等级;在第二设备上未存有第一设备确定的第一用户的生物特征的原始数据对应的账号的情况下,第一设备确定第一设备的数据的出端等级为第三等级。
进一步地,在第一设备确定第一设备的数据的出端等级为第三等级的情况下,第一设备还可以根据第二设备发送的第一用户的注册信息的具体形式,确定第一设备的数据的出端等级为第三等级中的哪个子等级。具体地,当第一用户的注册信息是第一用户的3D人脸、指纹、虹膜或DNA时,第一设备确定第一设备的数据的出端等级为第一子等级;当第一用户的注册信息是第一用户的2D人脸或静脉时,第一设备确定第一设备的数据的出端等级为第二子等级;当第一用户的注册信息是第一用户的声音或签名时,第一设备确定第一设备的数据的出端等级为第三子等级。
(3)在第一用户输入的注册信息是第一用户的账号的情况下,第二设备中注册过第一用户的账号,第二设备将第一用户的账号发送给第一设备;第一设备确定是否存有第一用户的账号,在第一设备存有第一用户的账号的情况下,第一设备确定第一设备的数据的出端等级为第二等级;在第一设备未存有第一用户的账号的情况下,第一设备确定第一设备的数据的出端等级是第四等级。
在上述情况2下,第一设备可以将第一设备确定的第一设备的数据的出端等级发送给第二设备。
上述情况1和2中,家庭网络中的第一设备可以是一个或多个,均可以采用上述方式确定第一设备的数据的出端等级。第二设备给第一设备发送生物特征的原始数据时,可以是发送给家庭网络中除了第二设备以外的所有设备,家庭网络中除了第二设备以外的所有设备都是第一设备;还可以是第二设备根据家庭网络中设备的性能选出具有生物特征识别功能的设备,将生物特征的原始数据发送给具有生物特征识别功能的设备,在得到生物特征的原始数据对应的账号后,再确认家庭网络中的除第二设备以外的其他设备是否存有该账号,完成家庭网络中的除第二设备以外的其他设备(即第一设备)的数据的出端等级的确定。
当第一设备的数据的出端等级确定后,第二设备还可以执行以下步骤230。
步骤230,第二设备获取第一用户的第一数据请求消息,该第一数据请求消息用于请求分享第一用户的第一数据。
示例性地,第二设备可以通过语音识别功能识别到第一用户对第一数据的请求;或者,第二设备还可以通过第一用户的输入获取第一用户对第一数据的请求。
用户通过在设备上输入的注册信息使用设备所产生的数据,可以根据数据的风险程度分为高影响的个人数据、中影响的个人数据、低影响的个人数据和非个人数据。其中,高影响的个人数据可以包括精确的位置数据和/或健康数据,其中,精确的位置数据可以理解为是经纬度坐标或轨迹。例如,精确的位置数据可以是用户使用设备时,用户的实时的精确位置数据。中影响的个人数据可以包括一般位置数据和/或视频数据。其中,一般位置数据可以理解为终端设备所在的小区标识(cell identity,CELL ID)或者该设备所连接的无线保真WI-FI的基本服务集标识(basic service set identifier,BSSID)。一般位置数据无法直接定位到经纬度坐标,但是可以大概标识用户位置的信息。一般位置数据可以是用户使用设备时,用户的历史位置数据。示例性地,用户感兴趣的地方,例如,用户喜欢吃饭的地方,用户喜欢娱乐的地方。低影响的个人数据可以包括物流数据、日程计划数据和/或喜好数据;非个人数据可以包括设备能力数据和/或设备状态数据。
高影响的个人数据的风险性>中影响的个人数据的风险性>低影响的个人数据的风险性>非个人数据的风险性。其中,高影响个人数据可以理解为该部分个人数据对用户而言风险程度影响最高,即该部分数据的风险程度最高;中影响个人数据可以理解为该部分个人数据对用户而言风险程度影响较高,即该部分数据的风险程度较高;低影响个人数据可以理解为该部分个人数据对用户而言风险程度影响较低,即该部分数据的风险程度较低;非个人数据可以理解为该部分数据与用户无关,而是设备本身的一些数据。
可选地,本申请实施例中风险程度还可以用隐私程度代替,风险性还可以用隐私性代替。
用户通过用户输入的注册信息使用设备时,在设备上产生数据的时候,设备会根据数据的风险程度,将设备上的数据打上标签。例如,将精确的位置数据打上高影响的个人数据的标签;将一般位置数据打上中影响的个人数据的标签;将用户的喜好数据打上低影响的个人数据的标签;将设备能力数据打上非个人数据的标签。
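上述打标签的过程可以用如下Python代码草图示意,其中的数据类别英文标识与字段名均为说明用的假设:

```python
# 示意性草图:在产生数据时,依据数据类别的风险程度为数据打上标签。
HIGH, MEDIUM, LOW, NON_PERSONAL = "高影响", "中影响", "低影响", "非个人"

_RISK_BY_CATEGORY = {
    "precise_location": HIGH,  "health": HIGH,             # 高影响的个人数据
    "coarse_location": MEDIUM, "video": MEDIUM,            # 中影响的个人数据
    "logistics": LOW, "schedule": LOW, "preference": LOW,  # 低影响的个人数据
    "device_capability": NON_PERSONAL,                     # 非个人数据
    "device_state": NON_PERSONAL,
}

def tag_data(category, value):
    """返回带风险标签的数据记录。"""
    return {"category": category, "value": value,
            "risk": _RISK_BY_CATEGORY[category]}
```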
例如,如图3所示,对于一个请求数据的设备而言,被请求数据的设备的数据的出端等级越高,请求数据的设备可访问数据的最高风险性越高。具体地,在被请求数据的设备的数据的出端等级为第二等级的情况下,请求数据的设备可访问被请求数据的设备中的数据类型为第二类型,可以访问最高风险性的数据是中影响的个人数据,第二类型的数据可以包括中影响的个人数据、低影响的个人数据和非个人数据。在被请求数据的设备的数据的出端等级为第三等级的情况下,请求数据的设备可访问被请求数据的设备中的数据类型为第三类型,可以访问最高风险性的数据是低影响的个人数据,第三类型的数据可以包括低影响的个人数据和非个人数据。在被请求数据的设备的数据的出端等级为第四等级的情况下,请求数据的设备可访问被请求数据的设备中的数据类型为第四类型,第四类型的数据可以包括非个人数据。可以理解地,当请求数据的设备就是被请求数据的设备时,则可访问所有数据类型,即第一类型,可以访问的最高风险性的数据是高影响的个人数据,第一类型的数据包括高影响的个人数据、中影响的个人数据、低影响的个人数据和非个人数据。
进一步地,在上述第三等级可以细化分为以下至少两种:第一子等级、第二子等级、第三子等级的情况下,被请求数据的设备的数据的出端等级对应的数据类型可以不一样。具体地,在被请求数据的设备的数据的出端等级为第一子等级的情况下,请求数据的设备可访问被请求数据的设备的数据类型包括照片数据、录制的视频数据、设备能力数据和/或设备状态数据,例如,用户拍的照片或用户录制的视频。在被请求数据的设备的数据的出端等级为第二子等级的情况下,请求数据的设备可访问被请求数据的设备的数据类型包括物流数据、日程计划数据、设备能力数据和/或设备状态数据,例如,用户的快递运输数据。在被请求数据的设备的数据的出端等级为第三子等级的情况下,请求数据的设备可访问被请求数据的设备的数据类型包括喜好数据、观看的视频数据、设备能力数据和/或设备状态数据,例如,用户喜欢听的歌的类型或用户喜欢听的歌星;又例如,用户的运动喜好;又例如,用户观看的视频记录。
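出端等级与可分享数据类型之间的对应关系,可以用如下Python代码草图示意。其中的风险标签取值与字段名为说明用的假设:

```python
# 示意性草图:依据被请求数据的设备的数据的出端等级,过滤出允许出端(可分享)的数据记录。
_ALLOWED_RISK = {
    1: {"高影响", "中影响", "低影响", "非个人"},  # 第一类型:设备本身可访问全部数据
    2: {"中影响", "低影响", "非个人"},            # 第二类型
    3: {"低影响", "非个人"},                      # 第三类型
    4: {"非个人"},                                # 第四类型
}

def shareable_records(records, output_level):
    """records 为带 'risk' 标签的数据记录列表,返回允许出端的记录。"""
    allowed = _ALLOWED_RISK[output_level]
    return [r for r in records if r["risk"] in allowed]
```

例如,出端等级为第二等级时,高影响的个人数据会被过滤掉,其余记录允许分享。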
上述第二设备可以是请求数据的设备,上述第一设备可以是被请求数据的设备。
在第二设备获取第一用户对第一数据的请求后,还可以包括步骤240。
步骤240,确定第一设备是否分享第一数据。以下,以两种方式具体描述步骤240。
方式1:第二设备确定第一设备是否分享第一数据。
如图8所示,具体的步骤240可以包括步骤241a至步骤244a。
步骤241a,第二设备确定第一数据是否属于第一设备的数据的出端等级对应的数据类型的数据,在第一数据属于第一设备的数据的出端等级对应的数据类型的数据的情况下,第二设备执行步骤242a;在第一数据不属于第一设备的数据的出端等级对应的数据类型的数据的情况下,第二设备不会向第一设备发送第一数据请求消息。
步骤242a,第二设备向第一设备发送第一数据请求消息。
在第一设备是多个的情况下,第二设备可以根据预设规则,确定向多个第一设备中的至少一个第一设备发送上述第一数据请求消息。示例性地,预设规则可以是多个第一设备中与第二设备之间的距离小于第一阈值的第一设备;或者,预设规则可以是第二设备请求数据的频率大于第二阈值的第一设备;或者,预设规则可以是第一设备的置信度大于第三阈值的第一设备。
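上述依据预设规则选取第一设备的过程,可以用如下Python代码草图示意。其中的阈值默认值与设备属性字段名均为本示例的假设:

```python
# 示意性草图:从多个第一设备中,选出满足任一预设规则的设备来接收第一数据请求消息。
def select_first_devices(devices, distance_max=10.0, freq_min=5, confidence_min=0.8):
    """devices: 形如 {'id':..., 'distance':..., 'request_freq':..., 'confidence':...}
    的字典列表;与第二设备的距离小于第一阈值、被请求数据的频率大于第二阈值、
    或置信度大于第三阈值的第一设备会被选中。"""
    return [d["id"] for d in devices
            if d["distance"] < distance_max
            or d["request_freq"] > freq_min
            or d["confidence"] > confidence_min]
```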
步骤243a,第一设备查找第一设备是否存有第一数据。
在第一设备未存有第一数据的情况下,第一设备不会向第二设备分享第一数据;在至少一个第一设备存有第一数据的情况下,执行步骤244a。
第一设备上是否存有第一数据,具体地,是指第一设备上是否有第一用户的账号关联的第一数据。
步骤244a,第一设备向第二设备分享第一数据。
可选地,在第一设备将第一数据分享给第二设备后,当第二设备获取到第一用户的第二数据请求消息,该第二数据请求消息用于请求分享第二数据,该第二数据与第一数据同属于第一设备的数据的出端等级对应的数据类型的数据,且该第二设备获取第一用户的第一数据请求消息的时刻与第二设备获取第一用户的第二数据请求消息的时刻之间的时间小于或等于第一时间时,第二设备直接向第一设备发送第一用户的第二数据请求消息,在第一设备存有第二数据的情况下,向第二设备分享第二数据。从而可以有效地为用户提供差异个性化体验。
方式2:第一设备确定第一设备是否分享第一数据。
如图8所示,具体的步骤240可以包括步骤241b至步骤244b。
步骤241b,第二设备向第一设备发送第一数据请求消息。
在第一设备是多个的情况下,第二设备可以根据预设规则,确定向多个第一设备中的至少一个第一设备发送第一数据请求消息。示例性地,预设规则可以是多个第一设备中与第二设备之间的距离小于第一阈值的第一设备;或者,预设规则可以是第二设备请求数据的频率大于第二阈值的第一设备;或者,预设规则可以是第一设备的置信度大于第三阈值的第一设备。
步骤242b,第一设备根据第一数据请求消息,确定第一数据是否属于第一设备的数据的出端等级对应的数据类型的数据。
具体地,在第一数据不属于第一设备的数据的出端等级对应的数据类型的数据的情况下,第一设备不会向第二设备分享第一数据。在第一数据属于第一设备的数据的出端等级对应的数据类型的数据情况下,还包括步骤243b。
步骤243b,第一设备查找第一设备是否存有第一数据。
具体地,在第一设备未存有第一数据的情况下,第一设备不会向第二设备分享第一数据;在第一设备存有第一数据的情况下,还执行步骤244b。
第一设备上是否存有第一数据,具体地,是指第一设备上是否有第一用户的账号关联的第一数据。
步骤244b,第一设备将第一数据分享给第二设备。
可选地,在第一设备将第一数据分享给第二设备后,当第二设备获取到第一用户的第二数据请求消息,该第二数据请求消息用于请求分享第二数据,该第二数据与第一数据同属于第一设备的数据的出端等级对应的数据类型的数据,且该第二设备获取第一用户的第一数据请求消息的时刻与第二设备获取第一用户的第二数据请求消息的时刻之间的时间小于或等于第一时间时,第二设备向第一设备发送第一用户的第二数据请求消息,在第一设备存有第二数据的情况下,向第二设备分享第二数据。从而可以有效地为用户提供差异个性化体验。
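上述"在第一时间内直接转发同类型请求"的逻辑,可以用如下Python代码草图示意。类名、方法名与默认时间窗口均为说明用的假设:

```python
# 示意性草图:当第二数据请求与此前的第一数据请求属于相同的数据类型,
# 且两次请求的时刻间隔不超过第一时间时,第二设备可直接向该第一设备转发请求。
class RequestForwarder:
    def __init__(self, first_time_s=60.0):
        self.first_time_s = first_time_s
        self._last_request = {}  # (第一设备id, 数据类型) -> 上次请求时刻(秒)

    def should_forward_directly(self, first_device_id, data_type, now):
        """now 为当前时刻(秒);返回本次请求是否可以直接转发。"""
        key = (first_device_id, data_type)
        last = self._last_request.get(key)
        self._last_request[key] = now  # 记录本次请求时刻
        return last is not None and (now - last) <= self.first_time_s
```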
例如,结合图1和图9所示,当用户1通过账号A使用车辆136(该车辆136是第二设备的一例)时,车辆136会通过网络向一个或多个设备发送用户1的账号A,其中,一个或多个设备可以是与车辆136都连接在同一个网络的设备,例如,该一个或多个设备可以是如图1中的设备,这里以多个设备分别是手机111(该手机111是第一设备的一例)和手机101(该手机101是第一设备的另一例)为例进行描述。当手机111接收到用户1的账号A后,手机111确定手机111中存有用户1的账号A,手机111确定手机111的数据的出端等级为第二等级,该手机111的数据的出端等级是相对于车辆136而言,则手机111将手机111的数据的出端等级发送给车辆136。车辆136获取用户1的数据请求消息,该用户1的数据请求消息用于请求分享用户1喜欢娱乐的地方,因为上述用户1的喜欢娱乐的地方属于手机111的数据的出端等级对应的数据类型的数据,则车辆136向手机111发送上述用户1的数据请求消息,手机111接收到该用户1的数据请求消息之后,在手机111上存有用户1喜欢娱乐的地方的情况下,将用户1喜欢娱乐的地方分享给车辆136。当手机101接收到用户1的账号A后,手机101确定手机101中未存有用户1的账号A,则手机101确定手机101的数据的出端等级为第四等级,该手机101的数据的出端等级是相对于车辆136而言,则手机101将手机101的数据的出端等级发送给车辆136。车辆136获取用户1的数据请求消息,该用户1的请求消息用于请求分享用户1喜欢娱乐的地方,因为上述用户1的喜欢娱乐的地方不属于手机101的数据的出端等级对应的数据类型的数据,则车辆136可以不向手机101发送上述用户1的数据请求消息,即车辆136只可以得到手机111上用户1喜欢娱乐的地方,从而车辆136的驾驶员可以根据手机111上用户1喜欢娱乐的地方,驾驶车辆136去往目的地。
例如,结合图1和图10所示,当用户2通过账号B使用电视131(该电视131是第二设备的一例)时,电视131会通过网络向一个或多个设备发送用户2的账号B,其中,一个或多个设备可以是与电视131都连接在同一个家庭网络的设备,例如,该一个或多个设备可以是如图1中的设备,这里以多个设备分别是平板电脑103(该平板电脑103是第一设备的一例)和音响123(该音响123是第一设备的另一例)为例进行描述。当平板电脑103接收到用户2的账号B后,平板电脑103确定平板电脑103中存有用户2的账号B,则平板电脑103向电视131发送平板电脑103上存有用户2的账号B,则电视131确定平板电脑103的数据的出端等级为第二等级,该平板电脑103的数据的出端等级是相对于电视131而言,则电视131将平板电脑103的数据的出端等级发送给平板电脑103。电视131获取用户2的数据请求消息,该用户2的数据请求消息用于请求分享用户2历史歌单数据,电视131将用户2的数据请求消息发送给平板电脑103,因为上述用户2的历史歌单数据属于平板电脑103的数据的出端等级对应的数据类型的数据,则在平板电脑103上存有用户2历史歌单数据的情况下,将用户2历史歌单数据分享给电视131。当音响123接收到用户2的账号B后,音响123确定音响123中未存有用户2的账号B,则音响123向电视131发送音响123未存用户2的账号B的指令,则电视131确定音响123的数据的出端等级为第四等级,该音响123的数据的出端等级是相对于电视131而言,则电视131将音响123的数据的出端等级发送给音响123。电视131获取用户2的数据请求消息,该用户2的数据请求消息用于请求分享用户2历史歌单数据,电视131将用户2的数据请求消息发送给音响123,因为上述用户2的历史歌单数据不属于音响123的数据的出端等级对应的数据类型的数据,则音响123不会将用户2历史歌单数据分享给电视131。当用户3通过语音使用电视131时,电视131上没有用户3的声纹对应的账号,电视131无法识别用户3的声纹,电视131会通过网络向一个或多个设备发送用户3的语音,其中,一个或多个设备可以是与电视131都连接在同一个家庭网络的设备,例如,该一个或多个设备可以是如图1中的设备,这里以多个设备分别是平板电脑103和音响123为例进行描述。当平板电脑103接收到用户3的语音后,平板电脑103未识别出用户3的声纹,确定平板电脑103中未存有与用户3的声纹对应的账号,则平板电脑103确定平板电脑103的数据的出端等级为第四等级,该平板电脑103的数据的出端等级是相对于电视131而言,平板电脑103将平板电脑103的数据的出端等级发送给电视131,电视131获取用户3的数据请求消息,该用户3的数据请求消息用于请求分享用户3的历史歌单数据,电视131确定用户3的历史歌单数据不属于平板电脑103的数据的出端等级对应的数据类型的数据,则电视131不向平板电脑103发送上述用户3的数据请求消息。当音响123接收到用户3的语音后,音响123识别出用户3的声纹,确定音响123中存有与用户3的声纹对应的账号,则音响123确定音响123的数据的出端等级为第三等级,且用户3使用声纹使用电视131,因此音响123的数据的出端等级是第三子等级,该音响123的数据的出端等级是相对于电视131而言,音响123将音响123的数据的出端等级发送给电视131,电视131获取用户3的数据请求消息,该用户3的数据请求消息用于请求分享用户3的历史歌单数据,电视131确定用户3的历史歌单数据属于音响123的数据的出端等级对应的数据类型的数据,则电视131向音响123发送上述用户3的数据请求消息,在音响123上存有用户3的历史歌单数据的情况下,音响123将用户3的历史歌单数据分享给电视131。这样,当用户2通过账号B使用了电视131时,电视131会接收到用户2通过账号B在平板电脑103上存的历史歌单数据,当用户3通过语音使用电视131时,电视131可以访问用户3在音响123上存的历史歌单数据。
例如,如图1所示,具体地,用户3可以通过用户3的人脸图像、用户3的指纹或用户3的声音使用了手机121;用户3可以通过用户3的人脸图像和用户3的声音使用了手表122;用户3可以通过用户3的声音使用了音响123,其中,用户3通过用户3的生物特征的原始数据在手机121存的数据是按照用户3使用的账号C进行存储的;用户3通过用户3的生物特征的原始数据在手表122存的数据是按照用户3使用的账号C进行存储的;用户3通过用户3的生物特征的原始数据在音响123存的数据是按照用户3使用的账号C进行存储的。
结合图1和图11所示,当用户3通过用户3的语音使用车辆102(该车辆102是第二设备的一例)时,车辆102会通过网络向一个或多个设备发送用户3的原始声音和账号B,车辆102无法识别用户3的声纹但可以识别用户3的语音,即车辆102无法识别声音的身份,可以识别出语音的内容。其中,一个或多个设备可以是与车辆102都连接在同一个网络的设备,例如,该一个或多个设备可以是如图1中的设备,这里以一个设备是音响123(该音响123是第一设备的一例)为例进行描述。当音响123接收到用户3的语音和账号B后,音响123确定音响123中存的用户3的声纹对应的账号为账号B,则音响123确定音响123的数据的出端等级为第二等级,该音响123的数据的出端等级是相对于车辆102而言,音响123将音响123的数据的出端等级发送给车辆102,车辆102获取用户3的数据请求消息,该用户3的数据请求消息用于请求分享用户3的历史歌单数据,车辆102确定用户3的历史歌单数据属于音响123的数据的出端等级对应的数据类型的数据,则车辆102向音响123发送上述用户3的数据请求消息,在音响123上存有用户3的历史歌单数据的情况下,音响123将用户3的历史歌单数据分享给车辆102。当用户3通过用户3的指纹也使用车辆102时,车辆102会对用户3的指纹进行识别,得到用户3的指纹对应的账号为账号B,则车辆102通过网络向一个或多个设备发送账号B,其中,一个或多个设备可以是与车辆102都连接在同一个网络的设备,例如,该一个或多个设备可以是如图1中的设备,这里以一个设备是手机101(该手机101是第一设备的另一例)为例进行描述。当手机101接收到账号B后,手机101确定手机101中存有账号B,则手机101确定手机101的数据的出端等级为第二等级,该手机101的数据的出端等级是相对于车辆102而言,手机101将手机101的数据的出端等级发送给车辆102,车辆102获取用户3的数据请求消息,该请求消息用于请求分享用户3的喜欢健身的地方,车辆102确定喜欢健身的地方属于手机101的数据的出端等级对应的数据类型的数据,则车辆102向手机101发送上述用户3的数据请求消息,在手机101上存有用户3的喜欢健身的地方的情况下,手机101将用户3的喜欢健身的地方分享给车辆102。这样,当用户3通过不同的生物特征的原始数据都使用了车辆102时,车辆102不仅会接收到用户3在音响123上存的历史歌单数据,车辆102还会接收到用户3在手机101上存的用户3喜欢健身的地方,从而车辆102可以根据用户3的历史歌单,播放用户3比较喜爱的歌;车辆102还可以根据用户3的原始指纹在手机101存的用户3的喜欢健身的地方,驾驶车辆102去用户3喜欢健身的地方。
结合图1和图12所示,当用户3通过用户3的原始声音使用平板电脑103(该平板电脑103是第二设备的一例)时,平板电脑103会对用户3的原始声音进行声纹识别,得到用户3的声纹对应的账号为账号B,则平板电脑103通过网络向一个或多个设备发送账号B,其中,一个或多个设备可以是与平板电脑103都连接在同一个家庭网络的设备,例如,该一个或多个设备可以是如图1中的设备,这里以一个设备是手表122(手表122是第一设备的一例)为例进行描述。当手表122接收到账号B后,手表122确定手表122中未存账号B,则手表122确定手表122的数据的出端等级为第四等级,平板电脑103向手表122发送上述用户3的数据请求消息,手表122上没有用户3的账号B关联的数据,因此手表122不会分享数据给平板电脑103。当用户3通过用户3的原始2D人脸使用平板电脑103时,平板电脑103会对用户3的原始2D人脸进行识别,得到用户3的原始2D人脸对应的账号为账号B,则平板电脑103通过网络向一个或多个设备发送账号B,其中,一个或多个设备可以是与平板电脑103都连接在同一个家庭网络的设备,例如,该一个或多个设备可以是如图1中的设备,这里以一个设备是手机121(该手机121是第一设备的另一例)为例进行描述。当手机121接收到账号B后,手机121确定手机121中存有账号B,则手机121确定手机121的数据的出端等级为第二等级,手机121将手机121的数据的出端等级发送给平板电脑103,平板电脑103获取用户3的数据请求消息,该用户3的数据请求消息用于请求分享用户3的日程计划数据,平板电脑103确定日程计划数据属于手机121的数据的出端等级对应的数据类型的数据,则平板电脑103向手机121发送上述用户3的数据请求消息,在手机121上存有用户3的日程计划数据的情况下,手机121将用户3的日程计划数据分享给平板电脑103。
又例如,如图1所示,一个或多个用户在使用电视131、手机132、平板电脑133、手表134、音响135或车辆136时,都是处于游客状态,即该一个或多个用户没有通过任何账号,也没有通过任何生物特征的原始数据来使用电视131、手机132、平板电脑133、手表134、音响135或车辆136,则电视131、手机132、平板电脑133、手表134、音响135或车辆136上不会存该一个或多个用户的个人数据(例如,历史观看视频),则电视131、手机132、平板电脑133、手表134、音响135或车辆136在进行数据分享时,只能分享各个设备的非个人数据,即设备的设备能力数据和/或设备状态数据。例如,该一个或多个用户在电视131产生的数据在存的过程中,不会将每个用户与每个用户在电视131上存的数据对应存储,只会将所有使用电视131的用户产生的非个人数据存下来,电视131只会与其他设备分享电视131的设备能力数据或设备状态数据。
上述方法200还可以包括步骤250。
步骤250,第二设备对第一设备分享的第一数据进行保存。
以上,结合图2至图12详细说明了本申请实施例提供的方法。以下,结合图13至图14详细说明本申请实施例提供的装置。应理解,装置实施例的描述与方法实施例的描述相互对应,因此,未详细描述的内容可以参见上文方法实施例,为了简洁,部分内容不再赘述。
图13示出了本申请实施例提供的电子设备1300的结构示意图。
在一种可实现的方式中,该电子设备1300可以是上述方法200中的第一设备,该电子设备1300可以执行上述方法200中第一设备所执行的步骤,具体可以参考上述方法200的描述,这里不再赘述。
在另一种可实现的方式中,该电子设备1300可以是上述方法200中的第二设备,该电子设备1300可以执行上述方法200中第二设备所执行的步骤,具体可以参考上述方法200的描述,这里不再赘述。
电子设备1300可以是手机、平板电脑、桌面型计算机、膝上型计算机、手持计算机、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备、人工智能(artificial intelligence,AI)设备、可穿戴式设备、车载设备、智能家居设备和/或智慧城市设备,本申请实施例对该电子设备的具体类型不作特殊限制。
电子设备1300可以包括处理器1310,外部存储器接口1320,内部存储器1321,通用串行总线(universal serial bus,USB)接口1330,充电管理模块1340,电源管理模块1341,电池1342,天线1,天线2,移动通信模块1350,无线通信模块1360,音频模块1370,扬声器1370A,受话器1370B,麦克风1370C,耳机接口1370D,传感器模块1380,按键1390,马达1391,指示器1392,摄像头1393,显示屏1394,以及用户标识模块(subscriber identification module,SIM)卡接口1395等。其中传感器模块1380可以包括压力传感器1380A,陀螺仪传感器1380B,气压传感器1380C,磁传感器1380D,加速度传感器1380E,距离传感器1380F,接近光传感器1380G,指纹传感器1380H,温度传感器1380J,触摸传感器1380K,环境光传感器1380L,骨传导传感器1380M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备1300的具体限定。在本申请另一些实施例中,电子设备1300可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器1310可以包括一个或多个处理单元,例如:处理器1310可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器1310中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器1310中的存储器为高速缓冲存储器。该存储器可以保存处理器1310刚用过或循环使用的指令或数据。如果处理器1310需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器1310的等待时间,因而提高了系统的效率。
在一些实施例中,处理器1310可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(serial clock line,SCL)。在一些实施例中,处理器1310可以包含多组I2C总线。处理器1310可以通过不同的I2C总线接口分别耦合触摸传感器1380K,充电器,闪光灯,摄像头1393等。例如:处理器1310可以通过I2C接口耦合触摸传感器1380K,使处理器1310与触摸传感器1380K通过I2C总线接口通信,实现电子设备1300的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器1310可以包含多组I2S总线。处理器1310可以通过I2S总线与音频模块1370耦合,实现处理器1310与音频模块1370之间的通信。在一些实施例中,音频模块1370可以通过I2S接口向无线通信模块1360传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块1370与无线通信模块1360可以通过PCM总线接口耦合。在一些实施例中,音频模块1370也可以通过PCM接口向无线通信模块1360传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器1310与无线通信模块1360。例如:处理器1310通过UART接口与无线通信模块1360中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块1370可以通过UART接口向无线通信模块1360传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器1310与显示屏1394,摄像头1393等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器1310和摄像头1393通过CSI接口通信,实现电子设备1300的拍摄功能。处理器1310和显示屏1394通过DSI接口通信,实现电子设备1300的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器1310与摄像头1393,显示屏1394,无线通信模块1360,音频模块1370,传感器模块1380等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口1330是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口1330可以用于连接充电器为电子设备1300充电,也可以用于电子设备1300与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备1300的结构限定。在本申请另一些实施例中,电子设备1300也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块1340用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块1340可以通过USB接口1330接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块1340可以通过电子设备1300的无线充电线圈接收无线充电输入。充电管理模块1340为电池1342充电的同时,还可以通过电源管理模块1341为电子设备供电。
电源管理模块1341用于连接电池1342,充电管理模块1340与处理器1310。电源管理模块1341接收电池1342和/或充电管理模块1340的输入,为处理器1310,内部存储器1321,显示屏1394,摄像头1393,和无线通信模块1360等供电。电源管理模块1341还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块1341也可以设置于处理器1310中。在另一些实施例中,电源管理模块1341和充电管理模块1340也可以设置于同一个器件中。
电子设备1300的无线通信功能可以通过天线1,天线2,移动通信模块1350,无线通信模块1360,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备1300中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块1350可以提供应用在电子设备1300上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块1350可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块1350可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块1350还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块1350的至少部分功能模块可以被设置于处理器1310中。在一些实施例中,移动通信模块1350的至少部分功能模块可以与处理器1310的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器1370A,受话器1370B等)输出声音信号,或通过显示屏1394显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器1310,与移动通信模块1350或其他功能模块设置在同一个器件中。
无线通信模块1360可以提供应用在电子设备1300上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块1360可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块1360经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器1310。无线通信模块1360还可以从处理器1310接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备1300的天线1和移动通信模块1350耦合,天线2和无线通信模块1360耦合,使得电子设备1300可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备1300通过GPU,显示屏1394,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏1394和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器1310可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏1394用于显示图像,视频等。显示屏1394包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备1300可以包括1个或N个显示屏1394,N为大于1的正整数。
电子设备1300可以通过ISP,摄像头1393,视频编解码器,GPU,显示屏1394以及应用处理器等实现拍摄功能。
ISP用于处理摄像头1393反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头1393中。
摄像头1393用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备1300可以包括1个或N个摄像头1393,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备1300在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备1300可以支持一种或多种视频编解码器。这样,电子设备1300可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备1300的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口1320可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备1300的存储能力。外部存储卡通过外部存储器接口1320与处理器1310通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器1321可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。内部存储器1321可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备1300使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器1321可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器1310通过运行存储在内部存储器1321的指令,和/或存储在设置于处理器中的存储器的指令,执行电子设备1300的各种功能应用以及数据处理。
电子设备1300可以通过音频模块1370,扬声器1370A,受话器1370B,麦克风1370C,耳机接口1370D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块1370用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块1370还可以用于对音频信号编码和解码。在一些实施例中,音频模块1370可以设置于处理器1310中,或将音频模块1370的部分功能模块设置于处理器1310中。
扬声器1370A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备1300可以通过扬声器1370A收听音乐,或收听免提通话。
受话器1370B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备1300接听电话或语音信息时,可以通过将受话器1370B靠近人耳接听语音。
麦克风1370C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风1370C发声,将声音信号输入到麦克风1370C。电子设备1300可以设置至少一个麦克风1370C。在另一些实施例中,电子设备1300可以设置两个麦克风1370C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备1300还可以设置三个,四个或更多麦克风1370C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口1370D用于连接有线耳机。耳机接口1370D可以是USB接口1330,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器1380A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器1380A可以设置于显示屏1394。压力传感器1380A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器1380A,电极之间的电容改变。电子设备1300根据电容的变化确定压力的强度。当有触摸操作作用于显示屏1394,电子设备1300根据压力传感器1380A检测所述触摸操作强度。电子设备1300也可以根据压力传感器1380A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器1380B可以用于确定电子设备1300的运动姿态。在一些实施例中,可以通过陀螺仪传感器1380B确定电子设备1300围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器1380B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器1380B检测电子设备1300抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备1300的抖动,实现防抖。陀螺仪传感器1380B还可以用于导航,体感游戏场景。
气压传感器1380C用于测量气压。在一些实施例中,电子设备1300通过气压传感器1380C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器1380D包括霍尔传感器。电子设备1300可以利用磁传感器1380D检测翻盖皮套的开合。在一些实施例中,当电子设备1300是翻盖机时,电子设备1300可以根据磁传感器1380D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器1380E可检测电子设备1300在各个方向上(一般为三轴)加速度的大小。当电子设备1300静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器1380F,用于测量距离。电子设备1300可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备1300可以利用距离传感器1380F测距以实现快速对焦。
接近光传感器1380G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备1300通过发光二极管向外发射红外光。电子设备1300使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备1300附近有物体。当检测到不充分的反射光时,电子设备1300可以确定电子设备1300附近没有物体。电子设备1300可以利用接近光传感器1380G检测用户手持电子设备1300贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器1380G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器1380L用于感知环境光亮度。电子设备1300可以根据感知的环境光亮度自适应调节显示屏1394亮度。环境光传感器1380L也可用于拍照时自动调节白平衡。环境光传感器1380L还可以与接近光传感器1380G配合,检测电子设备1300是否在口袋里,以防误触。
指纹传感器1380H用于采集指纹。电子设备1300可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器1380J用于检测温度。在一些实施例中,电子设备1300利用温度传感器1380J检测的温度,执行温度处理策略。例如,当温度传感器1380J上报的温度超过阈值,电子设备1300执行降低位于温度传感器1380J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备1300对电池1342加热,以避免低温导致电子设备1300异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备1300对电池1342的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器1380K,也称“触控器件”。触摸传感器1380K可以设置于显示屏1394,由触摸传感器1380K与显示屏1394组成触摸屏,也称“触控屏”。触摸传感器1380K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏1394提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器1380K也可以设置于电子设备1300的表面,与显示屏1394所处的位置不同。
骨传导传感器1380M可以获取振动信号。在一些实施例中,骨传导传感器1380M可以获取人体声部振动骨块的振动信号。骨传导传感器1380M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器1380M也可以设置于耳机中,结合成骨传导耳机。音频模块1370可以基于所述骨传导传感器1380M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器1380M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键1390包括开机键,音量键等。按键1390可以是机械按键。也可以是触摸式按键。电子设备1300可以接收按键输入,产生与电子设备1300的用户设置以及功能控制有关的键信号输入。
马达1391可以产生振动提示。马达1391可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏1394不同区域的触摸操作,马达1391也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器1392可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口1395用于连接SIM卡。SIM卡可以通过插入SIM卡接口1395,或从SIM卡接口1395拔出,实现和电子设备1300的接触和分离。电子设备1300可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口1395可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口1395可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口1395也可以兼容不同类型的SIM卡。SIM卡接口1395也可以兼容外部存储卡。电子设备1300通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备1300采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备1300中,不能和电子设备1300分离。
The software system of the electronic device 1300 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservices architecture, or a cloud architecture. The embodiments of the present invention take an Android system with a layered architecture as an example to describe the software structure of the electronic device 1300.

FIG. 14 is a schematic diagram of the software structure of the electronic device 1300 provided by an embodiment of the present invention.

The layered architecture divides the software into several layers, each with a clear role and division of labor. Layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.

The application layer may include a series of application packages.

As shown in FIG. 14, the application packages may include applications such as Camera, Gallery, Calendar, Phone, Maps, Navigation, WLAN, Bluetooth, Music, Video, and Messaging.

The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions.

As shown in FIG. 14, the application framework layer may include a window manager, content providers, a view system, a telephony manager, a resource manager, a notification manager, and so on.

The window manager is used to manage window programs. The window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, and so on.

Content providers are used to store and retrieve data and make that data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, the phone book, and so on.

The view system includes visual controls, such as controls for displaying text and controls for displaying images. The view system can be used to build applications. A display interface may consist of one or more views. For example, a display interface that includes an SMS notification icon may include a view displaying text and a view displaying an image.

The telephony manager is used to provide the communication functions of the electronic device 1300, for example, management of call states (including connected, hung up, and so on).

The resource manager provides various resources for applications, such as localized strings, icons, images, layout files, and video files.

The notification manager enables applications to display notification information in the status bar. It can be used to convey informational messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to announce download completion, message reminders, and so on. The notification manager may also present notifications in the status bar at the top of the system in the form of charts or scrolling text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of dialog windows. Examples include text prompts in the status bar, alert sounds, device vibration, and blinking indicator lights.
The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.

The core libraries consist of two parts: one part is the functions that the Java language needs to call, and the other part is the core libraries of Android.

The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine performs functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.

The system libraries may include multiple functional modules, for example: a surface manager, media libraries, a 3D graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).

The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications.

The media libraries support playback and recording of a variety of common audio and video formats, as well as static image files. The media libraries can support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.

The 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, layer processing, and the like.

The 2D graphics engine is a drawing engine for 2D drawing.

The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and sensor drivers.

The following exemplarily describes the workflow of the software and hardware of the electronic device 1300 in connection with a photo-capture scenario.
When the touch sensor 1380K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored in the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation being a single-tap operation, with the control corresponding to the tap being the camera application icon, as an example: the camera application calls an interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 1393.
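The event flow above (raw input event, hit-test against controls, dispatch to the camera) can be sketched as follows. This is a toy model: the event fields, the hit-test table, and the return strings standing in for driver calls are all illustrative assumptions, not Android's actual API.

```python
from dataclasses import dataclass

@dataclass
class RawInputEvent:
    """Kernel-layer raw input event: touch coordinates plus a timestamp."""
    x: int
    y: int
    timestamp_ms: int

# Hypothetical hit-test table: control name -> screen rectangle (x0, y0, x1, y1)
CONTROLS = {"camera_icon": (0, 0, 100, 100)}

def resolve_control(event):
    """Framework layer: map raw touch coordinates to the control that was hit."""
    for name, (x0, y0, x1, y1) in CONTROLS.items():
        if x0 <= event.x < x1 and y0 <= event.y < y1:
            return name
    return None

def dispatch(event):
    """If the tap landed on the camera icon, 'start' the camera via the driver."""
    if resolve_control(event) == "camera_icon":
        return "camera_driver_started"  # stands in for the kernel-layer driver call
    return "ignored"
```

A tap inside the icon's rectangle is routed to the camera; any other tap is ignored, mirroring how the framework layer decides which application receives the event.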
An embodiment of the present application further provides a computer-readable medium on which a computer program is stored; when the computer program is executed by a computer, the method in any of the foregoing method embodiments is implemented.

An embodiment of the present application further provides a computer program product; when the computer program product is executed by a computer, the method in any of the foregoing method embodiments is implemented.

Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.

Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.

In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For instance, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.

If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art can readily conceive of within the technical scope disclosed by the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (22)

  1. A data sharing method, characterized by comprising:
    a first device obtaining registration information of a first user from a second device, the registration information of the first user comprising an account of the first user or raw biometric data of the first user;
    the first device determining, according to the registration information of the first user, a data output level of the first device, where the data output level of the first device corresponds to different data types, and data of the different data types have different maximum risk levels;
    the first device obtaining a first data request message from the second device, the first data request message being used to request sharing of first data of the first user; and
    the first device determining that the first data belongs to data of the data type corresponding to the data output level of the first device, and sending the first data to the second device.
  2. The method according to claim 1, wherein, in the case where the registration information of the first user comprises the raw biometric data of the first user, the first device determining the data output level of the first device according to the registration information of the first user comprises:
    the first device recognizing the raw biometric data of the first user, and determining whether an account corresponding to the raw biometric data of the first user is obtained;
    in the case where the first device determines that no account corresponding to the raw biometric data of the first user is obtained, determining that the data output level of the first device is the fourth level;
    in the case where the first device obtains an account corresponding to the raw biometric data of the first user, determining whether the account obtained by the first device corresponding to the raw biometric data of the first user exists among all accounts stored in the second device;
    in the case where the account obtained by the first device corresponding to the raw biometric data of the first user exists in the second device, determining that the data output level of the first device is the second level; and
    in the case where the account obtained by the first device corresponding to the raw biometric data of the first user does not exist in the second device, determining that the data output level of the first device is the third level.
  3. The method according to claim 2, wherein, after the determining that the data output level of the first device is the third level, the method further comprises:
    in the case where the registration information of the first user is a 3D face, a fingerprint, an iris, or DNA of the first user, determining that the data output level of the first device is a first sub-level of the third level;
    in the case where the registration information of the first user is a 2D face or veins of the first user, determining that the data output level of the first device is a second sub-level of the third level; or
    in the case where the registration information of the first user is a voice or a signature of the first user, determining that the data output level of the first device is a third sub-level of the third level.
  4. The method according to claim 1, wherein, in the case where the registration information of the first user comprises the account of the first user, the first device determining the data output level of the first device according to the registration information of the first user comprises:
    the first device determining whether it stores the account of the first user;
    in the case where the first device stores the account of the first user, determining that the data output level of the first device is the second level; and
    in the case where the first device does not store the account of the first user, determining that the data output level of the first device is the fourth level.
  5. The method according to any one of claims 2 to 4, wherein the data type corresponding to the second level is a second type, and data of the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or
    the data type corresponding to the third level is a third type, and data of the third type includes video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or
    the data type corresponding to the fourth level is a fourth type, and data of the fourth type includes device capability data and/or device status data.
  6. The method according to claim 3, wherein the data type corresponding to the first sub-level is a first sub-type, and data of the first sub-type includes photo data, recorded video data, device capability data, and/or device status data; and/or
    the data type corresponding to the second sub-level is determined to be a second sub-type, and data of the second sub-type includes logistics data, schedule data, device capability data, and/or device status data; and/or
    the data type corresponding to the third sub-level is determined to be a third sub-type, and data of the third sub-type includes preference data, watched video data, device capability data, and/or device status data.
  7. The method according to any one of claims 1 to 6, wherein the method further comprises:
    the first device sending the data output level of the first device to the second device.
  8. A method for acquiring data, characterized by comprising:
    a second device obtaining registration information of a first user input by the first user, the registration information of the first user comprising raw biometric data of the first user;
    the second device sending the registration information of the first user to a first device;
    the second device receiving first information sent by the first device, the first information being used to indicate the account, determined by the first device, corresponding to the raw biometric data of the first user;
    the second device determining, according to the first information, a data output level of the first device;
    the second device obtaining a first data request message of the first user, the first data request message being used to request sharing of first data of the first user;
    the second device determining that the first data belongs to data of the data type corresponding to the data output level of the first device, where data of different data types have different maximum risk levels; and
    the second device sending the first data request message, and receiving the first data sent by the first device.
  9. The method according to claim 8, wherein the second device determining the data output level of the first device according to the first information comprises:
    the second device determining whether the second device stores the account, determined by the first device, corresponding to the raw biometric data of the first user;
    in the case where the second device stores the account, determined by the first device, corresponding to the raw biometric data of the first user, determining that the data output level of the first device is the second level; and
    in the case where the second device does not store the account, determined by the first device, corresponding to the raw biometric data of the first user, determining that the data output level of the first device is the third level.
  10. The method according to claim 9, wherein, after the determining that the data output level of the first device is the third level, the method further comprises:
    in the case where the registration information of the first user is a 3D face, a fingerprint, an iris, or DNA of the first user, determining that the data output level of the first device is a first sub-level of the third level;
    in the case where the registration information of the first user is a 2D face or veins of the first user, determining that the data output level of the first device is a second sub-level of the third level; or
    in the case where the registration information of the first user is a voice or a signature of the first user, determining that the data output level of the first device is a third sub-level of the third level.
  11. The method according to claim 9 or 10, wherein the data type corresponding to the second level is a second type, and data of the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or
    the data type corresponding to the third level is a third type, and data of the third type includes video data, logistics data, schedule data, preference data, device capability data, and/or device status data.
  12. The method according to claim 10, wherein the data type corresponding to the first sub-level is a first sub-type, and data of the first sub-type includes photo data, recorded video data, device capability data, and/or device status data; and/or
    the data type corresponding to the second sub-level is determined to be a second sub-type, and data of the second sub-type includes logistics data, schedule data, device capability data, and/or device status data; and/or
    the data type corresponding to the third sub-level is determined to be a third sub-type, and data of the third sub-type includes preference data, watched video data, device capability data, and/or device status data.
  13. The method according to any one of claims 8 to 12, wherein the method further comprises:
    the second device sending the data output level of the first device to the first device.
  14. A method for acquiring data, characterized by comprising:
    a second device obtaining registration information of a first user, the registration information of the first user comprising raw biometric data of the first user;
    the second device recognizing the raw biometric data of the first user, and determining whether the second device can obtain an account corresponding to the raw biometric data of the first user;
    in the case where the second device obtains an account corresponding to the raw biometric data of the first user, the second device sending second information to the first device, the second information being used to indicate that the second device has obtained the account corresponding to the raw biometric data of the first user; and
    the second device receiving third information sent by the first device, the third information being used to indicate whether the first device stores the account, obtained by the second device, corresponding to the raw biometric data of the first user;
    the second device determining, according to the third information, a data output level of the first device;
    the second device obtaining a data request message of the first user, the data request message being used to request the first device to share first data of the first user stored on the first device;
    the second device determining that the first data belongs to data of the data type corresponding to the data output level of the first device, where data of different data types have different maximum risk levels; and
    the second device sending the data request message of the first user to the first device, and receiving the first data sent by the first device.
  15. The method according to claim 14, wherein the second device determining the data output level of the first device according to the third information comprises:
    in the case where the first device stores the account, obtained by the second device, corresponding to the raw biometric data of the first user, determining that the data output level of the first device is the second level; and
    in the case where the first device does not store the account, obtained by the second device, corresponding to the raw biometric data of the first user, determining that the data output level of the first device is the fourth level.
  16. The method according to claim 14, wherein the method further comprises:
    in the case where the second device does not obtain an account corresponding to the raw biometric data of the first user, the second device sending the registration information of the first user to the first device;
    the second device receiving a third instruction sent by the first device, the third instruction being used to indicate that the first device has not obtained an account corresponding to the raw biometric data of the first user; and
    the second device determining, according to the third instruction, that the data output level of the first device is the fourth level.
  17. The method according to claim 14, wherein the method further comprises:
    in the case where the second device does not obtain an account corresponding to the raw biometric data of the first user, the second device sending the registration information of the first user to the first device;
    the second device receiving fourth information sent by the first device, the fourth information being used to indicate the account, determined by the first device, corresponding to the raw biometric data of the first user; and
    the second device determining, according to the fourth information, that the data output level of the first device is the third level.
  18. The method according to claim 17, wherein, after the determining that the data output level of the first device is the third level, the method further comprises:
    in the case where the registration information of the first user is a 3D face, a fingerprint, an iris, or DNA of the first user, determining that the data output level of the first device is a first sub-level of the third level;
    in the case where the registration information of the first user is a 2D face or veins of the first user, determining that the data output level of the first device is a second sub-level of the third level; or
    in the case where the registration information of the first user is a voice or a signature of the first user, determining that the data output level of the first device is a third sub-level of the third level.
  19. The method according to any one of claims 15 to 17, wherein the data type corresponding to the second level is a second type, and data of the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or
    the data type corresponding to the third level is a third type, and data of the third type includes video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or
    the data type corresponding to the fourth level is a fourth type, and data of the fourth type includes device capability data and/or device status data.
  20. The method according to claim 18, wherein the data type corresponding to the first sub-level is a first sub-type, and data of the first sub-type includes photo data, recorded video data, device capability data, and/or device status data; and/or
    the data type corresponding to the second sub-level is determined to be a second sub-type, and data of the second sub-type includes logistics data, schedule data, device capability data, and/or device status data; and/or
    the data type corresponding to the third sub-level is determined to be a third sub-type, and data of the third sub-type includes preference data, watched video data, device capability data, and/or device status data.
  21. The method according to any one of claims 14 to 20, wherein the method further comprises:
    the second device sending the data output level of the first device to the first device.
  22. A terminal device, characterized by comprising: a processor, the processor being coupled to a memory;
    the memory being configured to store a computer program; and
    the processor being configured to execute the computer program stored in the memory, so that the apparatus performs the method according to any one of claims 1 to 21.
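The level-determination and sharing logic of claims 1 to 6 can be sketched as follows. This is an illustrative model only: the integer level constants, the account-set representation, and the type names (drawn from the data categories listed in claims 5 and 6) are assumptions made for this example, not the claimed implementation.

```python
# Output levels from claims 2 and 4 (illustrative integer encoding).
SECOND_LEVEL, THIRD_LEVEL, FOURTH_LEVEL = 2, 3, 4

# Level -> shareable data types, following the type lists in claim 5.
LEVEL_DATA_TYPES = {
    SECOND_LEVEL: {"location", "video", "logistics", "schedule",
                   "preference", "capability", "status"},
    THIRD_LEVEL:  {"video", "logistics", "schedule",
                   "preference", "capability", "status"},
    FOURTH_LEVEL: {"capability", "status"},
}

def output_level(second_device_accounts, matched_account):
    """Claim 2's decision: matched_account is the account the first device
    recognised from the raw biometric data, or None if recognition failed."""
    if matched_account is None:
        return FOURTH_LEVEL            # biometric data matched no account
    if matched_account in second_device_accounts:
        return SECOND_LEVEL            # both devices know this account
    return THIRD_LEVEL                 # only the first device knows it

def may_share(level, data_type):
    """Claim 1's gate: share only data whose type matches the output level."""
    return data_type in LEVEL_DATA_TYPES[level]
```

For example, when the recognised account also exists on the requesting device, the level is the second level and general location data may be shared; when recognition fails entirely, only device capability and status data remain shareable.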
PCT/CN2020/128996 2020-01-23 2020-11-16 Method and apparatus for data sharing WO2021147483A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010076673.0 2020-01-23
CN202010076673.0A CN111339513B (zh) 2020-01-23 2020-01-23 Method and apparatus for data sharing

Publications (1)

Publication Number Publication Date
WO2021147483A1 true WO2021147483A1 (zh) 2021-07-29

Family

ID=71181431

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/128996 WO2021147483A1 (zh) 2020-01-23 2020-11-16 Method and apparatus for data sharing

Country Status (2)

Country Link
CN (1) CN111339513B (zh)
WO (1) WO2021147483A1 (zh)


Also Published As

Publication number Publication date
CN111339513A (zh) 2020-06-26
CN111339513B (zh) 2023-05-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20915992; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20915992; Country of ref document: EP; Kind code of ref document: A1)