CN111339513B - Data sharing method and device - Google Patents

Data sharing method and device

Info

Publication number
CN111339513B
Authority
CN
China
Prior art keywords
data
user
level
type
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010076673.0A
Other languages
Chinese (zh)
Other versions
CN111339513A (en
Inventor
阙鑫地
林嵩晧
林于超
张舒博
郑理文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010076673.0A priority Critical patent/CN111339513B/en
Publication of CN111339513A publication Critical patent/CN111339513A/en
Priority to PCT/CN2020/128996 priority patent/WO2021147483A1/en
Application granted granted Critical
Publication of CN111339513B publication Critical patent/CN111339513B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/45 Structures or tools for the administration of authentication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803 Home automation networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Abstract

The application provides a data sharing method and device. The method includes the following steps: the device holding the requested data obtains registration information of a first user from the device requesting the data, where the registration information includes an account number of the first user or raw data of a biometric feature of the first user; the device holding the data determines its data output level according to the registration information of the first user; the device holding the data then obtains a first data request message from the requesting device, which is used to request sharing of first data of the first user. In this way, the device holding the data can determine, according to the form of the first user's registration information, whether the first data falls within its data output level, realizing data sharing for the first user between the requesting device and the device holding the data and providing a differentiated, personalized experience for the first user.

Description

Data sharing method and device
Technical Field
The present application relates to the field of information processing, and more particularly, to a method and apparatus for data sharing.
Background
With the rapid development of intelligent devices and the internet of things (IoT), collaboration among multiple intelligent devices has become an industry consensus. To achieve such collaboration, user data and device data need to flow and be shared among multiple intelligent devices or multiple accounts. A multi-device scenario, such as a home, includes both private devices (e.g., mobile phones or watches) and shared home devices (e.g., televisions, vehicles, or speakers). Existing approaches cannot provide a differentiated, personalized experience that depends on which user is using a given smart device.
Disclosure of Invention
The data sharing method and device provided in this application can provide different users with differentiated, personalized experiences.
In a first aspect, a method for determining data sharing is provided, including: a first device obtains registration information of a first user from a second device, where the registration information of the first user includes an account number of the first user or raw data of a biometric feature of the first user; the first device determines the output level of its data according to the registration information of the first user, where the output levels of the first device's data correspond to different data types, and data of different data types carry different maximum risks; the first device obtains a first data request message of the first user from the second device, the first data request message being used to request sharing of first data of the first user; and the first device, having determined that the first data belongs to a data type corresponding to the output level of its data, sends the first data to the second device.
Optionally, the first user may have one or more account numbers.
For example, an account number may be a mobile phone number, a user name set by the user, or an email address.
The registration information of the first user is the registration information that the first user enters on the second device; that is, the first user uses the second device by means of this registration information.
The second device may be a device with a biometric recognition function, for example a mobile phone, a vehicle, or a tablet computer; alternatively, the second device may be a device that can collect biometric features but cannot recognize them, for example a watch, a speaker, or a television.
The first device is a device other than the second device in the same network as the second device; alternatively, the first device is a device selected according to the functions of all devices in that network, for example a device with a biometric recognition function.
Optionally, the devices in the network may be mutually trusted. For example, the network may be a home network or a work network in which the devices trust one another. A device in the network may be one connected to the network directly, or one added to the network by scanning a two-dimensional code (identification code), which may be preset.
The first device may also be a device other than the second device in the same group as the second device within a network; alternatively, the first device is a device selected according to the functions of all devices in that group, for example a device with a biometric recognition function.
Optionally, multiple groups may be preset in the network, and the devices within each group may be mutually trusted. For example, the network may be a home network in which a family group and a visitor group are preset; the family group includes the first device and the second device, which trust each other. Devices in the visitor group and devices in the family group do not trust each other, but non-private information can still be exchanged between them. A device in the family group may be connected to the home network directly or joined by scanning the two-dimensional code, whereas devices in the visitor group are only devices connected to the home network.
Optionally, the first data may be any data. For example, the first data may be the user's real-time location data, the locations of entertainment venues the user favors, photos taken, recorded videos, viewed videos, song-playing history, and so on.
First, the first device obtains the registration information of the first user from the second device, where the registration information includes the first user's account number or the raw data of the first user's biometric feature, and the first device determines the output level of its data accordingly; the output levels correspond to different data types, and data of different types carry different maximum risks. Second, the first device obtains from the second device a first data request message of the first user, which requests sharing of the first user's first data. Finally, if the first device determines that the first data belongs to a data type corresponding to its output level, it sends the first data to the second device. Because the first device determines its data output level from the form of the first user's registration information, and shares the first data generated by the first user on the first device only when that data's type matches the output level, a differentiated, personalized experience is provided for the first user.
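The exchange described above can be sketched as a toy model. All class, method, and data-type names below are illustrative and not taken from the patent text; registration here uses the account form, and only a small subset of data types is modeled.

```python
class FirstDevice:
    """Toy sketch of the first-aspect exchange (all names are illustrative)."""

    # Illustrative subset of the level -> allowed-data-type mapping.
    ALLOWED = {2: {"schedule", "device_status"}, 4: {"device_status"}}

    def __init__(self, accounts, data_by_type):
        self.accounts = accounts      # account numbers registered on this device
        self.data = data_by_type      # the first user's data, keyed by data type
        self.level = None             # output level, set at registration time

    def receive_registration(self, account):
        # Second level if the account is known here, otherwise fourth level.
        self.level = 2 if account in self.accounts else 4

    def handle_request(self, data_type):
        # Share the first data only if its type matches the output level.
        if data_type in self.ALLOWED[self.level]:
            return self.data.get(data_type)
        return None   # refuse the request
```

For example, after registering a known account, a request for `"schedule"` data succeeds; after registering an unknown account, the same request is refused because only device-status-style data is allowed at the fourth level.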
With reference to the first aspect, in some implementations of the first aspect, where the registration information of the first user includes raw biometric data, the first device determining the output level of its data according to the registration information includes: the first device recognizes the raw biometric data and determines whether an account corresponding to it is obtained; if the first device obtains no corresponding account, the output level of the first device's data is determined to be the fourth level; if the first device obtains a corresponding account, it determines whether that account also exists among the accounts stored on the second device; if it does, the output level is determined to be the second level; if it does not, the output level is determined to be the third level.
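A minimal sketch of this implementation, assuming biometric matching is abstracted as a lookup from raw samples to account numbers; the function and parameter names are ours, not the patent's:

```python
def output_level_from_biometric(raw_biometric, first_device_matches, second_device_accounts):
    """Return the first device's data output level for a biometric registration.

    `first_device_matches` stands in for biometric recognition on the first
    device: it maps raw biometric samples to account numbers."""
    account = first_device_matches.get(raw_biometric)
    if account is None:
        return 4      # no matching account obtained: fourth level
    if account in second_device_accounts:
        return 2      # account also stored on the second device: second level
    return 3          # account exists only on the first device: third level
```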
With reference to the first aspect, in some implementations of the first aspect, where the registration information of the first user includes raw biometric data, the first device determining the output level of its data according to the registration information includes: the first device determines, from the registration information, whether an account corresponding to the raw biometric data is obtained; if no corresponding account is obtained on the first device, the output level of the first device's data is determined to be the fourth level; if the first device obtains a corresponding account, it sends seventh information to the second device, indicating that the first device has obtained the account corresponding to the raw biometric data, and receives eighth information from the second device, indicating whether the second device stores the account determined by the first device. If the second device stores that account, the output level of the first device's data is determined to be the second level; if it does not, the output level is determined to be the third level.
With reference to the first aspect, in certain implementations of the first aspect, after the output level of the first device's data is determined to be the third level, the method further includes: if the registration information of the first user is the first user's 3D face, fingerprint, iris, or DNA, the output level is determined to be the first sub-level within the third level; if the registration information is the first user's 2D face or vein, the output level is determined to be the second sub-level within the third level; or, if the registration information is the first user's voice or signature, the output level is determined to be the third sub-level within the third level.
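The sub-level rule above can be written as a simple lookup; the modality spellings are illustrative shorthand, not the patent's terms:

```python
# Biometric modality -> sub-level within the third level, per the rule above.
SUBLEVEL = {
    "3d_face": 1, "fingerprint": 1, "iris": 1, "dna": 1,   # first sub-level
    "2d_face": 2, "vein": 2,                               # second sub-level
    "voice": 3, "signature": 3,                            # third sub-level
}

def third_level_sublevel(modality: str) -> int:
    """Return the sub-level (1, 2, or 3) for a biometric modality."""
    return SUBLEVEL[modality]
```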
By subdividing the third level according to the specific biometric feature the user employs, different personalized experiences can be provided depending on which biometric is used.
With reference to the first aspect, in some implementations of the first aspect, where the registration information of the first user includes the first user's account number, the first device determining the output level of its data according to the registration information includes: the first device determines whether it stores the first user's account number; if the first device stores the account number, the output level of the first device's data is determined to be the second level; if it does not, the output level is determined to be the fourth level.
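This account-form rule is the simplest of the implementations; a one-line sketch, with names that are ours rather than the patent's:

```python
def output_level_from_account(account: str, first_device_accounts: set) -> int:
    """Second level if the first device stores the first user's account number,
    otherwise fourth level."""
    return 2 if account in first_device_accounts else 4
```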
The output level of the first device's data can be understood as the level at which data on the first device may be shared with other devices. This level is set relative to the device requesting the data, and may be different, or the same, for different requesting devices. When the requesting device is the first device itself, the output level is the first level. Otherwise, the output level is determined by the form of registration information with which the requesting device accesses the data on the first device: if the requesting device accesses the first device with the same account number used on the first device, the output level for that requesting device is the second level; if it accesses the first device with raw biometric data corresponding to the account used on the first device, the output level is the third level; and if it uses neither the same account number nor the same raw biometric data, the output level is the fourth level.
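The four cases in the paragraph above condense into one decision function; this is a sketch, and the predicate names are ours:

```python
def output_level(requester_is_first_device: bool,
                 same_account: bool,
                 same_biometric: bool) -> int:
    """Output level of the first device's data relative to the requesting device."""
    if requester_is_first_device:
        return 1   # the first device accessing its own data
    if same_account:
        return 2   # same account number as used on the first device
    if same_biometric:
        return 3   # raw biometric data matching that account
    return 4       # neither the same account nor the same biometric
```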
Where the requesting device is not the first device itself, the requesting device may be the second device, and the device whose data is requested may be the first device.
Dividing the output level of the first device's data according to the form of registration information the user employs allows different personalized experiences to be provided to the user.
With reference to the first aspect, in certain implementations of the first aspect, the data type corresponding to the second level is a second type, and data of the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or the data type corresponding to the third level is a third type, and data of the third type includes video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or the data type corresponding to the fourth level is a fourth type, and data of the fourth type includes device capability data and/or device status data.
The risk of data of the second type, the third type, and the fourth type decreases in that order.
The general location data may be medium-impact personal data; video data, logistics data, schedule data, and preference data may be low-impact personal data; and device capability data and/or device status data are non-personal data.
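Putting the second-, third-, and fourth-level type lists together, the sharing check in the first aspect reduces to a set-membership test; the type names below are illustrative shorthand for the categories listed above:

```python
# Level -> data types that may leave the first device, per the lists above.
ALLOWED_TYPES = {
    2: {"general_location", "video", "logistics", "schedule", "preference",
        "device_capability", "device_status"},
    3: {"video", "logistics", "schedule", "preference",
        "device_capability", "device_status"},
    4: {"device_capability", "device_status"},
}

def may_share(data_type: str, output_level: int) -> bool:
    """True if data of this type may be sent at the given output level."""
    return data_type in ALLOWED_TYPES[output_level]
```

Note how the allowed sets shrink as the level rises from second to fourth, matching the decreasing risk ordering above.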
With reference to the first aspect, in some implementations of the first aspect, the data type corresponding to the first sub-level is a first sub-type, and the data corresponding to the first sub-type includes photo data, recorded video data, device capability data, and/or device status data; and/or determining the data type corresponding to the second sub-level as a second sub-type, wherein the data corresponding to the second sub-type comprises logistics data, schedule data, equipment capacity data and/or equipment state data; and/or, determining the data type corresponding to the third sub-level as a third sub-type, wherein the data corresponding to the third sub-type comprises preference data, viewing video data, equipment capability data and/or equipment state data.
The risk of the data corresponding to the first subtype, the risk of the data corresponding to the second subtype and the risk of the data corresponding to the third subtype are sequentially reduced.
The data output grades of the first equipment are different, and the data types corresponding to the data output grades of the first equipment are also different, so that different personalized experiences can be provided for users.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: the first device sends the output level of its data to the second device.
With reference to the first aspect, in certain implementations of the first aspect, the biometric features include one or more of: physical biological characteristics, soft biological characteristics, behavioral biological characteristics.
With reference to the first aspect, in certain implementations of the first aspect, the physical biological features include: face, fingerprint, iris, retina, DNA, skin, hand, or vein; the behavioral biometric includes: sound, signature or gait; the soft biological features include: sex, age, height or weight.
In a second aspect, a method for acquiring data is provided, including: a second device obtains registration information of a first user entered by the first user, where the registration information includes an account number of the first user or raw data of a biometric feature of the first user; the second device sends the registration information to a first device; the second device obtains a first data request message of the first user, used to request sharing of first data of the first user; and the second device sends the first data request message and receives the first data sent by the first device.
Optionally, the second device may recognize the first user's request for the first data through a voice recognition function; alternatively, the second device may obtain the request through the first user's input.
With reference to the second aspect, in certain implementations of the second aspect, where the entered registration information of the first user includes raw biometric data, the method further includes: the second device recognizes the raw biometric data and determines whether a corresponding account is obtained; if the second device obtains a corresponding account, it sends fifth information to the first device, indicating that the second device has obtained the account corresponding to the raw biometric data; if it does not, the second device sends sixth information to the first device, indicating the raw biometric data of the first user.
With reference to the second aspect, in certain implementations of the second aspect, the method further includes: the second device receives the output level of the first device's data sent by the first device, where the output levels correspond to different data types, and data of different types carry different maximum risks.
In a third aspect, a method for acquiring data is provided, including: a second device obtains registration information entered by a first user, where the registration information includes raw data of a biometric feature of the first user; the second device sends the registration information to a first device; the second device receives first information sent by the first device, indicating the account corresponding to the raw biometric data as determined by the first device; the second device determines the output level of the first device's data according to the first information; the second device obtains a first data request message of the first user, used to request sharing of first data of the first user; the second device determines that the first data belongs to a data type corresponding to the output level of the first device's data, where data of different types carry different maximum risks; and the second device sends the first data request message and receives the first data sent by the first device.
Optionally, the second device may recognize the first user's request for the first data through a voice recognition function; alternatively, the second device may obtain the request through the first user's input.
With reference to the third aspect, in some implementations of the third aspect, where the registration information of the first user includes raw biometric data, the second device determining the output level of the first device's data according to the first information includes: the second device determines whether it stores the account corresponding to the raw biometric data as determined by the first device; if the second device stores that account, the output level of the first device's data is determined to be the second level; if it does not, the output level is determined to be the third level.
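In the third aspect the roles are reversed, so this check runs on the second device rather than the first; a sketch, with names that are ours:

```python
def second_device_determines_level(account_from_first_info: str,
                                   second_device_accounts: set) -> int:
    """Output level of the first device's data, decided by the second device
    from the account carried in the first information."""
    return 2 if account_from_first_info in second_device_accounts else 3
```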
With reference to the third aspect, in certain implementations of the third aspect, after the determining that the output level of the data of the first device is the third level, the method further includes: determining that the output level of the data of the first device is a first sub-level in a third level in the case that the registration information of the first user is a 3D face, fingerprint, iris or DNA of the first user; determining that the output level of the data of the first device is a second sub-level in a third level if the registration information of the first user is a 2D face or vein of the first user; or if the registration information of the first user is the voice or signature of the first user, determining that the output level of the data of the first device is a third sub-level in a third level.
With reference to the third aspect, in some implementations of the third aspect, the data type corresponding to the second level is a second type, and the data corresponding to the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or the data type corresponding to the third level is a third type, and the data corresponding to the third type comprises video data, logistics data, schedule data, preference data, equipment capability data and/or equipment state data.
With reference to the third aspect, in some implementations of the third aspect, the data type corresponding to the first sub-level is a first sub-type, and the data corresponding to the first sub-type includes photo data, recorded video data, device capability data, and/or device status data; and/or determining the data type corresponding to the second sub-level as a second sub-type, wherein the data corresponding to the second sub-type comprises logistics data, schedule data, equipment capacity data and/or equipment state data; and/or, determining the data type corresponding to the third sub-level as a third sub-type, wherein the data corresponding to the third sub-type comprises preference data, viewing video data, equipment capability data and/or equipment state data.
With reference to the third aspect, in certain implementations of the third aspect, the method further includes: and the second device sends the output grade of the data of the first device to the first device.
In a fourth aspect, a method for data sharing is provided, including: a first device receives registration information of a first user sent by a second device, where the registration information includes raw data of a biometric feature of the first user; the first device recognizes the biometric feature and determines whether an account corresponding to the raw biometric data is obtained; if the first device obtains a corresponding account, it sends first information to the second device, indicating the account corresponding to the raw biometric data as determined by the first device; the first device obtains a first data request message of the first user from the second device, used to request sharing of first data of the first user; and the first device sends the first data to the second device.
With reference to the fourth aspect, in certain implementations of the fourth aspect, the method further includes: if the first device determines that no account corresponding to the raw biometric data is obtained, the first device sends a first instruction to the second device, indicating that the first device has not obtained an account corresponding to the raw biometric data of the first user.
With reference to the fourth aspect, in certain implementations of the fourth aspect, the method further includes: the first device receives the output level of the first device's data sent by the second device, where the output levels correspond to different data types, and data of different types carry different maximum risks.
In a fifth aspect, a method for determining an output level of data is provided, including: a second device obtains registration information of a first user input by the first user, where the registration information includes an account of the first user; the second device sends the registration information of the first user to a first device; the second device receives a second instruction sent by the first device, where the second instruction indicates whether the registration information of the first user exists on the first device; and the second device determines the output level of the data of the first device according to the second instruction.
With reference to the fifth aspect, in some implementations of the fifth aspect, the determining, by the second device, an output level of data of the first device according to the second instruction includes: in a case that the first device stores an account corresponding to the raw data of the biometric feature of the first user, determining that the output level of the data of the first device is a second level; and in a case that the first device does not store such an account, determining that the output level of the data of the first device is a fourth level.
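As a minimal sketch of this level decision (the names `OutputLevel` and `determine_output_level` are illustrative, not from the original disclosure):

```python
from enum import IntEnum

class OutputLevel(IntEnum):
    """Hypothetical output levels; a lower level permits sharing more data types."""
    SECOND = 2   # the first device stores a matching account
    FOURTH = 4   # the first device stores no matching account

def determine_output_level(account_found: bool) -> OutputLevel:
    # Second level when the first device stores an account corresponding to the
    # user's registration information; fourth (most restrictive) level otherwise.
    return OutputLevel.SECOND if account_found else OutputLevel.FOURTH
```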
With reference to the fifth aspect, in certain implementation manners of the fifth aspect, the data type corresponding to the second level is a second type, and the data corresponding to the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data and/or device status data; and/or the data type corresponding to the fourth level is a fourth type, and the data corresponding to the fourth type includes device capability data and/or device status data.
With reference to the fifth aspect, in certain implementations of the fifth aspect, the method further includes: and the second device sends the output grade of the data of the first device to the first device.
With reference to the fifth aspect, in certain implementations of the fifth aspect, the method further includes: the second device obtains a first data request message of the first user, wherein the first data request message is used for requesting sharing of first data of the first user; the second device sends the first data request message.
With reference to the fifth aspect, in certain implementations of the fifth aspect, before the second device sends the first data request message, the method further includes: and the second device determines that the first data belongs to the data of the data type corresponding to the output level of the data of the first device.
In a sixth aspect, a method for determining an output level of data is provided, including: a first device receives registration information of a first user sent by a second device, where the registration information includes an account of the first user; the first device determines whether the account of the first user exists on the first device; and the first device sends a second instruction to the second device, where the second instruction indicates whether the account of the first user exists on the first device.
With reference to the sixth aspect, in certain implementations of the sixth aspect, the method further includes: and the first equipment receives the output grade of the data of the first equipment, which is sent by the second equipment.
With reference to the sixth aspect, in certain implementations of the sixth aspect, the method further includes: the first device receives a first data request message of the first user sent by the second device, wherein the first data request message is used for requesting sharing of first data of the first user; the first device determines that the first device stores the first data, and the first device shares the first data to the second device.
With reference to the sixth aspect, in certain implementation manners of the sixth aspect, after the first device receives the first data request message of the first user sent by the second device, the method further includes: and the first device determines that the first data belongs to data of a data type corresponding to the output level of the data of the first device.
In a seventh aspect, a method for acquiring data is provided, including: a second device obtains registration information of a first user, where the registration information includes raw data of a biometric feature of the first user; the second device performs recognition on the raw data and determines whether it can obtain an account corresponding to the raw data; in a case that the second device obtains the account, the second device sends second information to a first device, where the second information indicates the account obtained by the second device; the second device receives third information sent by the first device, where the third information indicates whether the account obtained by the second device exists on the first device; the second device determines an output level of data of the first device according to the third information; the second device obtains a data request message of the first user, where the data request message requests the first device to share first data stored on the first device by the first user; the second device determines that the first data belongs to a data type corresponding to the output level of the data of the first device, where data of different data types carry different maximum risks; and the second device sends the data request message to the first device and receives the first data sent by the first device.
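The seventh-aspect flow above, seen from the second device, can be sketched as follows. All class and function names are hypothetical, the message exchanges between the devices are collapsed into direct dictionary lookups, and the allowed-type tables are simplified stand-ins for the level-to-type mapping described in the implementations below:

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    """Toy device model: accounts maps raw biometric data to an account;
    data maps an account to that user's stored records."""
    accounts: dict = field(default_factory=dict)
    data: dict = field(default_factory=dict)

def request_first_data(second: Device, first: Device, biometric: str, data_type: str):
    """Sketch of the seventh-aspect flow from the second device's point of view."""
    account = second.accounts.get(biometric)          # local biometric recognition
    if account is None:
        return None                                   # other branches omitted here
    # The second/third information exchange collapsed into one membership check:
    level = 2 if account in first.data else 4
    allowed = {2: {"location", "video", "schedule", "device_state"},
               4: {"device_capability", "device_state"}}[level]
    if data_type not in allowed:
        return None                                   # type not permitted at this level
    return first.data.get(account, {}).get(data_type) # first device shares the data
```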
With reference to the seventh aspect, in some implementations of the seventh aspect, the determining, by the second device, an output level of data of the first device according to the third information includes: determining that the output level of the data of the first device is a second level in a case that the first device stores an account corresponding to the raw data of the biometric feature of the first user obtained by the second device; and determining that the output level of the data of the first device is a fourth level in a case that the first device does not store such an account.
With reference to the seventh aspect, in certain implementations of the seventh aspect, the method further includes: the second device sends registration information of the first user to the first device under the condition that the second device does not obtain an account corresponding to the original data of the biological characteristics of the first user; the second device receives a third instruction sent by the first device, wherein the third instruction is used for indicating that the first device does not obtain an account corresponding to the original data of the biological characteristics of the first user; and the second device determines that the output level of the data of the first device is a fourth level according to the third instruction.
With reference to the seventh aspect, in certain implementations of the seventh aspect, the method further includes: the second device sends registration information of the first user to the first device under the condition that the second device does not obtain an account corresponding to the original data of the biological characteristics of the first user; the second device receives fourth information sent by the first device, wherein the fourth information is used for indicating an account corresponding to the original data of the biological characteristics of the first user determined by the first device; and the second device determines that the output grade of the data of the first device is a third grade according to the fourth information.
With reference to the seventh aspect, in certain implementation manners of the seventh aspect, after the determining that the output level of the data of the first device is the third level, the method further includes: determining that the output level of the data of the first device is a first sub-level in a third level in the case that the registration information of the first user is a 3D face, fingerprint, iris or DNA of the first user; determining that the output level of the data of the first device is a second sub-level in a third level if the registration information of the first user is a 2D face or vein of the first user; or if the registration information of the first user is the voice or signature of the first user, determining that the output level of the data of the first device is a third sub-level in a third level.
With reference to the seventh aspect, in certain implementation manners of the seventh aspect, the data type corresponding to the second level is a second type, and the data corresponding to the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data and/or device status data; and/or the data type corresponding to the third level is a third type, and the data corresponding to the third type includes video data, logistics data, schedule data, preference data, device capability data and/or device status data; and/or the data type corresponding to the fourth level is a fourth type, and the data corresponding to the fourth type includes device capability data and/or device status data.
With reference to the seventh aspect, in some implementations of the seventh aspect, the data type corresponding to the first sub-level is a first sub-type, and the data corresponding to the first sub-type includes photo data, recorded video data, device capability data and/or device status data; and/or the data type corresponding to the second sub-level is a second sub-type, and the data corresponding to the second sub-type includes logistics data, schedule data, device capability data and/or device status data; and/or the data type corresponding to the third sub-level is a third sub-type, and the data corresponding to the third sub-type includes preference data, viewing video data, device capability data and/or device status data.
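The modality-to-sub-level rule and the sub-level-to-type rule above can be captured as lookup tables. This is an illustrative sketch only; the identifiers and the snake_case type names are assumptions, not terms from the disclosure:

```python
# Which third-level sub-level a given biometric modality maps to.
SUB_LEVEL_BY_MODALITY = {
    "3d_face": 1, "fingerprint": 1, "iris": 1, "dna": 1,   # first sub-level
    "2d_face": 2, "vein": 2,                               # second sub-level
    "voice": 3, "signature": 3,                            # third sub-level
}

# Which data types each sub-level permits sharing.
DATA_TYPES_BY_SUB_LEVEL = {
    1: {"photo", "recorded_video", "device_capability", "device_state"},
    2: {"logistics", "schedule", "device_capability", "device_state"},
    3: {"preference", "viewing_video", "device_capability", "device_state"},
}

def sharable_types(modality: str) -> set:
    """Data types sharable at the third level for a given biometric modality."""
    return DATA_TYPES_BY_SUB_LEVEL[SUB_LEVEL_BY_MODALITY[modality]]
```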
With reference to the seventh aspect, in certain implementations of the seventh aspect, the method further includes: and the second device sends the output grade of the data of the first device to the first device.
In an eighth aspect, a method for data sharing is provided, including: the first device receives second information sent by the second device, wherein the second information is used for indicating an account corresponding to the original data of the biological characteristics of the first user, which is obtained by identifying the original data of the biological characteristics of the first user by the second device; the first device searches whether an account indicated by the second information exists on the first device; the first equipment sends third information to the second equipment, wherein the third information is used for indicating whether an account indicated by the second information exists on the first equipment or not; the first device obtains a first data request message of the first user from the second device, wherein the first data request message is used for requesting sharing of first data of the first user; the first device sends the first data to the second device.
With reference to the eighth aspect, in certain implementations of the eighth aspect, the method further includes: the method comprises the steps that first equipment receives registration information of a first user sent by second equipment, wherein the registration information of the first user comprises original data of biological characteristics of the first user; the first device identifies the original data of the biological characteristics of the first user and determines whether an account corresponding to the original data of the biological characteristics of the first user is obtained or not; in the case that the first device does not obtain the account corresponding to the original data of the biological characteristics of the first user, the first device sends a third instruction to the second device, wherein the third instruction is used for indicating that the first device does not obtain the account corresponding to the original data of the biological characteristics of the first user; and under the condition that the first device obtains the account corresponding to the original data of the biological characteristics of the first user, the first device sends fourth information to the second device, wherein the fourth information is used for indicating the account corresponding to the original data of the biological characteristics of the first user determined by the first device.
With reference to the eighth aspect, in certain implementations of the eighth aspect, the method further includes: and the first equipment receives the output grade of the data of the first equipment, which is sent by the second equipment.
In a ninth aspect, an apparatus for data sharing is provided, including: a processor coupled to a memory, the memory being configured to store a computer program; the processor is configured to execute the computer program stored in the memory, causing the apparatus to perform the methods described in the first, fourth, sixth, and eighth aspects and in certain implementations thereof.
In a tenth aspect, an apparatus for data sharing is provided, including: a processor coupled to a memory, the memory being configured to store a computer program; the processor is configured to execute the computer program stored in the memory, causing the apparatus to perform the methods described in the second, third, fifth, and seventh aspects and in certain implementations thereof.
In an eleventh aspect, there is provided a computer readable medium comprising a computer program which, when run on a computer, causes the computer to perform the method described in the first to eighth aspects and in certain implementations of the first to eighth aspects.
In a twelfth aspect, there is provided a system chip comprising an input-output interface and at least one processor for invoking instructions in a memory to perform the operations of the methods of the above-described first through eighth aspects and in some implementations of the first through eighth aspects.
Optionally, the system chip may further include at least one memory for storing instructions for execution by the processor and a bus.
Drawings
FIG. 1 is an exemplary diagram of an application scenario in which the methods and apparatus of embodiments of the present application may be applied.
Fig. 2 is a schematic flowchart of a method 200 for data sharing according to an embodiment of the present application.
Fig. 3 is a schematic diagram of authority levels of a device accessing a database and sharable data corresponding to each authority level according to an embodiment of the present application.
Fig. 4 is a specific schematic flow chart of step 220 in the method 200 provided in an embodiment of the present application.
Fig. 5 is another specific schematic flow chart of step 220 in the method 200 provided by an embodiment of the present application.
Fig. 6 is yet another specific schematic flow chart of step 220 in the method 200 provided by an embodiment of the present application.
Fig. 7 is yet another specific schematic flow chart of step 220 in the method 200 provided by an embodiment of the present application.
Fig. 8 is a specific schematic flow chart of step 240 in the method 200 provided in an embodiment of the present application.
Fig. 9 is a schematic diagram of an example of data sharing among multiple devices according to an embodiment of the present application.
Fig. 10 is a schematic diagram of another example of data sharing among multiple devices according to an embodiment of the present application.
Fig. 11 is a schematic diagram of another example of data sharing among multiple devices according to an embodiment of the present application.
Fig. 12 is a schematic diagram of another example of data sharing among multiple devices according to an embodiment of the present application.
Fig. 13 is a schematic hardware structure of an electronic device according to an embodiment of the present application.
Fig. 14 is a schematic software structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the present application will be described below with reference to the accompanying drawings.
FIG. 1 is an exemplary diagram of an application scenario in which the methods and apparatus of embodiments of the present application may be applied. The scenario shown in fig. 1 includes a cell phone 101, a vehicle 102, a tablet computer (pad) 103, a watch 104, a cell phone 111, a cell phone 121, a watch 122, a sound 123, a television 131, a cell phone 132, a tablet computer 133, a watch 134, a sound 135, and a vehicle 136. Account B is registered on each of the mobile phone 101, the vehicle 102, the tablet computer 103, and the watch 104. Only account A of user 1 is registered on the mobile phone 111, and/or biometric features of user 1 exist on the mobile phone 111. Account C is registered on the mobile phone 121, the watch 122, and the sound 123, and raw biometric data of the same user exists on these three devices. No account is registered on the television 131, the mobile phone 132, the tablet computer 133, the watch 134, the sound 135, or the vehicle 136, and these devices hold no raw biometric data of a common user; that is, each of them may be used by a single user or by multiple users.
It should be understood that the devices shown in fig. 1 are only one example, and that more or fewer devices may be included in the system. For example, it may include only the television 131, the cell phone 121, the cell phone 111, the tablet 103, the watch 104, the stereo 123, and the vehicle 136.
The mobile phone 101, the mobile phone 111, the mobile phone 121, the mobile phone 132, the tablet computer 103, the tablet computer 133, the watch 104, the watch 122, the watch 134, the vehicle 102 and the vehicle 136 in fig. 1 may be terminal devices with biometric feature recognition functions, for example, the mobile phone 101 may perform face recognition and the mobile phone 121 may perform voiceprint recognition.
Of course, the mobile phone 101 and the mobile phone 121 may also recognize the same biometric. For example, both mobile phone 101 and mobile phone 121 can perform face recognition; for another example, both cell phone 101 and cell phone 121 may perform voiceprint recognition.
The terminal device in the embodiment of the present application may be a mobile phone (mobile phone), a tablet computer, a computer with a wireless transceiving function, a Virtual Reality (VR) terminal, an augmented reality (augmented reality, AR) terminal, a wireless terminal in industrial control (industrial control), a wireless terminal in unmanned driving (self driving), a wireless terminal in remote medical (remote medical), a wireless terminal in smart grid (smart grid), a wireless terminal in transportation security (transportation safety), a wireless terminal in smart city (smart city), a wireless terminal in smart home (smart home), or the like.
The television 131, the sound 123, and the sound 135 in fig. 1 may represent devices that can collect biometric features but do not have a biometric recognition function. For example, the television 131 can collect a face image and a person's voice, but has neither a face recognition function nor a voiceprint recognition function; likewise, the sound 123 and the sound 135 can collect face images and voices, but can perform neither face recognition nor voiceprint recognition.
The biological features in embodiments of the present application may include one or more of the following: physical biological characteristics, behavioral biological characteristics, and soft biological characteristics. The physical biological features may include: face, fingerprint, iris, retina, deoxyribonucleic acid (deoxyribonucleic acid, DNA), skin, hand, vein. Behavioral biometric characteristics may include: voiceprint, signature, gait. Soft biological features may include: sex, age, height, weight.
Each of the devices shown in fig. 1 may communicate over a network. Optionally, the network comprises a wireless fidelity (wireless fidelity, WI-FI) network or a bluetooth network. It will be appreciated that the network may also include wireless communication networks such as 2G, 3G, 4G, 5G communication networks. The network may be in particular a working network or a home network. For example, after the television 131 collects the face image and sound of its user, the face image and sound of the user and the data generated on the television 131 are stored, and the face image can be sent to the mobile phone 101 and the sound information can be sent to the mobile phone 121 and the sound 135 through the above network.
With the rapid development of smart devices and the internet of things (IoT), collaboration among multiple smart devices has become an industry consensus. To achieve such collaboration, user data and device data should be able to flow and be shared among multiple smart devices or multiple accounts. In a multi-device scenario such as the home, both private devices (e.g., a mobile phone or a watch) and shared home devices (e.g., a television, a vehicle, or a speaker) are present. Because several people use a shared home device, the device currently cannot identify the individual user, and therefore cannot provide each user with a different personalized experience.
Therefore, a method for sharing data across devices is needed, one that provides different personalized experiences for users according to the different forms of registration information on the devices they use, thereby realizing data sharing among multiple smart devices.
Fig. 2 shows a method 200 for data sharing according to an embodiment of the present application. It should be understood that fig. 2 illustrates steps or operations of the method, but these steps or operations are only examples, and the technical solution proposed in the present application may also perform other operations or variations of the operations in fig. 2.
Hereinafter, the first device and the second device may be terminal devices, which may be any of the devices shown in fig. 1. The first user may be any user who uses the first device and the second device. There may be multiple first devices; in that case, each of them may perform the steps performed by the first device in the following method.
In step 210, the second device obtains registration information of the first user input by the first user.
Wherein the registration information of the first user may include an account number of the first user and/or raw data of a biometric of the first user.
The account number may be registered by the first user; alternatively, the account may not be registered by the first user, but the first user uses the second device through the account.
The account number may be a mobile phone number, a user name set by a user, a mailbox, and the like, for example.
The raw data of the above-described biological characteristics can be understood as data of unprocessed biological characteristics.
The first user uses the second device through the registration information input by the first user. For example, the first user may use the second device through an account (e.g., account 1), or through the raw data of the first user's biometric feature (e.g., a face image of the user).
The second device may be a device with a biometric function, for example, as shown in fig. 1, the second device may be a mobile phone 111, a mobile phone 101, a vehicle 102, a tablet computer 103, a mobile phone 121, a mobile phone 132, a tablet computer 133, or a vehicle 136; alternatively, the second device may be a device that may collect biometric characteristics but does not have biometric functionality, for example, as shown in fig. 1, the second device may be a watch 104, a watch 122, a sound 123, a television 131, a watch 134, or a sound 135.
Biometric identification technology (biometric identification technology) authenticates identity using biometric features of the human body. More specifically, it combines computing with optics, acoustics, biological sensors, and biostatistics, and identifies a person by means of the inherent physiological and behavioral characteristics of the human body.
In the case where the second device has a biometric recognition function, the second device saves the recognition result (i.e., which user it is) after recognizing the raw data of the user's biometric feature.
The above-described biometric recognition result may be understood as a biometric identity obtained from recognition. For example, before the mobile phone 101 identifies its user, it collects a face image of owner 1 of the mobile phone 101, converts the face image into digital codes, and combines these codes into a face feature template of owner 1. When identifying a user, the mobile phone 101 collects the raw data of that user's face image and compares it with the face feature template of owner 1 stored in its database; if they match, the mobile phone 101 determines that the user is owner 1.

For another example, before the sound 123 identifies its user, it collects the voice of owner 2 of the sound 123, converts the voice into digital codes, and combines these codes into a voiceprint feature template of owner 2. When identifying a user, the sound 123 collects the raw data of that user's voice and compares it with the voiceprint feature template of owner 2 stored in its database; if they match, the sound 123 determines that the user is owner 2.

These identity determinations (that the user of the mobile phone 101 is owner 1, and that the user of the sound 123 is owner 2) are the biometric recognition results, and the mobile phone 101 and the sound 123 may save them.
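The enroll-then-match procedure described above can be sketched as a toy template matcher. This is not the patent's algorithm: real systems compare learned feature embeddings, while this sketch uses plain vectors with cosine similarity, and every identifier is hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(sample, templates, threshold=0.9):
    """Return the enrolled identity whose template best matches the sample,
    or None if no template clears the threshold.

    `templates` maps identity -> enrolled feature vector (the "feature template"
    built from the owner's face image or voice in the text above)."""
    best_id, best_score = None, threshold
    for identity, template in templates.items():
        score = cosine_similarity(sample, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```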
When a user uses a device through registration information input by the user, that registration information is stored in one-to-one correspondence with the data generated during use. The registration information may be an account or the raw data of a biometric feature; the account may be, for example, a mobile phone number, a user name set by the user, or a mailbox.
Whether the user uses the device through an account or through the raw data of the user's biometric feature, the data generated on the device is stored in the device's memory keyed by the account in use.

In the case where the registration information is an account, the data the user generates through that account is stored under that account. When several accounts use the same device, the data generated during each account's use is likewise stored per account; each account may correspond to its own storage engine, through which the data stored under that account can be accessed.

In the case where the registration information is the raw data of the user's biometric feature, an account corresponding to that raw data exists on the device, and the data is stored in the database under that corresponding account. The raw biometric data and the account may be in a one-to-one correspondence (e.g., one face corresponds to one account), or in a many-to-one relationship (e.g., multiple fingerprints correspond to one account, or one face and one fingerprint correspond to one account).
For example, a biometric feature such as a fingerprint, an iris, or a face that is enrolled while account A is in use on a device can be bound to account A, and a biometric feature enrolled while account B is in use can be bound to account B.
For example, as shown in fig. 1, when the user 1 uses the mobile phone 111 by the account a, data stored on the mobile phone 111 by the user 1 by the account a is stored in the database by the account a. The data stored by the user 1 on the mobile phone 111 through the account a may include a photo taken by the user 1 on the mobile phone 111 through the account a, a historical song list stored by the user 1 on the mobile phone 111 through the account a, historical position data of the user 1 stored by the user 1 on the mobile phone 111 through the account a, and the like. When the user 2 uses the mobile phone 111 through the account B, the data stored on the mobile phone 111 through the account B by the user 2 is stored according to the account B in the database.
For another example, as shown in fig. 1, if the user 3 has previously used the mobile phone 121 through the account C, then when the user 3 uses the mobile phone 121 through the face image of the user 3, the data stored by the user 3 on the mobile phone 121 through the face image is stored in the database according to the account C. This data may include a historical video watching record and a historical movement record of the user 3 stored on the mobile phone 121 through the face image of the user 3, and the like.
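The per-account storage described above can be sketched as follows. This is a minimal illustration under assumed names (`Device`, `bind_biometric`, `store_via_biometric`, and the string keys are all hypothetical), not the patent's implementation:

```python
# Minimal sketch (hypothetical names): each account number maps to its own
# storage engine, and biometric raw data is first resolved to the account
# number it is bound to (one-to-one or many-to-one).
class Device:
    def __init__(self):
        self.engines = {}     # account number -> storage engine (a dict here)
        self.biometrics = {}  # biometric raw data -> bound account number

    def bind_biometric(self, raw_data, account):
        # Several biometrics (e.g. multiple fingerprints, or one face plus
        # one fingerprint) may all be bound to the same account number.
        self.biometrics[raw_data] = account

    def store(self, account, key, value):
        # Data generated under an account number goes into that account's engine.
        self.engines.setdefault(account, {})[key] = value

    def store_via_biometric(self, raw_data, key, value):
        # Data generated under a biometric login is filed under the bound account.
        self.store(self.biometrics[raw_data], key, value)


device = Device()
device.bind_biometric("face#1", "account_A")
device.bind_biometric("fingerprint#1", "account_A")  # many-to-one binding
device.store("account_A", "photos", ["p1.jpg"])
device.store_via_biometric("fingerprint#1", "playlist", ["song1"])
```

Because both biometrics are bound to `account_A`, data stored through either of them lands in the same storage engine as data stored through the account number directly.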
After the second device acquires the registration information input by the first user, the data stored by the first user on at least one first device through that registration information needs to be synchronized to the second device. Before this synchronization, step 220 is also included.
Step 220, determining an output level of the data of the first device.
Wherein the first device is a device other than the second device in the same network as the second device; alternatively, the first device is a device selected according to the functions of all devices in the same network as the second device, for example, a device having a biometric recognition function.
Alternatively, the devices in the network may be mutually trusted devices. For example, the network may be a home network in which the devices are mutually trusted devices. As another example, the network may be a work network in which the devices are mutually trusted devices. A device in the network may be not only a device connected to the network but also a device that joins the network by scanning a two-dimensional code (identification code), where the two-dimensional code may be preset.
Wherein the first device may also be a device other than the second device in the same group of the same network as the second device; alternatively, the first device is a device selected according to the functions of all devices in the same group as the second device in the same network, for example, a device having a biometric recognition function.
Alternatively, a plurality of groups may be preset in the above network, and the devices in each of the plurality of groups may be mutually trusted devices. For example, the network may be a home network in which a family group and a visitor group are preset, the family group including the first device and the second device, which are mutually trusted devices; the devices in the visitor group and the devices in the family group are not mutually trusted devices, but non-private information interaction can still be performed between them. The devices in the family group may be not only devices connected to the home network but also devices that join the home network by scanning the two-dimensional code, whereas the devices in the visitor group are only devices connected to the home network.
The output level of the data of the first device may be understood as the level at which the data on the first device is shared with other devices. The output level is set relative to the device requesting the data, and the output levels relative to different requesting devices may be different or the same. When the device requesting the data is the first device itself, the output level of the data of the first device is the first level. When the device requesting the data is not the first device itself, the output level is determined by the form of registration information with which the requesting device (for example, the second device) accesses the data on the first device. If the device requesting the data accesses the first device using the same account number as is used on the first device, the output level of the data of the first device relative to that device is the second level; if the device requesting the data accesses the first device using the original data of a biometric feature corresponding to the account number used on the first device, the output level relative to that device is the third level; if the device requesting the data accesses the first device using neither the same account number nor the same original data of a biometric feature as the first device, the output level relative to that device is the fourth level.
Illustratively, as shown in the right triangle of fig. 3, when a user uses a device, the data generated on the device may be used by the device itself, and for the device itself the output level of its data is the first level. For example, user A logs into device 1 through account A or a face (the face being bound to account A); the data generated during the use of device 1 is associated with account A, and when user A logs into device 1 again using account A or the face, all data associated with account A, as well as data unrelated to any account, may be used or accessed by a program in device 1. When a user obtains the data related to the user on a first device through a second device, the output level of the data of the first device may be divided into at least two of the following levels: a second level, a third level, and a fourth level. Specifically, when the user uses the second device through the same account number as on the first device, the output level of the data of the first device relative to the second device is the second level; when the user uses the second device through the same original data of a first biometric feature as on the first device, the output level relative to the second device is the third level, where the original data of the first biometric feature is the original data of any biometric feature of the user; and when the user uses the second device through neither the same account number nor the same original data of a biometric feature as on the first device, the output level relative to the second device is the fourth level.
From high to low, the output levels of the data of a device are the first level, the second level, the third level, and the fourth level.
For example, as shown in fig. 1, if a user uses the mobile phone 111 only through the account A, the data stored on the mobile phone 111 through the account A is not allowed to be output, that is, it is not shared with other devices. If the user uses the mobile phone 101, the vehicle 102, the tablet computer 103 and the watch 104 through the account B, then the output level of the data stored through the account B on each of the mobile phone 101, the vehicle 102, the tablet computer 103 and the watch 104 is the second level. If the user uses the mobile phone 121, the watch 122 and the sound 123 through the original data of a first biometric feature of the user, then the output level of the data stored through that original data on each of the mobile phone 121, the watch 122 and the sound 123 is the third level.
If the user uses the television 131, the mobile phone 132, the tablet computer 133, the watch 134, the sound 135 and the vehicle 136 without any account number and without the original data of any user's biometric feature, then the output level of the data of each of the television 131, the mobile phone 132, the tablet computer 133, the watch 134, the sound 135 and the vehicle 136 is the fourth level.
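The four output levels and the rule for assigning them, described above, can be sketched as follows. The enum and function names are illustrative assumptions; larger numeric values stand for higher levels so the ordering of levels can be compared directly:

```python
from enum import IntEnum


class OutputLevel(IntEnum):
    # Hypothetical encoding: a higher value means a higher output level.
    FOURTH = 1
    THIRD = 2
    SECOND = 3
    FIRST = 4


def output_level(requester_is_first_device, same_account, same_biometric):
    """Output level of the first device's data relative to the requesting device."""
    if requester_is_first_device:
        return OutputLevel.FIRST    # the device accesses its own data
    if same_account:
        return OutputLevel.SECOND   # same account number on both devices
    if same_biometric:
        return OutputLevel.THIRD    # same biometric raw data, no shared account
    return OutputLevel.FOURTH       # neither a shared account nor a shared biometric
```

For instance, the mobile phone 101 used through the account B on both sides would fall under `same_account=True` (second level), while the television 131 used with no account and no biometric falls through to the fourth level.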
Further, when the user uses the plurality of devices through the original data of different biometric features, the third level may be refined, according to the recognition accuracy or false acceptance rate of the original data of the biometric features, into at least two of the following: a first sub-level, a second sub-level, and a third sub-level. From high to low, the output levels of the data of the device are the first sub-level, the second sub-level, and the third sub-level.
In the case that a user uses a plurality of devices through a 3D face, fingerprint, iris or DNA of the user, that is, when the same user uses the same 3D face, fingerprint, iris or DNA on the plurality of devices, the output level of the data of the device is determined as the first sub-level. For example, when the user A uses the device A through the fingerprint of the user A, and uses another device (for example, the device C) through the fingerprint of the user A, the output level of the data stored by the user A on the device A through the fingerprint of the user A is the first sub-level; the output level of the data stored by the user A on the device C through the fingerprint of the user A is also the first sub-level.
In the case that the user uses a plurality of devices through the 2D face or vein of the user, that is, when the same user uses the same 2D face or vein on the plurality of devices, the output level of the data of the device is determined as the second sub-level. For example, when the user A uses the device A through the 2D face of the user A, and uses another device (for example, the device C) through the 2D face of the user A, the output level of the data stored by the user A on the device A through the 2D face of the user A is the second sub-level; the output level of the data stored by the user A on the device C through the 2D face of the user A is also the second sub-level.
In the case that the user uses a plurality of devices through the voice or signature of the user, that is, when the same user uses the same voice or signature on the plurality of devices, the output level of the data of the device is determined as the third sub-level. For example, when the user A uses the device A through the voice of the user A, and uses another device (for example, the device C) through the voice of the user A, the output level of the data stored by the user A on the device A through the voice of the user A is the third sub-level; the output level of the data stored by the user A on the device C through the voice of the user A is also the third sub-level.
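The refinement of the third level by biometric type reduces to a lookup table. The grouping follows the three paragraphs above; the table keys and function name are illustrative assumptions:

```python
# Hypothetical mapping from biometric type to the sub-level of the third
# level, grouped by recognition accuracy as described above (1 = highest).
THIRD_LEVEL_SUB = {
    "3d_face": 1, "fingerprint": 1, "iris": 1, "dna": 1,  # first sub-level
    "2d_face": 2, "vein": 2,                              # second sub-level
    "voice": 3, "signature": 3,                           # third sub-level
}


def third_level_sub(biometric_type):
    # Returns the sub-level of the third level for a given biometric type.
    return THIRD_LEVEL_SUB[biometric_type]
```

So a fingerprint login yields the first sub-level, a 2D face the second, and a voice or signature the third, matching the examples of devices A and C above.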
When the first user uses the second device and needs to acquire the data on the first device, the output level of the data of the first device needs to be determined, where the output level of the data of the first device is relative to the second device. How to determine the output level of the data of the first device is described in detail below in two cases, taking as an example that the first device and the second device belong to the same home network, which may communicate in Wi-Fi mode or Bluetooth mode.
Case 1: the second device determines the output level of the data of the first device.
(1) In the case where the registration information input by the first user includes the original data of the biometric feature of the first user and the second device is a device that can collect the biometric feature but does not have the biometric recognition function, the second device can only collect the biometric feature and needs to complete biometric recognition by means of another device.
As shown in fig. 4, step 220 may specifically include steps 220a to 223a.
In step 220a, the second device transmits the raw data of the biometric feature of the first user to the first device.
In step 221a, the first device identifies the original data of the biometric feature of the first user, and determines whether an account corresponding to the original data of the biometric feature of the first user is available.
When the first device determines that the account corresponding to the original data of the biometric feature of the first user is not obtained, step 222a is performed.
In step 222a, the first device sends a first instruction to the second device, where the first instruction is used to indicate that the first device has not obtained an account number corresponding to the original data of the biometric feature of the first user. After receiving the first instruction, the second device determines that the output level of the data of the first device is the fourth level.
For example, as shown in fig. 1, when the television 131 (an example of the second device) collects, through its camera, a face image of the user using the television 131 in order to identify that user, the television 131 transmits the collected face image to the mobile phone 132 (an example of the first device). The mobile phone 132 compares the original data of the face image transmitted by the television 131 with the feature templates stored in the database of the mobile phone 132. When the original data of the face image does not match any feature template stored in the database of the mobile phone 132, the mobile phone 132 does not obtain an account number corresponding to the face image, and the mobile phone 132 transmits the first instruction to the television 131.
When the first device identifies the original data of the biometric feature of the first user and obtains an identification result, the first device can obtain an account number corresponding to the original data of the biometric feature of the first user. In the case that the first device determines that such an account number is obtained, steps 222a' and 223a are performed.
In step 222a', the first device sends first information to the second device, where the first information is used to indicate an account number corresponding to the original data of the biometric feature of the first user determined by the first device.
In step 223a, the second device determines, according to the first information sent by the first device, the output level of the data of the first device.
Specifically, in the case that the second device determines that it stores the account number corresponding to the original data of the biometric feature of the first user determined by the first device, the second device determines that the output level of the data of the first device is the second level. In the case that the second device determines that it does not store that account number, the second device determines that the output level of the data of the first device is the third level.
Further, in the case that the output level of the data of the first device is determined to be the third level, the second device may further determine which sub-level of the third level is the output level of the data of the first device according to the specific form of the registration information of the first user input by the first user. Specifically, when the registration information of the first user input by the first user is a 3D face, fingerprint, iris or DNA of the first user, the second device determines that the output level of the data of the first device is a first sub level; when the registration information of the first user input by the first user is a 2D face or vein of the first user, the second device determines that the output level of the data of the first device is a second sub-level; when the registration information of the first user input by the first user is voiceprint or signature of the first user, the second device determines that the output level of the data of the first device is a third sub-level.
For example, as shown in fig. 1, when the sound 123 (an example of the second device) collects, through its microphone, the voice of the user using the sound 123 in order to identify that user, the sound 123 transmits the collected voice to the mobile phone 121 (an example of the first device). The mobile phone 121 compares the original data of the voice transmitted by the sound 123 with the feature templates stored in the database of the mobile phone 121. When the original data of the voice matches the feature template of the account C stored in the database of the mobile phone 121, the mobile phone 121 obtains the account C corresponding to the voice and transmits the account C to the sound 123. If the sound 123 determines that it stores the account C, the sound 123 determines that the output level of the data of the mobile phone 121 is the second level. If the sound 123 determines that it does not store the account C, the sound 123 determines that the output level of the data of the mobile phone 121 is the third level; still further, the sound 123 may determine that the output level of the data of the mobile phone 121 is the third sub-level of the third level.
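The exchange in steps 220a to 223a can be sketched as a single decision on the second device. All names and message encodings below are illustrative assumptions (the first instruction is modeled as `None`, the first information as the matched account number):

```python
# Hypothetical sketch of case 1(1): the first device has already identified
# the forwarded biometric and replied with either None (first instruction:
# no matching account) or the matched account number (first information).
SUB_LEVELS = {"3d_face": 1, "fingerprint": 1, "iris": 1, "dna": 1,
              "2d_face": 2, "vein": 2, "voice": 3, "signature": 3}


def determine_level_case1_1(reply_account, second_device_accounts,
                            biometric_type=None):
    if reply_account is None:
        return ("fourth", None)              # step 222a: no account found
    if reply_account in second_device_accounts:
        return ("second", None)              # step 223a: account also on second device
    # Same biometric but no shared account: third level, refined by type.
    return ("third", SUB_LEVELS.get(biometric_type))
```

For the sound 123 / mobile phone 121 example above, `determine_level_case1_1("C", {"C"})` would yield the second level, while `determine_level_case1_1("C", set(), "voice")` would yield the third level, third sub-level.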
(2) In the case where the registration information input by the first user includes the original data of the biometric feature of the first user and the second device is a device provided with the biometric recognition function, the second device may perform biometric recognition itself.
As shown in fig. 5, the specific step 220 may include steps 220b to 224b, and steps 221c to 224c.
In step 220b, the second device identifies the original data of the biometric feature of the first user and determines whether the second device can obtain an account corresponding to the original data of the biometric feature of the first user.
Specifically, in the case where the second device obtains an account corresponding to the original data of the biometric feature of the first user, steps 221b to 224b are performed. In the case where the second device does not obtain an account corresponding to the original data of the biometric feature of the first user, steps 221c to 224c are performed.
In step 221b, the second device sends second information to the first device, where the second information is used to indicate the account number obtained by the second device by identifying the original data of the biometric feature of the first user.
In step 222b, the first device searches whether the account number indicated by the second information exists on the first device, and then performs step 223b.
In step 223b, the first device sends third information to the second device, where the third information is used to indicate whether the account indicated by the second information is stored on the first device.
In step 224b, the second device determines, according to the third information, the output level of the data of the first device.
Specifically, in the case that the third information indicates that the account number indicated by the second information exists on the first device, the second device determines that the output level of the data of the first device is the second level; in the case that the third information indicates that that account number is not stored on the first device, the second device determines that the output level of the data of the first device is the fourth level.
For example, as shown in fig. 1, the mobile phone 121 (an example of the second device) may identify the fingerprint of the user using the mobile phone 121 and obtain the account C corresponding to that fingerprint. When the mobile phone 121 sends the account C to the watch 122 (an example of the first device), the watch 122 determines that it stores the account C and sends this information to the mobile phone 121, so that the mobile phone 121 determines that the output level of the data of the watch 122 is the second level. When the mobile phone 121 sends the account C to the sound equipment 135 (another example of the first device), the sound equipment 135 determines that it does not store the account C and sends this information to the mobile phone 121, so that the mobile phone 121 determines that the output level of the data of the sound equipment 135 is the fourth level.
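The b-branch (steps 221b to 224b) reduces to a membership check on the first device followed by a two-way decision on the second device. A minimal sketch under assumed names:

```python
# Hypothetical sketch of case 1(2), b-branch: the second device recognized
# the biometric itself and sent the matched account number (second
# information); the first device answers whether it stores that account
# number (third information); the second device then sets the level.
def third_information(second_info_account, first_device_accounts):
    # Steps 222b/223b on the first device: a simple membership check.
    return second_info_account in first_device_accounts


def determine_level_case1_2(third_info):
    # Step 224b on the second device: account shared -> second level,
    # otherwise -> fourth level.
    return "second" if third_info else "fourth"
```

In the example above, the watch 122 stores the account C, so the mobile phone 121 determines the second level; the sound equipment 135 does not, so the level is the fourth.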
In step 221c, the second device sends the raw data of the biometric feature of the first user to the first device.
In step 222c, the first device identifies the original data of the biometric feature of the first user, and determines whether an account corresponding to the original data of the biometric feature of the first user is available.
In the case that the first device does not obtain an account number corresponding to the original data of the biometric feature of the first user, the first device sends a third instruction to the second device, where the third instruction is used to indicate that the first device has not obtained such an account number, and the second device determines, according to the third instruction, that the output level of the data of the first device is the fourth level. In the case that the first device obtains an account number corresponding to the original data of the biometric feature of the first user, steps 223c to 224c are performed.
In step 223c, the first device sends fourth information to the second device, where the fourth information is used to indicate the account number corresponding to the original data of the biometric feature of the first user determined by the first device.
In step 224c, the second device determines, according to the fourth information, that the output level of the data of the first device is the third level.
Further, the second device may further determine, according to the fourth information and the specific form of the registration information of the first user input by the first user, which sub-level of the third level is the output level of the data of the first device. Specifically, when the registration information of the first user input by the first user is a 3D face, fingerprint, iris or DNA of the first user, the second device determines that the output level of the data of the first device is a first sub level; when the registration information of the first user input by the first user is a 2D face or vein of the first user, the second device determines that the output level of the data of the first device is a second sub-level; when the registration information of the first user input by the first user is the voice or signature of the first user, the second device determines that the output level of the data of the first device is a third sub-level.
(3) When the registration information input by the first user is the account number of the first user, the account number of the first user is registered on the second device, and the second device sends the account number of the first user to the first device. The first device determines whether the account number of the first user exists on the first device and sends a second instruction to the second device, where the second instruction is used to indicate whether the account number of the first user exists on the first device. In the case that the account number of the first user is stored on the first device, the second device determines that the output level of the data of the first device is the second level; in the case that the account number of the first user is not stored on the first device, the second device determines that the output level of the data of the first device is the fourth level.
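Case 1(3) is the simplest flow: a single account-membership check, with no third level possible since no biometric is involved. A sketch under assumed names:

```python
# Hypothetical sketch of case 1(3): the registration information is an
# account number; the second instruction reports whether the first device
# stores it, and the second device sets the level accordingly.
def determine_level_account_login(account, first_device_accounts):
    # Account present on the first device -> second level; absent -> fourth.
    return "second" if account in first_device_accounts else "fourth"
```

Because the account either matches or it does not, the result is always the second or the fourth level, never the third.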
In case 1 above, the second device may send the output level of the data of the first device determined by the second device to the first device.
Case 2: the first device determines the output level of the data of the first device.
(1) In the case where the registration information input by the first user includes the original data of the biometric feature of the first user and the second device is a device that can collect the biometric feature but does not have the biometric recognition function, the second device can only collect the biometric feature and needs to complete biometric recognition by means of another device.
Alternatively, the account number of the first user may be one or more.
As shown in fig. 6, the specific step 220 may further include steps 220d to 222d.
In step 220d, the second device sends the original data of the biometric feature of the first user and all account numbers stored in the second device to the first device.
In step 221d, the first device identifies the original data of the biometric feature of the first user and determines whether an account corresponding to the original data of the biometric feature of the first user is available.
In step 222d, the first device determines the output level of the data of the first device.
Specifically, when the first device determines that an account number corresponding to the original data of the biometric feature of the first user exists among all the account numbers stored on the second device, the first device determines that the output level of the data of the first device is the second level; when the first device determines that no such account number exists among all the account numbers stored on the second device, the first device determines that the output level of the data of the first device is the third level. In the case that the first device does not obtain an account number corresponding to the original data of the biometric feature of the first user at all, the first device determines that the output level of the data of the first device is the fourth level.
Further, in the case that the output level of the data of the first device is determined to be the third level, the first device may further determine, according to the specific form of the registration information of the first user sent by the second device, which sub-level of the third level the output level of the data of the first device is. Specifically, when the registration information of the first user is a 3D face, fingerprint, iris or DNA of the first user, the first device determines that the output level of the data of the first device is the first sub-level; when the registration information of the first user is a 2D face or vein of the first user, the first device determines that the output level is the second sub-level; when the registration information of the first user is a voice or signature of the first user, the first device determines that the output level is the third sub-level.
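Steps 220d to 222d can be sketched as one function on the first device. The recognition database is modeled as a dict from biometric raw data to account number; all names are illustrative assumptions:

```python
# Hypothetical sketch of case 2(1): the second device forwards the biometric
# raw data plus all account numbers it stores; the first device both
# performs recognition and determines the output level of its own data.
SUB_LEVELS = {"3d_face": 1, "fingerprint": 1, "iris": 1, "dna": 1,
              "2d_face": 2, "vein": 2, "voice": 3, "signature": 3}


def determine_level_case2_1(raw_biometric, first_device_db,
                            second_device_accounts, biometric_type=None):
    account = first_device_db.get(raw_biometric)   # step 221d: recognition
    if account is None:
        return ("fourth", None)                    # no matching account at all
    if account in second_device_accounts:
        return ("second", None)                    # shared account number
    # Matching account on the first device only: third level, refined by type.
    return ("third", SUB_LEVELS.get(biometric_type))
```

The difference from case 1 is only where the decision runs: here the first device holds both the recognition result and the second device's account list, so no extra round trip is needed.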
(2) In the case where the registration information input by the first user includes the original data of the biometric feature of the first user and the second device is a device provided with the biometric recognition function, the second device may perform biometric recognition itself.
As shown in fig. 7, step 220 may specifically include steps 220e to 222e, and steps 221f to 226f.
In step 220e, the second device identifies the original data of the biometric feature of the first user and determines whether an account corresponding to the original data of the biometric feature of the first user is available.
In the case that the second device obtains an account number corresponding to the original data of the biometric feature of the first user, the method 200 further includes steps 221e and 222e. In the case that the second device does not obtain such an account number, the method 200 further includes steps 221f and 222f.
In step 221e, the second device sends fifth information to the first device, where the fifth information is used to indicate the account number, obtained by the second device, corresponding to the original data of the biometric feature of the first user.
In step 222e, the first device determines, according to the fifth information, the output level of the data of the first device.
Specifically, in the case that the first device determines that it stores the account number, sent by the second device, corresponding to the original data of the biometric feature of the first user, the first device determines that the output level of the data of the first device is the second level; in the case that the first device determines that it does not store that account number, the first device determines that the output level of the data of the first device is the fourth level.
In step 221f, the second device sends sixth information to the first device, where the sixth information is used to indicate the original data of the biometric feature of the first user.
In step 222f, the first device determines whether to obtain the account corresponding to the original data of the biometric feature of the first user according to the sixth information.
In step 223f, the first device determines the output level of the data of the first device.
Specifically, in the case that the first device does not obtain an account number corresponding to the original data of the biometric feature of the first user, the first device determines that the output level of the data of the first device is the fourth level. In the case that the first device determines an account number corresponding to the original data of the biometric feature of the first user, steps 224f to 226f are performed.
In step 224f, the first device sends seventh information to the second device, where the seventh information is used to indicate an account number corresponding to the original data of the biometric feature of the first user determined by the first device.
In step 225f, the second device determines whether the account number corresponding to the original data of the biometric feature of the first user determined by the first device is stored on the second device.
In step 226f, the second device sends eighth information to the first device, where the eighth information is used to indicate whether the account number corresponding to the original data of the biometric feature of the first user determined by the first device exists on the second device.
In the case that the account corresponding to the original data of the biometric feature of the first user determined by the first device is stored in the second device, the first device determines that the output level of the data of the first device is the second level; in the case that the account corresponding to the original data of the biometric feature of the first user determined by the first device is not stored on the second device, the first device determines that the output level of the data of the first device is the third level.
Further, in the case that the first device determines that the output level of the data of the first device is the third level, the first device may further determine, according to the specific form of the registration information of the first user sent by the second device, which sub-level of the third level the output level of the data of the first device is. Specifically, when the registration information of the first user is a 3D face, fingerprint, iris, or DNA of the first user, the first device determines that the output level of the data of the first device is the first sub-level; when the registration information of the first user is a 2D face or vein of the first user, the first device determines that the output level of the data of the first device is the second sub-level; when the registration information of the first user is a sound or signature of the first user, the first device determines that the output level of the data of the first device is the third sub-level.
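The modality-to-sub-level mapping just described can be sketched as follows; this is a minimal illustration, and the function and constant names are assumptions for the sketch rather than an API defined by the patent.

```python
# Hypothetical sketch of the registration-modality -> third-level sub-level
# mapping described above. Names are illustrative, not from the patent.

FIRST_SUB_LEVEL = 1   # strongest biometrics: 3D face, fingerprint, iris, DNA
SECOND_SUB_LEVEL = 2  # 2D face, vein
THIRD_SUB_LEVEL = 3   # sound (voiceprint), signature

_SUB_LEVEL_BY_MODALITY = {
    "3d_face": FIRST_SUB_LEVEL,
    "fingerprint": FIRST_SUB_LEVEL,
    "iris": FIRST_SUB_LEVEL,
    "dna": FIRST_SUB_LEVEL,
    "2d_face": SECOND_SUB_LEVEL,
    "vein": SECOND_SUB_LEVEL,
    "voice": THIRD_SUB_LEVEL,
    "signature": THIRD_SUB_LEVEL,
}

def sub_level_for(modality: str) -> int:
    """Return the third-level sub-level for a registration modality."""
    return _SUB_LEVEL_BY_MODALITY[modality]
```

For example, a first user registered by fingerprint would place the first device at the first sub-level, while a voice registration would place it at the third sub-level.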
(3) When the input registration information of the first user is the account of the first user, the account of the first user is registered in the second device, and the second device sends the account of the first user to the first device. The first device determines whether it stores the account of the first user. In the case that the first device stores the account of the first user, the first device determines that the output level of the data of the first device is the second level; in the case that the first device does not store the account of the first user, the first device determines that the output level of the data of the first device is the fourth level.
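The account-match rule of case (3) reduces to a single comparison, sketched below; the function name and the use of a set of stored accounts are assumptions for illustration.

```python
# Illustrative sketch of case (3): if the first device stores the first
# user's account, its data output level is the second level; otherwise
# the fourth level. Names are assumptions, not from the patent.

SECOND_LEVEL = 2
FOURTH_LEVEL = 4

def output_level_by_account(stored_accounts: set, user_account: str) -> int:
    """Determine the first device's data output level from an account match."""
    return SECOND_LEVEL if user_account in stored_accounts else FOURTH_LEVEL
```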
In case 2 above, the first device may send the output level of the data of the first device determined by the first device to the second device.
In the above cases 1 and 2, there may be one or more first devices in the home network, and the output level of the data of each first device may be determined in the above manner. When the second device sends the original data of the biometric feature to the first device, it may send the original data to all devices in the home network other than the second device, in which case all devices other than the second device in the home network are first devices. Alternatively, the second device may select, according to the capabilities of the devices in the home network, a device having a biometric recognition function, send the original data of the biometric feature to that device, and, after the account corresponding to the original data of the biometric feature is obtained, confirm whether the other devices in the home network besides the second device store the account, thereby completing the determination of the output level of the data of the devices other than the second device (i.e., the first devices) in the home network.
After the output level of the data of the first device is determined, the second device may further perform the following step 230.
In step 230, the second device obtains a first data request message of the first user, where the first data request message is used to request sharing of first data of the first user.
Illustratively, the second device may recognize the first user's request for the first data through a voice recognition function; alternatively, the second device may also obtain the first user's request for the first data through the first user's input.
The registration information entered by the user on the device and the data generated when using the device may be classified, according to the risk level of the data, into high-impact personal data, medium-impact personal data, low-impact personal data, and non-personal data. The high-impact personal data may include precise location data and/or health data, where precise location data may be understood as latitude and longitude coordinates or trajectories. For example, the precise location data may be real-time precise location data of the user while the user uses the device. The medium-impact personal data may include general location data and/or video data. The general location data may be understood as the cell identity (CELL ID) in which the terminal device is located or the basic service set identifier (BSSID) of the wireless Wi-Fi network to which the device is connected. General location data cannot be resolved directly to latitude and longitude coordinates, but may roughly identify the user's location. The general location data may be historical location data of the user when the user uses the device, for example, places the user is interested in, such as where the user likes to eat or where the user likes to be entertained. The low-impact personal data may include logistics data, schedule data, and/or preference data; the non-personal data may include device capability data and/or device status data.
Risk of high-impact personal data > risk of medium-impact personal data > risk of low-impact personal data > risk of non-personal data. High-impact personal data may be understood as the portion of personal data with the highest risk impact on the user, i.e., the portion of data with the highest risk level; medium-impact personal data may be understood as the portion of personal data with a higher risk impact on the user, i.e., the portion of data with a higher risk level; low-impact personal data may be understood as the portion of personal data with a lower risk impact on the user, i.e., the portion of data with a lower risk level; non-personal data may be understood as the portion of data that is not related to the user but is data of the device itself.
Optionally, the risk level may be replaced by the privacy level; that is, in the embodiments of the present application, risk may be replaced by privacy.
When the user uses the device through the registration information input by the user, the device tags the data generated on the device according to the risk level of the data. For example, precise location data is tagged as high-impact personal data; general location data is tagged as medium-impact personal data; the user's preference data is tagged as low-impact personal data; and device capability data is tagged as non-personal data.
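The tagging step above can be sketched as a lookup from data kind to risk category; the dictionary keys, category strings, and record layout are assumptions chosen to mirror the examples in the text.

```python
# Minimal sketch of tagging generated data with a risk category, following
# the examples above (precise location -> high impact, general location ->
# medium impact, preference -> low impact, device capability -> non-personal).
# All names are illustrative assumptions.

RISK_BY_DATA_KIND = {
    "precise_location": "high_impact_personal",
    "health": "high_impact_personal",
    "general_location": "medium_impact_personal",
    "video": "medium_impact_personal",
    "logistics": "low_impact_personal",
    "schedule": "low_impact_personal",
    "preference": "low_impact_personal",
    "device_capability": "non_personal",
    "device_status": "non_personal",
}

def tag_data(kind: str, payload) -> dict:
    """Attach the risk-category tag to a newly generated data record."""
    return {"kind": kind, "risk": RISK_BY_DATA_KIND[kind], "payload": payload}
```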
For example, as shown in fig. 3, the higher the output level of the data of the device from which data is requested, the higher the risk of the data that the requesting device may access. Specifically, in the case that the output level of the data of the device from which data is requested is the second level, the data type that the requesting device may access on that device is the second type, the highest-risk data accessible is medium-impact personal data, and the second type of data may include medium-impact personal data, low-impact personal data, and non-personal data. In the case that the output level is the third level, the accessible data type is the third type, the highest-risk data accessible is low-impact personal data, and the third type of data may include low-impact personal data and non-personal data. In the case that the output level is the fourth level, the accessible data type is the fourth type, which may include non-personal data. It will be appreciated that when the requesting device is itself the device from which data is requested, all data types are accessible, i.e., the accessible type is the first type, the highest-risk data accessible is high-impact personal data, and the first type of data includes high-impact personal data, medium-impact personal data, low-impact personal data, and non-personal data.
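The level-to-category rule of fig. 3 can be sketched as a table plus a membership check; the level numbering and category strings are illustrative, and the function name is an assumption.

```python
# Sketch of the fig. 3 rule: the output level of the device from which data
# is requested caps the highest-risk category the requesting device may
# access. Names and level numbering are illustrative assumptions.

ACCESSIBLE_BY_LEVEL = {
    1: {"high_impact_personal", "medium_impact_personal",
        "low_impact_personal", "non_personal"},   # device accessing its own data
    2: {"medium_impact_personal", "low_impact_personal", "non_personal"},
    3: {"low_impact_personal", "non_personal"},
    4: {"non_personal"},
}

def may_access(output_level: int, risk_category: str) -> bool:
    """True if data of the given risk category may be shared at this level."""
    return risk_category in ACCESSIBLE_BY_LEVEL[output_level]
```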
Further, in the case that the third level is subdivided into at least two of the first sub-level, the second sub-level, and the third sub-level, the data types corresponding to the output levels of the data of the devices from which data is requested may differ. Specifically, in the case that the output level of the data of the device from which data is requested is the first sub-level, the data types the requesting device may access include photo data, recorded video data, device capability data, and/or device status data, for example, a photo taken by the user or a video recorded by the user. In the case that the output level is the second sub-level, the accessible data types include logistics data, schedule data, device capability data, and/or device status data, for example, the user's express delivery data. In the case that the output level is the third sub-level, the accessible data types include preference data, video viewing data, device capability data, and/or device status data, for example, the type of song or the singer the user likes to listen to; for another example, the user's sports preferences; for another example, the user's video viewing records.
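The sub-level refinement above can likewise be sketched as a lookup table; the data-type strings follow the examples in the text, while the function name is an assumption.

```python
# Sketch of the third-level sub-level refinement described above. Data-type
# names follow the text's examples; the API itself is an assumption.

TYPES_BY_SUB_LEVEL = {
    1: {"photo", "recorded_video", "device_capability", "device_status"},
    2: {"logistics", "schedule", "device_capability", "device_status"},
    3: {"preference", "viewing_history", "device_capability", "device_status"},
}

def accessible_types(sub_level: int) -> set:
    """Data types a requesting device may access at this third-level sub-level."""
    return TYPES_BY_SUB_LEVEL[sub_level]
```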
The second device may be the device that requests data, and the first device may be the device from which data is requested.
Step 240 may also be included after the second device obtains the first user's request for the first data.
In step 240, whether the first device shares the first data is determined. Step 240 is described in more detail below in two modes.
Mode 1: the second device determines whether the first device shares the first data.
As shown in fig. 8, a specific step 240 may include steps 241a to 244a.
In step 241a, the second device determines whether the first data belongs to the data type corresponding to the output level of the data of the first device; in the case that the first data belongs to the data type corresponding to the output level of the data of the first device, step 242a is performed; in the case that the first data does not belong to that data type, the second device does not send the first data request message to the first device.
In step 242a, the second device sends a first data request message to the first device.
In the case that there are a plurality of first devices, the second device may determine, according to a preset rule, to send the first data request message to at least one of the plurality of first devices. The preset rule may be, for example, selecting a first device whose distance from the second device is less than a first threshold; alternatively, selecting a first device from which the second device requests data at a frequency greater than a second threshold; alternatively, selecting a first device whose confidence level is greater than a third threshold.
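The preset rules above can be sketched as optional filters over the candidate first devices; the record fields, thresholds, and function name are assumptions for the sketch.

```python
# Illustrative selection of target first devices by the preset rules listed
# above: distance to the second device, frequency of being requested, or
# confidence level. Field names and thresholds are assumptions.

def select_first_devices(devices, max_distance=None, min_frequency=None,
                         min_confidence=None):
    """Filter candidate first devices by whichever preset rules are configured."""
    selected = []
    for d in devices:
        if max_distance is not None and d["distance"] >= max_distance:
            continue  # too far from the second device
        if min_frequency is not None and d["request_frequency"] <= min_frequency:
            continue  # requested too rarely
        if min_confidence is not None and d["confidence"] <= min_confidence:
            continue  # confidence level too low
        selected.append(d["name"])
    return selected
```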
In step 243a, the first device looks up whether the first device has the first data.
In the case that the first device does not store the first data, the first device does not share the first data with the second device; in the case that at least one first device stores the first data, step 244a is performed.
Whether the first device stores the first data specifically refers to whether the first device stores first data associated with the account of the first user.
In step 244a, the first device shares the first data with the second device.
Optionally, after the first device shares the first data with the second device, the second device may obtain a second data request message of the first user, where the second data request message is used to request sharing of second data. If the second data and the first data both belong to the data type corresponding to the output level of the data of the first device, and the interval between the time when the second device obtains the first data request message of the first user and the time when it obtains the second data request message of the first user is less than or equal to a first time, the second device directly sends the second data request message of the first user to the first device, and the first device shares the second data with the second device in the case that the first device stores the second data. In this way, a personalized experience can be effectively provided for the user.
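The optional shortcut above amounts to a time-window check before forwarding; the sketch below uses plain timestamps in seconds, and all names are assumptions rather than patent terminology.

```python
# Sketch of the optional shortcut: a second request for the same user may be
# forwarded directly, without re-evaluating the output level, if it arrives
# within the first time window and asks for data of an already authorized
# type. Timestamps are seconds; names are illustrative assumptions.

def should_forward_directly(first_request_time, second_request_time,
                            first_time_window, same_authorized_type):
    """True if the second request may be forwarded without re-evaluation."""
    elapsed = second_request_time - first_request_time
    return same_authorized_type and elapsed <= first_time_window
```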
Mode 2: the first device determines whether the first device shares the first data.
As shown in fig. 8, a specific step 240 may include steps 241b to 244b.
In step 241b, the second device sends a first data request message to the first device.
In the case that there are a plurality of first devices, the second device may determine, according to a preset rule, to send the first data request message to at least one of the plurality of first devices. The preset rule may be, for example, selecting a first device whose distance from the second device is less than a first threshold; alternatively, selecting a first device from which the second device requests data at a frequency greater than a second threshold; alternatively, selecting a first device whose confidence level is greater than a third threshold.
In step 242b, the first device determines, according to the first data request message, whether the first data belongs to data of a data type corresponding to an output level of the data of the first device.
Specifically, in the case that the first data does not belong to the data type corresponding to the output level of the data of the first device, the first device does not share the first data with the second device. In the case that the first data belongs to that data type, step 243b is further performed.
In step 243b, the first device looks up whether the first device has the first data.
Specifically, in the case that the first device does not store the first data, the first device does not share the first data with the second device; in the case that the first device stores the first data, step 244b is performed.
Whether the first device stores the first data specifically refers to whether the first device stores first data associated with the account of the first user.
In step 244b, the first device shares the first data to the second device.
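Mode 2 (steps 241b to 244b) places both checks on the first device: the type check against its output level, then an account-bound lookup. The sketch below assumes a nested-dict store keyed by account; all names are illustrative.

```python
# Sketch of Mode 2: the first device checks the requested data's risk
# category against its own output level (step 242b), then looks up data
# associated with the user's account (steps 243b-244b). The store layout
# and names are assumptions, not from the patent.

ACCESSIBLE_BY_LEVEL = {
    2: {"medium_impact_personal", "low_impact_personal", "non_personal"},
    3: {"low_impact_personal", "non_personal"},
    4: {"non_personal"},
}

def handle_request(output_level, risk_category, account, store, key):
    """Return the requested data, or None if the level or lookup forbids sharing."""
    if risk_category not in ACCESSIBLE_BY_LEVEL[output_level]:
        return None                         # step 242b: type not allowed out
    return store.get(account, {}).get(key)  # steps 243b-244b: account-bound lookup
```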
Optionally, after the first device shares the first data with the second device, the second device may obtain a second data request message of the first user, where the second data request message is used to request sharing of second data. If the second data and the first data both belong to the data type corresponding to the output level of the data of the first device, and the interval between the time when the second device obtains the first data request message of the first user and the time when it obtains the second data request message of the first user is less than or equal to the first time, the second device sends the second data request message of the first user to the first device, and the first device shares the second data with the second device in the case that the first device stores the second data. In this way, a personalized experience can be effectively provided for the user.
For example, as shown in fig. 1 and fig. 9, when the user 1 uses the vehicle 136 (the vehicle 136 is an example of the second device) through the account A, the vehicle 136 sends the account A of the user 1 to one or more devices through the network, where the one or more devices may be devices connected to the same network as the vehicle 136, for example, the devices in fig. 1; here, the plurality of devices are described taking the mobile phone 111 (an example of the first device) and the mobile phone 101 (another example of the first device) as examples. After the mobile phone 111 receives the account A of the user 1, the mobile phone 111 determines that the account A of the user 1 is stored in the mobile phone 111, and the mobile phone 111 determines that the output level of the data of the mobile phone 111 is the second level (the output level of the data of the mobile phone 111 is relative to the vehicle 136); the mobile phone 111 then sends the output level of the data of the mobile phone 111 to the vehicle 136. The vehicle 136 obtains the data request message of the user 1, which is used to request sharing of the places where the user 1 likes to be entertained. Because the places where the user 1 likes to be entertained belong to the data type corresponding to the output level of the data of the mobile phone 111, the vehicle 136 sends the data request message of the user 1 to the mobile phone 111; after receiving it, the mobile phone 111 shares the places where the user 1 likes to be entertained with the vehicle 136 in the case that those places are stored on the mobile phone 111.
When the mobile phone 101 receives the account A of the user 1, the mobile phone 101 determines that the account A of the user 1 is not stored in the mobile phone 101, and the mobile phone 101 determines that the output level of the data of the mobile phone 101 is the fourth level (relative to the vehicle 136); the mobile phone 101 then sends the output level of the data of the mobile phone 101 to the vehicle 136. The vehicle 136 obtains the data request message of the user 1, which is used to request sharing of the places where the user 1 likes to be entertained. Because those places do not belong to the data type corresponding to the output level of the data of the mobile phone 101, the vehicle 136 may not send the data request message of the user 1 to the mobile phone 101; that is, the vehicle 136 may only obtain the places where the user 1 likes to be entertained from the mobile phone 111, so that the driver of the vehicle 136 may drive the vehicle 136 to a destination according to the places obtained from the mobile phone 111.
For example, as shown in fig. 1 and fig. 10, when the user 2 uses the television 131 (the television 131 is an example of the second device) through the account B, the television 131 sends the account B of the user 2 to one or more devices through the network, where the one or more devices may be devices connected to the same home network as the television 131, for example, the devices in fig. 1; here, the plurality of devices are described taking the tablet computer 103 (an example of the first device) and the sound 123 (another example of the first device) as examples. When the tablet computer 103 receives the account B of the user 2, the tablet computer 103 determines that the account B of the user 2 is stored in the tablet computer 103, and the tablet computer 103 notifies the television 131 that the account B of the user 2 is stored in the tablet computer 103; the television 131 then determines that the output level of the data of the tablet computer 103 is the second level (relative to the television 131) and sends the output level of the data of the tablet computer 103 to the tablet computer 103. The television 131 obtains the data request message of the user 2, which is used to request sharing of the historical song data of the user 2. Because the historical song data of the user 2 belongs to the data type corresponding to the output level of the data of the tablet computer 103, the television 131 sends the data request message of the user 2 to the tablet computer 103, and the tablet computer 103 shares the historical song data of the user 2 with the television 131 in the case that the historical song data of the user 2 is stored on the tablet computer 103.
When the sound 123 receives the account B of the user 2, the sound 123 determines that the account B of the user 2 is not stored in the sound 123, and the sound 123 notifies the television 131 that the account B of the user 2 is not stored in the sound 123; the television 131 then determines that the output level of the data of the sound 123 is the fourth level (relative to the television 131) and sends the output level of the data of the sound 123 to the sound 123. The television 131 obtains the data request message of the user 2, which is used to request sharing of the historical song data of the user 2, and the television 131 sends the data request message of the user 2 to the sound 123; because the historical song data of the user 2 does not belong to the data type corresponding to the output level of the data of the sound 123, the sound 123 does not share the historical song data of the user 2 with the television 131. When the user 3 uses the television 131 through voice, the television 131 does not store an account corresponding to the voiceprint of the user 3 and cannot recognize the voiceprint of the user 3, so the television 131 may send the voice of the user 3 to one or more devices through the network, where the one or more devices may be devices connected to the same home network as the television 131, for example, the devices in fig. 1; here, the plurality of devices are described taking the tablet computer 103 and the sound 123 as examples.
When the tablet computer 103 receives the voice of the user 3, the tablet computer 103 does not recognize the voiceprint of the user 3 and determines that no account corresponding to the voiceprint of the user 3 is stored in the tablet computer 103; the tablet computer 103 determines that the output level of the data of the tablet computer 103 is the fourth level (relative to the television 131) and sends the output level of the data of the tablet computer 103 to the television 131. The television 131 obtains the data request message of the user 3, which is used to request sharing of the historical song data of the user 3; the television 131 determines that the historical song data of the user 3 does not belong to the data type corresponding to the output level of the data of the tablet computer 103, and does not send the data request message of the user 3 to the tablet computer 103. When the sound 123 receives the voice of the user 3, the sound 123 recognizes the voiceprint of the user 3 and determines that the account corresponding to the voiceprint of the user 3 is stored in the sound 123; the sound 123 determines that the output level of the data of the sound 123 is the third level, and because the user 3 uses the television 131 through voice, the output level of the data of the sound 123 is specifically the third sub-level (relative to the television 131), and the sound 123 sends the output level of the data of the sound 123 to the television 131. The television 131 obtains the data request message of the user 3, which is used to request sharing of the historical song data of the user 3; the television 131 determines that the historical song data of the user 3 belongs to the data type corresponding to the output level of the data of the sound 123, and sends the data request message of the user 3 to the sound 123, and the sound 123 shares the historical song data of the user 3 with the television 131 in the case that the historical song data of the user 3 is stored on the sound 123. Thus, when the user 2 uses the television 131 through the account B, the television 131 receives the historical song data stored by the user 2 on the tablet computer 103 through the account B, and when the user 3 uses the television 131 through voice, the user 3 can access the historical song data stored on the sound 123.
For example, as shown in fig. 1, specifically, the user 3 may use the mobile phone 121 through a face image of the user 3, a fingerprint of the user 3, or the voice of the user 3; the user 3 may use the watch 122 through a face image of the user 3 or the voice of the user 3; the user 3 may use the sound 123 through the voice of the user 3. The data stored by the user 3 in the mobile phone 121 through the original data of the biometric features of the user 3 is stored according to the account C used by the user 3; likewise, the data stored by the user 3 in the watch 122 and in the sound 123 through the original data of the biometric features of the user 3 is stored according to the account C used by the user 3.
As shown in fig. 1 and fig. 11, when the user 3 uses the vehicle 102 (the vehicle 102 is an example of the second device) through the voice of the user 3, the vehicle 102 sends the original voice of the user 3 and the account B of the user 3 to one or more devices through the network; the vehicle 102 cannot recognize the voiceprint of the user 3 but can recognize the content of the voice, that is, the vehicle 102 cannot recognize the identity behind the voice but can recognize what is said. The one or more devices may be devices connected to the same network as the vehicle 102, for example, the devices in fig. 1; here, one device is described taking the sound 123 (an example of the first device) as an example. When the sound 123 receives the voice of the user 3 and the account B of the user 3, the sound 123 determines that the account corresponding to the voiceprint of the user 3 stored in the sound 123 is the account B, and the sound 123 determines that the output level of the data of the sound 123 is the second level (relative to the vehicle 102); the sound 123 sends the output level of the data of the sound 123 to the vehicle 102. The vehicle 102 obtains the data request message of the user 3, which is used to request sharing of the historical song data of the user 3; the vehicle 102 determines that the historical song data of the user 3 belongs to the data type corresponding to the output level of the data of the sound 123, and sends the data request message of the user 3 to the sound 123, and the sound 123 sends the historical song data of the user 3 to the vehicle 102 in the case that the historical song data of the user 3 is stored in the sound 123.
When the user 3 uses the vehicle 102 through the fingerprint of the user 3, the vehicle 102 recognizes the fingerprint of the user 3 and obtains the account B as the account corresponding to the fingerprint of the user 3; the vehicle 102 then sends the account B to one or more devices through the network, where the one or more devices may be devices connected to the same network as the vehicle 102, for example, the devices in fig. 1; here, one device is described taking the mobile phone 101 (another example of the first device) as an example. When the mobile phone 101 receives the account B, the mobile phone 101 determines that the account B is stored in the mobile phone 101, and the mobile phone 101 determines that the output level of the data of the mobile phone 101 is the second level (relative to the vehicle 102); the mobile phone 101 sends the output level of the data of the mobile phone 101 to the vehicle 102. The vehicle 102 obtains the data request message of the user 3, which is used to request sharing of the places where the user 3 likes to exercise; the vehicle 102 determines that the places where the user 3 likes to exercise belong to the data type corresponding to the output level of the data of the mobile phone 101, and sends the data request message of the user 3 to the mobile phone 101, and the mobile phone 101 shares the places where the user 3 likes to exercise with the vehicle 102 in the case that those places are stored in the mobile phone 101.
Thus, when the user 3 uses the vehicle 102 through the original data of different biometric features, the vehicle 102 not only receives the historical songs of the user 3 stored on the sound 123, but also receives the places where the user 3 likes to exercise stored on the mobile phone 101, so that the vehicle 102 can play the favorite songs of the user 3 according to the historical songs of the user 3, and the vehicle 102 can also be driven to a place where the user 3 likes to exercise according to the places stored on the mobile phone 101.
As shown in fig. 1 and fig. 12, when the user 3 uses the tablet computer 103 (the tablet computer 103 is an example of the second device) through the original voice of the user 3, the tablet computer 103 performs voiceprint recognition on the original voice of the user 3 to obtain the account B corresponding to the voiceprint of the user 3; the tablet computer 103 then sends the account B to one or more devices through the network, where the one or more devices may be devices connected to the same home network as the tablet computer 103, for example, the devices in fig. 1; here, one device is described taking the watch 122 (an example of the first device) as an example. After the watch 122 receives the account B, the watch 122 determines that the account B is not stored in the watch 122, and the watch 122 determines that the output level of the data of the watch 122 is the fourth level. Even if the tablet computer 103 sends the data request message of the user 3 to the watch 122, the watch 122 does not store data associated with the account B of the user 3, so the watch 122 does not share data with the tablet computer 103. When the user 3 uses the tablet computer 103 through the original 2D face of the user 3, the tablet computer 103 recognizes the original 2D face of the user 3 and obtains the account B as the account corresponding to the original 2D face of the user 3; the tablet computer 103 then sends the account B to one or more devices through the network, where the one or more devices may be devices connected to the same home network as the tablet computer 103, for example, the devices in fig. 1; here, one device is described taking the mobile phone 121 (another example of the first device) as an example.
After the mobile phone 121 receives the account number B, the mobile phone 121 determines that the account number B is not stored on the mobile phone 121 and that the output level of the data of the mobile phone 121 is the second level, and the mobile phone 121 sends this output level to the tablet computer 103. The tablet computer 103 obtains a data request message of the user 3, where the data request message of the user 3 is used to request sharing of the schedule data of the user 3. The tablet computer 103 determines that the schedule data belongs to a data type corresponding to the output level of the data of the mobile phone 121, and sends the data request message of the user 3 to the mobile phone 121. If the schedule data of the user 3 exists on the mobile phone 121, the mobile phone 121 shares the schedule data of the user 3 with the tablet computer 103.
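The sharing flow just described can be sketched as follows. This is an illustrative sketch and not part of the patent text: the second device obtains an account number from biometric recognition and sends it to a first device; a first device that does not store the account replies with the output level of its data; the second device then sends a data request, and the first device shares data only if it holds matching data for that account. All class names, data type names, and the level-to-type correspondence are hypothetical assumptions for illustration.

```python
# Hypothetical correspondence between output levels and shareable data types.
LEVEL_TO_TYPES = {
    2: {"schedule", "device_capability", "device_state"},  # second level (assumed)
    4: {"schedule"},                                       # fourth level (assumed)
}


class FirstDevice:
    def __init__(self, known_accounts, data, output_level):
        self.known_accounts = known_accounts  # account numbers stored locally
        self.data = data                      # {(account, data_type): value}
        self.output_level = output_level

    def report_level(self, account):
        # The account is not stored on this device: report the output level.
        if account not in self.known_accounts:
            return self.output_level
        return None  # account known locally: a different flow would apply

    def handle_request(self, account, data_type):
        # Share the requested data only if it exists for this account.
        return self.data.get((account, data_type))


class SecondDevice:
    def request_data(self, first, account, data_type):
        level = first.report_level(account)
        # Send the request only if the data type corresponds to the level.
        if level is not None and data_type in LEVEL_TO_TYPES.get(level, set()):
            return first.handle_request(account, data_type)
        return None


# Mirrors the two cases above: the watch stores no data for account B and
# shares nothing; the phone shares the schedule data of account B.
watch = FirstDevice(known_accounts=set(), data={}, output_level=4)
phone = FirstDevice(known_accounts=set(),
                    data={("B", "schedule"): "meeting at 10:00"},
                    output_level=2)
tablet = SecondDevice()
assert tablet.request_data(watch, "B", "schedule") is None
assert tablet.request_data(phone, "B", "schedule") == "meeting at 10:00"
```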
For another example, as shown in fig. 1, when one or more users use the television 131, the mobile phone 132, the tablet computer 133, the watch 134, the stereo 135, or the vehicle 136, the one or more users are all in a guest state; that is, the one or more users do not use any of these devices through any account number or through any original data of biological characteristics. In this case, personal data of the one or more users (such as historical viewing videos) is not stored on the television 131, the mobile phone 132, the tablet computer 133, the watch 134, the stereo 135, or the vehicle 136, and when any of these devices performs data sharing, only the non-personal data of each device, that is, the device capability data and/or device state data of the device, can be shared. For example, when the television 131 stores the data generated by the one or more users, the television 131 does not store each user in correspondence with that user's data, but only stores the non-personal data generated by all users using the television 131, and the television 131 may share only its device capability data or device status data with other devices.
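The guest-state rule above can be sketched as a simple filter. This is an illustrative assumption, not part of the patent text; the data type names are hypothetical.

```python
# Hypothetical names for the non-personal data types that remain shareable
# when a user is in the guest state (no account, no biometric raw data).
NON_PERSONAL_TYPES = {"device_capability", "device_state"}


def shareable(data_type, user_account=None):
    """Return True if data of data_type may be shared for this user."""
    if user_account is None:
        # Guest state: only non-personal data of the device may be shared.
        return data_type in NON_PERSONAL_TYPES
    return True  # non-guest sharing is governed by the output-level flow


assert shareable("device_capability") is True
assert shareable("history_video") is False
```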
The method 200 described above may also include step 250.
Step 250: the second device stores the first data shared by the first device.
The method provided in the embodiments of the present application is described in detail above with reference to fig. 2 to 12. The following describes in detail the apparatus provided in the embodiments of the present application with reference to fig. 13 to 14. It should be understood that the descriptions of the apparatus embodiments correspond to the descriptions of the method embodiments; therefore, for details not described here, reference may be made to the above method embodiments, and for the sake of brevity, some descriptions are omitted.
Fig. 13 shows a schematic structural diagram of an electronic device 1300 according to an embodiment of the present application.
In an implementation manner, the electronic device 1300 may be the first device in the method 200, and the electronic device 1300 may perform the steps performed by the first device in the method 200, and reference may be made to the description of the method 200 specifically, which is not repeated herein.
In another implementation manner, the electronic device 1300 may be the second device in the method 200, and the electronic device 1300 may perform the steps performed by the second device in the method 200, and specific reference may be made to the description of the method 200, which is not repeated herein.
The electronic device 1300 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, a cellular telephone, a personal digital assistant (personal digital assistant, PDA), an augmented reality (augmented reality, AR) device, a virtual reality (virtual reality, VR) device, an artificial intelligence (artificial intelligence, AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or a smart city device; the specific type of the electronic device is not particularly limited in the embodiments of the present application.
The electronic device 1300 may include a processor 1310, an external memory interface 1320, an internal memory 1321, a universal serial bus (universal serial bus, USB) interface 1330, a charge management module 1340, a power management module 1341, a battery 1342, an antenna 1, an antenna 2, a mobile communication module 1350, a wireless communication module 1360, an audio module 1370, a speaker 1370A, a receiver 1370B, a microphone 1370C, an earphone interface 1370D, a sensor module 1380, keys 1390, a motor 1391, an indicator 1392, a camera 1393, a display 1394, and a subscriber identification module (subscriber identification module, SIM) card interface 1395, etc. The sensor module 1380 may include, among other things, a pressure sensor 1380A, a gyroscope sensor 1380B, a barometric sensor 1380C, a magnetic sensor 1380D, an acceleration sensor 1380E, a distance sensor 1380F, a proximity light sensor 1380G, a fingerprint sensor 1380H, a temperature sensor 1380J, a touch sensor 1380K, an ambient light sensor 1380L, a bone conduction sensor 1380M, and the like.
It should be understood that the illustrated structure in the embodiments of the present invention does not constitute a specific limitation on the electronic device 1300. In other embodiments of the present application, the electronic device 1300 may include more or fewer components than those illustrated, may combine certain components, may split certain components, or may arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 1310 may include one or more processing units, such as: the processor 1310 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate an operation control signal according to the instruction operation code and a timing signal, to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 1310 for storing instructions and data. In some embodiments, the memory in the processor 1310 is a cache. The memory may hold instructions or data that the processor 1310 has just used or uses cyclically. If the processor 1310 needs to use the instructions or data again, it may call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 1310, and thus improves system efficiency.
In some embodiments, the processor 1310 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (serial data line, SDA) and a serial clock line (serial clock line, SCL). In some embodiments, the processor 1310 may contain multiple sets of I2C buses. The processor 1310 may be coupled to the touch sensor 1380K, a charger, a flash, the camera 1393, and the like through different I2C bus interfaces. For example: the processor 1310 may be coupled to the touch sensor 1380K through an I2C interface, so that the processor 1310 communicates with the touch sensor 1380K through the I2C bus interface to implement the touch function of the electronic device 1300.
The I2S interface may be used for audio communication. In some embodiments, the processor 1310 may contain multiple sets of I2S buses. The processor 1310 may be coupled to the audio module 1370 through an I2S bus to enable communication between the processor 1310 and the audio module 1370. In some embodiments, the audio module 1370 may transmit an audio signal to the wireless communication module 1360 through the I2S interface to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 1370 and the wireless communication module 1360 may be coupled through a PCM bus interface. In some embodiments, the audio module 1370 may also transmit an audio signal to the wireless communication module 1360 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 1310 with the wireless communication module 1360. For example: the processor 1310 communicates with a bluetooth module in the wireless communication module 1360 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 1370 may transmit an audio signal to the wireless communication module 1360 through a UART interface to realize a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect processor 1310 to peripheral devices such as display 1394, camera 1393, etc. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 1310 and camera 1393 communicate via a CSI interface, implementing the photographing functions of electronic device 1300. The processor 1310 communicates with the display screen 1394 via a DSI interface to implement the display functions of the electronic device 1300.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect processor 1310 with camera 1393, display 1394, wireless communication module 1360, audio module 1370, sensor module 1380, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 1330 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 1330 may be used to connect a charger to charge the electronic device 1300, or may be used to transfer data between the electronic device 1300 and a peripheral device. It can also be used to connect a headset and play audio through the headset. The interface may also be used to connect other electronic devices, such as an AR device.
It should be understood that the connection between the modules illustrated in the embodiments of the present invention is merely illustrative, and is not meant to limit the structure of the electronic device 1300. In other embodiments of the present application, the electronic device 1300 may also employ different interfaces in the above embodiments, or a combination of multiple interfaces.
The charge management module 1340 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 1340 may receive charging inputs of a wired charger through the USB interface 1330. In some wireless charging embodiments, the charge management module 1340 may receive wireless charging inputs through a wireless charging coil of the electronic device 1300. The charging management module 1340 charges the battery 1342 and can also supply power to the electronic device through the power management module 1341.
The power management module 1341 is used to connect the battery 1342, the charge management module 1340 and the processor 1310. The power management module 1341 receives input from the battery 1342 and/or the charge management module 1340, and provides power to the processor 1310, the internal memory 1321, the display 1394, the camera 1393, the wireless communication module 1360, and so forth. The power management module 1341 may also be used to monitor battery capacity, battery cycle times, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 1341 may also be provided in the processor 1310. In other embodiments, the power management module 1341 and the charge management module 1340 may be provided in the same device.
The wireless communication functions of the electronic device 1300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 1350, the wireless communication module 1360, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 1300 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 1350 may provide a solution for wireless communications, including 2G/3G/4G/5G, as applied to the electronic device 1300. The mobile communication module 1350 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 1350 may receive electromagnetic waves from the antenna 1, filter, amplify the received electromagnetic waves, and transmit the electromagnetic waves to a modem processor for demodulation. The mobile communication module 1350 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 for radiation. In some embodiments, at least some of the functional modules of the mobile communication module 1350 may be disposed in the processor 1310. In some embodiments, at least some of the functional modules of the mobile communication module 1350 may be provided in the same device as at least some of the modules of the processor 1310.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 1370A, the receiver 1370B, and the like), or displays images or videos through the display screen 1394. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 1350 or other functional modules, independent of the processor 1310.
The wireless communication module 1360 may provide solutions for wireless communication including wireless local area networks (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) networks), bluetooth (BT), global navigation satellite systems (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 1300. The wireless communication module 1360 may be one or more devices integrating at least one communication processing module. The wireless communication module 1360 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 1310. The wireless communication module 1360 may also receive signals to be transmitted from the processor 1310, frequency modulate them, amplify them, and convert them to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 1350 of electronic device 1300 are coupled, and antenna 2 and wireless communication module 1360 are coupled, such that electronic device 1300 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 1300 implements display functions through a GPU, a display screen 1394, an application processor, and the like. The GPU is a microprocessor for processing images and is connected with the display screen 1394 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 1310 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 1394 is used for displaying images, videos, and the like. The display screen 1394 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 1300 may include 1 or N display screens 1394, where N is a positive integer greater than 1.
The electronic device 1300 can realize a photographing function through an ISP, a camera 1393, a video codec, a GPU, a display screen 1394, an application processor, and the like.
The ISP is used to process the data fed back by camera 1393. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 1393.
Camera 1393 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 1300 may include 1 or N cameras 1393, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals in addition to digital image signals. For example, when the electronic device 1300 selects a frequency bin, the digital signal processor is used to perform Fourier transform on the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 1300 may support one or more video codecs. In this way, the electronic device 1300 may play or record video in a variety of encoding formats, such as moving picture experts group (moving picture experts group, MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (neural-network, NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between human brain neurons, it can rapidly process input information and can also continuously learn by itself. Applications such as intelligent cognition of the electronic device 1300, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented by the NPU.
The external memory interface 1320 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 1300. The external memory card communicates with the processor 1310 via an external memory interface 1320 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 1321 may be used to store computer-executable program code that includes instructions. The internal memory 1321 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 1300 (e.g., audio data, phonebook, etc.), and so forth. In addition, the internal memory 1321 may include a high-speed random access memory, and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 1310 performs various functional applications of the electronic device 1300, as well as data processing, by executing instructions stored in the internal memory 1321, and/or instructions stored in a memory provided in the processor.
The electronic device 1300 may implement audio functions through an audio module 1370, a speaker 1370A, a receiver 1370B, a microphone 1370C, an earphone interface 1370D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 1370 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 1370 may also be used to encode and decode audio signals. In some embodiments, the audio module 1370 may be provided in the processor 1310, or a part of functional modules of the audio module 1370 may be provided in the processor 1310.
The speaker 1370A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The electronic device 1300 may listen to music or answer a hands-free call through the speaker 1370A.
The receiver 1370B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 1300 answers a call or a voice message, the voice can be heard by placing the receiver 1370B close to the human ear.
The microphone 1370C, also called a "mic" or "mouthpiece", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can speak with the mouth close to the microphone 1370C to input a sound signal into the microphone 1370C. The electronic device 1300 may be provided with at least one microphone 1370C. In other embodiments, the electronic device 1300 may be provided with two microphones 1370C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 1300 may also be provided with three, four, or more microphones 1370C to implement sound signal collection, noise reduction, sound source identification, directional recording, and the like.
The earphone interface 1370D is used to connect a wired earphone. The earphone interface 1370D may be the USB interface 1330, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a cellular telecommunications industry association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 1380A is configured to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 1380A may be disposed on the display screen 1394. There are various types of pressure sensors 1380A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates with conductive material. When a force is applied to the pressure sensor 1380A, the capacitance between the electrodes changes, and the electronic device 1300 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 1394, the electronic device 1300 detects the intensity of the touch operation through the pressure sensor 1380A. The electronic device 1300 may also calculate the location of the touch based on the detection signal of the pressure sensor 1380A. In some embodiments, touch operations that act on the same touch location but have different touch operation intensities may correspond to different operation instructions. For example: when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
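The intensity-to-instruction mapping described above can be sketched as a simple threshold rule. This is an illustrative sketch, not part of the patent text; the threshold value, its scale, and the action names are assumptions.

```python
# Assumed normalized touch intensity scale in [0, 1]; the actual first
# pressure threshold in a real device is implementation-defined.
FIRST_PRESSURE_THRESHOLD = 0.5


def sms_icon_action(intensity):
    """Map a touch intensity on the short message icon to an instruction."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_sms"   # light press: view the short message
    return "new_sms"        # firm press: create a new short message


assert sms_icon_action(0.2) == "view_sms"
assert sms_icon_action(0.8) == "new_sms"
```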
The gyro sensor 1380B may be used to determine the motion posture of the electronic device 1300. In some embodiments, the angular velocity of the electronic device 1300 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 1380B. The gyro sensor 1380B may be used for anti-shake photographing. For example, when the shutter is pressed, the gyro sensor 1380B detects the shake angle of the electronic device 1300, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 1300 through reverse motion, thereby implementing anti-shake. The gyro sensor 1380B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 1380C is used to measure air pressure. In some embodiments, the electronic device 1300 calculates the altitude from the barometric pressure value measured by the barometric pressure sensor 1380C, to assist in positioning and navigation.
The magnetic sensor 1380D includes a Hall sensor. The electronic device 1300 may detect the opening and closing of a flip holster using the magnetic sensor 1380D. In some embodiments, when the electronic device 1300 is a flip phone, the electronic device 1300 may detect the opening and closing of the flip according to the magnetic sensor 1380D. Features such as automatic unlocking upon opening the flip are then set according to the detected open or closed state of the holster or of the flip.
The acceleration sensor 1380E may detect the magnitude of acceleration of the electronic device 1300 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 1300 is stationary. The acceleration sensor 1380E may also be used to recognize the posture of the electronic device, and is applied in landscape/portrait screen switching, pedometers, and other applications.
A distance sensor 1380F for measuring distance. The electronic device 1300 may measure the distance by infrared or laser. In some embodiments, the electronic device 1300 may range using the distance sensor 1380F to achieve fast focus.
The proximity light sensor 1380G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 1300 emits infrared light outward through the light-emitting diode. The electronic device 1300 detects infrared light reflected from nearby objects using the photodiode. When sufficient reflected light is detected, it may be determined that an object is near the electronic device 1300. When insufficient reflected light is detected, the electronic device 1300 may determine that there is no object near the electronic device 1300. The electronic device 1300 may use the proximity light sensor 1380G to detect that the user holds the electronic device 1300 close to the ear during a call, so as to automatically turn off the screen to save power. The proximity light sensor 1380G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 1380L is used to sense ambient light levels. The electronic device 1300 can adaptively adjust the display 1394 brightness based on perceived ambient light levels. The ambient light sensor 1380L may also be used to automatically adjust white balance during photographing. The ambient light sensor 1380L may also cooperate with the proximity light sensor 1380G to detect if the electronic device 1300 is in a pocket to prevent false touches.
The fingerprint sensor 1380H is used to collect a fingerprint. The electronic device 1300 may use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint photographing, fingerprint call answering, and the like.
The temperature sensor 1380J is used to detect temperature. In some embodiments, the electronic device 1300 executes a temperature processing strategy using the temperature detected by the temperature sensor 1380J. For example, when the temperature reported by the temperature sensor 1380J exceeds a threshold, the electronic device 1300 reduces the performance of a processor located near the temperature sensor 1380J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 1300 heats the battery 1342 to avoid an abnormal shutdown caused by low temperature. In other embodiments, when the temperature is below a further threshold, the electronic device 1300 boosts the output voltage of the battery 1342 to avoid an abnormal shutdown caused by low temperature.
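The three-threshold strategy above can be sketched as a small policy function. The threshold values below are illustrative placeholders; the text does not specify them:

```python
# Hypothetical sketch of the temperature processing strategy described above.
# All threshold values are assumptions for illustration only.

HIGH_TEMP_C = 45.0        # above this, throttle the processor near the sensor
LOW_TEMP_HEAT_C = 0.0     # below this, heat the battery
LOW_TEMP_BOOST_C = -10.0  # below this, also boost the battery output voltage


def thermal_actions(temp_c: float) -> list:
    """Return the list of actions the device takes at the given temperature."""
    actions = []
    if temp_c > HIGH_TEMP_C:
        actions.append("throttle_processor")    # reduce performance / power
    if temp_c < LOW_TEMP_HEAT_C:
        actions.append("heat_battery")          # avoid abnormal low-temp shutdown
    if temp_c < LOW_TEMP_BOOST_C:
        actions.append("boost_battery_voltage")  # keep supply voltage stable
    return actions


print(thermal_actions(50.0))   # ['throttle_processor']
print(thermal_actions(-15.0))  # ['heat_battery', 'boost_battery_voltage']
```

Note that the two low-temperature measures are cumulative: at a sufficiently low temperature the device both heats the battery and boosts its output voltage.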
The touch sensor 1380K is also referred to as a "touch panel". The touch sensor 1380K may be disposed on the display screen 1394; together, the touch sensor 1380K and the display screen 1394 form a touchscreen. The touch sensor 1380K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display screen 1394. In other embodiments, the touch sensor 1380K may also be disposed on a surface of the electronic device 1300 other than where the display screen 1394 is located.
The bone conduction sensor 1380M may acquire a vibration signal. In some embodiments, the bone conduction sensor 1380M may acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 1380M may also contact the pulse of a human body to receive a blood-pressure pulsation signal. In some embodiments, the bone conduction sensor 1380M may also be provided in a headset, combined into a bone conduction headset. The audio module 1370 may parse out a voice signal based on the vibration signal of the vibrating bone of the vocal part obtained by the bone conduction sensor 1380M, so as to implement a voice function. The application processor may parse out heart rate information based on the blood-pressure pulsation signal acquired by the bone conduction sensor 1380M, so as to implement a heart rate detection function.
The keys 1390 include a power key, a volume key, and the like. The keys 1390 may be mechanical keys or touch keys. The electronic device 1300 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 1300.
The motor 1391 may generate a vibration alert. The motor 1391 may be used for incoming-call vibration alerts as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. Touch operations acting on different areas of the display screen 1394 may also correspond to different vibration feedback effects. Different application scenarios (such as time reminders, received messages, alarm clocks, games, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 1392 may be an indicator light, which may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 1395 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 1395 or removed from the SIM card interface 1395 to make contact with or be separated from the electronic device 1300. The electronic device 1300 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 1395 may support Nano SIM cards, Micro SIM cards, and the like. The same SIM card interface 1395 can be used to insert multiple cards at the same time. The types of the multiple cards may be the same or different. The SIM card interface 1395 may also be compatible with different types of SIM cards. The SIM card interface 1395 may also be compatible with external memory cards. The electronic device 1300 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 1300 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 1300 and cannot be separated from the electronic device 1300.
The software system of the electronic device 1300 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiment of the invention takes an Android system with a layered architecture as an example, and illustrates a software structure of the electronic device 1300.
Fig. 14 is a schematic software structure of an electronic device 1300 according to an embodiment of the present invention.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, an application layer, an application framework layer, Android runtime and system libraries, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 14, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 14, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can acquire the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide communication functionality for the electronic device 1300, such as management of call status (including connected, hung up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay, without requiring user interaction. For example, the notification manager is used to notify that a download is complete, to give message alerts, and so on. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, an indicator light blinks, and so on.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files, etc. The media libraries may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least includes a display driver, a camera driver, an audio driver, and a sensor driver.
The workflow of the electronic device 1300 software and hardware is illustrated below in connection with capturing a photo scene.
When the touch sensor 1380K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a tap and the control corresponding to the tap is the control of the camera application icon: the camera application calls an interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 1393.
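The event flow above (kernel packs a raw event, the framework maps it to a control, the control launches an application) can be sketched as follows. The control table, coordinates, and function names are hypothetical illustrations, not part of the original disclosure:

```python
# Hypothetical sketch of the touch-event pipeline described above.
from dataclasses import dataclass
from typing import Optional
import time


@dataclass
class RawInputEvent:
    """Raw input event produced by the kernel layer: coordinates + timestamp."""
    x: int
    y: int
    timestamp: float


# Assumed hit-table mapping controls to screen regions (x0, y0, x1, y1).
CONTROLS = {"camera_icon": (0, 0, 100, 100)}


def kernel_make_event(x: int, y: int) -> RawInputEvent:
    """Kernel layer: turn a touch interrupt into a raw input event."""
    return RawInputEvent(x, y, time.time())


def framework_dispatch(event: RawInputEvent) -> Optional[str]:
    """Framework layer: identify which control the event falls on."""
    for name, (x0, y0, x1, y1) in CONTROLS.items():
        if x0 <= event.x <= x1 and y0 <= event.y <= y1:
            return name
    return None


def handle_touch(x: int, y: int) -> str:
    """Application layer: react to the control resolved by the framework."""
    control = framework_dispatch(kernel_make_event(x, y))
    if control == "camera_icon":
        return "start camera app, open camera driver, capture image"
    return "no-op"


print(handle_touch(50, 50))  # tap inside the camera icon region
```

A tap outside every registered region resolves to no control and is ignored, mirroring how the framework only forwards events that hit a known control.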
The present application also provides a computer readable medium having stored thereon a computer program which, when executed by a computer, implements the method of any of the method embodiments described above.
The present application also provides a computer program product which, when executed by a computer, implements the method of any of the method embodiments described above.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A method of data sharing, comprising:
the method comprises the steps that first equipment obtains registration information of a first user from second equipment, wherein the registration information of the first user comprises an account number of the first user or original data of biological characteristics of the first user;
the first device determines the output grade of the data of the first device according to the registration information of the first user; the output grade of the data of the first device corresponds to different data types, and data of the different data types have different highest risks;
the first device obtains a first data request message from the second device, wherein the first data request message is used for requesting sharing of first data of the first user;
the first device determines that the first data belongs to data of a data type corresponding to the output level of the data of the first device, and sends the first data to the second device;
in the case that the registration information of the first user includes the original data of the biological characteristics of the first user, the determining, by the first device, the output grade of the data of the first device according to the registration information of the first user includes:
The first device identifies the original data of the biological characteristics of the first user and determines whether an account corresponding to the original data of the biological characteristics of the first user is obtained or not;
determining that the output grade of the data of the first device is a fourth grade under the condition that the first device determines that the account corresponding to the original data of the biological characteristics of the first user is not obtained;
determining whether an account corresponding to the original data of the biological characteristics of the first user obtained by the first device exists in all the accounts stored in the second device under the condition that the first device obtains the account corresponding to the original data of the biological characteristics of the first user;
determining that the output grade of the data of the first device is a second grade under the condition that an account corresponding to the original data of the biological characteristics of the first user obtained by the first device exists in the second device;
and under the condition that an account number corresponding to the original data of the biological characteristics of the first user obtained by the first device does not exist in the second device, determining that the output grade of the data of the first device is a third grade.
2. The method of claim 1, wherein after the determining that the output grade of the data of the first device is the third grade, the method further comprises:
determining that the output level of the data of the first device is a first sub-level in a third level in the case that the registration information of the first user is a 3D face, fingerprint, iris or DNA of the first user;
determining that the output level of the data of the first device is a second sub-level in a third level if the registration information of the first user is a 2D face or vein of the first user; or
determining that the output level of the data of the first device is a third sub-level in a third level in the case that the registration information of the first user is the voice or signature of the first user.
3. The method of claim 1, wherein, in the case where the registration information of the first user includes an account of the first user, the determining, by the first device, an outbound rank of data of the first device according to the registration information of the first user includes:
the first device determines whether an account number of the first user exists;
under the condition that the first device stores the account number of the first user, determining that the output grade of the data of the first device is a second grade;
and under the condition that the first device does not store the account number of the first user, determining that the output grade of the data of the first device is a fourth grade.
4. A method according to any one of claims 1 to 3, wherein the data type corresponding to the second level is a second type, the data corresponding to the second type comprising general location data, video data, logistics data, schedule plan data, preference data, device capability data and/or device status data; and/or,
the data type corresponding to the third level is a third type, and the data corresponding to the third type comprises video data, logistics data, schedule plan data, preference data, device capability data and/or device status data; and/or,
the data type corresponding to the fourth level is a fourth type, and the data corresponding to the fourth type comprises device capability data and/or device status data;
wherein the general location data is used to represent information of the first user location.
5. The method of claim 2, wherein the data type corresponding to the first sub-level is a first sub-type, and the data corresponding to the first sub-type includes photo data, recorded video data, device capability data and/or device status data; and/or,
the data type corresponding to the second sub-level is a second sub-type, and the data corresponding to the second sub-type comprises logistics data, schedule plan data, device capability data and/or device status data; and/or,
the data type corresponding to the third sub-level is a third sub-type, and the data corresponding to the third sub-type includes preference data, viewing video data, device capability data and/or device status data.
6. A method according to any one of claims 1 to 3, further comprising:
and the first device sends the output grade of the data of the first device to the second device.
7. A method of acquiring data, comprising:
the method comprises the steps that second equipment obtains registration information of a first user input by the first user, wherein the registration information of the first user comprises original data of biological characteristics of the first user;
The second device sends registration information of the first user to the first device;
the second device receives first information sent by the first device, wherein the first information is used for indicating an account corresponding to the original data of the biological characteristics of the first user determined by the first device;
the second device determines the output grade of the data of the first device according to the first information;
the second device obtains a first data request message of the first user, wherein the first data request message is used for requesting sharing of first data of the first user;
the second device determines that the first data belongs to data of a data type corresponding to the output level of the data of the first device; data of different data types have different highest risks;
the second device sends the first data request message and receives the first data sent by the first device;
the second device determining, according to the first information, an egress level of data of the first device includes:
the second device determines whether an account number corresponding to the original data of the biological characteristics of the first user determined by the first device is stored in the second device;
under the condition that the second device stores the account corresponding to the original data of the biological characteristics of the first user determined by the first device, determining that the output grade of the data of the first device is a second grade;
and under the condition that the second device does not store the account number corresponding to the original data of the biological characteristics of the first user determined by the first device, determining that the output grade of the data of the first device is a third grade.
8. The method of claim 7, wherein after the determining that the output level of the data of the first device is the third level, the method further comprises:
determining that the output level of the data of the first device is a first sub-level in a third level in the case that the registration information of the first user is a 3D face, fingerprint, iris or DNA of the first user;
determining that the output level of the data of the first device is a second sub-level in a third level if the registration information of the first user is a 2D face or vein of the first user; or
determining that the output level of the data of the first device is a third sub-level in a third level in the case that the registration information of the first user is the voice or signature of the first user.
9. The method according to claim 7 or 8, wherein the data type corresponding to the second level is a second type, the data corresponding to the second type comprising general location data, video data, logistics data, schedule plan data, preference data, device capability data and/or device status data; and/or,
the data type corresponding to the third level is a third type, and the data corresponding to the third type comprises video data, logistics data, schedule plan data, preference data, device capability data and/or device status data;
wherein the general location data is used to represent information of the first user location.
10. The method of claim 8, wherein the data type corresponding to the first sub-level is a first sub-type, and the data corresponding to the first sub-type includes photo data, recorded video data, device capability data and/or device status data; and/or,
the data type corresponding to the second sub-level is a second sub-type, and the data corresponding to the second sub-type comprises logistics data, schedule plan data, device capability data and/or device status data; and/or,
the data type corresponding to the third sub-level is a third sub-type, and the data corresponding to the third sub-type includes preference data, viewing video data, device capability data and/or device status data.
11. The method according to claim 7 or 8, characterized in that the method further comprises:
and the second device sends the output grade of the data of the first device to the first device.
12. A method of acquiring data, comprising:
the method comprises the steps that second equipment obtains registration information of a first user, wherein the registration information of the first user comprises original data of biological characteristics of the first user;
the second device identifies the original data of the biological characteristics of the first user and determines whether the second device can obtain an account corresponding to the original data of the biological characteristics of the first user;
when the second device obtains an account corresponding to the original data of the biological characteristics of the first user, the second device sends second information to the first device, wherein the second information is used for indicating the second device to obtain the account corresponding to the original data of the biological characteristics of the first user; and
The second device receives third information sent by the first device, wherein the third information is used for indicating whether the first device has an account corresponding to the original data of the biological characteristics of the first user obtained by the second device;
the second device determines the output grade of the data of the first device according to the third information;
the second device obtains a data request message of the first user, wherein the data request message is used for requesting the first device to share first data stored on the first device by the first user;
the second device determines that the first data belongs to data of a data type corresponding to the output level of the data of the first device, and the data of different data types have different highest risks;
the second device sends a data request message of the first user to the first device and receives first data sent by the first device;
the second device sends registration information of the first user to the first device under the condition that the second device does not obtain an account corresponding to the original data of the biological characteristics of the first user;
The second device receives a third instruction sent by the first device, wherein the third instruction is used for indicating that the first device does not obtain an account corresponding to the original data of the biological characteristics of the first user;
and the second device determines that the output level of the data of the first device is a fourth level according to the third instruction.
13. The method of claim 12, wherein the second device determining the output grade of the data of the first device according to the third information comprises:
determining that the output grade of the data of the first device is a second grade under the condition that the first device stores an account corresponding to the original data of the biological characteristics of the first user obtained by the second device;
and under the condition that the first device does not store the account number corresponding to the original data of the biological characteristics of the first user, determining that the output grade of the data of the first device is a fourth grade.
14. The method according to claim 12, wherein the method further comprises:
the second device sends registration information of the first user to the first device under the condition that the second device does not obtain an account corresponding to the original data of the biological characteristics of the first user;
The second device receives fourth information sent by the first device, wherein the fourth information is used for indicating an account corresponding to the original data of the biological characteristics of the first user determined by the first device;
and the second device determines that the output grade of the data of the first device is a third grade according to the fourth information.
15. The method of claim 14, wherein after the determining that the output grade of the data of the first device is the third grade, the method further comprises:
determining that the output level of the data of the first device is a first sub-level in a third level in the case that the registration information of the first user is a 3D face, fingerprint, iris or DNA of the first user;
determining that the output level of the data of the first device is a second sub-level in a third level if the registration information of the first user is a 2D face or vein of the first user; or
determining that the output level of the data of the first device is a third sub-level in a third level in the case that the registration information of the first user is the voice or signature of the first user.
16. The method of claim 13, wherein the data type corresponding to the second level is a second type, the data corresponding to the second type comprising general location data, video data, logistics data, schedule plan data, preference data, device capability data and/or device status data; and/or,
the data type corresponding to the fourth level is a fourth type, and the data corresponding to the fourth type comprises device capability data and/or device status data;
wherein the general location data is used to represent information of the first user location.
17. The method of claim 14, wherein the data type corresponding to the third level is a third type, and the data corresponding to the third type includes video data, logistics data, schedule plan data, preference data, device capability data and/or device status data.
18. The method of claim 15, wherein the data type corresponding to the first sub-level is a first sub-type, and the data corresponding to the first sub-type includes photo data, recorded video data, device capability data and/or device status data; and/or,
the data type corresponding to the second sub-level is a second sub-type, and the data corresponding to the second sub-type comprises logistics data, schedule plan data, device capability data and/or device status data; and/or,
the data type corresponding to the third sub-level is a third sub-type, and the data corresponding to the third sub-type includes preference data, viewing video data, device capability data and/or device status data.
19. The method according to any one of claims 12 to 15, further comprising:
the second device sending the output level of the data of the first device to the first device.
20. A device for sharing data, comprising: a processor coupled to a memory;
the memory is used for storing a computer program;
the processor is configured to execute a computer program stored in the memory to cause the apparatus to perform the method of any one of claims 1 to 19.
CN202010076673.0A 2020-01-23 2020-01-23 Data sharing method and device Active CN111339513B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010076673.0A CN111339513B (en) 2020-01-23 2020-01-23 Data sharing method and device
PCT/CN2020/128996 WO2021147483A1 (en) 2020-01-23 2020-11-16 Data sharing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010076673.0A CN111339513B (en) 2020-01-23 2020-01-23 Data sharing method and device

Publications (2)

Publication Number Publication Date
CN111339513A CN111339513A (en) 2020-06-26
CN111339513B true CN111339513B (en) 2023-05-09

Family

ID=71181431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010076673.0A Active CN111339513B (en) 2020-01-23 2020-01-23 Data sharing method and device

Country Status (2)

Country Link
CN (1) CN111339513B (en)
WO (1) WO2021147483A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111339513B (en) * 2020-01-23 2023-05-09 Huawei Technologies Co., Ltd. Data sharing method and device

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100708A (en) * 2015-06-26 2015-11-25 Xiaomi Inc. Request processing method and device
CN106534280A (en) * 2016-10-25 2017-03-22 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Data sharing method and device
US9742760B1 (en) * 2014-06-16 2017-08-22 TouchofModern, Inc. System and method for improving login and registration efficiency to network-accessed data
CN107103245A (en) * 2016-02-23 2017-08-29 ZTE Corporation File rights management method and device
CN108600793A (en) * 2018-04-08 2018-09-28 Beijing QIYI Century Science & Technology Co., Ltd. Hierarchical control method and device
CN108833357A (en) * 2018-05-22 2018-11-16 China Internet Network Information Center Information inspection method and device
CN108985255A (en) * 2018-08-01 2018-12-11 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Data processing method, device, computer readable storage medium and electronic equipment
CN108985089A (en) * 2018-08-01 2018-12-11 Tsinghua University Internet data sharing system
CN109035937A (en) * 2018-08-29 2018-12-18 Wuhu New Mission Education Technology Co., Ltd. Authorized sharing network education system
CN109299047A (en) * 2018-09-21 2019-02-01 Shenzhen Jiuzhou Electric Co., Ltd. Distributed system data sharing method and device, data sharing distributed system
CN109325742A (en) * 2018-09-26 2019-02-12 Ping An Puhui Enterprise Management Co., Ltd. Business approval method, apparatus, computer equipment and storage medium
CN109885999A (en) * 2019-01-29 2019-06-14 Nubia Technology Co., Ltd. Account registration method, terminal and computer-readable storage medium
CN110198362A (en) * 2019-05-05 2019-09-03 Huawei Technologies Co., Ltd. Method and system for adding a smart home device to a contact
JP2019159974A (en) * 2018-03-15 2019-09-19 Omron Corporation Authentication device, authentication method and authentication program
CN110287036A (en) * 2019-05-09 2019-09-27 Huawei Technologies Co., Ltd. Collaborative sharing method, apparatus and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9372975B2 (en) * 2011-12-19 2016-06-21 Fujitsu Limited Secure recording and sharing system of voice memo
US10916243B2 (en) * 2016-12-27 2021-02-09 Amazon Technologies, Inc. Messaging from a shared device
CN111339513B (en) * 2020-01-23 2023-05-09 Huawei Technologies Co., Ltd. Data sharing method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Survey on Various File Sharing Methods in P2P Networks; S. Vimal et al.; 2017 Third International Conference on Science Technology Engineering & Management (ICONSTEM); 2018-01-18; pp. 305-310 *
Design and Implementation of a Collaboration Architecture for a Distributed Home Entertainment System; Zheng Lijiao; China Master's Theses Full-text Database, Information Science and Technology; 2020-01-15 (No. 1); full text *

Also Published As

Publication number Publication date
WO2021147483A1 (en) 2021-07-29
CN111339513A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
US11868463B2 (en) Method for managing application permission and electronic device
CN113722058B (en) Resource calling method and electronic equipment
CN111030990B (en) Method for establishing communication connection, client and server
CN114095599B (en) Message display method and electronic equipment
WO2022160991A1 (en) Permission control method and electronic device
CN113821767A (en) Application program authority management method and device and electronic equipment
CN116070035B (en) Data processing method and electronic equipment
WO2021169370A1 (en) Method for cross-device allocation of service elements, terminal device, and storage medium
CN110248037A (en) A kind of identity document scan method and device
CN110138999B (en) Certificate scanning method and device for mobile terminal
CN111339513B (en) Data sharing method and device
CN114006698B (en) token refreshing method and device, electronic equipment and readable storage medium
WO2022022319A1 (en) Image processing method, electronic device, image processing system and chip system
CN116561085A (en) Picture sharing method and electronic equipment
CN115701018A (en) Method for safely calling service, method and device for safely registering service
CN114254334A (en) Data processing method, device, equipment and storage medium
CN113867851A (en) Electronic equipment operation guide information recording method, electronic equipment operation guide information acquisition method and terminal equipment
CN114205318B (en) Head portrait display method and electronic equipment
CN116048831B (en) Target signal processing method and electronic equipment
CN116095224B (en) Notification display method and terminal device
CN114826636B (en) Access control system and related methods and apparatus
CN116527266A (en) Data aggregation method and related equipment
CN114372220A (en) Method and device for processing webpage access behaviors
CN117177216A (en) Information interaction method and device and electronic equipment
CN116266088A (en) Application card display method and device, terminal equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant