CN111339513A - Data sharing method and device


Info

Publication number
CN111339513A
CN111339513A
Authority
CN
China
Prior art keywords
data
user
equipment
level
type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010076673.0A
Other languages
Chinese (zh)
Other versions
CN111339513B
Inventor
阙鑫地
林嵩晧
林于超
张舒博
郑理文
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202010076673.0A
Publication of CN111339513A
Priority to PCT/CN2020/128996 (WO2021147483A1)
Application granted
Publication of CN111339513B
Active legal status
Anticipated expiration legal status

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G06F21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/45 - Structures or tools for the administration of authentication
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 - Data switching networks
    • H04L12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803 - Home automation networks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/50 - Network services
    • H04L67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application provides a data sharing method and device. The method includes: a device whose data is requested acquires registration information of a first user from a device requesting the data, where the registration information of the first user includes an account of the first user or raw data of a biometric feature of the first user; the requested device determines its data outbound level according to the registration information of the first user; and the requested device acquires a first data request message of the first user from the requesting device, where the first data request message requests sharing of first data of the first user. In this way, the requested device can determine, according to the different forms of the first user's registration information, whether the first data belongs to a data type corresponding to its data outbound level, achieving data sharing of the first user's data between the requesting device and the requested device and providing the first user with a differentiated, personalized experience.

Description

Data sharing method and device
Technical Field
The present application relates to the field of information processing, and more particularly, to a method and apparatus for data sharing.
Background
With the rapid development of smart devices and the internet of things (IoT), collaborative integration of multiple smart devices has become an industry consensus. To realize such collaboration, user data and device data must be able to flow and be shared among multiple smart devices or multiple accounts. A multi-smart-device scenario, such as a home scenario, includes private devices (e.g., a mobile phone or a watch) and shared home devices (e.g., a television, a vehicle, or a speaker). Existing approaches cannot provide a differentiated, personalized experience that depends on which user is using the smart device.
Disclosure of Invention
The application provides a data sharing method and device that can provide different personalized experiences according to the user.
In a first aspect, a data sharing method is provided, including: a first device obtains registration information of a first user from a second device, where the registration information of the first user includes an account of the first user or raw data of a biometric feature of the first user; the first device determines a data outbound level of the first device according to the registration information of the first user, where different data outbound levels of the first device correspond to different data types, and data of different data types carries different maximum risks; the first device obtains a first data request message of the first user from the second device, where the first data request message requests sharing of first data of the first user; and when the first device determines that the first data belongs to a data type corresponding to the data outbound level of the first device, the first device sends the first data to the second device.
Optionally, the number of the accounts of the first user may be one or more.
The account may be, for example, a mobile phone number, a user name set by the user, or an email address.
The registration information of the first user is the registration information that the first user entered into the second device; that is, the first user uses the second device through this registration information.
The second device may be a device with a biometric recognition function; for example, the second device may be a mobile phone, a vehicle, or a tablet computer. Alternatively, the second device may be a device that can capture biometric features but cannot recognize them; for example, the second device may be a watch, a speaker, or a television.
The first device is a device other than the second device in the same network as the second device; alternatively, the first device is a device selected according to the functions of all devices in that network, for example, a device with a biometric recognition function.
Optionally, the devices in the network may be mutually trusted devices. For example, the network may be a home network in which the devices trust each other; as another example, the network may be a work network in which the devices trust each other. A device in the network is not only a device connected to the network, but may also be a device added to the network by scanning a two-dimensional (identification) code, which may be preset.
The first device may also be a device other than the second device in the same group of a network as the second device; alternatively, the first device is a device selected according to the functions of all devices in that group, for example, a device with a biometric recognition function.
Optionally, a plurality of groups may be preset in the network, and the devices within each group trust each other. For example, the network may be a home network in which a family group and a visitor group are preset; the family group includes the first device and the second device, and the first device and second device in the family group trust each other. Devices in the visitor group and devices in the family group do not trust each other, but they may exchange non-private information. A device in the family group may be a device connected to the home network or a device added to it by scanning the two-dimensional code, whereas a device in the visitor group can only join by connecting to the home network.
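As an illustration only, the family/visitor grouping described above can be sketched as follows; the class name `HomeNetwork` and its methods are hypothetical, not part of the patent.

```python
FAMILY, VISITOR = "family", "visitor"

class HomeNetwork:
    """Illustrative sketch of the preset family/visitor groups described above."""

    def __init__(self):
        self.groups = {FAMILY: set(), VISITOR: set()}

    def add_device(self, name, group, via_qr=False):
        # Family devices may join by connecting to the network or by scanning
        # the preset two-dimensional code; visitor devices join only by
        # connecting to the network.
        if group == VISITOR and via_qr:
            raise ValueError("visitor devices join by network connection only")
        self.groups[group].add(name)

    def may_share_private(self, a, b):
        # Devices trust each other (and may share private data) only when both
        # are in the family group; family and visitor devices may still
        # exchange non-private information.
        fam = self.groups[FAMILY]
        return a in fam and b in fam
```

Under this sketch, a phone enrolled by QR code and a connected television in the family group trust each other, while a visitor tablet does not share private data with either.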
Optionally, the first data may be any data. For example, the first data may be the user's real-time location data, data on places where the user likes to be entertained, photos taken, recorded video data, watched video data, historical playlist data, and the like.
First, the first device obtains the registration information of the first user of the second device, where the registration information includes an account of the first user or raw data of a biometric feature of the first user, and the first device can determine the data outbound level of the first device according to this registration information. Different data outbound levels of the first device correspond to different data types, and data of different data types carries different maximum risks. Second, the first device obtains a first data request message of the first user from the second device, where the first data request message requests sharing of first data of the first user. Finally, when the first device determines that the first data belongs to a data type corresponding to its data outbound level, the first device sends the first data to the second device. In this way, the first device can determine its data outbound level according to the form of the first user's registration information and share the first data generated by the first user on the first device with the second device when the first data belongs to a data type corresponding to that level, thereby providing a differentiated, personalized experience for the first user.
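The level decision just described can be sketched as follows; `recognize` stands in for the first device's biometric matcher, and all names are illustrative rather than taken from the patent.

```python
LEVEL_2, LEVEL_3, LEVEL_4 = 2, 3, 4

def outbound_level(registration, first_accounts, second_accounts, recognize):
    """Sketch of the first device's data outbound level decision.

    registration is ("account", value) or ("biometric", raw_data);
    recognize maps raw biometric data to an account, or returns None.
    """
    kind, value = registration
    if kind == "account":
        # Account form: second level if the account is registered on the
        # first device, otherwise fourth level.
        return LEVEL_2 if value in first_accounts else LEVEL_4
    # Biometric form: try to map the raw data to an account.
    account = recognize(value)
    if account is None:
        return LEVEL_4  # no matching account on the first device
    if account in second_accounts:
        return LEVEL_2  # the same account is also registered on the requester
    return LEVEL_3      # recognized user, but no shared account
```

The first level (the device requesting its own data) is omitted here, since no cross-device decision is needed in that case.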
With reference to the first aspect, in certain implementations of the first aspect, where the registration information of the first user includes raw data of a biometric feature of the first user, the determining, by the first device, of the data outbound level of the first device according to the registration information includes: the first device recognizes the raw biometric data of the first user and determines whether an account corresponding to that raw data is obtained; when the first device does not obtain a corresponding account, the data outbound level of the first device is determined to be the fourth level; when the first device obtains a corresponding account, the first device determines whether that account exists among all accounts stored in the second device; when the account exists in the second device, the data outbound level of the first device is determined to be the second level; and when the account does not exist in the second device, the data outbound level of the first device is determined to be the third level.
With reference to the first aspect, in certain implementations of the first aspect, where the registration information of the first user includes raw data of a biometric feature of the first user, the determining, by the first device, of the data outbound level of the first device according to the registration information includes: the first device determines, according to the registration information, whether an account corresponding to the raw biometric data of the first user is obtained; when no corresponding account is obtained on the first device, the data outbound level of the first device is determined to be the fourth level; when the first device obtains a corresponding account, the first device sends seventh information to the second device, where the seventh information indicates that the first device has obtained an account corresponding to the raw biometric data of the first user; the first device receives eighth information sent by the second device, where the eighth information indicates whether the account determined by the first device exists in the second device; when the account determined by the first device is stored in the second device, the data outbound level of the first device is determined to be the second level; and when that account is not stored in the second device, the data outbound level of the first device is determined to be the third level.
With reference to the first aspect, in certain implementations of the first aspect, after the determining that the data outbound level of the first device is the third level, the method further includes: determining that the data outbound level of the first device is the first sub-level of the third level when the registration information of the first user is a 3D face, a fingerprint, an iris, or DNA of the first user; determining that it is the second sub-level of the third level when the registration information is a 2D face or a vein of the first user; or determining that it is the third sub-level of the third level when the registration information is the voice or the signature of the first user.
By subdividing the third level of the first device's data outbound level according to the specific biometric feature used, different personalized experiences can be provided according to the biometric feature the user presents.
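A minimal sketch of this subdivision, with modality names chosen for illustration:

```python
# Grouping of biometric modalities into sub-levels of the third level,
# following the division given above (names are illustrative).
FIRST_SUB = {"3d_face", "fingerprint", "iris", "dna"}
SECOND_SUB = {"2d_face", "vein"}
THIRD_SUB = {"voice", "signature"}

def third_level_sublevel(modality):
    """Return the sub-level (1, 2, or 3) of the third level for a modality."""
    if modality in FIRST_SUB:
        return 1
    if modality in SECOND_SUB:
        return 2
    if modality in THIRD_SUB:
        return 3
    raise ValueError(f"no sub-level assigned for modality {modality!r}")
```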
With reference to the first aspect, in some implementations of the first aspect, where the registration information of the first user includes an account of the first user, the determining, by the first device, of the data outbound level of the first device according to the registration information includes: the first device determines whether the account of the first user is stored on the first device; when the account of the first user is stored on the first device, the data outbound level of the first device is determined to be the second level; and when the account of the first user is not stored on the first device, the data outbound level of the first device is determined to be the fourth level.
The data outbound level of the first device may be understood as the level at which data on the first device is shared with other devices. The outbound level is defined relative to the device requesting the data: for different requesting devices, the outbound level of the first device's data may be the same or different. When the device requesting data is the first device itself, the outbound level is the first level. When the requesting device is not the first device itself, the outbound level is determined by the form of registration information with which the requesting device accesses the data on the first device: if the requesting device accesses the first device using the same account used on the first device, the outbound level is the second level for that device; if it accesses the first device using raw biometric data corresponding to the account used on the first device, the outbound level is the third level; and if it accesses the first device with neither the same account nor raw data of the corresponding biometric feature, the outbound level is the fourth level.
Where the device requesting data is not the first device itself, the device requesting data may be the second device, and the device whose data is requested may be the first device.
Dividing the data outbound level of the first device according to the different forms of registration information used by the user makes it possible to provide different personalized experiences for the user.
With reference to the first aspect, in some implementations of the first aspect, the data type corresponding to the second level is a second type, and data of the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or the data type corresponding to the third level is a third type, and data of the third type includes video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or the data type corresponding to the fourth level is a fourth type, and data of the fourth type includes device capability data and/or device status data.
The risk of data of the second type, the third type, and the fourth type decreases in that order.
The general location data may be personal data of medium impact; the video data, logistics data, schedule data, and preference data may be personal data of low impact; the device capability data and/or device status data are non-personal data.
With reference to the first aspect, in certain implementations of the first aspect, the data type corresponding to the first sub-level is a first sub-type, and data of the first sub-type includes photo data, recorded video data, device capability data, and/or device status data; and/or the data type corresponding to the second sub-level is a second sub-type, and data of the second sub-type includes logistics data, schedule data, device capability data, and/or device status data; and/or the data type corresponding to the third sub-level is a third sub-type, and data of the third sub-type includes preference data, watched video data, device capability data, and/or device status data.
The risk of data of the first sub-type, the second sub-type, and the third sub-type decreases in that order.
Because the first device's data has different outbound levels and each outbound level corresponds to different data types, different personalized experiences can be provided for users.
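The level-to-data-type tables above, together with the final sharing check from the first aspect, can be sketched as follows (the type names and the `(level, sub_level)` keying are illustrative assumptions):

```python
# Data types a device may share outward at each data outbound level,
# per the second/third/fourth types and the sub-types described above.
ALLOWED_TYPES = {
    2: {"general_location", "video", "logistics", "schedule",
        "preference", "device_capability", "device_status"},
    3: {"video", "logistics", "schedule", "preference",
        "device_capability", "device_status"},
    4: {"device_capability", "device_status"},
    # Sub-levels of the third level, keyed as (level, sub_level).
    (3, 1): {"photo", "recorded_video", "device_capability", "device_status"},
    (3, 2): {"logistics", "schedule", "device_capability", "device_status"},
    (3, 3): {"preference", "watched_video", "device_capability", "device_status"},
}

def may_share(level, data_type):
    # The first device sends the first data only if its type belongs to the
    # data types corresponding to the device's data outbound level.
    return data_type in ALLOWED_TYPES.get(level, set())
```

Note that the risk ordering falls out of the table: each lower-trust level allows a strict subset of the higher-trust level's data types, with device capability and status data shareable at every level.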
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: the first device sends the data outbound level of the first device to the second device.
With reference to the first aspect, in certain implementations of the first aspect, the biometric feature includes one or more of: a physical biometric feature, a soft biometric feature, or a behavioral biometric feature.
With reference to the first aspect, in certain implementations of the first aspect, the physical biometric feature includes: a human face, fingerprint, iris, retina, deoxyribonucleic acid (DNA), skin, hand shape, or vein; the behavioral biometric feature includes: voice, signature, or gait; the soft biometric feature includes: gender, age, height, or weight.
In a second aspect, a data acquisition method is provided, including: a second device obtains registration information of a first user entered by the first user, where the registration information of the first user includes an account of the first user or raw data of a biometric feature of the first user; the second device sends the registration information of the first user to a first device; the second device obtains a first data request message of the first user, where the first data request message requests sharing of first data of the first user; and the second device sends the first data request message to the first device and receives the first data sent by the first device.
Optionally, the second device may recognize the first user's request for the first data through a voice recognition function; alternatively, the second device may obtain the first user's request for the first data through input by the first user.
With reference to the second aspect, in some implementations of the second aspect, where the entered registration information of the first user includes raw data of a biometric feature of the first user, the method further includes: the second device recognizes the raw biometric data of the first user and determines whether an account corresponding to that raw data is obtained; when the second device obtains a corresponding account, the second device sends fifth information to the first device, where the fifth information indicates that the second device has obtained an account corresponding to the raw biometric data of the first user; and when the second device does not obtain a corresponding account, the second device sends sixth information to the first device, where the sixth information indicates the raw biometric data of the first user.
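The capture-side branch above can be sketched as follows; the message shapes and the `recognize` helper are assumptions for illustration, not the patent's wire format.

```python
def capture_side_message(raw_biometric, recognize):
    """Build the message the second device sends to the first device after
    attempting local recognition of the captured raw biometric data."""
    account = recognize(raw_biometric)
    if account is not None:
        # Fifth information: the second device already resolved an account.
        return {"kind": "fifth_information", "account": account}
    # Sixth information: forward the raw data for the first device to resolve.
    return {"kind": "sixth_information", "raw_biometric": raw_biometric}
```

The design choice here is that raw biometric data leaves the second device only when the second device cannot resolve it locally; otherwise only the resolved account identifier is transmitted.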
With reference to the second aspect, in certain implementations of the second aspect, the method further includes: the second device receives the data outbound level of the first device sent by the first device, where different data outbound levels of the first device correspond to different data types, and data of different data types carries different maximum risks.
In a third aspect, a data acquisition method is provided, including: a second device obtains registration information of a first user entered by the first user, where the registration information of the first user includes raw data of a biometric feature of the first user; the second device sends the registration information of the first user to a first device; the second device receives first information sent by the first device, where the first information indicates the account, determined by the first device, corresponding to the raw biometric data of the first user; the second device determines the data outbound level of the first device according to the first information; the second device obtains a first data request message of the first user, where the first data request message requests sharing of first data of the first user; the second device determines that the first data belongs to a data type corresponding to the data outbound level of the first device, where data of different data types carries different maximum risks; and the second device sends the first data request message to the first device and receives the first data sent by the first device.
Optionally, the second device may recognize the first user's request for the first data through a voice recognition function; alternatively, the second device may obtain the first user's request for the first data through input by the first user.
With reference to the third aspect, in certain implementations of the third aspect, where the registration information of the first user includes raw data of a biometric feature of the first user, the determining, by the second device, of the data outbound level of the first device according to the first information includes: the second device determines whether the second device stores the account, determined by the first device, corresponding to the raw biometric data of the first user; when the second device stores that account, the data outbound level of the first device is determined to be the second level; and when the second device does not store that account, the data outbound level of the first device is determined to be the third level.
With reference to the third aspect, in certain implementations of the third aspect, after the determining that the data outbound level of the first device is the third level, the method further includes: determining that the data outbound level of the first device is the first sub-level of the third level when the registration information of the first user is a 3D face, a fingerprint, an iris, or DNA of the first user; determining that it is the second sub-level of the third level when the registration information is a 2D face or a vein of the first user; or determining that it is the third sub-level of the third level when the registration information is the voice or the signature of the first user.
With reference to the third aspect, in some implementations of the third aspect, the data type corresponding to the second level is a second type, and data of the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or the data type corresponding to the third level is a third type, and data of the third type includes video data, logistics data, schedule data, preference data, device capability data, and/or device status data.
With reference to the third aspect, in certain implementations of the third aspect, the data type corresponding to the first sub-level is a first sub-type, and data of the first sub-type includes photo data, recorded video data, device capability data, and/or device status data; and/or the data type corresponding to the second sub-level is a second sub-type, and data of the second sub-type includes logistics data, schedule data, device capability data, and/or device status data; and/or the data type corresponding to the third sub-level is a third sub-type, and data of the third sub-type includes preference data, watched video data, device capability data, and/or device status data.
With reference to the third aspect, in certain implementations of the third aspect, the method further includes: the second device sends the data outbound level of the first device to the first device.
In a fourth aspect, a data sharing method is provided, including: a first device receives registration information of a first user sent by a second device, where the registration information of the first user includes raw data of a biometric feature of the first user; the first device recognizes the biometric feature of the first user and determines whether an account corresponding to the raw biometric data is obtained; when the first device determines that a corresponding account is obtained, the first device sends first information to the second device, where the first information indicates the account, determined by the first device, corresponding to the raw biometric data of the first user; the first device obtains a first data request message of the first user from the second device, where the first data request message requests sharing of first data of the first user; and the first device sends the first data to the second device.
With reference to the fourth aspect, in certain implementations of the fourth aspect, the method further includes: when the first device determines that no account corresponding to the raw biometric data of the first user is obtained, the first device sends a first instruction to the second device, where the first instruction indicates that the first device has not obtained an account corresponding to the raw biometric data of the first user.
With reference to the fourth aspect, in certain implementations of the fourth aspect, the method further includes: the first device receives the data outbound level of the first device sent by the second device, where different data outbound levels of the first device correspond to different data types, and data of different data types carries different maximum risks.
In a fifth aspect, a method for determining a data outbound level is provided, including: the second device obtains registration information of a first user input by the first user, where the registration information of the first user includes an account of the first user; the second device sends the registration information of the first user to the first device; the second device receives a second instruction sent by the first device, where the second instruction is used to indicate whether the first device stores the registration information of the first user; and the second device determines the data outbound level of the first device according to the second instruction.
With reference to the fifth aspect, in some implementations of the fifth aspect, the determining, by the second device according to the second instruction, the data outbound level of the first device includes: determining that the data outbound level of the first device is a second level when the first device stores an account corresponding to the original data of the biological characteristics of the first user; and determining that the data outbound level of the first device is a fourth level when the first device does not store an account corresponding to the original data of the biological characteristics of the first user.
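The second-level/fourth-level decision above amounts to a single branch on the first device's reply. A minimal sketch, assuming invented names (`OutboundLevel`, `determine_outbound_level`) that do not appear in the patent:

```python
from enum import Enum

class OutboundLevel(Enum):
    SECOND = 2   # the first device stores an account matching the user's biometric
    FOURTH = 4   # no matching account is stored on the first device

def determine_outbound_level(account_found: bool) -> OutboundLevel:
    """Map the first device's reply (the second instruction) to a data outbound level."""
    return OutboundLevel.SECOND if account_found else OutboundLevel.FOURTH
```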
With reference to the fifth aspect, in some implementations of the fifth aspect, the data type corresponding to the second level is a second type, and the data corresponding to the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or the data type corresponding to the fourth level is a fourth type, and the data corresponding to the fourth type comprises equipment capability data and/or equipment state data.
With reference to the fifth aspect, in certain implementations of the fifth aspect, the method further includes: the second device sends the data outbound level of the first device to the first device.
With reference to the fifth aspect, in certain implementations of the fifth aspect, the method further comprises: the second device acquires a first data request message of the first user, wherein the first data request message is used for requesting to share first data of the first user; the second device sends the first data request message.
With reference to the fifth aspect, in some implementations of the fifth aspect, before the second device sends the first data request message, the method further includes: the second device determines that the first data belongs to a data type corresponding to the data outbound level of the first device.
In a sixth aspect, a method for determining a data outbound level is provided, including: the first device receives registration information of a first user sent by the second device, where the registration information of the first user includes an account of the first user; the first device determines whether the account of the first user exists on the first device; and the first device sends a second instruction to the second device, where the second instruction is used to indicate whether the account of the first user exists on the first device.
With reference to the sixth aspect, in certain implementations of the sixth aspect, the method further includes: the first device receives the data outbound level of the first device sent by the second device.
With reference to the sixth aspect, in certain implementations of the sixth aspect, the method further includes: the first device receives a first data request message of the first user sent by the second device, where the first data request message is used to request sharing of first data of the first user; and when the first device determines that the first data is stored on the first device, the first device shares the first data with the second device.
With reference to the sixth aspect, in some implementations of the sixth aspect, after the first device receives the first data request message of the first user sent by the second device, the method further includes: the first device determines that the first data belongs to a data type corresponding to the data outbound level of the first device.
In a seventh aspect, a method for acquiring data is provided, including: the second device obtains registration information of a first user, where the registration information of the first user includes original data of biological characteristics of the first user; the second device identifies the original data of the biological characteristics of the first user and determines whether the second device can obtain an account corresponding to the original data of the biological characteristics of the first user; when the second device obtains an account corresponding to the original data of the biological characteristics of the first user, the second device sends second information to the first device, where the second information is used to indicate the account, obtained by the second device, corresponding to the original data of the biological characteristics of the first user; the second device receives third information sent by the first device, where the third information is used to indicate whether the first device stores the account, obtained by the second device, corresponding to the original data of the biological characteristics of the first user; the second device determines the data outbound level of the first device according to the third information; the second device acquires a data request message of the first user, where the data request message is used to request the first device to share first data stored by the first user on the first device; the second device determines that the first data belongs to a data type corresponding to the data outbound level of the first device, where data of different data types have different maximum risks; and the second device sends the data request message of the first user to the first device and receives the first data sent by the first device.
With reference to the seventh aspect, in some implementations of the seventh aspect, the determining, by the second device according to the third information, the data outbound level of the first device includes: determining that the data outbound level of the first device is a second level when the first device stores the account, obtained by the second device, corresponding to the original data of the biological characteristics of the first user; and determining that the data outbound level of the first device is a fourth level when the first device does not store the account, obtained by the second device, corresponding to the original data of the biological characteristics of the first user.
With reference to the seventh aspect, in certain implementations of the seventh aspect, the method further includes: when the second device does not obtain an account corresponding to the original data of the biological characteristics of the first user, the second device sends the registration information of the first user to the first device; the second device receives a third instruction sent by the first device, where the third instruction is used to indicate that the first device has not obtained an account corresponding to the original data of the biological characteristics of the first user; and the second device determines, according to the third instruction, that the data outbound level of the first device is a fourth level.
With reference to the seventh aspect, in certain implementations of the seventh aspect, the method further includes: when the second device does not obtain an account corresponding to the original data of the biological characteristics of the first user, the second device sends the registration information of the first user to the first device; the second device receives fourth information sent by the first device, where the fourth information is used to indicate the account corresponding to the original data of the biological characteristics of the first user determined by the first device; and the second device determines, according to the fourth information, that the data outbound level of the first device is a third level.
With reference to the seventh aspect, in certain implementations of the seventh aspect, after the determining that the data outbound level of the first device is the third level, the method further includes: determining that the data outbound level of the first device is a first sub-level of the third level when the registration information of the first user is a 3D face, a fingerprint, an iris, or DNA of the first user; determining that the data outbound level of the first device is a second sub-level of the third level when the registration information of the first user is a 2D face or a vein of the first user; or determining that the data outbound level of the first device is a third sub-level of the third level when the registration information of the first user is the voice or the signature of the first user.
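The sub-level classification above is a fixed mapping from biometric modality to a sub-level of the third level. A hedged sketch — the modality strings and function name are illustrative, not identifiers from the patent:

```python
# Modality -> sub-level of the third outbound level, per the classification
# in the claims: stronger biometrics earn a higher (numerically lower) sub-level.
SUB_LEVEL_BY_MODALITY = {
    "3d_face": 1, "fingerprint": 1, "iris": 1, "dna": 1,   # first sub-level
    "2d_face": 2, "vein": 2,                               # second sub-level
    "voice": 3, "signature": 3,                            # third sub-level
}

def third_level_sub_level(modality: str) -> int:
    """Return the sub-level (1-3) of the third outbound level for a modality."""
    return SUB_LEVEL_BY_MODALITY[modality]
```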
With reference to the seventh aspect, in some implementations of the seventh aspect, the data type corresponding to the second level is a second type, and the data corresponding to the second type includes general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or the data type corresponding to the third level is a third type, and the data corresponding to the third type comprises video data, logistics data, schedule data, preference data, equipment capability data and/or equipment state data; and/or the data type corresponding to the fourth level is a fourth type, and the data corresponding to the fourth type comprises equipment capability data and/or equipment state data.
With reference to the seventh aspect, in certain implementations of the seventh aspect, the data type corresponding to the first sub-level is a first sub-type, and the data corresponding to the first sub-type includes photo data, recorded video data, device capability data, and/or device status data; and/or the data type corresponding to the second sub-level is a second sub-type, and the data corresponding to the second sub-type includes logistics data, schedule data, device capability data, and/or device status data; and/or the data type corresponding to the third sub-level is a third sub-type, and the data corresponding to the third sub-type includes favorite data, watched video data, device capability data, and/or device status data.
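Taken together, the level-to-type claims above define a permission table: each outbound level names the categories of data allowed to leave the device. A minimal sketch, with level keys and category names invented for illustration:

```python
# Illustrative mapping from data outbound level to sharable data categories,
# following the second/third/fourth-level claims. Keys and category names
# are assumptions for this sketch, not terms defined by the patent.
SHARABLE_BY_LEVEL = {
    "second":     {"general_location", "video", "logistics", "schedule",
                   "preference", "device_capability", "device_status"},
    "third_sub1": {"photo", "recorded_video", "device_capability", "device_status"},
    "third_sub2": {"logistics", "schedule", "device_capability", "device_status"},
    "third_sub3": {"favorites", "watched_video", "device_capability", "device_status"},
    "fourth":     {"device_capability", "device_status"},
}

def may_share(level: str, category: str) -> bool:
    """Check whether data of `category` may be shared at outbound `level`."""
    return category in SHARABLE_BY_LEVEL.get(level, set())
```

This is the check the second device would run before forwarding a data request (the "determines that the first data belongs to a data type corresponding to the data outbound level" step).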
With reference to the seventh aspect, in certain implementations of the seventh aspect, the method further includes: the second device sends the data outbound level of the first device to the first device.
In an eighth aspect, a data sharing method is provided, including: the first device receives second information sent by the second device, where the second information is used to indicate the account, obtained by the second device by identifying the original data of the biological characteristics of the first user, corresponding to the original data of the biological characteristics of the first user; the first device checks whether the account indicated by the second information exists on the first device; the first device sends third information to the second device, where the third information is used to indicate whether the account indicated by the second information exists on the first device; the first device acquires a first data request message of the first user from the second device, where the first data request message is used to request sharing of first data of the first user; and the first device sends the first data to the second device.
With reference to the eighth aspect, in certain implementations of the eighth aspect, the method further includes: the method comprises the steps that a first device receives registration information of a first user sent by a second device, wherein the registration information of the first user comprises original data of biological characteristics of the first user; the first device identifies the original data of the biological characteristics of the first user and determines whether an account corresponding to the original data of the biological characteristics of the first user is obtained; when the first device does not obtain an account corresponding to the original data of the biological feature of the first user, the first device sends a third instruction to the second device, wherein the third instruction is used for indicating that the account corresponding to the original data of the biological feature of the first user is not obtained by the first device; and under the condition that the first device obtains an account corresponding to the original data of the biological feature of the first user, the first device sends fourth information to the second device, wherein the fourth information is used for indicating the account corresponding to the original data of the biological feature of the first user determined by the first device.
With reference to the eighth aspect, in certain implementations of the eighth aspect, the method further includes: the first device receives the data outbound level of the first device sent by the second device.
In a ninth aspect, an apparatus for data sharing is provided, which includes: a processor coupled with a memory; the memory is used for storing a computer program; the processor is configured to execute the computer program stored in the memory to cause the apparatus to perform the method described in the first aspect and certain implementations of the first aspect, the method described in the fourth aspect and certain implementations of the fourth aspect, the method described in the sixth aspect and certain implementations of the sixth aspect, and the method described in the eighth aspect and certain implementations of the eighth aspect.
In a tenth aspect, there is provided an apparatus for data sharing, comprising: a processor coupled with a memory; the memory is used for storing a computer program; the processor is configured to execute the computer program stored in the memory to cause the apparatus to perform the method described in the second aspect and some implementations of the second aspect, the method described in the third aspect and some implementations of the third aspect, the method described in the fifth aspect and some implementations of the fifth aspect, and the method described in the seventh aspect and some implementations of the seventh aspect.
In an eleventh aspect, there is provided a computer-readable medium comprising a computer program which, when run on a computer, causes the computer to perform the method described in the first to eighth aspects above and in certain implementations of the first to eighth aspects.
In a twelfth aspect, a system chip is provided, which includes an input-output interface and at least one processor, where the at least one processor is configured to call instructions in a memory to perform the operations of the method in the first to eighth aspects and some implementations of the first to eighth aspects.
Optionally, the system-on-chip may further include at least one memory for storing instructions for execution by the processor and a bus.
Drawings
Fig. 1 is a diagram illustrating an application scenario in which the method and apparatus according to the embodiment of the present application may be applied.
Fig. 2 is a schematic flow chart of a data sharing method 200 according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of permission levels of a device for accessing a database and sharable data corresponding to the permission levels according to an embodiment of the present application.
Fig. 4 is a specific schematic flowchart of step 220 in the method 200 provided by the embodiment of the present application.
Fig. 5 is another specific schematic flowchart of step 220 in the method 200 provided by an embodiment of the present application.
Fig. 6 is a further specific schematic flowchart of step 220 in the method 200 provided by an embodiment of the present application.
Fig. 7 is a further specific schematic flowchart of step 220 in the method 200 provided by an embodiment of the present application.
Fig. 8 is a specific schematic flowchart of step 240 in the method 200 provided by the embodiment of the present application.
Fig. 9 is a schematic diagram illustrating an example of data sharing among multiple devices according to an embodiment of the present disclosure.
Fig. 10 is a schematic diagram illustrating another example of data sharing among multiple devices according to an embodiment of the present disclosure.
Fig. 11 is a schematic diagram illustrating another example of data sharing among multiple devices according to an embodiment of the present disclosure.
Fig. 12 is a schematic diagram illustrating another example of data sharing among multiple devices according to an embodiment of the present disclosure.
Fig. 13 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Fig. 14 is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
Fig. 1 is an exemplary diagram of an application scenario in which the method and apparatus according to the embodiment of the present application may be applied. The scene shown in fig. 1 includes a mobile phone 101, a vehicle 102, a tablet computer (pad) 103, a watch 104, a mobile phone 111, a mobile phone 121, a watch 122, a stereo 123, a television 131, a mobile phone 132, a tablet computer 133, a watch 134, a stereo 135, and a vehicle 136. The account B is registered on each of the mobile phone 101, the vehicle 102, the tablet computer 103, and the watch 104. Only the account A of the user 1 is registered on the mobile phone 111, and/or the biological characteristics of the user 1 exist on the mobile phone 111. The account C is registered on the mobile phone 121, the watch 122, and the stereo 123, and the original data of the biological characteristics of the same user exist on the mobile phone 121, the watch 122, and the stereo 123. No account is registered on the television 131, the mobile phone 132, the tablet computer 133, the watch 134, the stereo 135, or the vehicle 136, and these devices do not store the original data of the biological characteristics of the same user; that is, each of them may be used by a single user or by multiple users.
It should be understood that the devices shown in fig. 1 are merely examples, and more or fewer devices may be included in the system. For example, only the television 131, the cell phone 121, the cell phone 111, the tablet computer 103, the watch 104, the stereo 123, and the vehicle 136 may be included.
The mobile phone 101, the mobile phone 111, the mobile phone 121, the mobile phone 132, the tablet computer 103, the tablet computer 133, the watch 104, the watch 122, the watch 134, the vehicle 102, and the vehicle 136 in fig. 1 may all represent terminal devices with a biometric recognition function, for example, the mobile phone 101 may perform face recognition, and the mobile phone 121 may perform voiceprint recognition.
Of course, the same biometric may also be recognized by the cell phone 101 and the cell phone 121. For example, the mobile phone 101 and the mobile phone 121 can both perform face recognition; for another example, both the cell phone 101 and the cell phone 121 can perform voiceprint recognition.
The terminal device in the embodiment of the present application may be a mobile phone (mobile phone), a tablet pc, a computer with a wireless transceiving function, a Virtual Reality (VR) terminal, an Augmented Reality (AR) terminal, a wireless terminal in industrial control (industrial control), a wireless terminal in self driving (self driving), a wireless terminal in remote medical (remote medical), a wireless terminal in smart grid (smart grid), a wireless terminal in transportation safety (transportation safety), a wireless terminal in smart city (smart city), a wireless terminal in smart home (smart home), and the like.
The television 131, the stereo 123, and the stereo 135 in fig. 1 may represent devices that can capture biometrics but do not have a biometric recognition function. For example, the television 131 can capture face images and human voice but has no face recognition or voiceprint recognition function; for another example, the stereo 123 and the stereo 135 can capture face images and human voice but have no face recognition or voiceprint recognition function.
The biometric features in embodiments of the present application may include one or more of: physical biometric, behavioral biometric, soft biometric. The physical biometric characteristics may include: face, fingerprint, iris, retina, deoxyribonucleic acid (DNA), skin, hand, vein. The behavioral biometric may include: voiceprint, signature, gait. Soft biometrics may include: sex, age, height, weight.
Each of the devices shown in fig. 1 may communicate with the others via a network. Optionally, the network includes a wireless fidelity (WI-FI) network or a bluetooth network. It will be appreciated that the above networks may also include wireless communication networks, such as 2G, 3G, 4G, or 5G communication networks. The network may specifically be a work network or a home network. For example, after the television 131 collects the face image and voice of a user, it stores the face image and voice of the user together with the data generated by the user using the television 131; the face image can be sent to the mobile phone 101 through the network, and the voice information can be sent to the mobile phone 121 and the stereo 135.
With the rapid development of smart devices and the Internet of things (IoT) field, collaborative integration of multiple smart devices has become a common consensus in the industry. To realize the cooperation of multiple smart devices, user data and device data must be able to flow and be shared among multiple smart devices or multiple accounts. In a multi-smart-device scenario, such as a home scenario, there are both private devices (e.g., a mobile phone or a watch) and shared home devices (e.g., a television, a vehicle, or a speaker). Multiple users may use a shared home device, but current shared home devices cannot identify individual users and therefore cannot provide each user with a personalized experience.
Therefore, it is highly desirable to provide a cross-device data sharing method that provides users with different personalized experiences according to the form of the registration information with which they use a device, and that realizes data sharing among multiple smart devices.
As shown in fig. 2, a method 200 for data sharing is provided in the present embodiment. It should be understood that fig. 2 shows steps or operations of the method, but these steps or operations are only examples, and the technical solution proposed by the present application can also perform other operations or variations of the respective operations in fig. 2.
Hereinafter, the first device and the second device may be terminal devices, which may be any one of the devices shown in fig. 1. The first user may be any one of users using the first device and the second device. The first device may be plural, and in a case where the first device is plural, each of the plural first devices may perform the steps performed by the first device in the following method.
In step 210, the second device obtains the registration information of the first user input by the first user.
The registration information of the first user may include an account number of the first user and/or raw data of a biometric characteristic of the first user.
The account may have been registered by the first user; alternatively, the account may not have been registered by the first user, and the first user merely uses the second device through the account.
The account number may be, for example, a mobile phone number, a user name set by the user, a mailbox, or the like.
The raw data of the above-mentioned biometric features can be understood as raw biometric data.
The first user uses the second device through the registration information input by the first user. For example, the first user uses the second device through the first user's account, e.g., account 1. Alternatively, the first user uses the second device through the original data of the first user's biological characteristics, e.g., a face image of the first user.
The second device may be a device with a biometric function, for example, as shown in fig. 1, the second device may be a mobile phone 111, a mobile phone 101, a vehicle 102, a tablet computer 103, a mobile phone 121, a mobile phone 132, a tablet computer 133, or a vehicle 136; alternatively, the second device may be a device that can capture biometrics but does not have biometric identification functionality, for example, as shown in fig. 1, the second device may be a watch 104, watch 122, stereo 123, television 131, watch 134, or stereo 135.
Biometric identification technology (biometric identification technology) refers to a technology for performing identity authentication using human body biometrics. More specifically, the biometric identification technology is to closely combine a computer with high-tech means such as optics, acoustics, biosensors and the principle of biometrics, and to identify the identity of an individual by using the inherent physiological characteristics and behavior characteristics of a human body.
In the case where the second device is a device having a biometric recognition function, the second device stores a recognition result obtained by recognizing the original data of the user's biological characteristics (the recognition result is the identity of the user).
The biometric recognition result may be understood as a biometric identity obtained by biometric recognition. For example, before the mobile phone 101 identifies a user using the mobile phone 101, the mobile phone 101 collects a face image of the owner 1 of the mobile phone 101, converts the face image of the owner 1 into digital codes, and combines the digital codes to obtain a face feature template of the owner 1. When the mobile phone 101 identifies the user using the mobile phone 101, it collects the original data of the face image of that user and compares it with the face feature template of the owner 1 stored in the database of the mobile phone 101; if the original data of the face image matches the stored face feature template of the owner 1, the mobile phone 101 determines that the user is the owner 1. For another example, before the stereo 123 identifies a user using the stereo 123, the stereo 123 collects the voice of the owner 2 of the stereo 123, converts the voice of the owner 2 into digital codes, and combines the digital codes to obtain a voiceprint feature template of the owner 2. When the stereo 123 identifies the user using the stereo 123, it collects the original data of that user's voice and compares it with the voiceprint feature template of the owner 2 stored in the database of the stereo 123; if the original data of the voice matches the stored voiceprint feature template of the owner 2, the stereo 123 determines that the user is the owner 2.
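The enrollment-then-match flow described above can be loosely sketched as follows. The feature extractor here is a placeholder (a hash of the raw capture); a real device would run a face or voiceprint model and compare similarity scores against a threshold, none of which the patent specifies:

```python
import hashlib

def extract_template(raw_biometric: bytes) -> str:
    # Placeholder "feature extraction": real recognition would run a model;
    # for this sketch we hash the raw capture so identical captures match.
    return hashlib.sha256(raw_biometric).hexdigest()

class BiometricDB:
    """Toy enrollment database: owner name -> enrolled template."""

    def __init__(self):
        self.templates = {}

    def enroll(self, owner: str, raw: bytes) -> None:
        """Enrollment: convert the owner's raw capture into a stored template."""
        self.templates[owner] = extract_template(raw)

    def identify(self, raw: bytes):
        """Recognition: compare a fresh capture against all enrolled templates."""
        probe = extract_template(raw)
        for owner, template in self.templates.items():
            if probe == template:    # a real matcher would use a similarity score
                return owner
        return None                  # no match: identity not recognized
```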
That the identity of the user using the mobile phone 101 is the owner 1, and that the identity of the user using the stereo 123 is the owner 2, are biometric recognition results; the mobile phone 101 and the stereo 123 store these recognition results.
When a user uses a device through registration information input by the user, the registration information and the data generated while the user uses the device through that registration information are stored in one-to-one correspondence. The registration information may be an account or the original data of a biometric feature. The account may be, for example, a mobile phone number, a user name set by the user, or a mailbox.
Whether the user uses the device through an account or through the original data of the user's biological characteristics, the data generated on the device is stored in the memory of the device according to the account used. That is, when the registration information of the user is the account of the user, the data generated by the user using the device through that account is stored in the memory according to the account. When multiple accounts use the device, the data generated during the use of each account is stored according to that account; each account can correspond to one storage engine, and the data stored under an account can be accessed through the corresponding storage engine. When the registration information of the user is the original data of the user's biological characteristics, an account corresponding to that original data exists on the device, and the data stored on the device by the user through the original data of the biological characteristics is also stored in the database according to the corresponding account. The original data of the biological characteristics and the account may be in one-to-one correspondence, for example, one face corresponds to one account; alternatively, they may be in a many-to-one relationship, for example, multiple fingerprints correspond to one account, or one face and one fingerprint correspond to one account. For example, the biological characteristics (such as fingerprints, irises, and faces) recorded by the account A on a device during use are bound to the account A, and the biological characteristics recorded by the account B during use are bound to the account B.
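The account-keyed storage described above might be sketched as follows. The `StorageEngine` abstraction, the biometric-to-account binding table, and all method names are illustrative assumptions; the patent only requires that data be stored per account and that biometrics resolve to a bound account:

```python
class StorageEngine:
    """One storage engine per account; holds that account's user data."""

    def __init__(self, account: str):
        self.account = account
        self.records = []

    def store(self, record: str) -> None:
        self.records.append(record)

class Device:
    def __init__(self):
        self.engines = {}                # account -> StorageEngine
        self.biometric_to_account = {}   # enrolled biometric id -> bound account

    def bind_biometric(self, biometric_id: str, account: str) -> None:
        # Many biometrics may bind to one account (e.g. several fingerprints,
        # or a face plus a fingerprint), matching the many-to-one case above.
        self.biometric_to_account[biometric_id] = account

    def store_data(self, registration: str, record: str) -> None:
        # Registration info may be an account directly, or a biometric id
        # that resolves to its bound account; either way data lands in the
        # storage engine of that account.
        account = self.biometric_to_account.get(registration, registration)
        self.engines.setdefault(account, StorageEngine(account)).store(record)
```

Note how data written via the biometric and via the account itself end up in the same engine, which is what lets the sharing flow later retrieve "the first user's data" by account alone.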
For example, as shown in fig. 1, when the user 1 uses the mobile phone 111 by the account a, the data stored on the mobile phone 111 by the account a by the user 1 is stored in the database according to the account a. The data stored by the user 1 on the mobile phone 111 through the account a may include a picture taken by the user 1 on the mobile phone 111 through the account a, a historical song listening list stored by the user 1 on the mobile phone 111 through the account a, historical location data of the user 1 stored by the user 1 on the mobile phone 111 through the account a, and the like. When the user 2 uses the mobile phone 111 through the account B, the data stored on the mobile phone 111 by the account B by the user 2 is stored in the database according to the account B.
For another example, as shown in fig. 1, before the user 3 uses the mobile phone 121 through the facial image of the user 3, the user 3 has used the mobile phone 121 through the account C, so the data stored by the user 3 on the mobile phone 121 through the facial image of the user 3 is stored in the database according to the account C. Such data may include a historical video-watching record of the user 3 and a historical exercise record of the user 3, and the like.
After the second device acquires the registration information input by the first user, the data stored by the first user on at least one first device through that registration information needs to be synchronized to the second device. Before this synchronization, step 220 is performed.
Step 220, determining an outbound level of the data of the first device.
The first device is a device, other than the second device, in the same network as the second device; alternatively, the first device is a device selected according to the functions of all devices in the network, for example, a device with a biometric recognition function.
Optionally, the devices in the network may be mutually trusted devices. For example, the network may be a home network in which the devices trust one another; as another example, the network may be a work network in which the devices trust one another. A device in the network is not only a device connected to the network, but may also be a device added to the network by scanning a two-dimensional code (identification code), which may be preset.
Alternatively, the first device is a device, other than the second device, in the same group in the same network as the second device; or the first device is a device selected according to the functions of all devices in that group, for example, a device with a biometric recognition function.
Optionally, a plurality of groups may be preset in the network, and the devices within each group trust one another. For example, the network may be a home network in which a family group and a visitor group are preset; the family group includes the first device and the second device, which trust each other. The devices in the visitor group and the devices in the family group do not trust each other, but they may exchange non-private information. A device in the family group may be not only a device connected to the home network but also a device added to the home network by scanning the two-dimensional code, whereas a device in the visitor group is only a device connected to the home network.
The outbound level of the data of the first device may be understood as the level at which the data on the first device is shared with other devices. The outbound level is defined relative to the device requesting the data, so different requesting devices may correspond to different (or the same) outbound levels. When the device requesting the data is the first device itself, the outbound level of the data of the first device is the first level. Otherwise, the outbound level is determined by the form of registration information through which the requesting device (for example, the second device) accesses the data on the first device: if the requesting device accesses the first device through the same account as is used on the first device, the outbound level of the data of the first device is, for that requesting device, the second level; if the requesting device accesses the first device through the raw biometric data corresponding to the account used on the first device, the outbound level is the third level; and if the requesting device uses neither the same account nor the same raw biometric data as the first device, the outbound level is the fourth level.
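The level rules above can be sketched as a small decision function. This is an illustrative sketch only, not part of the patent; the function and parameter names (`outbound_level`, `same_account`, `same_biometric`) are assumptions:

```python
# Illustrative sketch of the outbound-level rules (all names are assumptions).
# First level:  the requesting device is the first device itself.
# Second level: the requester accesses via the same account as the first device.
# Third level:  the requester accesses via raw biometric data bound to that account.
# Fourth level: the requester shares neither account nor raw biometric data.

def outbound_level(first_device, requester, same_account, same_biometric):
    """Return the outbound level (1 = highest, 4 = lowest) of the first
    device's data, relative to the requesting device."""
    if requester == first_device:
        return 1
    if same_account:
        return 2
    if same_biometric:
        return 3
    return 4
```

For instance, for a second device logged in with the same account as the first device, `outbound_level("phone_111", "tv_131", True, False)` yields the second level.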
Illustratively, as shown in fig. 3, when a user uses a device, the data generated on the device can be used by that device; for the device itself, the outbound level of its data is the first level. For example, the user A logs in to the device 1 through the account A or through a face bound to the account A; the data generated while using the device 1 is associated with the account A, and when the user A logs in to the device 1 again with the account A or the face, the programs on the device 1 can use or access all data associated with the account A as well as data unrelated to any account. When a user acquires data related to the user on a first device through a second device, the outbound level of the data of the first device is one of the following: a second level, a third level, or a fourth level. Specifically, when the user uses the second device through the same account as on the first device, the outbound level of the data of the first device is, for the second device, the second level; when the user uses the second device through the same raw data of a first biometric feature as on the first device, the outbound level is the third level, where the raw data of the first biometric feature is the raw data of any biometric feature of the user; and when the user uses the second device through neither the same account nor the same raw biometric data as on the first device, the outbound level is the fourth level.
The outbound levels of the data of a device are, from high to low: the first level, the second level, the third level, and the fourth level.
For example, as shown in fig. 1, if the user uses only the mobile phone 111 through the account A, the data stored by the user on the mobile phone 111 through the account A is not allowed to be sent out, that is, it is not shared with other devices. If the user uses the mobile phone 101, the vehicle 102, the tablet computer 103, and the watch 104 through the account B, the outbound level of the data stored by the user through the account B on each of the mobile phone 101, the vehicle 102, the tablet computer 103, and the watch 104 is the second level. If the user uses the mobile phone 121, the watch 122, and the sound 123 through the raw data of the first biometric feature of the user, the outbound level of the data stored through that raw data on each of the mobile phone 121, the watch 122, and the sound 123 is the third level.
If the user uses the television 131, the mobile phone 132, the tablet computer 133, the watch 134, the sound 135, and the vehicle 136 through neither an account nor any raw biometric data of the user, the outbound level of the data of each of these devices is the fourth level.
Further, when the user uses a plurality of devices through raw data of different biometric features, the third level may be further divided, according to the recognition accuracy or false acceptance rate of the raw biometric data, into at least two of the following: a first sub-level, a second sub-level, and a third sub-level. These sub-levels are, from high to low: the first sub-level, the second sub-level, and the third sub-level.
When the user uses a plurality of devices through the same 3D face, fingerprint, iris, or DNA of the user, the outbound level of the data of each device is determined to be the first sub-level. For example, when the user A uses the device A through the fingerprint of the user A and also uses another device (for example, the device C) through the same fingerprint, the outbound level of the data stored by the user A on the device A through the fingerprint is the first sub-level, and the outbound level of the data stored on the device C through the fingerprint is also the first sub-level.
When the user uses a plurality of devices through the same 2D face or vein of the user, the outbound level of the data of each device is determined to be the second sub-level. For example, when the user A uses the device A through the 2D face of the user A and also uses another device (for example, the device C) through the same 2D face, the outbound level of the data stored on the device A through the 2D face is the second sub-level, and the outbound level of the data stored on the device C is also the second sub-level.
When the user uses a plurality of devices through the same voiceprint or signature of the user, the outbound level of the data of each device is determined to be the third sub-level. For example, when the user A uses the device A through the voiceprint of the user A and also uses another device (for example, the device C) through the same voiceprint, the outbound level of the data stored on the device A through the voiceprint is the third sub-level, and the outbound level of the data stored on the device C is also the third sub-level.
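The sub-level rules in the paragraphs above can be summarized as a lookup table keyed by biometric modality, ordered by recognition accuracy. This is a hypothetical sketch; the modality strings and function name are assumptions, not from the patent:

```python
# Illustrative mapping from biometric modality to the sub-level of the
# third level, ordered by recognition accuracy / false acceptance rate.
SUB_LEVEL = {
    "3d_face": 1, "fingerprint": 1, "iris": 1, "dna": 1,  # first sub-level
    "2d_face": 2, "vein": 2,                              # second sub-level
    "voiceprint": 3, "signature": 3,                      # third sub-level
}

def third_level_sub_level(modality: str) -> int:
    """Return the sub-level (1 = highest) for a given biometric modality."""
    return SUB_LEVEL[modality.lower()]
```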
When the first user uses the second device and needs to acquire data on the first device, the outbound level of the data of the first device, defined relative to the second device, needs to be determined. How to determine this outbound level is described in detail below for two cases, taking as an example that the first device and the second device belong to the same home network, which may communicate over Wi-Fi or Bluetooth.
Case 1: the second device determines the outbound level of the data of the first device.
(1) In the case where the registration information input by the first user includes raw data of a biometric feature of the first user, and the second device can collect the biometric feature but has no biometric recognition function, the second device needs to complete recognition with the help of another device.
As shown in fig. 4, the specific step 220 may include steps 220a to 223a.
In step 220a, the second device sends the first device the raw data of the biometric characteristic of the first user.
In step 221a, the first device identifies the raw data of the biometric characteristic of the first user, and determines whether an account corresponding to the raw data of the biometric characteristic of the first user is available.
In the event that the first device determines that an account corresponding to the raw data of the biometric of the first user is not available, step 222a is performed.
In step 222a, the first device sends a first instruction to the second device, where the first instruction indicates that the first device has not obtained an account corresponding to the raw data of the biometric feature of the first user. After receiving the first instruction, the second device determines that the outbound level of the data of the first device is the fourth level.
For example, as shown in fig. 1, the television 131 (an example of the second device) acquires, through its camera, a facial image of the user using the television 131. To identify that user, the television 131 sends the acquired facial image to the mobile phone 132 (an example of the first device). The mobile phone 132 compares the raw data of the facial image with the feature templates stored in its database; if the raw data does not match any feature template stored in the database of the mobile phone 132, the mobile phone 132 obtains no account corresponding to the facial image and sends the first instruction to the television 131. After receiving the first instruction, the television 131 may determine that the outbound level of the data of the mobile phone 132 is the fourth level.
When the first device recognizes the raw data of the biometric feature of the first user and obtains a recognition result, the first device may obtain an account corresponding to that raw data. In the case where the first device determines that such an account is obtained, steps 222a' and 223a are performed.
In step 222a', the first device sends first information to the second device, where the first information is used to indicate an account corresponding to the raw data of the biometric characteristic of the first user determined by the first device.
Step 223a, the second device determines the outbound level of the data of the first device according to the first information sent by the first device.
Specifically, when the second device determines that the account, determined by the first device, corresponding to the raw data of the biometric feature of the first user is stored on the second device, the second device determines that the outbound level of the data of the first device is the second level. When the second device determines that this account is not stored on it, the second device determines that the outbound level of the data of the first device is the third level.
Further, when determining that the outbound level of the data of the first device is the third level, the second device may further determine, according to the specific form of the registration information input by the first user, which sub-level of the third level applies. Specifically, when the registration information is a 3D face, fingerprint, iris, or DNA of the first user, the second device determines that the outbound level of the data of the first device is the first sub-level; when it is a 2D face or vein of the first user, the second sub-level; and when it is a voiceprint or signature of the first user, the third sub-level.
For example, as shown in fig. 1, the sound 123 (an example of the second device) collects, through its microphone, the voice of the user using the sound 123. To identify that user, the sound 123 transmits the collected voice to the mobile phone 121 (an example of the first device). The mobile phone 121 compares the raw data of the voice with the feature templates stored in its database; when the raw data matches the feature template of the account C stored in the database of the mobile phone 121, the mobile phone 121 obtains the account C corresponding to the voice and transmits the account C to the sound 123. If the sound 123 determines that the account C is stored on it, the sound 123 determines that the outbound level of the data of the mobile phone 121 is the second level. If the sound 123 determines that the account C is not stored on it, the sound 123 determines that the outbound level of the data of the mobile phone 121 is the third level; further, the sound 123 may determine that the outbound level is the third sub-level of the third level.
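Steps 220a to 223a can be simulated end to end as follows. This is a hypothetical sketch of the message exchange, in which simple dictionaries stand in for feature-template databases; none of the class or variable names come from the patent:

```python
# Illustrative simulation of case 1(1): the second device collects the raw
# biometric data but cannot recognize it, so the first device recognizes it
# (steps 220a-222a) and the second device then derives the level (step 223a).

class FirstDevice:
    def __init__(self, templates):
        self.templates = templates  # raw biometric data -> bound account

    def recognize(self, raw_biometric):
        # Step 221a: compare against stored feature templates.
        # None models the "first instruction" (no account obtained).
        return self.templates.get(raw_biometric)

class SecondDevice:
    def __init__(self, accounts):
        self.accounts = accounts    # accounts stored on the second device

    def determine_level(self, first_device, raw_biometric):
        account = first_device.recognize(raw_biometric)  # steps 220a-222a
        if account is None:
            return 4    # no matching account: fourth level
        if account in self.accounts:
            return 2    # same account stored locally: second level
        return 3        # biometric match only: third level

phone_121 = FirstDevice({"voice_of_user_3": "account_C"})
sound_123 = SecondDevice(accounts={"account_C"})
level = sound_123.determine_level(phone_121, "voice_of_user_3")  # second level
```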
(2) In the case where the registration information input by the first user includes raw data of a biometric feature of the first user, and the second device has a biometric recognition function, the second device can perform recognition itself.
As shown in fig. 5, the specific step 220 may include steps 220b to 224b and steps 221c to 224c.
In step 220b, the second device identifies the raw data of the biometric characteristic of the first user, and determines whether the account corresponding to the raw data of the biometric characteristic of the first user is available to the second device.
Specifically, in the case where the second device obtains an account corresponding to the original data of the biometric characteristic of the first user, steps 221b to 224b are performed. In the case where the second device does not obtain an account corresponding to the original data of the biometric characteristic of the first user, steps 221c to 224c are performed.
Step 221b, the second device sends second information to the first device, where the second information indicates the account obtained by the second device by recognizing the raw data of the biometric feature of the first user.
In step 222b, the first device searches whether the account indicated by the second information exists on the first device, and executes step 223b.
Step 223b, the first device sends third information to the second device, where the third information is used to indicate whether the account indicated by the second information exists on the first device.
Step 224b, the second device determines the outbound level of the data of the first device according to the third information.
Specifically, when the third information indicates that the account indicated by the second information exists on the first device, the second device determines that the outbound level of the data of the first device is the second level; when the third information indicates that the account does not exist on the first device, the second device determines that the outbound level is the fourth level.
For example, as shown in fig. 1, the mobile phone 121 (an example of the second device) may recognize the fingerprint of the user using the mobile phone 121 and obtain the account C corresponding to that fingerprint. When the mobile phone 121 transmits the account C to the watch 122 (an example of the first device) and the watch 122 determines that the account C is stored on it, the watch 122 transmits this information to the mobile phone 121, and the mobile phone 121 determines that the outbound level of the data of the watch 122 is the second level. When the mobile phone 121 transmits the account C to the sound 135 (another example of the first device) and the sound 135 determines that the account C is not stored on it, the sound 135 transmits this information to the mobile phone 121, and the mobile phone 121 determines that the outbound level of the data of the sound 135 is the fourth level.
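The branch logic of steps 220b to 224b, together with the fallback to steps 221c to 224c, can be sketched as follows; the function and parameter names are illustrative assumptions, not from the patent:

```python
# Illustrative sketch of case 1(2): the second device recognizes the raw
# biometric data itself (step 220b); if it obtains an account it queries the
# first device for that account (steps 221b-224b), otherwise the first device
# recognizes the raw data instead (steps 221c-224c).

def determine_level_case_1_2(second_templates, first_accounts,
                             first_templates, raw_biometric):
    account = second_templates.get(raw_biometric)       # step 220b
    if account is not None:
        # Steps 221b-224b: ask the first device whether it holds the account.
        return 2 if account in first_accounts else 4
    # Steps 221c-224c: the first device recognizes the raw data.
    account = first_templates.get(raw_biometric)
    if account is None:
        return 4                                        # third instruction
    return 3                                            # step 224c
```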
Step 221c, the second device sends the raw data of the biometric characteristic of the first user to the first device.
In step 222c, the first device identifies the raw data of the biometric characteristic of the first user, and determines whether an account corresponding to the raw data of the biometric characteristic of the first user is available.
When the first device does not obtain an account corresponding to the raw data of the biometric feature of the first user, the first device sends a third instruction to the second device, where the third instruction indicates that no such account was obtained, and the second device determines, according to the third instruction, that the outbound level of the data of the first device is the fourth level. If the first device obtains the account corresponding to the raw data, steps 223c to 224c are performed.
In step 223c, the first device sends fourth information to the second device, where the fourth information is used to indicate an account corresponding to the raw data of the biometric characteristic of the first user determined by the first device.
Step 224c, the second device determines, according to the fourth information, that the outbound level of the data of the first device is the third level.
Further, the second device may determine, according to the fourth information and the specific form of the registration information input by the first user, which sub-level of the third level the outbound level of the data of the first device is. Specifically, when the registration information is a 3D face, fingerprint, iris, or DNA of the first user, the second device determines that the outbound level is the first sub-level; when it is a 2D face or vein of the first user, the second sub-level; and when it is a voiceprint or signature of the first user, the third sub-level.
(3) In the case where the registration information input by the first user is the account of the first user, the account is registered on the second device, and the second device sends the account to the first device. The first device determines whether the account of the first user exists on it and sends a second instruction to the second device, where the second instruction indicates whether the account exists on the first device. When the account of the first user is stored on the first device, the second device determines that the outbound level of the data of the first device is the second level; when it is not, the second device determines that the outbound level is the fourth level.
In case 1, the second device may send the outbound level of the data of the first device, as determined by the second device, to the first device.
Case 2: the first device determines the outbound level of the data of the first device.
(1) In the case where the registration information input by the first user includes raw data of a biometric feature of the first user, and the second device can collect the biometric feature but has no biometric recognition function, the second device needs to complete recognition with the help of another device.
Optionally, the number of the accounts of the first user may be one or more.
As shown in fig. 6, the specific step 220 may further include steps 220d to 222d.
In step 220d, the second device sends the original data of the biometric characteristic of the first user and all the accounts stored in the second device to the first device.
In step 221d, the first device identifies the raw data of the biometric characteristic of the first user and determines whether an account corresponding to the raw data of the biometric characteristic of the first user is available.
Step 222d, the first device determines the outbound level of the data of the first device.
Specifically, when the first device determines that an account corresponding to the raw data of the biometric feature of the first user exists among all the accounts stored on the second device, the first device determines that the outbound level of its data is the second level; when no such account exists among the accounts stored on the second device, the first device determines that the outbound level is the third level. When the first device does not obtain an account corresponding to the raw data at all, the first device determines that the outbound level is the fourth level.
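Step 222d can be sketched as follows, assuming the first device has received the raw biometric data together with all accounts stored on the second device (step 220d); the names are illustrative assumptions:

```python
# Illustrative sketch of case 2(1), step 222d: the first device recognizes
# the raw biometric data (step 221d) and determines its own outbound level
# from the accounts forwarded by the second device.

def first_device_level(first_templates, second_device_accounts, raw_biometric):
    account = first_templates.get(raw_biometric)   # step 221d
    if account is None:
        return 4   # no account matched: fourth level
    if account in second_device_accounts:
        return 2   # account also stored on the second device: second level
    return 3       # biometric match only: third level
```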
Further, when determining that the outbound level of the data of the first device is the third level, the first device may further determine, according to the specific form of the registration information of the first user sent by the second device, which sub-level of the third level applies. Specifically, when the registration information is a 3D face, fingerprint, iris, or DNA of the first user, the first device determines that the outbound level is the first sub-level; when it is a 2D face or vein of the first user, the second sub-level; and when it is a voiceprint or signature of the first user, the third sub-level.
(2) In the case where the registration information input by the first user includes raw data of a biometric feature of the first user, and the second device has a biometric recognition function, the second device can perform recognition itself.
As shown in fig. 7, the specific step 220 may further include steps 220e to 222e, and steps 221f and 222f.
In step 220e, the second device identifies the raw data of the biometric characteristic of the first user and determines whether an account corresponding to the raw data of the biometric characteristic of the first user is available.
In the case where the second device obtains an account corresponding to the raw data of the biometric feature of the first user, the method 200 further includes steps 221e and 222e. In the case where the second device does not obtain such an account, the method 200 further includes steps 221f and 222f.
Step 221e, the second device sends fifth information to the first device, where the fifth information indicates the account corresponding to the raw data of the biometric feature of the first user obtained by the second device.
Step 222e, the first device determines the outbound level of the data of the first device according to the fifth information.
Specifically, when the first device determines that the account, sent by the second device, corresponding to the raw data of the biometric feature of the first user exists on the first device, the first device determines that the outbound level of its data is the second level; when that account does not exist on the first device, the first device determines that the outbound level is the fourth level.
Step 221f, the second device sends sixth information to the first device, where the sixth information indicates the raw data of the biometric feature of the first user.
In step 222f, the first device determines, according to the sixth information, whether an account corresponding to the raw data of the biometric feature of the first user is obtained.
Step 223f, the first device determines the outbound level of the data of the first device.
Specifically, when the first device does not obtain an account corresponding to the raw data of the biometric feature of the first user, the first device determines that the outbound level of its data is the fourth level. When the first device determines the account corresponding to the raw data, steps 224f to 226f are performed.
In step 224f, the first device sends seventh information to the second device, where the seventh information indicates the account, determined by the first device, that corresponds to the raw biometric data of the first user.
Step 225f, the second device determines whether the account corresponding to the raw biometric data of the first user, as determined by the first device, exists on the second device.
In step 226f, the second device sends eighth information to the first device, where the eighth information indicates whether the account determined by the first device exists on the second device.
If the account corresponding to the raw biometric data of the first user, as determined by the first device, is stored on the second device, the first device determines that the outbound level of its data is the second level; if that account is not stored on the second device, the first device determines that the outbound level of its data is the third level.
Further, when the first device determines that the outbound level of its data is the third level, the first device may further determine, according to the specific form of the first user's registration information sent by the second device, which sub-level of the third level the outbound level of its data is. Specifically, when the registration information of the first user is a 3D face, a fingerprint, an iris, or DNA of the first user, the first device determines that the outbound level of its data is the first sub-level; when the registration information is a 2D face or vein of the first user, the first device determines that the outbound level of its data is the second sub-level; and when the registration information is the voice or signature of the first user, the first device determines that the outbound level of its data is the third sub-level.
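The mapping from the form of the first user's registration information to a sub-level of the third level can be sketched as follows. This is a minimal illustration, not the patented implementation; the function name, keys, and string values are all assumptions.

```python
# Illustrative sketch (not the patented implementation): map the form of the
# first user's registration information to a sub-level of the third level.
FIRST_SUB = "first sub-level"    # strongest biometric forms
SECOND_SUB = "second sub-level"
THIRD_SUB = "third sub-level"

SUB_LEVEL_BY_FORM = {
    "3d_face": FIRST_SUB, "fingerprint": FIRST_SUB,
    "iris": FIRST_SUB, "dna": FIRST_SUB,
    "2d_face": SECOND_SUB, "vein": SECOND_SUB,
    "voice": THIRD_SUB, "signature": THIRD_SUB,
}

def third_level_sub_level(registration_form: str) -> str:
    """Return the sub-level of the third outbound level for a given form."""
    return SUB_LEVEL_BY_FORM[registration_form.lower()]
```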
(3) When the registration information entered by the first user is the first user's account, the account of the first user is registered on the second device, and the second device sends the account to the first device. The first device determines whether the account of the first user exists on it: if it does, the first device determines that the outbound level of its data is the second level; if it does not, the first device determines that the outbound level of its data is the fourth level.
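Case (3) reduces to a simple membership test. The sketch below (the function name and the numeric encoding of levels are assumptions) shows the decision the first device makes.

```python
# Illustrative sketch: the first device's outbound level under case (3).
# Level 2 = the received account exists locally; level 4 = it does not.
def outbound_level_for_account(received_account, local_accounts):
    """local_accounts: set of accounts stored on the first device."""
    return 2 if received_account in local_accounts else 4
```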
In case 2, the first device may send the outbound level of its data, as determined by the first device, to the second device.
In cases 1 and 2 above, the home network may contain one or more first devices, and the outbound level of each first device's data may be determined in the manner described above. When the second device sends the raw biometric data, it may send it to all devices in the home network other than itself, in which case every such device is a first device. Alternatively, the second device may select, according to the capabilities of the devices in the home network, a device with a biometric recognition function, send the raw biometric data to that device, obtain the account corresponding to the raw biometric data, and then confirm whether that account exists on the other devices in the home network, thereby completing the determination of the outbound level of the data of each device other than the second device (i.e., each first device).
After the outbound level of the data of the first device has been determined, the second device may further perform the following step 230.
In step 230, the second device obtains a first data request message of the first user, where the first data request message is used to request to share the first data of the first user.
Illustratively, the second device may recognize the first user's request for the first data through a voice recognition function; alternatively, the second device may obtain the first user's request for the first data through the first user's input.
The registration information entered by a user on a device, and the data generated while using the device, can be classified by risk degree into high-impact personal data, medium-impact personal data, low-impact personal data, and non-personal data. High-impact personal data may include, for example, precise location data and/or health data, where precise location data can be understood as latitude and longitude coordinates or trajectories; for example, the real-time precise location of the user while using the device. Medium-impact personal data may include general location data and/or video data. General location data may be the cell identity (CELL ID) in which the terminal device is located, or the basic service set identifier (BSSID) of the Wi-Fi network to which the terminal device is connected. General location data usually cannot be resolved directly into latitude and longitude coordinates, but can approximately identify the user's location; it may be historical location data from the user's use of the device, for example places of interest to the user, such as where the user likes to eat or be entertained. Low-impact personal data may include logistics data, schedule data, and/or preference data; non-personal data may include device capability data and/or device status data.
Risk of high-impact personal data > risk of medium-impact personal data > risk of low-impact personal data > risk of non-personal data. High-impact personal data can be understood as the portion of personal data that carries the highest risk to the user; medium-impact personal data carries a higher risk; low-impact personal data carries a lower risk; and non-personal data can be understood as data of the device itself that is not related to the user.
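The ordering can be expressed with an ordered enumeration; the numeric values below are assumptions chosen only so that comparisons reproduce the stated ranking.

```python
from enum import IntEnum

# Illustrative sketch: encode the risk ordering so it can be compared directly.
class Risk(IntEnum):
    NON_PERSONAL = 0   # data of the device itself, unrelated to the user
    LOW_IMPACT = 1
    MEDIUM_IMPACT = 2
    HIGH_IMPACT = 3    # highest risk to the user

assert Risk.HIGH_IMPACT > Risk.MEDIUM_IMPACT > Risk.LOW_IMPACT > Risk.NON_PERSONAL
```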
Optionally, in the embodiments of the present application, the risk degree may also be replaced by a privacy degree, and risk by privacy.
When a user uses the device through the entered registration information and generates data on the device, the device can tag the data according to its risk degree. For example, precise location data is tagged as high-impact personal data; general location data as medium-impact personal data; the user's preference data as low-impact personal data; and device capability data as non-personal data.
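The tagging step might look like the following sketch; the category keys and tag strings are illustrative assumptions mirroring the examples above.

```python
# Illustrative sketch: tag each category of data generated on the device
# with its risk label, mirroring the examples in the text.
RISK_TAG = {
    "precise_location": "high-impact personal data",
    "health": "high-impact personal data",
    "general_location": "medium-impact personal data",
    "video": "medium-impact personal data",
    "preference": "low-impact personal data",
    "logistics": "low-impact personal data",
    "schedule": "low-impact personal data",
    "device_capability": "non-personal data",
    "device_status": "non-personal data",
}

def tag(category: str) -> str:
    """Return the risk tag attached to data of the given category."""
    return RISK_TAG[category]
```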
For example, as shown in FIG. 3, for a device requesting data, the higher the outbound level of the data of the requested device, the higher the maximum risk of the data that the requesting device may access. Specifically, when the outbound level of the data of the requested device is the second level, the type of data that the requesting device can access on the requested device is the second type: the highest-risk data accessible is medium-impact personal data, and the second type may include medium-impact personal data, low-impact personal data, and non-personal data. When the outbound level is the third level, the accessible type is the third type: the highest-risk data accessible is low-impact personal data, and the third type may include low-impact personal data and non-personal data. When the outbound level is the fourth level, the accessible type is the fourth type, which may include only non-personal data. It will be appreciated that when the requesting device is itself the requested device, all data types are accessible, i.e., the first type: the highest-risk data accessible is high-impact personal data, and the first type includes high-impact personal data, medium-impact personal data, low-impact personal data, and non-personal data.
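The level-to-type mapping above can be sketched as a lookup table plus an access check; the encoding is an assumption, not the patent's code.

```python
# Illustrative sketch: data types accessible at each outbound level of the
# requested device, as enumerated in the text.
ACCESSIBLE_BY_LEVEL = {
    1: {"high-impact", "medium-impact", "low-impact", "non-personal"},  # requester == requested
    2: {"medium-impact", "low-impact", "non-personal"},
    3: {"low-impact", "non-personal"},
    4: {"non-personal"},
}

def may_access(outbound_level: int, risk_tag: str) -> bool:
    """True if a requester may access data with this risk tag."""
    return risk_tag in ACCESSIBLE_BY_LEVEL[outbound_level]
```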
Further, the third level can be subdivided into two or more sub-levels. When it is subdivided into the first, second, and third sub-levels, the data types corresponding to the outbound level of the data of the requested device may differ. Specifically, when the outbound level of the data of the requested device is the first sub-level, the data types that the requesting device can access include photo data, recorded video data, device capability data, and/or device status data, such as photos taken or videos recorded by the user. When it is the second sub-level, the accessible data types include logistics data, schedule data, device capability data, and/or device status data, for example the user's express delivery data. When it is the third sub-level, the accessible data types include preference data, viewed video data, device capability data, and/or device status data, for example the type of song or the singer the user likes to listen to; as another example, the user's exercise preferences; as another example, a video watched by the user.
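The sub-level refinement of the third level admits a similar sketch; the type names below are assumptions.

```python
# Illustrative sketch: concrete data types accessible at each sub-level of
# the third outbound level, following the enumeration in the text.
TYPES_BY_SUB_LEVEL = {
    "first": {"photo", "recorded_video", "device_capability", "device_status"},
    "second": {"logistics", "schedule", "device_capability", "device_status"},
    "third": {"preference", "viewed_video", "device_capability", "device_status"},
}

def third_level_may_access(sub_level: str, data_type: str) -> bool:
    """True if the data type is accessible at the given third-level sub-level."""
    return data_type in TYPES_BY_SUB_LEVEL[sub_level]
```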
The second device may be a device that requests data, and the first device may be a device that is requested for data.
Step 240 may also be included after the second device obtains the first user's request for the first data.
At step 240, it is determined whether the first device shares the first data. Step 240 is specifically described in two ways below.
Mode 1: the second device determines whether the first device shares the first data.
As shown in fig. 8, the specific step 240 may include steps 241a to 244 a.
Step 241a, the second device determines whether the first data belongs to a data type corresponding to the outbound level of the data of the first device. If it does, the second device performs step 242a; if it does not, the second device does not send the first data request message to the first device.
In step 242a, the second device sends a first data request message to the first device.
When there are multiple first devices, the second device may determine, according to a preset rule, to send the first data request message to at least one of them. For example, the preset rule may select the first devices whose distance from the second device is less than a first threshold; or the first devices from which the second device requests data with a frequency greater than a second threshold; or the first devices whose confidence level is greater than a third threshold.
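One way the preset rule might be applied, purely an illustrative assumption about data layout and thresholds, is:

```python
# Illustrative sketch: select, from several candidate first devices, those
# passing any configured preset rule (distance, request frequency, confidence).
def select_targets(devices, max_distance=None, min_request_freq=None,
                   min_confidence=None):
    """devices: iterable of dicts with 'distance', 'request_freq', 'confidence'."""
    selected = []
    for d in devices:
        if max_distance is not None and d["distance"] < max_distance:
            selected.append(d)    # rule 1: closer than the first threshold
        elif min_request_freq is not None and d["request_freq"] > min_request_freq:
            selected.append(d)    # rule 2: requested more often than the second threshold
        elif min_confidence is not None and d["confidence"] > min_confidence:
            selected.append(d)    # rule 3: confidence above the third threshold
    return selected
```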
In step 243a, the first device checks whether it stores the first data.
If the first device does not store the first data, the first device does not share the first data with the second device; if at least one first device stores the first data, step 244a is performed.
Whether the first device stores the first data means, specifically, whether the first device stores the first data in association with the account of the first user.
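Because the check is account-scoped, a sketch of the lookup (with an assumed data layout) is:

```python
# Illustrative sketch: "stores the first data" means stores it in association
# with the first user's account, so the lookup is scoped by account.
def find_user_data(store, account, data_name):
    """store maps account -> {data_name: value}; returns the value or None."""
    return store.get(account, {}).get(data_name)
```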
In step 244a, the first device shares the first data with the second device.
Optionally, after the first device shares the first data with the second device, the second device may obtain a second data request message of the first user, used to request sharing of second data, where the second data, like the first data, belongs to a data type corresponding to the outbound level of the data of the first device, and where the interval between the time the second device obtained the first data request message and the time it obtained the second data request message is less than or equal to a first duration. In this case, the second device directly sends the second data request message of the first user to the first device, and when the first device stores the second data, the first device shares the second data with the second device. This effectively provides users with a personalized experience.
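The optional fast path, forwarding a second same-type request that arrives within the first duration without re-deriving the outbound level, can be sketched as follows. The class and field names are assumptions, and timestamps are passed in explicitly to keep the sketch deterministic.

```python
# Illustrative sketch: within `window` seconds of a permitted request for a
# given data type, a second request of the same type is forwarded directly.
class RequestGate:
    def __init__(self, window_seconds):
        self.window = window_seconds
        self._last_permitted = {}  # data_type -> timestamp of last permitted request

    def record_permitted(self, data_type, now):
        self._last_permitted[data_type] = now

    def forward_directly(self, data_type, now):
        last = self._last_permitted.get(data_type)
        return last is not None and (now - last) <= self.window
```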
Mode 2: the first device determines whether the first device shares the first data.
As shown in fig. 8, the specific step 240 may include steps 241b to 244 b.
Step 241b, the second device sends the first data request message to the first device.
When there are multiple first devices, the second device may determine, according to a preset rule, to send the first data request message to at least one of them. For example, the preset rule may select the first devices whose distance from the second device is less than a first threshold; or the first devices from which the second device requests data with a frequency greater than a second threshold; or the first devices whose confidence level is greater than a third threshold.
Step 242b, the first device determines, according to the first data request message, whether the first data belongs to data of a data type corresponding to the outbound rank of the data of the first device.
Specifically, if the first data does not belong to a data type corresponding to the outbound level of the data of the first device, the first device does not share the first data with the second device. If the first data does belong to such a data type, the method further includes step 243b.
In step 243b, the first device checks whether it stores the first data.
Specifically, if the first device does not store the first data, the first device does not share the first data with the second device; if the first device stores the first data, step 244b is performed.
Whether the first device stores the first data means, specifically, whether the first device stores the first data in association with the account of the first user.
In step 244b, the first device shares the first data with the second device.
Optionally, after the first device shares the first data with the second device, the second device may obtain a second data request message of the first user, used to request sharing of second data, where the second data, like the first data, belongs to a data type corresponding to the outbound level of the data of the first device, and where the interval between the time the second device obtained the first data request message and the time it obtained the second data request message is less than or equal to a first duration. In this case, the second device sends the second data request message of the first user to the first device, and when the first device stores the second data, the first device shares the second data with the second device. This effectively provides users with a personalized experience.
For example, as shown in fig. 1 and fig. 9, when user 1 uses the vehicle 136 through account A (the vehicle 136 being an example of a second device), the vehicle 136 may send account A of user 1 over a network to one or more devices connected to the same network as the vehicle 136, for example the devices in fig. 1; here the description takes the mobile phone 111 (an example of a first device) and the mobile phone 101 (another example of a first device) as the multiple devices. After the mobile phone 111 receives account A of user 1 and determines that account A is stored on it, the mobile phone 111 determines that the outbound level of its data is the second level; this outbound level is relative to the vehicle 136, and the mobile phone 111 sends it to the vehicle 136. The vehicle 136 obtains a data request message of user 1 requesting sharing of the places where user 1 likes to be entertained. Because such places belong to a data type corresponding to the outbound level of the data of the mobile phone 111, the vehicle 136 sends the data request message to the mobile phone 111; after receiving it, the mobile phone 111 shares the places where user 1 likes to be entertained with the vehicle 136, provided they are stored on the mobile phone 111.
After the mobile phone 101 receives the account a of the user 1, if the mobile phone 101 determines that the account a of the user 1 does not exist in the mobile phone 101, the mobile phone 101 determines that the outbound rating of the data of the mobile phone 101 is the fourth rating, and if the outbound rating of the data of the mobile phone 101 is relative to the vehicle 136, the mobile phone 101 sends the outbound rating of the data of the mobile phone 101 to the vehicle 136. The vehicle 136 acquires the data request message of the user 1, where the request message of the user 1 is used to request sharing of the place where the user 1 likes entertainment, because the place where the user 1 likes entertainment does not belong to the data of the data type corresponding to the outbound rating of the data of the mobile phone 101, the vehicle 136 may not send the data request message of the user 1 to the mobile phone 101, that is, the vehicle 136 may only obtain the place where the user 1 likes entertainment on the mobile phone 111, so that the driver of the vehicle 136 may drive the vehicle 136 to the destination according to the place where the user 1 likes entertainment on the mobile phone 111.
For example, as shown in figs. 1 and 10, when user 2 uses the television 131 through account B (the television 131 being an example of a second device), the television 131 sends account B of user 2 over a network to one or more devices connected to the same home network as the television 131, for example the devices in fig. 1; here the description takes the tablet computer 103 (an example of a first device) and the sound 123 (another example of a first device) as the multiple devices. After the tablet computer 103 receives account B of user 2 and determines that account B is stored on it, the tablet computer 103 notifies the television 131; the television 131 determines that the outbound level of the data of the tablet computer 103 is the second level and, since this outbound level is relative to the television 131, sends it to the tablet computer 103. The television 131 obtains a data request message of user 2 requesting sharing of user 2's historical song list data and sends it to the tablet computer 103. Because the historical song list data belongs to a data type corresponding to the outbound level of the data of the tablet computer 103, the tablet computer 103 shares user 2's historical song list data with the television 131, provided it is stored on the tablet computer 103.
After the sound 123 receives account B of user 2 and determines that account B is not stored on it, the sound 123 sends the television 131 an indication that account B is not stored on the sound 123; the television 131 determines that the outbound level of the data of the sound 123 is the fourth level and, since this outbound level is relative to the television 131, sends it to the sound 123. The television 131 obtains the data request message of user 2 requesting sharing of user 2's historical song list data and sends it to the sound 123; because the historical song list data does not belong to a data type corresponding to the outbound level of the data of the sound 123, the sound 123 does not share user 2's historical song list data with the television 131. When user 3 uses the television 131 by voice and the television 131 has no account corresponding to user 3's voiceprint, the television 131 cannot recognize the voiceprint and sends user 3's voice over a network to one or more devices connected to the same home network as the television 131, for example the devices in fig. 1; the tablet computer 103 and the sound 123 are taken as examples of the multiple devices.
After the tablet computer 103 receives user 3's voice, the tablet computer 103 does not recognize user 3's voiceprint and determines that no account corresponding to the voiceprint exists on it, so the tablet computer 103 determines that the outbound level of its data is the fourth level; this outbound level is relative to the television 131, and the tablet computer 103 sends it to the television 131. The television 131 obtains a data request message of user 3 requesting sharing of user 3's historical song list data, determines that this data does not belong to a data type corresponding to the outbound level of the data of the tablet computer 103, and therefore does not send the data request message to the tablet computer 103.
After the sound 123 receives user 3's voice, the sound 123 recognizes user 3's voiceprint and determines that an account corresponding to it is stored on the sound 123, so the sound 123 determines that the outbound level of its data is the third level; and because user 3 uses the television 131 through voice, the outbound level of the data of the sound 123 is the third sub-level. This outbound level is relative to the television 131, and the sound 123 sends it to the television 131. The television 131 obtains the data request message of user 3 requesting sharing of user 3's historical song list data, determines that this data belongs to a data type corresponding to the outbound level of the data of the sound 123, and sends the data request message to the sound 123; when user 3's historical song list data is stored on the sound 123, the sound 123 shares it with the television 131. Thus, when user 2 uses the television 131 through account B, the television 131 receives the historical song list data stored by user 2 on the tablet computer 103; and when user 3 uses the television 131 by voice, the historical song list data stored by user 3 on the sound 123 can be accessed.
For example, as shown in fig. 1, user 3 may use the mobile phone 121 through user 3's face image, fingerprint, or voice; use the watch 122 through user 3's face image or voice; and use the sound 123 through user 3's voice. The data that user 3 stores on the mobile phone 121, the watch 122, and the sound 123 through the raw data of user 3's biometric features is in each case stored in association with the account C used by user 3.
Referring to figs. 1 and 11, when user 3 uses the vehicle 102 (an example of a second device) through user 3's voice, the vehicle 102 may send user 3's original voice and account B over the network to one or more devices. The vehicle 102 cannot recognize the voiceprint of user 3 but can recognize the voice; that is, the vehicle 102 cannot identify the speaker, but can recognize the content of the speech. The one or more devices may be devices connected to the same network as the vehicle 102, for example the devices in fig. 1; here one device, the sound 123 (an example of a first device), is taken as an example. After the sound 123 receives user 3's voice and account B, the sound 123 determines that account B, corresponding to the voiceprint of user 2, is stored on the sound 123, and determines that the outbound level of its data is the second level; this outbound level is relative to the vehicle 102, and the sound 123 sends it to the vehicle 102. The vehicle 102 obtains a data request message of user 3 requesting sharing of user 3's historical song list data and determines that this data belongs to a data type corresponding to the outbound level of the data of the sound 123, so the vehicle 102 sends the data request message to the sound 123; when user 3's historical song list data is stored on the sound 123, the sound 123 shares it with the vehicle 102.
When user 3 also uses the vehicle 102 through user 3's fingerprint, the vehicle 102 may recognize the fingerprint, obtain account B as the corresponding account, and send account B over a network to one or more devices connected to the same network as the vehicle 102, for example the devices in fig. 1; here one device, the mobile phone 101 (another example of a first device), is taken as an example. After the mobile phone 101 receives account B and determines that account B is stored on it, the mobile phone 101 determines that the outbound level of its data is the second level; this outbound level is relative to the vehicle 102, and the mobile phone 101 sends it to the vehicle 102. The vehicle 102 obtains a data request message of user 3 requesting sharing of the place where user 3 likes to exercise and determines that this place belongs to a data type corresponding to the outbound level of the data of the mobile phone 101, so the vehicle 102 sends the data request message to the mobile phone 101; when the place where user 3 likes to exercise is stored on the mobile phone 101, the mobile phone 101 sends it to the vehicle 102.
Thus, when user 3 uses the vehicle 102 through the raw data of different biometric features, the vehicle 102 receives not only the historical song list data stored by user 3 on the sound 123 but also the place where user 3 likes to exercise stored on the mobile phone 101. The vehicle 102 can therefore play user 3's favorite songs according to the historical song list, and can also, based on user 3's original fingerprint, obtain the place where user 3 likes to exercise stored on the mobile phone 101 and be driven there.
Referring to figs. 1 and 12, when user 3 uses the tablet computer 103 (an example of a second device) through user 3's original voice, the tablet computer 103 performs voiceprint recognition on the voice and obtains account B as the corresponding account; the tablet computer 103 then sends account B over a network to one or more devices connected to the same home network as the tablet computer 103, for example the devices in fig. 1; here one device, the watch 122 (an example of a first device), is taken as an example. After the watch 122 receives account B and determines that account B does not exist on it, the watch 122 determines that the outbound level of its data is the fourth level; the tablet computer 103 sends the data request message of user 3 to the watch 122, and because the watch 122 has no data associated with account B of user 3, the watch 122 does not share data with the tablet computer 103. When user 3 uses the tablet computer 103 through user 3's original 2D face, the tablet computer 103 may recognize the 2D face, obtain account B as the corresponding account, and send account B over a network to one or more devices connected to the same home network as the tablet computer 103, for example the devices in fig. 1; here one device, the mobile phone 121 (another example of a first device), is taken as an example.
After the mobile phone 121 receives account B, the mobile phone 121 determines that account B is not stored on the mobile phone 121, determines that the outbound level of the data of the mobile phone 121 is the second level, and sends this outbound level to the tablet computer 103. The tablet computer 103 obtains a data request message of the user 3, which requests sharing of the schedule data of the user 3. The tablet computer 103 determines that the schedule data belongs to a data type corresponding to the outbound level of the data of the mobile phone 121 and sends the data request message of the user 3 to the mobile phone 121; the mobile phone 121 shares the schedule data of the user 3 with the tablet computer 103, provided that the schedule data of the user 3 is stored on the mobile phone 121.
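The outbound-level negotiation described above can be sketched as follows. This is an illustrative sketch only: the numeric levels, the level-to-data-type mapping, and all class and function names are assumptions for the purpose of illustration and are not part of the claimed method.

```python
# Hypothetical mapping: each outbound level permits a set of data types
# to leave the device; the fourth level permits none.
LEVEL_TO_TYPES = {
    1: {"device_capability", "device_state", "schedule", "playlist", "health"},
    2: {"device_capability", "device_state", "schedule", "playlist"},
    3: {"device_capability", "device_state"},
    4: set(),
}

class FirstDevice:
    """A data-holding device such as the mobile phone 121 or watch 122."""
    def __init__(self, data_store, outbound_level):
        self.data_store = data_store        # {(account, data_type): data}
        self.outbound_level = outbound_level

    def report_outbound_level(self):
        # The first device reports its outbound level to the requester.
        return self.outbound_level

    def handle_request(self, account, data_type):
        # Share only if data associated with the account actually exists.
        return self.data_store.get((account, data_type))

def request_data(account, data_type, first_device):
    """Second-device side: check the level, then send the data request."""
    level = first_device.report_outbound_level()
    if data_type not in LEVEL_TO_TYPES[level]:
        return None  # requested type may not leave the device at this level
    return first_device.handle_request(account, data_type)

# The phone (second level) shares user 3's schedule data; the watch
# (fourth level) shares nothing.
phone = FirstDevice({("B", "schedule"): "gym at 7pm"}, outbound_level=2)
watch = FirstDevice({}, outbound_level=4)
print(request_data("B", "schedule", phone))  # -> gym at 7pm
print(request_data("B", "schedule", watch))  # -> None
```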
For another example, as shown in fig. 1, when one or more users use the television 131, the mobile phone 132, the tablet computer 133, the watch 134, the speaker 135, or the vehicle 136 in the guest state, that is, when the one or more users do not use these devices through any account or through any raw data of biometric features, no personal data of the one or more users (for example, historical viewing records) exists on the television 131, the mobile phone 132, the tablet computer 133, the watch 134, the speaker 135, or the vehicle 136, and each device can share only its non-personal data, that is, the device capability data and/or device state data of the device. For example, when storing generated data, the television 131 does not store any correspondence between each user and the data that user produced on the television 131; it stores only the non-personal data generated by all users using the television 131, and the television 131 shares only its device capability data or device state data with other devices.
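The guest-state behavior above can be sketched as follows. The data structure and function name are illustrative assumptions: a device with no identified user accounts offers only non-personal data (device capability and device state) to its peers.

```python
def shareable_data(device):
    """Return the data a device may share with peer devices."""
    non_personal = {
        "device_capability": device["capability"],
        "device_state": device["state"],
    }
    if device.get("user_accounts"):
        # Personal data is shareable only when users are identified
        # by account or biometric raw data.
        return {**non_personal, "personal": device["personal_data"]}
    return non_personal  # guest state: non-personal data only

# A television used in the guest state stores no per-user data.
television = {"capability": "4K playback", "state": "idle",
              "user_accounts": [], "personal_data": {}}
print(shareable_data(television))  # only capability and state
```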
The method 200 may further include step 250.
Step 250: the second device stores the first data shared by the first device.
The method provided by the embodiment of the present application is described in detail above with reference to fig. 2 to 12. Hereinafter, the apparatus provided in the embodiment of the present application will be described in detail with reference to fig. 13 to 14. It should be understood that the description of the apparatus embodiment and the description of the method embodiment correspond to each other; therefore, for the sake of brevity, for contents that are not described in detail, reference may be made to the above method embodiment.
Fig. 13 shows a schematic structural diagram of an electronic device 1300 provided in an embodiment of the present application.
In an implementation manner, the electronic device 1300 may be the first device in the method 200, and the electronic device 1300 may perform the steps performed by the first device in the method 200, which may specifically refer to the description of the method 200 and will not be described herein again.
In another implementation manner, the electronic device 1300 may be the second device in the method 200, and the electronic device 1300 may perform the steps performed by the second device in the method 200, which may specifically refer to the description of the method 200 and is not described herein again.
The electronic device 1300 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a Personal Digital Assistant (PDA), an Augmented Reality (AR) device, a Virtual Reality (VR) device, an Artificial Intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or a smart city device, and the specific type of the electronic device is not particularly limited by the embodiments of the present application.
The electronic device 1300 may include a processor 1310, an external memory interface 1320, an internal memory 1321, a Universal Serial Bus (USB) interface 1330, a charging management module 1340, a power management module 1341, a battery 1342, an antenna 1, an antenna 2, a mobile communication module 1350, a wireless communication module 1360, an audio module 1370, a speaker 1370A, a receiver 1370B, a microphone 1370C, an earphone interface 1370D, a sensor module 1380, keys 1390, a motor 1391, an indicator 1392, a camera 1393, a display 1394, and a Subscriber Identity Module (SIM) card interface 1395, and the like. The sensor module 1380 may include a pressure sensor 1380A, a gyro sensor 1380B, an air pressure sensor 1380C, a magnetic sensor 1380D, an acceleration sensor 1380E, a distance sensor 1380F, a proximity light sensor 1380G, a fingerprint sensor 1380H, a temperature sensor 1380J, a touch sensor 1380K, an ambient light sensor 1380L, a bone conduction sensor 1380M, and the like.
It is to be understood that the architecture illustrated in the embodiments of the invention does not constitute a specific limitation on the electronic device 1300. In other embodiments of the present application, the electronic device 1300 may include more or fewer components than illustrated, may combine certain components, may split certain components, or may use a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 1310 may include one or more processing units, such as: the processor 1310 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 1310 for storing instructions and data. In some embodiments, the memory in the processor 1310 is a cache memory. The memory may hold instructions or data that the processor 1310 has just used or uses cyclically. If the processor 1310 needs to use the instructions or data again, it may call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 1310, and thereby increases the efficiency of the system.
In some embodiments, processor 1310 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 1310 may include multiple sets of I2C buses. The processor 1310 may be coupled to the touch sensor 1380K, the charger, the flash, the camera 1393, etc. via different I2C bus interfaces, respectively. For example: the processor 1310 may be coupled to the touch sensor 1380K via an I2C interface, such that the processor 1310 and the touch sensor 1380K communicate via an I2C bus interface to implement touch functionality of the electronic device 1300.
The I2S interface may be used for audio communication. In some embodiments, processor 1310 may include multiple sets of I2S buses. Processor 1310 may be coupled to audio module 1370 via an I2S bus to enable communication between processor 1310 and audio module 1370. In some embodiments, the audio module 1370 may transmit audio signals to the wireless communication module 1360 through the I2S interface, enabling answering calls through bluetooth headsets.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 1370 and the wireless communication module 1360 may be coupled through a PCM bus interface. In some embodiments, the audio module 1370 may also transmit audio signals to the wireless communication module 1360 through the PCM interface, so as to receive phone calls through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 1310 with the wireless communication module 1360. For example: the processor 1310 communicates with a bluetooth module in the wireless communication module 1360 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 1370 may transmit the audio signal to the wireless communication module 1360 through the UART interface, so as to realize the function of playing music through the bluetooth headset.
The MIPI interface can be used to connect the processor 1310 with peripheral devices such as the display 1394 and the camera 1393. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, the processor 1310 and the camera 1393 communicate through a CSI interface, enabling the capture functionality of the electronic device 1300. The processor 1310 and the display screen 1394 communicate via the DSI interface to implement the display function of the electronic device 1300.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 1310 with the camera 1393, the display 1394, the wireless communication module 1360, the audio module 1370, the sensor module 1380, and so on. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 1330 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 1330 may be used to connect a charger to charge the electronic device 1300, or to transmit data between the electronic device 1300 and a peripheral device. It may also be used to connect earphones and play audio through the earphones. The interface may also be used to connect other electronic devices, such as AR devices.
It is to be understood that the interfacing relationship between the modules according to the embodiment of the present invention is only illustrative, and does not limit the structure of the electronic apparatus 1300. In other embodiments of the present application, the electronic device 1300 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charge management module 1340 is used to receive charging input from the charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 1340 may receive charging inputs from a wired charger via the USB interface 1330. In some wireless charging embodiments, the charging management module 1340 may receive wireless charging input through a wireless charging coil of the electronic device 1300. The charging management module 1340 can also supply power to the electronic device through the power management module 1341 while charging the battery 1342.
The power management module 1341 is used to connect the battery 1342, the charging management module 1340 and the processor 1310. The power management module 1341 receives input from the battery 1342 and/or the charging management module 1340, and provides power to the processor 1310, the internal memory 1321, the display 1394, the camera 1393, and the wireless communication module 1360. The power management module 1341 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery state of health (leakage, impedance). In some other embodiments, the power management module 1341 may also be disposed in the processor 1310. In other embodiments, the power management module 1341 and the charge management module 1340 can be disposed in the same device.
The wireless communication function of the electronic device 1300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 1350, the wireless communication module 1360, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 1300 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 1350 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied to the electronic device 1300. The mobile communication module 1350 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 1350 can receive electromagnetic waves from the antenna 1, filter, amplify, etc. the received electromagnetic waves, and transmit the electromagnetic waves to the modem processor for demodulation. The mobile communication module 1350 can also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 1350 may be disposed in the processor 1310. In some embodiments, at least some of the functional modules of the mobile communication module 1350 may be disposed in the same device as at least some of the modules of the processor 1310.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 1370A, the receiver 1370B, and the like), or displays an image or video through the display screen 1394. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 1310, and may be located in the same device as the mobile communication module 1350 or other functional modules.
The wireless communication module 1360 may provide solutions for wireless communication applied to the electronic device 1300, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 1360 may be one or more devices that integrate at least one communication processing module. The wireless communication module 1360 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on the electromagnetic wave signal, and transmits the processed signal to the processor 1310. Wireless communication module 1360 may also receive signals to be transmitted from processor 1310, frequency modulate, amplify, and convert to electromagnetic radiation via antenna 2.
In some embodiments, the antenna 1 of the electronic device 1300 is coupled to the mobile communication module 1350 and the antenna 2 is coupled to the wireless communication module 1360 such that the electronic device 1300 may communicate with networks and other devices via wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), code division multiple access (code division multiple access, CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), Long Term Evolution (LTE), LTE, BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 1300 implements the display function via the GPU, the display screen 1394, and the application processor, etc. The GPU is a microprocessor for image processing, connected to the display screen 1394 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 1310 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 1394 is used for displaying images, video, and the like. The display screen 1394 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 1300 can include 1 or N display screens 1394, N being a positive integer greater than 1.
The electronic device 1300 can implement a shooting function through the ISP, the camera 1393, the video codec, the GPU, the display 1394, the application processor, and the like.
The ISP is used to process data fed back by the camera 1393. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 1393.
The camera 1393 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 1300 may include 1 or N cameras 1393, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals; it can process digital image signals as well as other digital signals. For example, when the electronic device 1300 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The electronic device 1300 may support one or more video codecs. As such, electronic device 1300 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 1300 may be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 1320 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 1300. The external memory card communicates with the processor 1310 through the external memory interface 1320 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 1321 may be used to store computer-executable program code, including instructions. The internal memory 1321 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The data storage area may store data (e.g., audio data, phone book, etc.) created during use of the electronic device 1300, and the like. In addition, the internal memory 1321 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one of a magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 1310 executes various functional applications and data processing of the electronic device 1300 by executing instructions stored in the internal memory 1321 and/or instructions stored in a memory provided in the processor.
The electronic device 1300 may implement audio functions through the audio module 1370, the speaker 1370A, the receiver 1370B, the microphone 1370C, the earphone interface 1370D, and the application processor. Such as music playing, recording, etc.
The audio module 1370 is used to convert digital audio information into an analog audio signal output and also used to convert an analog audio input into a digital audio signal. The audio module 1370 may also be used to encode and decode audio signals. In some embodiments, the audio module 1370 may be disposed in the processor 1310, or some functional modules of the audio module 1370 may be disposed in the processor 1310.
The speaker 1370A, also called a "horn", is used to convert an audio electrical signal into an acoustic signal. The electronic device 1300 may listen to music through the speaker 1370A or listen to a hands-free conversation.
The receiver 1370B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic device 1300 receives a call or voice information, it can receive voice by placing the receiver 1370B close to the human ear.
The microphone 1370C, also called a "mic" or "voice tube", is used to convert a sound signal into an electrical signal. When making a call or transmitting voice information, the user can input a sound signal into the microphone 1370C by speaking with the mouth close to the microphone 1370C. The electronic device 1300 may be provided with at least one microphone 1370C. In other embodiments, the electronic device 1300 may be provided with two microphones 1370C to implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 1300 may further include three, four, or more microphones 1370C to collect sound signals, reduce noise, identify sound sources, and perform directional recording.
The headphone interface 1370D is used to connect wired headphones. The headset interface 1370D may be the USB interface 1330, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 1380A is used to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 1380A may be disposed on the display screen 1394. There are many types of pressure sensors 1380A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of electrically conductive material. When a force acts on the pressure sensor 1380A, the capacitance between the electrodes changes, and the electronic device 1300 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 1394, the electronic device 1300 detects the intensity of the touch operation through the pressure sensor 1380A. The electronic device 1300 may also calculate the touched position from the detection signal of the pressure sensor 1380A. In some embodiments, touch operations applied to the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
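The pressure-threshold behavior described above can be sketched as follows. The threshold value and action names are illustrative assumptions; only the thresholding principle comes from the description.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # normalized pressure, hypothetical value

def handle_touch_on_sms_icon(pressure):
    """Map touch intensity on the messaging icon to an instruction."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_message"        # light touch: view the short message
    return "compose_new_message"     # firm touch: create a new short message

print(handle_touch_on_sms_icon(0.2))  # -> view_message
print(handle_touch_on_sms_icon(0.8))  # -> compose_new_message
```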
The gyroscope sensor 1380B may be used to determine a motion pose of the electronic device 1300. In some embodiments, the angular velocity of the electronic device 1300 about three axes (i.e., x, y, and z axes) may be determined by the gyroscope sensors 1380B. The gyro sensor 1380B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 1380B detects a shake angle of the electronic device 1300, calculates a distance to be compensated for the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 1300 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 1380B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 1380C is used to measure air pressure. In some embodiments, the electronic device 1300 calculates the altitude from the barometric pressure value measured by the barometric pressure sensor 1380C, to assist in positioning and navigation.
The magnetic sensor 1380D includes a Hall sensor. The electronic device 1300 may detect the opening and closing of a flip holster using the magnetic sensor 1380D. In some embodiments, when the electronic device 1300 is a flip phone, the electronic device 1300 may detect the opening and closing of the flip cover according to the magnetic sensor 1380D. Features such as automatic unlocking upon flip opening can then be set according to the detected opening and closing state of the holster or the flip cover.
The acceleration sensor 1380E may detect the magnitude of acceleration of the electronic device 1300 in various directions (typically along three axes). The magnitude and direction of gravity may be detected when the electronic device 1300 is stationary. The sensor can also be used to recognize the posture of the electronic device, for applications such as landscape/portrait screen switching and pedometers.
The distance sensor 1380F is used for measuring distance. The electronic device 1300 may measure distance by infrared or laser. In some embodiments, in a shooting scene, the electronic device 1300 may use the distance sensor 1380F to measure distance for fast focusing.
The proximity light sensor 1380G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 1300 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the electronic device 1300 may determine that there is an object nearby; when insufficient reflected light is detected, it may determine that there is no object nearby. The electronic device 1300 can use the proximity light sensor 1380G to detect that the user is holding the electronic device 1300 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 1380G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 1380L is used to sense ambient light brightness. The electronic device 1300 can adaptively adjust the display 1394 brightness based on the perceived ambient light level. The ambient light sensor 1380L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 1380L may also cooperate with the proximity light sensor 1380G to detect whether the electronic device 1300 is in a pocket to prevent accidental touches.
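The adaptive brightness behavior described above can be sketched with a simple mapping from sensed ambient light to backlight level. The linear curve and all constants are assumptions for illustration; a real device would use a tuned, typically nonlinear, curve.

```python
def display_brightness(ambient_lux, min_b=0.05, max_b=1.0, full_scale_lux=1000.0):
    """Map ambient light (lux) to a normalized backlight level.

    Clamps a simple linear mapping: dark rooms get a dim backlight,
    bright surroundings get full brightness.
    """
    level = min_b + (max_b - min_b) * min(ambient_lux, full_scale_lux) / full_scale_lux
    return round(level, 3)

print(display_brightness(0))     # dark room: dim backlight
print(display_brightness(1000))  # bright light: full backlight
```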
The fingerprint sensor 1380H is used to collect a fingerprint. The electronic device 1300 may utilize the collected fingerprint characteristics to implement fingerprint unlocking, access an application lock, fingerprint photographing, fingerprint answering incoming calls, and the like.
The temperature sensor 1380J is used to detect temperature. In some embodiments, the electronic device 1300 implements a temperature processing strategy using the temperature detected by the temperature sensor 1380J. For example, when the temperature reported by the temperature sensor 1380J exceeds a threshold, the electronic device 1300 reduces the performance of a processor located near the temperature sensor 1380J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 1300 heats the battery 1342 when the temperature is below another threshold, to avoid an abnormal shutdown of the electronic device 1300 due to low temperature. In still other embodiments, the electronic device 1300 boosts the output voltage of the battery 1342 when the temperature is below a further threshold, to avoid an abnormal shutdown due to low temperature.
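The layered temperature-processing strategy described above can be sketched as follows. All threshold values and action names are illustrative assumptions; only the three-tier structure comes from the description.

```python
HIGH_TEMP_THRESHOLD = 45.0       # degrees C, hypothetical
LOW_TEMP_HEAT_THRESHOLD = 0.0    # below this, heat the battery
LOW_TEMP_BOOST_THRESHOLD = -10.0 # below this, also boost battery output voltage

def thermal_policy(temp_c):
    """Return the list of thermal-protection actions for a reading."""
    actions = []
    if temp_c > HIGH_TEMP_THRESHOLD:
        # Reduce performance of the nearby processor to cut power draw.
        actions.append("throttle_nearby_processor")
    if temp_c < LOW_TEMP_HEAT_THRESHOLD:
        # Avoid abnormal shutdown caused by low temperature.
        actions.append("heat_battery")
    if temp_c < LOW_TEMP_BOOST_THRESHOLD:
        actions.append("boost_battery_output_voltage")
    return actions

print(thermal_policy(50.0))   # hot: throttle
print(thermal_policy(-15.0))  # very cold: heat and boost
print(thermal_policy(25.0))   # nominal: no action
```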
The touch sensor 1380K is also referred to as a "touch device". The touch sensor 1380K may be disposed on the display screen 1394, and the touch sensor 1380K and the display screen 1394 form a touch screen, which is also referred to as a "touch screen". The touch sensor 1380K is used to detect a touch operation applied thereto or therearound. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation can be provided through the display screen 1394. In other embodiments, the touch sensor 1380K may be disposed on the surface of the electronic device 1300 at a different location than the display 1394.
The bone conduction sensor 1380M may acquire a vibration signal. In some embodiments, the bone conduction sensor 1380M may acquire the vibration signal of a bone mass vibrated by the human vocal part. The bone conduction sensor 1380M may also contact a human pulse to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 1380M may also be provided in a headset, combined into a bone conduction headset. The audio module 1370 may parse a voice signal from the bone-mass vibration signal acquired by the bone conduction sensor 1380M, so as to implement a voice function. The application processor may parse heart rate information from the blood pressure pulsation signal acquired by the bone conduction sensor 1380M, so as to implement a heart rate detection function.
The keys 1390 include a power key, volume keys, and the like. The keys 1390 may be mechanical keys or touch keys. The electronic device 1300 may receive a key input and generate a key signal input related to user settings and function control of the electronic device 1300.
Motor 1391 may generate a vibration cue. The motor 1391 may be used for incoming call vibration cues as well as touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 1391 may also correspond to different vibration feedback effects for touch operations applied to different areas of the display screen 1394. Different application scenarios (such as time reminders, receiving information, alarm clocks, games, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 1392 may be an indicator light, which may be used to indicate a charging state or a change in battery level, or to indicate a message, a missed call, a notification, and the like.
The SIM card interface 1395 is used to connect a SIM card. A SIM card can be inserted into or removed from the SIM card interface 1395 to attach it to or detach it from the electronic device 1300. The electronic device 1300 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 1395 may support a Nano SIM card, a Micro SIM card, a standard SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 1395 at the same time; the types of the multiple cards may be the same or different. The SIM card interface 1395 may also be compatible with different types of SIM cards, and with external memory cards. The electronic device 1300 implements functions such as calling and data communication through interaction between the SIM card and the network. In some embodiments, the electronic device 1300 employs an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the electronic device 1300 and cannot be separated from the electronic device 1300.
The software system of the electronic device 1300 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present invention exemplarily illustrates a software structure of the electronic device 1300 by using an Android system with a layered architecture as an example.
Fig. 14 is a schematic diagram of a software structure of an electronic device 1300 according to an embodiment of the present invention.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 14, the application package may include camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 14, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
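The store-and-retrieve role of the content provider described above can be sketched as a minimal shared store that applications query by data category. The class and method names below are illustrative and do not reproduce the actual Android ContentProvider API:

```python
# Minimal sketch of a content-provider-like shared store: applications
# insert records under a category and query them back, optionally filtered.

class SimpleContentProvider:
    def __init__(self):
        self._tables = {}          # category -> list of records

    def insert(self, category, record):
        self._tables.setdefault(category, []).append(record)

    def query(self, category, predicate=lambda r: True):
        # Return all records in the category that satisfy the predicate.
        return [r for r in self._tables.get(category, []) if predicate(r)]

provider = SimpleContentProvider()
provider.insert("bookmarks", {"title": "News", "url": "https://example.com"})
provider.insert("calls", {"number": "123", "direction": "incoming"})
```

The real framework additionally mediates cross-application access and permissions; this sketch only shows the storage/retrieval contract.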
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions for the electronic device 1300, for example, management of call states (including connected, disconnected, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages that automatically disappear after a short stay, without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications that appear in the system top status bar as a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen as a dialog window. For example, text information is prompted in the status bar, a prompt tone is sounded, the electronic device vibrates, or an indicator light flashes.
The Android runtime includes a core library and a virtual machine, and is responsible for scheduling and management of the Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the Android core library.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example: a surface manager, media libraries, a three-dimensional graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording in a variety of commonly used audio and video formats, as well as still image files, etc. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, and a sensor driver.
The following is an exemplary description of the workflow of the software and hardware of the electronic device 1300 in connection with capturing a photo scene.
When the touch sensor 1380K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and a time stamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a touch click operation and the control corresponding to the click operation is the control of a camera application icon: the camera application calls an interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 1393.
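The kernel-to-framework handoff described above can be sketched as follows; the region table, event fields, and function names are illustrative assumptions, not the actual Android input pipeline API:

```python
# Hedged sketch of the input workflow: the kernel layer wraps a hardware
# interrupt into a raw input event (coordinates plus timestamp), and the
# framework layer maps the coordinates to the control under the touch.

import time

def make_raw_event(x, y):
    # Kernel layer: package the touch into a raw input event.
    return {"x": x, "y": y, "timestamp": time.time()}

CONTROLS = {                      # framework layer: screen regions -> controls
    (0, 0, 100, 100): "camera_app_icon",
}

def dispatch(event):
    # Framework layer: identify the control corresponding to the input event.
    for (x0, y0, x1, y1), control in CONTROLS.items():
        if x0 <= event["x"] < x1 and y0 <= event["y"] < y1:
            return control
    return None                   # no control under this touch
```

In the example above, dispatching a click that lands on the camera icon region is what would then trigger starting the camera application and, via the kernel layer, the camera driver.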
The present application further provides a computer-readable medium, on which a computer program is stored, which, when executed by a computer, implements the method in any of the above method embodiments.
The embodiment of the present application further provides a computer program product, which, when executed by a computer, implements the method in any of the above method embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, or the portions thereof that substantially contribute to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (22)

1. A method for data sharing, comprising:
the method comprises the steps that a first device obtains registration information of a first user from a second device, wherein the registration information of the first user comprises an account of the first user or original data of a biological feature of the first user;
the first device determines a data outbound level of the first device according to the registration information of the first user, wherein different data outbound levels of the first device correspond to different data types, and data of different data types have different highest risks;
the first device obtains a first data request message from the second device, wherein the first data request message is used to request sharing of first data of the first user; and
the first device determines that the first data belongs to data of the data type corresponding to the data outbound level of the first device, and sends the first data to the second device.
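The gating in claim 1 can be sketched as follows; the level names, the type-to-level table, and the account-based level rule are illustrative assumptions used only to show the shape of the check, not part of the claimed method:

```python
# Sketch of the claim-1 exchange on the first device: derive a data
# outbound level from the registration information, then answer a data
# request only when the requested data's type is permitted at that level.

ALLOWED = {                       # assumed type-to-level table
    "second": {"capability", "video"},
    "fourth": {"capability"},
}

def first_device_share(registration_info, request, store):
    """Return the requested data if its type is allowed, else None."""
    # Step 2 of claim 1: determine the data outbound level (toy rule).
    level = "second" if registration_info.get("account") else "fourth"
    data_type, key = request      # step 3: the first data request message
    # Step 4: share only if the type corresponds to this outbound level.
    if data_type in ALLOWED[level]:
        return store[key]         # send the first data to the second device
    return None                   # refuse to share
```

A registered account unlocks more data types; an unrecognized user is limited to the lowest-risk types, mirroring the second versus fourth levels in the dependent claims.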
2. The method of claim 1, wherein, in the case that the registration information of the first user comprises the original data of the biological feature of the first user, the determining, by the first device, the data outbound level of the first device according to the registration information of the first user comprises:
the first device recognizes the original data of the biological feature of the first user and determines whether an account corresponding to the original data of the biological feature of the first user is obtained;
determining that the data outbound level of the first device is a fourth level when the first device does not obtain the account corresponding to the original data of the biological feature of the first user;
determining, in the case that the first device obtains the account corresponding to the original data of the biological feature of the first user, whether the account obtained by the first device exists among the accounts stored in the second device;
determining that the data outbound level of the first device is a second level when the account obtained by the first device exists in the second device; and
determining that the data outbound level of the first device is a third level when the account obtained by the first device does not exist in the second device.
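The level determination of claim 2, together with the sub-level refinement of claim 3, can be sketched as follows; all function, level, and modality names are illustrative assumptions:

```python
# Sketch of claim 2's decision tree: biometric recognition decides between
# the fourth, second, and third levels, and (per claim 3) the biometric
# modality picks a sub-level within the third level.

SUB_LEVEL = {                      # claim 3: modality -> sub-level
    "3d_face": 1, "fingerprint": 1, "iris": 1, "dna": 1,
    "2d_face": 2, "vein": 2,
    "voice": 3, "signature": 3,
}

def outbound_level(recognized_account, accounts_on_second_device, modality):
    """Return (level, sub_level) for the first device's data."""
    if recognized_account is None:
        return ("fourth_level", None)          # biometric not recognized
    if recognized_account in accounts_on_second_device:
        return ("second_level", None)          # account known to second device
    return ("third_level", SUB_LEVEL[modality])  # recognized but unknown there
```

Higher-assurance modalities (3D face, fingerprint, iris, DNA) map to the first sub-level, and weaker ones (voice, signature) to the third, matching the ordering in claim 3.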
3. The method of claim 2, wherein, after the determining that the data outbound level of the first device is the third level, the method further comprises:
determining that the data outbound level of the first device is a first sub-level in the third level when the registration information of the first user is a 3D face, a fingerprint, an iris, or DNA of the first user;
determining that the data outbound level of the first device is a second sub-level in the third level when the registration information of the first user is a 2D face or vein of the first user; or
determining that the data outbound level of the first device is a third sub-level in the third level when the registration information of the first user is the voice or the signature of the first user.
4. The method according to claim 1, wherein, in the case that the registration information of the first user comprises the account of the first user, the determining, by the first device, the data outbound level of the first device according to the registration information of the first user comprises:
the first device determines whether the account of the first user exists;
determining that the data outbound level of the first device is a second level in the case that the account of the first user is stored in the first device; and
determining that the data outbound level of the first device is a fourth level when the account of the first user is not stored in the first device.
5. The method according to any one of claims 2 to 4, wherein the data type corresponding to the second level is a second type, and the data corresponding to the second type comprises general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or
the data type corresponding to the third level is a third type, and the data corresponding to the third type comprises video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or
the data type corresponding to the fourth level is a fourth type, and the data corresponding to the fourth type comprises device capability data and/or device status data.
6. The method of claim 3, wherein the data type corresponding to the first sub-level is a first sub-type, and the data corresponding to the first sub-type comprises photo data, recorded video data, device capability data, and/or device status data; and/or
the data type corresponding to the second sub-level is a second sub-type, and the data corresponding to the second sub-type comprises logistics data, schedule data, device capability data, and/or device status data; and/or
the data type corresponding to the third sub-level is a third sub-type, and the data corresponding to the third sub-type comprises preference data, watched video data, device capability data, and/or device status data.
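The type tables of claims 5 and 6 can be written out as a lookup sketch; the category and level names paraphrase the claim text and the table structure is illustrative only:

```python
# Lookup sketch of claims 5 and 6: each level (or sub-level within the
# third level) names the data categories permitted to leave the device.

LEVEL_TYPES = {
    "second": {"general_location", "video", "logistics", "schedule",
               "preference", "capability", "status"},
    "third":  {"video", "logistics", "schedule", "preference",
               "capability", "status"},
    "fourth": {"capability", "status"},
}

SUB_LEVEL_TYPES = {               # claim 6: refinements inside the third level
    ("third", 1): {"photo", "recorded_video", "capability", "status"},
    ("third", 2): {"logistics", "schedule", "capability", "status"},
    ("third", 3): {"preference", "watched_video", "capability", "status"},
}

def may_share(level, sub_level, data_type):
    """True if data of this type may be shared at this (sub-)level."""
    if sub_level is not None:
        return data_type in SUB_LEVEL_TYPES[(level, sub_level)]
    return data_type in LEVEL_TYPES[level]
```

Note that capability and status data appear at every level, consistent with the claims treating them as the lowest-risk categories.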
7. The method according to any one of claims 1 to 6, further comprising:
the first device sends the data outbound level of the first device to the second device.
8. A method of acquiring data, comprising:
the method comprises the steps that a second device obtains registration information of a first user that is input by the first user, wherein the registration information of the first user comprises original data of a biological feature of the first user;
the second device sends the registration information of the first user to a first device;
the second device receives first information sent by the first device, wherein the first information is used to indicate an account, determined by the first device, corresponding to the original data of the biological feature of the first user;
the second device determines a data outbound level of the first device according to the first information;
the second device obtains a first data request message of the first user, wherein the first data request message is used to request sharing of first data of the first user;
the second device determines that the first data belongs to data of the data type corresponding to the data outbound level of the first device, wherein data of different data types have different highest risks; and
the second device sends the first data request message to the first device and receives the first data sent by the first device.
9. The method of claim 8, wherein the determining, by the second device, the data outbound level of the first device according to the first information comprises:
the second device determines whether the second device stores the account, determined by the first device, corresponding to the original data of the biological feature of the first user;
determining that the data outbound level of the first device is a second level when the account determined by the first device is stored in the second device; and
determining that the data outbound level of the first device is a third level when the account determined by the first device is not stored in the second device.
10. The method of claim 9, wherein, after the determining that the data outbound level of the first device is the third level, the method further comprises:
determining that the data outbound level of the first device is a first sub-level in the third level when the registration information of the first user is a 3D face, a fingerprint, an iris, or DNA of the first user;
determining that the data outbound level of the first device is a second sub-level in the third level when the registration information of the first user is a 2D face or vein of the first user; or
determining that the data outbound level of the first device is a third sub-level in the third level when the registration information of the first user is the voice or the signature of the first user.
11. The method according to claim 9 or 10, wherein the data type corresponding to the second level is a second type, and the data corresponding to the second type comprises general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or
the data type corresponding to the third level is a third type, and the data corresponding to the third type comprises video data, logistics data, schedule data, preference data, device capability data, and/or device status data.
12. The method of claim 10, wherein the data type corresponding to the first sub-level is a first sub-type, and the data corresponding to the first sub-type comprises photo data, recorded video data, device capability data, and/or device status data; and/or
the data type corresponding to the second sub-level is a second sub-type, and the data corresponding to the second sub-type comprises logistics data, schedule data, device capability data, and/or device status data; and/or
the data type corresponding to the third sub-level is a third sub-type, and the data corresponding to the third sub-type comprises preference data, watched video data, device capability data, and/or device status data.
13. The method according to any one of claims 8 to 12, further comprising:
the second device sends the data outbound level of the first device to the first device.
14. A method of acquiring data, comprising:
the method comprises the steps that a second device obtains registration information of a first user, wherein the registration information of the first user comprises original data of a biological feature of the first user;
the second device recognizes the original data of the biological feature of the first user and determines whether the second device can obtain an account corresponding to the original data of the biological feature of the first user;
when the second device obtains the account corresponding to the original data of the biological feature of the first user, the second device sends second information to a first device, wherein the second information is used to indicate that the second device has obtained the account corresponding to the original data of the biological feature of the first user; and
the second device receives third information sent by the first device, wherein the third information is used to indicate whether the first device stores the account, obtained by the second device, corresponding to the original data of the biological feature of the first user;
the second device determines a data outbound level of the first device according to the third information;
the second device obtains a data request message of the first user, wherein the data request message is used to request the first device to share first data stored on the first device by the first user;
the second device determines that the first data belongs to data of the data type corresponding to the data outbound level of the first device, wherein data of different data types have different highest risks; and
the second device sends the data request message of the first user to the first device and receives the first data sent by the first device.
15. The method of claim 14, wherein the determining, by the second device, the data outbound level of the first device according to the third information comprises:
determining that the data outbound level of the first device is a second level in the case that the first device stores the account, obtained by the second device, corresponding to the original data of the biological feature of the first user; and
determining that the data outbound level of the first device is a fourth level when the first device does not store the account, obtained by the second device, corresponding to the original data of the biological feature of the first user.
16. The method of claim 14, further comprising:
in the case that the second device does not obtain an account corresponding to the original data of the biological feature of the first user, the second device sends the registration information of the first user to the first device;
the second device receives a third instruction sent by the first device, wherein the third instruction is used to indicate that the first device does not obtain an account corresponding to the original data of the biological feature of the first user; and
the second device determines, according to the third instruction, that the data outbound level of the first device is a fourth level.
17. The method of claim 14, further comprising:
in the case that the second device does not obtain an account corresponding to the original data of the biological feature of the first user, the second device sends the registration information of the first user to the first device;
the second device receives fourth information sent by the first device, wherein the fourth information is used to indicate an account, determined by the first device, corresponding to the original data of the biological feature of the first user; and
the second device determines, according to the fourth information, that the data outbound level of the first device is a third level.
18. The method of claim 17, wherein, after the determining that the data outbound level of the first device is the third level, the method further comprises:
determining that the data outbound level of the first device is a first sub-level in the third level when the registration information of the first user is a 3D face, a fingerprint, an iris, or DNA of the first user;
determining that the data outbound level of the first device is a second sub-level in the third level when the registration information of the first user is a 2D face or vein of the first user; or
determining that the data outbound level of the first device is a third sub-level in the third level when the registration information of the first user is the voice or the signature of the first user.
19. The method according to any one of claims 15 to 17, wherein the data type corresponding to the second level is a second type, and the data corresponding to the second type comprises general location data, video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or
the data type corresponding to the third level is a third type, and the data corresponding to the third type comprises video data, logistics data, schedule data, preference data, device capability data, and/or device status data; and/or
the data type corresponding to the fourth level is a fourth type, and the data corresponding to the fourth type comprises device capability data and/or device status data.
20. The method of claim 18, wherein the data type corresponding to the first sub-level is a first sub-type, and the data corresponding to the first sub-type comprises photo data, recorded video data, device capability data, and/or device status data; and/or
the data type corresponding to the second sub-level is a second sub-type, and the data corresponding to the second sub-type comprises logistics data, schedule data, device capability data, and/or device status data; and/or
the data type corresponding to the third sub-level is a third sub-type, and the data corresponding to the third sub-type comprises preference data, watched video data, device capability data, and/or device status data.
21. The method according to any one of claims 14 to 20, further comprising:
the second device sends the data outbound level of the first device to the first device.
22. A terminal device, comprising: a processor coupled with a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, so that the terminal device performs the method of any one of claims 1 to 21.
CN202010076673.0A 2020-01-23 2020-01-23 Data sharing method and device Active CN111339513B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010076673.0A CN111339513B (en) 2020-01-23 2020-01-23 Data sharing method and device
PCT/CN2020/128996 WO2021147483A1 (en) 2020-01-23 2020-11-16 Data sharing method and apparatus


Publications (2)

Publication Number Publication Date
CN111339513A true CN111339513A (en) 2020-06-26
CN111339513B CN111339513B (en) 2023-05-09


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021147483A1 (en) * 2020-01-23 2021-07-29 华为技术有限公司 Data sharing method and apparatus

Citations (17)

Publication number Priority date Publication date Assignee Title
US20130156194A1 (en) * 2011-12-19 2013-06-20 Fujitsu Limited Secure recording and sharing system of voice memo
CN105100708A (en) * 2015-06-26 2015-11-25 Xiaomi Inc. Request processing method and device
CN106534280A (en) * 2016-10-25 2017-03-22 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Data sharing method and device
US9742760B1 (en) * 2014-06-16 2017-08-22 TouchofModern, Inc. System and method for improving login and registration efficiency to network-accessed data
CN107103245A (en) * 2016-02-23 2017-08-29 ZTE Corporation Rights management method and device for files
US20180182389A1 (en) * 2016-12-27 2018-06-28 Amazon Technologies, Inc. Messaging from a shared device
CN108600793A (en) * 2018-04-08 2018-09-28 Beijing QIYI Century Science & Technology Co., Ltd. Hierarchical control method and device
CN108833357A (en) * 2018-05-22 2018-11-16 China Internet Network Information Center Information inspection method and device
CN108985089A (en) * 2018-08-01 2018-12-11 Tsinghua University Internet data sharing system
CN108985255A (en) * 2018-08-01 2018-12-11 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Data processing method, device, computer-readable storage medium and electronic equipment
CN109035937A (en) * 2018-08-29 2018-12-18 Wuhu New Mission Education Technology Co., Ltd. Authorized sharing network education system
CN109299047A (en) * 2018-09-21 2019-02-01 Shenzhen Jiuzhou Electric Co., Ltd. Distributed system data sharing method and device, and data sharing distributed system
CN109325742A (en) * 2018-09-26 2019-02-12 Ping An Puhui Enterprise Management Co., Ltd. Business approval method, apparatus, computer equipment and storage medium
CN109885999A (en) * 2019-01-29 2019-06-14 Nubia Technology Co., Ltd. Account registration method, terminal and computer-readable storage medium
CN110198362A (en) * 2019-05-05 2019-09-03 Huawei Technologies Co., Ltd. Method and system for adding a smart home device to contacts
JP2019159974A (en) * 2018-03-15 2019-09-19 Omron Corporation Authentication device, authentication method and authentication program
CN110287036A (en) * 2019-05-09 2019-09-27 Huawei Technologies Co., Ltd. Collaborative sharing method, apparatus and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111339513B (en) * 2020-01-23 2023-05-09 Huawei Technologies Co., Ltd. Data sharing method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
S. Vimal et al., "A Survey on Various File Sharing Methods in P2P Networks", 2017 Third International Conference on Science Technology Engineering & Management (ICONSTEM) *
Zheng Lijiao, "Design and Implementation of a Collaboration Architecture for a Distributed Home Entertainment System", China Master's Theses Full-text Database, Information Science and Technology Series *

Also Published As

Publication number Publication date
CN111339513B (en) 2023-05-09
WO2021147483A1 (en) 2021-07-29

Similar Documents

Publication Publication Date Title
CN113645351B (en) Application interface interaction method, electronic device and computer-readable storage medium
US11868463B2 (en) Method for managing application permission and electronic device
CN113722058B (en) Resource calling method and electronic equipment
WO2020029306A1 (en) Image capture method and electronic device
US20230070358A1 (en) File Sharing Method, System, and Related Device
CN114095599B (en) Message display method and electronic equipment
CN114650363A (en) Image display method and electronic equipment
CN110138999B (en) Certificate scanning method and device for mobile terminal
CN113821767A (en) Application program authority management method and device and electronic equipment
WO2022160991A1 (en) Permission control method and electronic device
CN111615820B (en) Method and equipment for performing domain name resolution by sending key value to GRS server
CN114995715B (en) Control method of floating ball and related device
CN111431968B (en) Cross-device distribution method of service elements, terminal device and storage medium
CN111249728B (en) Image processing method, device and storage medium
CN111339513B (en) Data sharing method and device
CN114006698B (en) token refreshing method and device, electronic equipment and readable storage medium
CN114338642B (en) File transmission method and electronic equipment
CN113950045B (en) Subscription data downloading method and electronic equipment
CN115701018A (en) Method for safely calling service, method and device for safely registering service
CN114254334A (en) Data processing method, device, equipment and storage medium
CN113867851A (en) Electronic equipment operation guide information recording method, electronic equipment operation guide information acquisition method and terminal equipment
CN114205318B (en) Head portrait display method and electronic equipment
CN114625292A (en) Icon setting method and electronic equipment
CN114826636A (en) Access control system and related method and apparatus
CN115941220A (en) Cross-device authentication method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant