CN114697960B - Method and system for connecting external camera - Google Patents

Method and system for connecting external camera

Info

Publication number
CN114697960B
CN114697960B (application CN202110740201.5A)
Authority
CN
China
Prior art keywords
camera
image
display screen
key
capability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110740201.5A
Other languages
Chinese (zh)
Other versions
CN114697960A (en)
Inventor
白帆 (Bai Fan)
曹辉 (Cao Hui)
张新颖 (Zhang Xinying)
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN114697960A publication Critical patent/CN114697960A/en
Application granted granted Critical
Publication of CN114697960B publication Critical patent/CN114697960B/en

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04W: Wireless communication networks
    • H04W 12/00: Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/06: Authentication
    • G: Physics
    • G06: Computing; Calculating or counting
    • G06F: Electric digital data processing
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31: User authentication
    • G06F 21/36: User authentication by graphic or iconic representation
    • H: Electricity
    • H04: Electric communication technique
    • H04W: Wireless communication networks
    • H04W 76/00: Connection management
    • H04W 76/10: Connection setup
    • H04W 76/14: Direct-mode setup

Abstract

The embodiment of the application provides a method and a system for connecting an external camera. Before authentication, the master device and the slave device can confirm a suitable authentication mode according to their own capabilities or the capabilities currently available, and then establish a trusted connection. Thus, when a certain capability or service of the master device or the slave device is unavailable or currently occupied, the two devices can still agree to authenticate in another mode. In an authentication process that uses another mode, the master device or the slave device does not need to provide the unavailable or occupied capability or service, so the authentication process is not blocked.

Description

Method and system for connecting external camera
Technical Field
The present application relates to the field of smart wearables, and in particular to a method and a system for connecting an external camera.
Background
Electronic devices such as mobile phones can establish a secure connection with other electronic devices through authentication-code authorization, and then obtain the services those devices provide. However, the current authorization manner is relatively single, which in many cases makes it impossible for an electronic device to complete authorization with other electronic devices.
Disclosure of Invention
The present application provides a method and a system for connecting an external camera. By implementing the method, a master device with a camera and a display screen can discover other slave devices with cameras in the network, acquire their capability information, and use that information to negotiate a suitable authorization mode with the slave device, thereby completing authentication.
In a first aspect, the present application provides a method of connecting an external camera, the method being applied to a first device having a processor, a camera, and a display screen, the method comprising: receiving capability information of a second device, wherein the second device is an electronic device in the same network as the first device; if the capability information indicates that the second device has a camera but no available display screen, displaying, by the first device, a first graphic code, wherein the first graphic code corresponds to a first key and is used by the second device to determine the first key; receiving an image sent by the second device, wherein the image is acquired by the second device in real time through its camera and processed using the first key; parsing the image using the first key to obtain a parsed image; and displaying the parsed image.
By implementing the method provided in the first aspect, when an electronic device of the first device type connects to other electronic devices in the network, it can judge, from their capability information, a suitable way to obtain permission to use them. This avoids the situation in which authorization authentication between the two electronic devices is blocked because a device has no display screen or its display screen is temporarily unusable.
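The key transfer in the first aspect can be sketched in a few lines. The payload format of the first graphic code and the 16-byte key length below are hypothetical choices for illustration; the patent specifies neither.

```python
import base64
import json
import secrets

def generate_first_key() -> bytes:
    # A random session key; 16 bytes is an illustrative length.
    return secrets.token_bytes(16)

def make_first_graphic_code(key: bytes) -> str:
    # Payload the first device would render as a graphic code (e.g., a QR
    # code) on its display screen. The JSON envelope is hypothetical.
    return json.dumps({"v": 1, "key": base64.b64encode(key).decode()})

def read_first_graphic_code(payload: str) -> bytes:
    # What the second device recovers after scanning the graphic code:
    # the first key, used later to process the captured images.
    return base64.b64decode(json.loads(payload)["key"])
```

Scanning the code thus transfers the key out of band: the second device never receives the key over the network, only through the first device's screen.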
With reference to some embodiments of the first aspect, in some embodiments, the method further includes: if the capability information indicates that the second device has both a camera and an available display screen, scanning, by the first device, a second graphic code to obtain a second key, wherein the second graphic code is generated by the second device and displayed on the display screen of the second device; receiving an image sent by the second device, wherein the image is acquired by the second device in real time through its camera and processed using the second key; and parsing the image using the second key to obtain a parsed image.
When the method provided by this embodiment is implemented and the connected electronic device has an available display screen, that device can display verification information on its own display screen during authorization authentication, and the first device can scan the verification information displayed on it, thereby completing the authorization authentication.
With reference to some embodiments of the first aspect, in some embodiments, the capability information includes: the capabilities the device provides and the use status of those capabilities.
By implementing the method provided by this embodiment, the first device can judge, from the device capabilities in the capability information, whether the connected electronic device has a display screen, a camera, or other components; further, from the use status in the capability information, the first device can judge whether that display screen or camera can currently be invoked by the first device.
With reference to some embodiments of the first aspect, in some embodiments, that the capability information indicates the second device has a camera but no usable display screen specifically includes: the device capabilities indicate that the second device has a camera but no display screen; or, the device capabilities indicate that the second device has both a camera and a display screen, but the use status indicates that the display screen cannot be invoked.
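The two branches above amount to a small decision rule over the capability information. The field names and mode labels below are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class CapabilityInfo:
    # Capabilities the device provides, e.g. {"camera", "display"}.
    capabilities: set
    # Capabilities whose use status says they cannot currently be invoked.
    unavailable: set = field(default_factory=set)

def choose_auth_mode(info: CapabilityInfo) -> str:
    """Select how the first device authenticates with the second device."""
    has_camera = "camera" in info.capabilities
    display_usable = ("display" in info.capabilities
                      and "display" not in info.unavailable)
    if has_camera and not display_usable:
        # First aspect: the first device shows the first graphic code.
        return "first-device-displays-code"
    if has_camera and display_usable:
        # The second device shows the second graphic code instead.
        return "second-device-displays-code"
    return "unsupported"
```

Note that an occupied display screen ("display" present but marked unavailable) is treated exactly like a missing one, which is the point of the second branch in the patent text.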
With reference to some embodiments of the first aspect, in some embodiments, before receiving the capability information of the second device, the method further includes: detecting a first event, and requesting the capability information from the second device.
By implementing the method provided by this embodiment, the first device can determine, according to a preset event, whether it needs to connect to an external camera. When it detects that an external camera needs to be connected, the first device can acquire the capability information of other electronic devices in the network and then decide the specific way of performing authorization authentication with each of them.
With reference to some embodiments of the first aspect, in some embodiments, the first event includes: starting the camera of the first device; or, a user operation on a first control; or, the user's requirement for the camera capability of the first device exceeding the capability range of the first device.
When the first device detects such an event, it may consider that the user currently needs to connect to an external camera, and may therefore acquire the capability information of other electronic devices in the network.
With reference to some embodiments of the first aspect, in some embodiments, the user's requirement for the camera capability of the first device exceeding the capability range of the first device specifically includes: the zoom range provided by the camera of the first device cannot meet the user's requirement; or, the shooting-scene modes the camera of the first device can provide cannot meet the user's requirement.
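As a sketch, the "requirement exceeds capability range" check can compare the requested zoom and shooting mode against what the native camera offers. The default range and mode list below are made-up illustrative values:

```python
def exceeds_camera_capability(requested_zoom: float,
                              requested_mode: str,
                              zoom_range=(1.0, 10.0),
                              modes=("photo", "video", "night")) -> bool:
    # True when the user's request falls outside the native camera's
    # capability range, i.e. a first event that may trigger discovery
    # of external cameras.
    lo, hi = zoom_range
    return not (lo <= requested_zoom <= hi) or requested_mode not in modes
```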
With reference to some embodiments of the first aspect, in some embodiments, displaying the parsed image specifically includes: displaying the parsed image in a first area, the first area comprising: all or part of the area that originally displayed the image acquired by the camera of the first device.
By implementing the method provided by this embodiment, the first device can choose to display the image acquired by the second device over the whole area that originally displayed the image from its own camera, that is, replace its own camera image with the second device's image. Alternatively, the first device can display the second device's image in part of that area, that is, display the images acquired by both cameras simultaneously.
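The two display choices (full replacement vs. simultaneous display) reduce to picking a rectangle inside the original preview area. The picture-in-picture size and corner below are arbitrary illustrative choices, not from the patent:

```python
def remote_image_area(screen_w: int, screen_h: int, mode: str = "replace"):
    # Returns (x, y, w, h) where the second device's image is drawn.
    if mode == "replace":
        # Whole area that originally showed the first device's own preview.
        return (0, 0, screen_w, screen_h)
    # "pip": a third-size window in the top-right corner, leaving the
    # first device's own camera image visible underneath.
    w, h = screen_w // 3, screen_h // 3
    return (screen_w - w, 0, w, h)
```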
In a second aspect, the present application provides a method of connecting an external camera, the method being applied to a second device having a processor and a camera, the method comprising: sending capability information of the second device to a first device, the first device being an electronic device in the same network as the second device; scanning a first graphic code to obtain a first key, wherein the first graphic code is generated by the first device after judging, from the capability information, that the second device has a camera but no available display screen, and is displayed on the display screen of the first device; processing an image acquired by the second device in real time through its camera using the first key to obtain a processed image; and sending the processed image to the first device.
By implementing the method provided in the second aspect, when the second device has no available display screen, the first device that invokes the second device can agree with it that the first device displays a graphic code containing verification information and the second device scans that code. The second device then scans the graphic code displayed by the first device to obtain the verification information, and further determines a key for secure image data transmission with the first device.
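The patent does not name the cipher used to process the image stream. As a stand-in, the sketch below expands the scanned key into a keystream and XORs it with each frame; because XOR is its own inverse, one routine serves both the second device (processing) and the first device (parsing):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Expand the key into n bytes by counter-mode hashing (illustrative
    # only; a real implementation would use an authenticated cipher).
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def process_frame(frame: bytes, key: bytes) -> bytes:
    # XOR the frame with the keystream; applying it twice restores the
    # frame, so this both processes (slave side) and parses (master side).
    return bytes(a ^ b for a, b in zip(frame, keystream(key, len(frame))))
```

Only a device that scanned the graphic code, and therefore holds the key, can turn the received stream back into viewable images.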
With reference to some embodiments of the second aspect, in some embodiments, the method further includes: displaying a second graphic code, wherein the second graphic code corresponds to a second key, is generated after the first device judges, from the capability information, that the second device has both a camera and an available display screen, and is displayed on the display screen of the second device; and processing the image acquired by the second device in real time through its camera using the second key to obtain a processed image.
By implementing the method provided by this embodiment, when the second device has an available display screen, the second device can itself display a graphic code containing verification information. In this way, the first device can scan the second device's graphic code, determine the key, and obtain authorization to use the second device's camera.
With reference to some embodiments of the second aspect, in some embodiments, the capability information includes: the capabilities and the use status of the capabilities provided by the device.
By implementing the method provided by this embodiment, the second device can inform other electronic devices of its capabilities through the device capabilities in the capability information, and indicate whether each capability can be invoked through the use status in the capability information.
With reference to some embodiments of the second aspect, in some embodiments, that the first device judges, from the capability information of the second device, that the second device has a camera but no usable display screen includes: the device capabilities indicate that the second device has a camera but no display screen; or, the device capabilities indicate that the second device has both a camera and a display screen, but the use status indicates that the display screen cannot be invoked.
In a third aspect, the present application provides an electronic device, including: one or more processors, memory; the memory is coupled to the one or more processors, the memory for storing computer program code, the computer program code comprising computer instructions, the one or more processors for invoking the computer instructions to cause the electronic device to perform:
Receiving capability information of a second device, wherein the second device is an electronic device in the same network as the first device; if the capability information indicates that the second device has a camera but no available display screen, displaying, by the first device, a first graphic code, wherein the first graphic code corresponds to a first key and is used by the second device to determine the first key; receiving an image sent by the second device, wherein the image is acquired by the second device in real time through its camera and processed using the first key; parsing the image using the first key to obtain a parsed image; and displaying the parsed image.
With reference to some embodiments of the third aspect, in some embodiments, the one or more processors are further to invoke computer instructions to cause the electronic device to perform:
if the capability information indicates that the second device has both a camera and an available display screen, scanning, by the first device, a second graphic code to obtain a second key, wherein the second graphic code is generated by the second device and displayed on the display screen of the second device; receiving an image sent by the second device, wherein the image is acquired by the second device in real time through its camera and processed using the second key; and parsing the image using the second key to obtain a parsed image.
With reference to some embodiments of the third aspect, in some embodiments, the capability information includes: the capabilities and the use status of the capabilities provided by the device.
With reference to some embodiments of the third aspect, in some embodiments, that the capability information indicates the second device has a camera but no usable display screen specifically includes: the device capabilities indicate that the second device has a camera but no display screen; or, the device capabilities indicate that the second device has both a camera and a display screen, but the use status indicates that the display screen cannot be invoked.
With reference to some embodiments of the third aspect, in some embodiments, the one or more processors are further to invoke computer instructions to cause the electronic device to perform: the first event is detected and capability information is requested from the second device.
With reference to some embodiments of the third aspect, in some embodiments, the first event includes: starting the camera of the first device; or, a user operation on a first control; or, the user's requirement for the camera capability of the first device exceeding the capability range of the first device.
With reference to some embodiments of the third aspect, in some embodiments, the user's requirement for the camera capability of the first device exceeding the capability range of the first device specifically includes: the zoom range provided by the camera of the first device cannot meet the user's requirement; or, the shooting-scene modes the camera of the first device can provide cannot meet the user's requirement.
With reference to some embodiments of the third aspect, in some embodiments, the one or more processors invoking computer instructions to cause the electronic device to display the parsed image specifically includes: displaying the parsed image in a first area, the first area comprising: all or part of the area that originally displayed the image acquired by the camera of the first device.
In a fourth aspect, the present application provides an electronic device comprising one or more processors and one or more memories; wherein the one or more memories are coupled to the one or more processors, the one or more memories being operable to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method as described in the second aspect and any possible implementation of the second aspect.
In a fifth aspect, the present application provides a computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect and any possible implementation of the first aspect.
In a sixth aspect, the present application provides a computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect and any possible implementation of the first aspect.
It will be appreciated that the electronic device provided in the third aspect, the electronic device provided in the fourth aspect, the computer storage medium provided in the fifth aspect, and the computer program product provided in the sixth aspect are all configured to perform the methods provided in the present application. For the advantages they achieve, reference may be made to the advantages of the corresponding methods, which are not repeated here.
Drawings
FIG. 1 is a system diagram provided by an embodiment of the present application;
FIGS. 2A-2F are a set of user interfaces provided by embodiments of the present application;
FIGS. 3A-3B are a set of user interfaces provided by embodiments of the present application;
FIGS. 4A-4D are a set of user interfaces provided by embodiments of the present application;
FIG. 5 is a flowchart of a method for connecting an external camera according to an embodiment of the present application;
FIG. 6 is a flow chart of a negotiation authentication channel provided by an embodiment of the present application;
FIG. 7 is an authentication flow chart provided by an embodiment of the present application;
FIG. 8 is another authentication flow chart provided by an embodiment of the present application;
FIG. 9 is a hardware diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The terminology used in the following embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
With the development of smartphones, the capabilities of mobile phones are becoming more comprehensive and powerful, but in many scenarios those capabilities still have inherent limitations. The phone then needs to connect to external devices to obtain better services and experience.
For example, the shooting capability of a mobile phone's native camera has limitations. In particular, the field of view of the native camera depends on how the user holds the handset, and the camera's capabilities are fixed when the user purchases the phone. The capabilities here refer to the physical capabilities of the camera and the software capabilities of the image signal processing module, including the camera's optical zoom range, digital zoom range, shutter time, aperture adjustment range, video frame rate, and so on.
To let a user access more cameras, with different shooting angles and better capabilities, the mobile phone can connect to and use external cameras, providing a richer camera experience for the user.
Currently, a mobile phone can establish a trusted connection with an external camera by exchanging personal identification numbers (PINs), and then obtain the image acquisition service provided by the external camera. This process of establishing a trusted connection may be referred to as authentication.
However, the conventional authentication method is relatively single: the two devices performing authentication usually follow a preset authentication procedure. For example, during authentication between a mobile phone and an internet television, the television may display a PIN on its screen. After seeing the PIN, the user inputs it on the phone, which receives it. The phone and the television then exchange the two parties' PINs, and each device verifies whether the other party's PIN is consistent with the PIN it generated or collected.
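The mutual check at the end of this PIN flow is a simple equality test; using a constant-time comparison avoids leaking how many digits matched. A minimal sketch:

```python
import hmac

def verify_peer_pin(local_pin: str, peer_pin: str) -> bool:
    # Each device checks that the PIN received from the peer matches the
    # PIN it generated (the television) or collected from the user (the
    # phone); authentication succeeds only if both checks pass.
    return hmac.compare_digest(local_pin.encode(), peer_pin.encode())
```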
Thus, the two devices performing authentication (the phone and the television) can only authenticate according to the preset rules; they cannot negotiate a different authentication mode according to each device's capabilities or the capabilities currently available. When a capability or service that one party must provide under the preset authentication scheme is unavailable or occupied, that scheme is difficult to carry out, i.e., authentication is blocked.
To increase the diversity of authentication and enable the authenticating electronic devices to negotiate an authentication mode according to the actual capabilities of both parties, the embodiments of the present application provide a method and a system for connecting an external device. The method involves a master device and a slave device. The master device is the electronic device that requests connection to an external device. The slave device is the electronic device that establishes a connection with the master device in response to its request and provides resources, capabilities, or services to the master device.
Before authentication, the master device and the slave device can confirm a suitable authentication mode according to their own capabilities or the capabilities currently available, and then establish a trusted connection. Thus, when a certain capability or service of the master device or the slave device is unavailable or currently occupied, the two devices can still agree to authenticate in another mode. In an authentication process that uses another mode, the master device or the slave device does not need to provide the unavailable or occupied capability or service, so the authentication process is not blocked.
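The negotiation described above can be sketched as choosing a mode that only needs capabilities currently available on each side. The mode labels are illustrative; the patent's two graphic-code modes map onto the first two branches:

```python
def negotiate_auth_mode(master_available: set, slave_available: set) -> str:
    # Prefer the slave showing a graphic code (the master scans it);
    # fall back to the master showing one (the slave scans it);
    # otherwise no graphic-code mode fits the available capabilities.
    if "display" in slave_available and "camera" in master_available:
        return "slave-shows-code"
    if "display" in master_available and "camera" in slave_available:
        return "master-shows-code"
    return "none"
```

For example, a slave whose display is occupied simply omits "display" from its available set, and the devices fall through to the master-shows-code mode instead of blocking.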
In the following, embodiments of the present application will specifically describe a system architecture of a master device to slave device connection in conjunction with the distributed device virtual system 10 shown in fig. 1.
The system 10 may include a first device, a second device, and a distributed virtualization platform (Device Virtualization Kit, DV kit).
The first device is an electronic device with a processor, a camera, and a display screen. The first device may be a mobile phone, a tablet computer, a notebook computer, an internet television, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like; the specific type of the first device is not limited in this application.
The second device is an electronic device provided with a camera. The second device may also be provided with a display screen. The second device may be an electronic device such as a mobile phone, a tablet computer, a notebook computer, an internet television, etc.
The DV kit is a platform for coordinating the capabilities of multiple virtual devices. The platform can abstract a capability or service provided by the second device into a functional module, and then let the first device use the service of that functional module to meet its requirements.
In the embodiment of the present application, software and hardware modules for implementing services provided by the DV kit may be deployed in the first device and the second device, that is, part of the software and hardware modules in the first device and the second device together form the DV kit.
In other alternative embodiments, the DV kit may be stand alone, e.g., with software and hardware modules that implement the services provided by the DV kit deployed on one server. The present application is not limited in this regard.
The first device and the second device may establish a wireless communication connection. The wireless communication connection may be, for example, a short-range connection such as a wireless fidelity (Wi-Fi) connection, a Bluetooth connection, an infrared connection, an NFC connection, or a ZigBee connection, or a long-range connection (including, but not limited to, mobile networks supporting 2G, 3G, 4G, 5G, and subsequent standard protocols). For example, the first device and the second device may log in to the same user account (e.g., a Huawei account) and then connect remotely through a server (e.g., a Huawei-provided distributed virtualization platform server).
Based on the DV kit, once a connection with the second device is established, the first device may obtain the capabilities of the second device to meet its own requirements. For example, the second device may provide photographing capability. When the first device requests to use an external camera, it may discover the second device, call the photographing service provided by the second device, and then shoot using the second device's camera.
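In the spirit of this description, a remote capability can be wrapped as a local-looking functional module. The class and transport callable below are hypothetical illustrations, not the DV kit API:

```python
class RemoteCamera:
    # Hypothetical functional module abstracting the second device's
    # photographing service behind a local-camera-like interface.
    def __init__(self, transport):
        self._transport = transport  # callable returning one remote frame

    def capture(self) -> bytes:
        # The first device calls this as if reading a local camera;
        # the frame actually comes from the second device.
        return self._transport()

# Usage: the first device treats the remote camera like a local one.
cam = RemoteCamera(lambda: b"\xff\xd8frame\xff\xd9")
frame = cam.capture()
```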
Fig. 2A-2F illustrate a set of user interfaces in which the first device connects to and uses the camera capability of the second device in a video call scenario. The method for the first device to connect to and invoke the second device provided in the embodiment of the present application is described below with reference to fig. 2A-2F.
Fig. 2A shows a user interface 21 for a first device to display a video call. As shown in fig. 2A, the user interface 21 includes a status bar 211, a window 212, a window 213, a control 214, a control 215, and a control 216.
Status bar 211 may include: one or more signal strength indicators of a mobile communication signal (also called a cellular signal), e.g., signal strength indicator 2111 and signal strength indicator 2112; a wireless fidelity (Wi-Fi) signal strength indicator 2113; a battery status indicator 2114; and a time indicator 2115.
Window 212 may be used to display the image of the peer contact, captured by the other party's electronic device. Window 213 may be used to display the image captured by the first device's native camera. In general, window 213 may first display the image captured by the front camera of the first device.
Control 214 may be used to switch the camera used by the first device, i.e. to switch the image displayed in window 213. Control 215 may be used to hang up the video call shown in user interface 21. Control 216 may be used to display other operations provided by the first device to adjust the video call. Such operations include, but are not limited to, muting, video recording, beautifying, using dynamic stickers, and the like.
The first device may detect a user operation on control 214, in response to which the first device may display user interface 22 shown in fig. 2B. The user operation is, for example, a click operation.
As shown in fig. 2B, the user interface 22 is displayed with a dialog window 221. The dialogue window 221 may display options provided by the first device to switch cameras, including: switch to front/rear camera (option 222) and switch to external camera (option 223).
When the image displayed in window 213 is captured by the front camera of the first device, the first device may, in response to a user operation acting on option 222, display the image captured by its rear camera in window 213. Conversely, when the image displayed in window 213 is captured by the rear camera, the first device may, in response to a user operation acting on option 222, display the image captured by its front camera in window 213.
The first device may detect a user operation on option 223, in response to which the first device may query other electronic devices in the network that are capable of capturing images. When other electronic devices with shooting capabilities are queried, the first device may display the user interface 23 shown in fig. 2C.
As shown in fig. 2C, the user interface 23 may include device options 231, 232, 233. The device options 231, 232, 233 may respectively indicate other electronic devices queried by the first device. The electronic device at least comprises a camera and a display screen.
Taking device option 231 as an example, it may indicate that the first device has discovered an electronic device whose device type is a television (internet television). This electronic device has a display screen and a camera. Device option 231 may display the electronic device's icon 2311, its device name ("Phone-C"), its distance from the first device ("6 m"), and the capabilities it has, i.e., shooting and display capabilities ("display screen, camera"). The icon and distance are optional, and device option 231 may include other information, which is not limited in this embodiment of the present application.
The device options 232 and 233 respectively indicate electronic devices of other device types discovered by the first device, and are not described here again.
In some embodiments, the first device may also display the discovered electronic devices by classification. The criterion of the classification is, for example, whether a connection has been established with the first device. As shown in fig. 2C, the user interface 23 may include a history device partition 234 and a connectable device partition 235.
The electronic devices displayed in the history device partition 234 include: electronic devices that have previously established a connection with the first device. This means that the electronic devices in the history device partition 234 have been authenticated with the first device. Therefore, when the first device reconnects to an electronic device with which a connection was once established, the authentication process can be omitted.
The electronic devices displayed in the connectable device partition 235 may include: electronic devices that can provide a particular service for the first device but have not yet established a connection with the first device. Therefore, when the first device establishes a connection with such an electronic device, the first device needs to authenticate with the electronic device and confirm that the two devices trust each other.
In other embodiments, the user interface 23 may also categorize the discovered electronic devices by the capabilities each device has. For example, the user interface may include a display capability partition, a shooting capability partition, and the like. The electronic devices included in the display capability partition have hardware such as a display screen and can provide a display service. The electronic devices included in the shooting capability partition have hardware such as a camera and can provide a shooting service.
In other embodiments, in the user interface 23, when displaying the other electronic devices that have been discovered, the first device may also present them in partitions according to the capabilities of the electronic devices.
For example, the user interface 23 may include region A, region B, region C, and the like. Region A may be used to display electronic devices with a camera; region B may be used to present electronic devices with a display screen; region C may be used to show electronic devices with a speaker, and so on. Then, a cell phone "Phone-C" with camera capability may be displayed in region A.
In this way, a user can determine which camera-equipped electronic device to connect to according to the detected capabilities of the electronic devices, so that the shooting requirement of the user can be better met. Based on the user's selection, the first device may determine a particular form of authentication with the selected electronic device based on the capabilities of that device.
The first device may detect a user operation on a device option, in response to which the first device may establish a connection with the electronic device indicated by the device option. At this time, the selected electronic device may be referred to as a second device. Further, the first device may invoke the second device.
Specifically, the above-mentioned process of establishing a connection may be classified into a direct connection and an authenticated connection according to the history of the connection between the first device and the second device.
If the second device was previously connected to the first device and the photographing service provided by the second device was used, the first device may be directly connected to the second device. Thus, repeated authentication operations, which would otherwise increase user operations and degrade the user experience, can be avoided.
If the second device has not been connected to the first device, the first device may make an authenticated connection with the second device. In the process of authentication connection, the first device and the second device need to acquire the identity of the other party and determine that the other party is a device trusted by itself. The following embodiments will specifically describe the above negotiation process, and will not be expanded here.
For example, the first device may detect a user operation on device option 232, in response to which the first device may display user interface 24 shown in fig. 2D. At this time, the electronic device indicated by the device option 232 may be referred to as a second device. Fig. 2D also shows a second device.
Next, the user interface 24 displayed by the first device will first be described. The user interface 24 may include a two-dimensional code 241, a cancel control 242.
The two-dimensional code 241 may be used to present a PIN generated by the first device. The second device scans the two-dimensional code 241 to obtain the PIN of the first device, thereby obtaining verification information for authentication. For example, the PIN generated by the first device may be "123456". The PIN may be encoded into the two-dimensional code shown as the two-dimensional code 241. The second device scans the two-dimensional code 241 to obtain "123456"; further, the first device and the second device can exchange the PINs of both parties to perform verification.
Furthermore, it is understood that the graphic code is not limited to a two-dimensional code; in other embodiments, the two-dimensional code may be replaced by other graphic codes. A graphic code refers to a graphic that can be displayed on an electronic display screen and is used to be scanned or photographed to identify character information, such as a one-dimensional code and other dynamic or static graphic codes. The character information includes numerals, URL (uniform resource locator) links, letters, and the like. The present application is not limited in this regard.
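By way of illustration, the PIN generation and exchange described above can be sketched as follows. This is a minimal sketch under stated assumptions: the function names and the six-digit numeric format are illustrative only and are not specified by this embodiment.

```python
import hmac
import secrets

def generate_pin(length: int = 6) -> str:
    # Generate a random numeric PIN such as "123456" to be encoded
    # into the two-dimensional code displayed by the first device.
    return "".join(str(secrets.randbelow(10)) for _ in range(length))

def verify_pin(local_pin: str, scanned_pin: str) -> bool:
    # After the second device scans the code, the two parties exchange PINs;
    # a constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(local_pin, scanned_pin)
```

When the two PINs match, authentication passes and the shooting service can be provided; otherwise the connection is refused, as described below.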
The cancel control 242 may be used to interrupt the authentication flow. That is, when a user operation is detected on the cancel control 242, the first device may close the user interface 24, thereby interrupting the authentication flow. Optionally, the cancel control 242 may include a timer. When the timer expires, if the first device has not yet completed authentication, the first device may close the user interface 24.
The user interface 24 also displays a prompt, such as "Monitor-B is being connected, please aim the two-dimensional code at the camera whose indicator light is blinking," etc. The prompt can guide the user through the correct authentication operation, thereby completing authentication.
After the first device displays the user interface 24, the second device scans and identifies the verification information included in the user interface 24. After the second device obtains the verification information, the first device and the second device may exchange the verification information (PIN). When the verification information of the two parties is consistent, that is, after authentication is passed, the second device may provide the shooting service for the first device, and the first device may display the user interface 25 shown in fig. 2E.
As shown in fig. 2E, user interface 25 may include window 251. At this time, the image displayed in the window 251 is an image acquired by the second device. Meanwhile, the user interface 25 may further include a prompt window 252. Prompt window 252 may display a prompt that the first device is connected to the second device, such as "connected Monitor-B". The user may confirm through the prompt that the first device is connected to the second device.
At this time, the first device establishes a connection with the second device and successfully uses the photographing service provided by the second device.
If the authentication information of the two parties is inconsistent, that is, authentication fails, the second device may refuse to provide the shooting service for the first device, i.e., the first device cannot call the camera of the second device to collect images. At this point, in the user interface 25 shown in fig. 2E, the window 251 still displays the image captured by the native camera of the first device. Meanwhile, the prompt window 252 may prompt the user that the connection to the second device failed, e.g., "connection failure".
Optionally, the first device may use the photographing service provided by the second device while also retaining the photographing service provided by its own camera. In this way, the first device may provide a richer shooting experience for the user.
Specifically, as shown in fig. 2D, the user interface 24 may also include a selectable item 244. The first device may detect a user operation on the selectable item 244, in response to which the first device may display a tick mark on the selectable item 244. In this way, the first device may display images captured by its own native camera while displaying images captured by the second device.
After the selectable item 244 is checked and the first device and the second device complete authentication, the first device may display the user interface 26 shown in fig. 2F. At this time, the window displaying the acquired images may be divided into two parts: window 451 and window 452. Window 451 may display an image captured by the native camera of the first device. Window 452 may display an image captured by the second device. In this way, while connected to the external camera, the user can use the image acquired by the external camera and, at the same time, the image acquired by the native camera, thereby providing more fields of view for the video call.
Optionally, windows 451 and 452 may also include a delete control 464 and a delete control 465, respectively. A delete control may be used to close its window. Taking the delete control 464 as an example, when a user operation is detected on the delete control 464, the first device may close the window 451 in response to the operation. Thus, the user can turn off the native camera or the external camera at any time.
In other embodiments, the first device may also repeat the process shown in fig. 2A-2D, thereby connecting a plurality of second devices. In this way, the first device can obtain images acquired by the plurality of cameras. For example, in a live broadcast scene, a user may connect a plurality of cameras in the process of live broadcasting using the first device, and further, the first device may send an image collected by a native camera and an image collected by the plurality of cameras to a live broadcast server. Thus, a user watching a live broadcast can watch the live broadcast from different viewing angles at the same time, thereby obtaining a better live broadcast experience.
In combination with the user interfaces shown in fig. 2A-2F, in a video call scenario, a first device may discover and connect to other external cameras. When the external camera has only shooting capability, the first device can display content such as two-dimensional codes, characters, or images containing the verification information. The external camera can scan the screen of the first device so as to acquire the verification information, establish a trusted connection with the first device, and provide the shooting service for the first device.
When the second device is provided with both a camera and a display screen, in the process of authenticating with the second device, the first device may also adopt the mode in which the first device scans the second device.
In this embodiment of the present application, when the second device has the display capability, the first device may further consider the working state of the display screen of the second device when negotiating an authentication mode with the second device. When the display screen of the second device is busy, the first device and the second device can perform authentication by the method shown in fig. 2D, i.e., the second device scans the first device. When the display screen of the second device is idle, the display screen of the second device can display content such as a two-dimensional code, text, or an image carrying the verification information, and accordingly, the first device can scan the display screen of the second device to acquire the verification information.
A display screen being busy means that the display screen is in an immersive display state. The immersive display state includes: displaying on-demand video, displaying game visuals, displaying a video call interface, and so forth. Conversely, when the display screen is not in the immersive display state, the display screen may be referred to as idle.
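The negotiation rule above can be sketched as a small decision function. The dictionary fields and the returned labels are hypothetical names chosen for illustration; the embodiment does not prescribe a data format.

```python
def choose_auth_mode(second_device: dict) -> str:
    # `second_device` is an assumed capability record, e.g.
    # {"camera": True, "display": True, "display_busy": False}.
    if not second_device.get("display"):
        # Camera-only peer: it can only scan, so the first device shows the code.
        return "second_scans_first"
    if second_device.get("display_busy"):
        # Display is in an immersive state (video, game, call): do not interrupt it.
        return "second_scans_first"
    # Idle display: the second device shows the code and the first device scans it.
    return "first_scans_second"
```

For example, a camera-only monitor or a television currently playing video both lead to the mode of fig. 2D, while an idle television leads to the mode of fig. 3A.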
Fig. 3A illustrates a user interface in which the first device scans the second device to complete authentication.
Refer to the user interface shown in fig. 2C. The first device may detect a user operation on the device option 233. At this time, the electronic device "TV-A" (i.e., the second device) indicated by the device option 233 is provided with both a camera and a display screen. Therefore, the method for authenticating the first device and the second device may be that the first device scans the second device or that the second device scans the first device.
At this time, the first device may determine which authentication method to adopt according to the working states of the camera and the display screen of the second device.
When the display screen of the second device is idle, the first device may display the user interface 31 shown in fig. 3A in response to a user operation acting on the device option 233, and at the same time, the second device may display the user interface 32 shown in fig. 3A.
As shown in fig. 3A, the user interface 31 may include a preview window 311 and a return control 312. The preview window 311 may be used to display the image captured by the native camera of the first device. The return control 312 may be used by the first device to turn off the native camera and exit the scan state. The user interface 32 may include a two-dimensional code 321. Referring to the description of the user interface 24, the two-dimensional code 321 carries verification information (PIN) generated by the second device.
After the second device displays the user interface 32 including the two-dimensional code 321, the first device may acquire an image including the two-dimensional code 321. Through identification and analysis, the first device may obtain the verification information generated by the second device from the two-dimensional code 321. Then, the first device and the second device can exchange the verification information generated or collected by themselves to perform authentication.
When authentication is successful, the first device may display the user interface 25 shown in fig. 2E. At this time, the image displayed in the window 251 is acquired by the camera of the second device ("TV-A"). Meanwhile, the prompt window 252 may display "connected TV-A", indicating that the electronic device now establishing a connection with the first device is "TV-A".
Likewise, if authentication fails, the image acquired by the native camera of the first device is still displayed in the window 251, and the prompt window 252 displays the prompt text "connection failure".
Fig. 3B illustrates a set of user interfaces where the second device scans for the first device to complete authentication.
When the display screen of the second device is busy, the first device may display the user interface 33 shown in fig. 3B in response to a user operation acting on the device option 233, and at the same time, the second device may display the user interface 34 shown in fig. 3B.
As shown in fig. 3B, the user interface 33 may include a two-dimensional code 331. The user interface 33 may also include other information or controls, and reference may be made specifically to the description of the user interface 24, which is not repeated here. The user interface 34 may include a window 341. Window 341 may display a video on demand by the user, a game, or an interface where the user is engaged in a video call.
After the first device displays the two-dimensional code 331, the second device may acquire an image including the two-dimensional code 331. Through identification and analysis, the second device can acquire the verification information generated by the first device from the two-dimensional code 331. Then, the first device and the second device can exchange the verification information generated or collected by themselves to perform authentication.
When authentication is successful, the first device may display the user interface 25 shown in fig. 2E. At this time, the image displayed in the window 251 is acquired by the camera of the second device ("TV-A"). Meanwhile, the prompt window 252 may display "connected TV-A", indicating that the electronic device now establishing a connection with the first device is "TV-A". Likewise, if authentication fails, the image acquired by the native camera of the first device is still displayed in the window 251, and the prompt window 252 displays the prompt text "connection failure".
When the connected second device has both a camera and a display screen, the first device can further determine a proper authentication mode according to the working states of the camera and the display screen of the second device, so that the first device can successfully complete authentication and connection with the second device while the tasks currently being executed by the second device, such as video playing and game picture display, are not affected.
In addition, when the second device to be connected is an electronic device that has previously established a connection with the first device, the first device and the second device can be directly connected without performing the authentication operation when connecting again.
For example, when a user operation on the device option 231 is detected, the first device may directly display the user interface 25 shown in fig. 2E in response to the operation. Of course, the image displayed in the window 251 of the user interface 25 at this time is an image captured by the electronic device corresponding to the device option 231, i.e., an image captured by Phone-C. At the same time, the prompt displayed in the prompt window 252 is accordingly replaced with "connected Phone-C".
In combination with the user interfaces shown in fig. 2A-2F and fig. 3A-3B, in the video call scenario, a user can control the first device to discover and connect to an external camera at any time. In addition, in the process of connecting to the external camera, the first device can determine the optimal authentication mode according to the capability and the specific working state of the second device, so that authentication can be successfully completed and the impact on the ongoing tasks of the second device is reduced as much as possible.
The user scenario in which the first device actively discovers the external camera and provides a better shooting experience for the user will be described below in connection with the user interfaces shown in fig. 4A-4D.
When the first device detects that the focal length of the native camera is at an endpoint value, the first device may search for an external camera in the vicinity or in the same network. When the focal length of the native camera is at an endpoint value, the shooting effect of the native camera is likely not to meet the requirement of the user. At this time, if the first device can connect to another external camera and shoot through the external camera, more shooting options can be provided for the user, thereby better meeting the shooting requirement of the user.
Fig. 4A illustrates a user interface 41 in which the first device detects a user operation to adjust the focus. The user interface 41 may include a preview window 411. The preview window 411 may display an image captured by a native camera currently used by the first device.
The first device may detect a user operation acting on the preview window 411 to adjust the focal length of the camera, in response to which the first device may look up an external camera in the vicinity or in the same network, including other types of electronic devices with shooting capability. The above-described user operation may be an operation of spreading the thumb and index finger outward as shown in the preview window 411, or an operation of sliding the focus control as shown in fig. 4B, or the like. The embodiments of the present application are not limited in this regard.
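A minimal sketch of the endpoint trigger described above, assuming a hypothetical native zoom range; the constants and function name are illustrative assumptions, not values from this embodiment.

```python
MIN_ZOOM, MAX_ZOOM = 1.0, 10.0  # assumed native zoom range of the first device

def should_search_external_cameras(requested_zoom: float) -> bool:
    # When the requested focal length hits an endpoint of the native range,
    # the native camera likely cannot satisfy the user, so start discovery.
    return requested_zoom <= MIN_ZOOM or requested_zoom >= MAX_ZOOM
```

Under this sketch, a pinch-to-zoom gesture that pushes the zoom to either limit would cause the first device to broadcast a discovery request.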
Upon discovering an external camera in the vicinity or in the same network, the first device may display the user interface 43 shown in fig. 4C. As shown in fig. 4C, the user interface 43 may display the shooting-capable device options found by the first device, such as "TV-A", "Monitor-B", "Phone-C", etc.; reference may be made to the description of fig. 2C, which is not repeated here.
The first device may then detect a user operation by the user acting on any device option. Accordingly, the first device may negotiate an authentication manner with the electronic device corresponding to the device option, and complete authentication.
For example, the first device may detect a user operation acting on "Monitor-B" in response to which the first device may display the user interface 44 shown in fig. 4D. As shown in fig. 4D, user interface 44 may include preview window 441. At this point, preview window 441 may display the image captured by "Monitor-B". For example, a camera of "Monitor-B" may have a longer focal length and may take a more distant scene.
The process of determining which authentication method is adopted by the first device and the Monitor-B, and the process of performing authentication by adopting the method can be described with reference to fig. 2A-2F and fig. 3A-3B, and will not be described again here.
In other embodiments, in the process of the user selecting a shooting mode of the first device, when the shooting modes provided by the first device cannot meet the shooting requirement of the user, the first device may also actively search for other electronic devices with cameras. Then, the first device may negotiate an authentication manner with the second device (the electronic device, among the other camera-equipped electronic devices, that the user selects to connect to and use), so as to obtain the shooting capability of the second device.
For example, when no photographing action by the user is detected within a preset time, the first device may determine that the shooting modes provided by the device itself cannot meet the user's requirement, which is why the user has delayed making the photographing action. At this time, the first device may also actively search for other electronic devices with cameras, so as to provide a better shooting service for the user and meet the shooting requirement of the user.
Fig. 4A-4D illustrate a scenario in which the first device actively discovers an external camera according to a preset trigger condition. By implementing the methods shown in fig. 4A-4D, the first device may actively discover external cameras. The user can then change the camera currently in use according to his or her own shooting requirements, thereby obtaining a better shooting effect and improving the user's shooting experience.
In the video call scenario shown in fig. 2A to 2F and fig. 3A to 3B, the first device may also actively discover external cameras according to a preset trigger condition. For example, upon detecting a click operation on the window 213 (fig. 2A), the first device may swap the content displayed in the window 213, i.e., display the image captured by the native camera of the first device in the larger window. The first device may then detect the focus operation shown in fig. 4A in the window 212, in response to which the first device may look up an external camera in the vicinity or in the same network, and then display the user interface 23 shown in fig. 2C (i.e., show the query results).
The process of establishing a connection between a first device and a second device and authenticating with the second device will be described below in connection with fig. 5.
S101: in response to a user's operation to connect to other devices, the first device discovers other electronic devices in the network.
First, the first device may detect an operation of the user to connect to the other device, and in response to the operation, the first device may acquire capability information of the other electronic device in the network.
The operation of the user connecting to other devices is a preset operation. In the application scenarios described in the foregoing embodiments of the present application, referring to the user interfaces shown in fig. 2A to 2F, fig. 3A to 3B, and fig. 4A to 4D, the operation of the user connecting to other devices is, for example, a user operation acting on the option 223 (switch to external camera) shown in fig. 2B, or the operation of adjusting the focal length described in fig. 4A (fig. 4B), or the like.
Upon detecting the above operation, the first device may send a broadcast to other electronic devices in the network. In response to the broadcast, the other electronic devices in the network may send their own capability information to the first device. The capability information includes: device capabilities and the usage status of each capability. The device capabilities describe the software and hardware capabilities that the electronic device has, such as shooting capability and display capability. The usage status of a capability indicates whether the capability can currently be invoked by other electronic devices. It will be appreciated that the capability information may further include other information such as device type, device status, system version, and port number.
Then, according to the capability information, the first device can judge whether the electronic device can provide the service required by the user, and which mode, i.e., which authentication mode, should be adopted to invoke the service provided by that device.
First, according to the device capabilities of the electronic device recorded in the capability information, the first device may determine whether the electronic device is an electronic device that meets the first device requirement.
Specifically, in the embodiment of the present application, the first device needs to call the camera of another electronic device to meet the shooting requirement of the user. Therefore, the first device can confirm through the capability information whether the electronic device is provided with a camera. If the device capabilities in the capability information indicate that the electronic device has a camera, the electronic device is an electronic device meeting the requirement of the first device.
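The capability reply and the camera filter described above can be sketched as follows. The record shape and all field names ("device_name", "capabilities", etc.) are assumptions for illustration; the embodiment does not define a wire format.

```python
# Hypothetical shape of the capability information returned in response to
# the discovery broadcast sent by the first device.
replies = [
    {"device_name": "Phone-C", "capabilities": {"camera": True, "display": True}},
    {"device_name": "Monitor-B", "capabilities": {"camera": True, "display": False}},
    {"device_name": "Speaker-D", "capabilities": {"camera": False, "display": False}},
]

def meets_requirement(info: dict) -> bool:
    # The first device needs shooting capability, so only camera-equipped
    # peers satisfy the requirement described above.
    return info.get("capabilities", {}).get("camera", False)

def filter_candidates(capability_replies: list) -> list:
    # Keep the names of peers that can provide the shooting service.
    return [r["device_name"] for r in capability_replies if meets_requirement(r)]
```

Under this sketch, only the camera-equipped peers would be listed as device options in the user interface of fig. 2C.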
After determining that the electronic device is an electronic device meeting the requirement of the first device, the first device may display the electronic device for a user to select. In other embodiments, the first device may also display all of the discovered electronic devices.
Referring to the user interface shown in fig. 2C, the first device may query 3 electronic devices in the network: Phone-C, Monitor-B, and TV-A. The 3 electronic devices are all electronic devices with shooting capability and can provide the shooting service for the first device. The first device may display options indicating the 3 electronic devices, such as the device option 231, the device option 232, and the device option 233.
Optionally, the device options may further display the capabilities of the 3 electronic devices: Phone-C is provided with a display screen and a camera, Monitor-B is provided with a camera, and TV-A is provided with a display screen and a camera. Further, the first device may also display the usage status of the capabilities of the electronic devices, for example, that the display screen of TV-A is occupied. The embodiments of the present application are not limited in this regard.
The first device may detect a user operation on any device option, in response to which the first device may begin to acquire the right to use the camera of the corresponding device. The electronic device that the user selects to invoke may be referred to as the second device. For example, referring to the user interface shown in fig. 2C, the first device may detect a user operation of the user on the device option 232 (Monitor-B), in response to which the first device may begin to obtain the right to use the camera of Monitor-B. At this point, Monitor-B may be referred to as the second device.
S102: the first device determines whether the second device is a trusted electronic device.
Authentication modules are preset in the first equipment and the second equipment. The authentication module may be used for the first device to obtain authorization to use the capabilities of the second device. The second device may grant the first device rights to use the device through the authentication module.
The authentication module performs authentication in two parts: first, authority confirmation; second, authentication and authorization. Authority confirmation refers to: the first device queries whether the second device is a trusted electronic device, i.e., whether the first device has previously been authenticated with the second device and has obtained the right to use the second device. Authentication and authorization refers to: for an untrusted second device, the first device may authenticate with the second device so that each party becomes a device trusted by the other, thereby obtaining authorization to use the capabilities of the second device.
In acquiring authorization to use the capabilities of the second device, the first device first determines whether the second device is an electronic device trusted by itself.
Specifically, the first device may be preset with a connection record table. The table may record one or more other electronic devices trusted by the first device. Whether there is an agreed key can be used to determine whether an electronic device is trusted by the first device. When two electronic devices have previously agreed on a key, they can use the key for secure data transmission. Table 1 exemplarily shows a connection record table of the first device.
TABLE 1

Device name    Key(s)
Phone-C        111111
Monitor-B      null
TV-A           null
……             ……
The "device name" in table 1 may represent an electronic device discovered by the first device. The "key" may record the key used by the first device for secure data transfer with that electronic device. When a determined value is recorded in the key field, that value is the key agreed by both parties. If the value of the key is null, the first device and the electronic device have no trust relationship. It will be appreciated that the values in table 1 are exemplary and should not be construed as limiting the embodiments of the present application.
Through the connection record table shown in table 1, the first device can determine whether the second device to which the user wants to connect is a trusted device. Specifically, when the second device has a key agreed with the first device, the second device is a device trusted by the first device; conversely, when the second device has no key agreed with the first device, the second device is not a device trusted by the first device.
For a trusted second device, the first device may use the key recorded in the connection record table for secure data transfer. At this point, the first device obtains authorization to use the capabilities of the second device. Conversely, for an untrusted second device, the first device needs to authenticate with the second device to obtain authorization to use the capabilities of the electronic device.
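The lookup against the connection record table can be sketched as follows, using the example values of table 1; `None` plays the role of the "null" key, and the function name is an illustrative assumption.

```python
# Example contents of the connection record table (from Table 1).
connection_records = {
    "Phone-C": "111111",
    "Monitor-B": None,
    "TV-A": None,
}

def is_trusted(device_name: str, records: dict) -> bool:
    # A peer is trusted if and only if a previously agreed key is recorded
    # for it; a missing entry or a null key means authentication is required.
    return records.get(device_name) is not None
```

Under this sketch, selecting Phone-C leads directly to secure data transfer with the recorded key, while selecting Monitor-B triggers the authentication and authorization flow.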
For example, referring to the user interface shown in fig. 2C, when the user selects to connect to Phone-C (second device), and uses the camera of the electronic device, the first device may determine whether Phone-C is a trusted electronic device through the above connection record table. The key of Phone-C recorded in Table 1 is "111111". This means that the Phone-C is an electronic device trusted by the first device, and that the first device can then perform a secure data transfer with the second device via the above-mentioned key, i.e. the first device obtains authorization to use the Phone-C shooting capability.
The key for Monitor-B recorded in Table 1 is "null". This means that Monitor-B is not an electronic device trusted by the first device. At this time, if the user chooses to connect to Monitor-B, then in response to the above operation by the user, the first device needs to perform authentication and authorization with Monitor-B and establish a trust relationship, so as to obtain authorization to use the shooting capability of Monitor-B (the second device).
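The trusted-device check described above can be sketched as follows. This is an illustrative sketch only; the table contents mirror Table 1, and the function names are assumptions, not the patent's API.

```python
# Connection record table as in Table 1: device name -> agreed key.
# None stands in for the "null" entries (no agreed key recorded).
connection_record_table = {
    "Phone-C": "111111",
    "Monitor-B": None,
    "TV-A": None,
}

def is_trusted(device_name):
    """A second device is trusted iff an agreed key is recorded for it."""
    return connection_record_table.get(device_name) is not None

def get_agreed_key(device_name):
    """Return the agreed key for a trusted device, or None otherwise."""
    return connection_record_table.get(device_name)
```

With this table, connecting to Phone-C can proceed directly with the recorded key, while connecting to Monitor-B first requires authentication and authorization.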
Alternatively, the electronic devices displayed under "history device" may represent electronic devices that have established a trust relationship with the first device; the electronic devices displayed under "connectable device" may represent electronic devices that have not established a trust relationship with the first device, i.e. electronic devices that are not trusted by the first device.
It will be appreciated that the connection record table may also include further information, such as the physical address of the electronic device, the logical address, the time at which the key was agreed, whether the agreed key has expired, etc. The embodiments of the present application are not limited in this regard.
In particular, whether the agreed key has expired can be used to guard against key aging. When the usage time of the agreed key exceeds a preset time, the authentication module may consider the key aged, which may reduce the security of data transmission. Therefore, to avoid this situation, when the agreed key ages, the authentication module may require the electronic devices that agreed on the key to determine a new key, and use the new key for data transmission, so as to ensure the security of the data.
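The key-aging check described above can be sketched as follows. The field name and the 30-day validity window are assumptions for illustration; the patent only specifies "a preset time".

```python
import time

KEY_MAX_AGE_SECONDS = 30 * 24 * 3600  # preset validity period (assumed)

def key_is_aged(key_agreed_at, now=None):
    """A key is considered aged once its usage time exceeds the preset time."""
    if now is None:
        now = time.time()
    return (now - key_agreed_at) > KEY_MAX_AGE_SECONDS
```

When this check returns true, the devices would be required to re-agree on a new key before further data transmission.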
S103: the first device performs authentication and authorization with the untrusted second device, establishes a trust relationship, and obtains authorization to use the capabilities of the second device.
When the electronic device (second device) that the user selects to connect is not trusted by the first device, the first device needs to authenticate with that electronic device to obtain permission to use its capabilities.
First, the first device may determine, according to the capability information sent by the second device, the manner of authentication with the second device. The authentication modes include: a first authentication method and a second authentication method.
The first authentication method refers to: the first device displays a graphic code containing authentication information (PIN), the second device scans the graphic code to obtain the authentication information, and then the first device and the second device determine a secret key agreed by both parties by using the authentication information. The second authentication method refers to: the second device displays a graphic code containing authentication information (PIN), the first device scans the graphic code to obtain the authentication information, and then the first device and the second device determine a secret key agreed by both parties by using the authentication information.
Fig. 6 illustrates a flow chart of a first device determining an authentication mode based on capability information of a second device.
First, the first device may determine whether the second device has a display screen according to the device capabilities recorded in the capability information. If the second device does not have a display screen, the first device may confirm whether the second device has a camera. If the second device has a camera, the first device can confirm that the first authentication method is used for authentication and authorization with the second device. If the second device does not have a camera, the first device and the second device may negotiate other authentication methods, which is not limited in this application.
If the second device has a display, the first device may further confirm whether the display of the second device can be invoked according to the usage status of the capabilities recorded in the capability information. If the display can be invoked, the first device can confirm authentication with the second device using the second authentication method.
If the second device has a display but the display cannot be invoked, the first device may confirm whether the second device has a camera. When the second device is provided with the camera, the first device can confirm that the second device adopts the first authentication mode for authentication. Similarly, if the second device does not have a camera, the first device and the second device may negotiate other authentication methods.
Here, cases in which the display cannot be invoked include: the display is broken, or the display is occupied. The display being occupied refers to a state in which the display is showing an immersive screen, for example: playing on-demand video, displaying a game picture, and the like.
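The decision flow of fig. 6 described above can be sketched as follows. The capability fields are illustrative assumptions about the capability information; the return values simply name which authentication method is chosen.

```python
def choose_auth_mode(has_display, display_invocable, has_camera):
    """Select the authentication method per the fig. 6 flow described above."""
    if has_display and display_invocable:
        # Second device can show a graphic code; the first device scans it.
        return "second"      # second authentication method
    if has_camera:
        # Second device can scan a graphic code shown by the first device.
        return "first"       # first authentication method
    # Neither path applies; the two devices negotiate another method.
    return "negotiate"
```

For example, a headless monitoring camera (no display, has camera) falls into the first authentication method, while a phone with a free display falls into the second.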
After confirming the authentication mode with the second device, the first device and the second device can perform authentication and authorization according to a preset authentication mode (a first authentication mode or a second authentication mode).
First, taking the first authentication method as an example, fig. 7 exemplarily shows a procedure in which the first device and the second device perform authentication and authorization using the first authentication method (the method in which the second device scans the graphic code presented by the first device). The above-described process includes S201-S206.
S201, the first device generates an identity verification code (PIN).
After the first device and the second device determine to authenticate using the first authentication method, the first device may generate a personal identification number (PIN). The PIN may be randomly generated, or may be generated according to a preset rule. Each PIN uniquely corresponds to a key, and the key may be recorded in the authentication module. Through the PIN, the first device or the second device can confirm in the authentication module the key uniquely corresponding to that PIN.
For example, assume that the PIN generated by the first device is specifically "123456". The authentication module, upon detecting that the first device generated the PIN, may determine that a key "111111" corresponds uniquely to the PIN. The first device may then determine the key "111111" via the PIN "123456".
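One illustrative way for both authentication modules to map a PIN to the same unique key is a deterministic derivation from the PIN, sketched below. The salt constant and the derivation itself are assumptions; the patent only requires that each PIN uniquely correspond to one key (a shared lookup table in the authentication module would serve equally well).

```python
import hashlib
import hmac

AUTH_MODULE_SALT = b"auth-module-salt"  # assumed constant shared by both modules

def key_for_pin(pin):
    """Derive the key uniquely corresponding to a PIN (illustrative)."""
    digest = hmac.new(AUTH_MODULE_SALT, pin.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]
```

Because the derivation is deterministic, the first device and the second device obtain the same key from the same PIN, matching the unique PIN-to-key correspondence described in S201.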
S202, the first device displays a graphic code containing the PIN.
After the first device determines the key through the PIN, the first device needs to transmit the key to the second device, so as to determine that the key is a key agreed by the first device and the second device.
At this time, the first device may convert the PIN into a graphic code and display the graphic code on the display screen, such as the two-dimensional code shown in user interface 24. The graphic code is not limited to a two-dimensional code; in other embodiments, the two-dimensional code may be replaced by other graphic codes. A graphic code refers to a graphic that can be displayed on an electronic display screen and scanned or photographed to recognize character information, such as a one-dimensional code or other dynamic or static graphic codes. The character information includes numerals, URL (uniform resource locator) links, letters, and the like. The present application is not limited in this regard.
S203, the second device enters a scanning mode.
The second device may invoke its own camera and may enter a scan mode while the first device displays the graphics code. The scanning mode refers to a mode adopted when the second device reads bar codes, two-dimensional code information or performs image recognition.
In some embodiments, the second device may save the previous work content before entering the scan mode. For example, when the second device is a monitoring camera, the second device may save the screen of the previous monitoring shot before entering the scan mode.
After entering the scanning mode, the second device may acquire successive image frames via the camera.
S204, the second device scans the graphic code displayed by the first device to obtain the PIN generated by the first device.
The user may aim the first device, with the graphic code displayed, at the camera of the second device. At this time, the image frames acquired by the second device may include the graphic code described above.
The camera of the second device may be provided with an image recognition module. The image recognition module of the second device may recognize the acquired image frames. When the graphic code is included in the image frame, the image recognition module of the second device may recognize the graphic code. The second device may then obtain the PIN generated by the first device via the graphics code described above.
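S203-S204 can be sketched as follows: the second device scans successive image frames and extracts the PIN once a frame contains a graphic code. Frames are modeled as dicts for illustration only; real recognition would use the image recognition module (or the DV kit, as described below).

```python
def scan_frames_for_pin(frames):
    """Return the PIN from the first frame containing a graphic code, else None."""
    for frame in frames:
        code = frame.get("graphic_code")  # None when no code is in the frame
        if code is not None:
            return code["pin"]
    return None
```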
In some embodiments, the camera of the second device does not have image recognition capability. At this time, the second device may send the image collected by the camera to the DV kit, and the image recognition capability provided by the DV kit completes the task of recognizing the graphic code.
Specifically, in the framework described in fig. 1, in the embodiment of the present application, the DV kit is distributed and deployed on the first device and the second device. That is, the DV kit module is preset in the second device. Thus, the second device may send the image captured by the camera to the DV kit module. The DV kit module may identify the graphics code contained in the image in the module. The DV kit module may then obtain the PIN generated by the first device, and in turn, the second device may obtain the PIN.
S205, the second device determines the key according to the PIN generated by the first device.
After identifying the graphic code displayed by the first device and obtaining the PIN generated by the first device, the second device can determine the key corresponding to the PIN by using the PIN. Specifically, the second device may query its own authentication module for the PIN. Based on the unique correspondence of the PIN and the key, the second device may determine the key corresponding to the PIN. The secret key is the secret key agreed by the first equipment and the second equipment.
Referring to the example in S201, by scanning the graphic code the second device may obtain the PIN "123456" generated by the first device. By querying the authentication module, the second device may determine the key "111111" that uniquely corresponds to the PIN "123456".
At this time, the first device and the second device complete authentication authorization. The first device obtains rights to use the capabilities of the second device.
Meanwhile, the first device may record the above agreed key in the entry of the connection record table corresponding to the second device. The next time the same device is connected, the first device may use the key directly, i.e. confirm that the device is a trusted device, thereby obtaining authorization directly.
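Updating the connection record table after authentication can be sketched as follows; once the agreed key is stored, the same device is treated as trusted on the next connection. The table contents and function name are illustrative assumptions.

```python
# Connection record table before authentication (Monitor-B not yet trusted).
connection_record_table = {"Phone-C": "111111", "Monitor-B": None}

def record_agreed_key(table, device_name, key):
    """Record the agreed key; the device becomes trusted for future connections."""
    table[device_name] = key
```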
S206, the first device and the second device transmit the image based on the secret key agreed by the two parties.
When authentication is completed, the first device may send a request to the second device to use the image, and in response to the request, the second device may send the image acquired by its own camera to the first device. In the process of sending the image collected by the camera to the first device, the second device first encrypts the collected image using the agreed key. The second device may then send the encrypted image to the first device.
The first device may decrypt the received encrypted image using the agreed key to obtain the original image captured by the camera of the second device. Further, the first device may use the image described above.
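The encrypted transfer of S206 can be sketched as below. The keystream construction here is illustrative only, to show the symmetric encrypt/decrypt round trip with the agreed key; a production implementation would use an authenticated cipher such as AES-GCM, and the nonce handling is an assumption.

```python
import hashlib

def _keystream(key, nonce, length):
    """Illustrative SHA-256 counter keystream (not a production cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:length])

def encrypt_image(key, nonce, image):
    """Second device: encrypt the collected image with the agreed key."""
    return bytes(a ^ b for a, b in zip(image, _keystream(key, nonce, len(image))))

# XOR stream ciphers are symmetric: the first device decrypts with the
# same operation and the same agreed key.
decrypt_image = encrypt_image
```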
Referring to the user interfaces shown in fig. 2E and 4D, the first device may display the image transmitted by the second device in a preset area for displaying a photographing screen. For example, in a video call scenario (fig. 2E), the first device may display the image sent by the second device in a preset window 251. For example, in a photographed scene (fig. 4D), the first device may display the image transmitted by the second device described above in the preview window 441.
The process by which the first device and the second device authenticate in accordance with the second authentication mode (the mode in which the first device scans the graphic code presented by the second device) is described below in connection with fig. 8. As shown in fig. 8, the above-described process includes S301 to S306.
After the first device and the second device determine to authenticate using the second authentication method, the second device may generate a Personal Identification Number (PIN). The PIN also uniquely corresponds to a key.
After the second device generates the PIN, the second device needs to transmit the corresponding key to the first device, so as to determine that the key is a key agreed by the first device and the second device. At this point, the second device may convert the PIN into a graphic code displayed on its display screen, referring to user interface 32 in fig. 3A.
The first device may invoke its own camera and may enter a scan mode while the second device displays the graphic code. After entering the scan mode, the first device may acquire successive image frames via the camera. When the acquired image frame includes a graphic code displayed by the second device, the first device may identify the graphic code, and further, the first device may obtain a PIN generated by the second device.
Based on the PIN, the first device may determine a key uniquely corresponding to the PIN, i.e., a key agreed upon by the first device with the second device. Based on the key, the first device and the second device can perform secure image transmission. Thus, the second device may transmit the image acquired by itself to the first device, which may receive the image transmitted by the second device.
Specific description of S301 to S306 may refer to S201 to S206 shown in fig. 7, and will not be repeated here.
In the embodiments of the present application:
the graphical code displayed on the first device that includes authentication information (PIN) may be referred to as a first graphical code, for example, the two-dimensional code 241 shown in the user interface 24 of fig. 2D may be referred to as a first graphical code. The first graphic code further includes a two-dimensional code 331 shown in the user interface 33 in fig. 3B. The key corresponding to the authentication information in the first graphic code may be referred to as a first key, for example, a key "111111" uniquely corresponding to the PIN "123456" exemplified in S201.
The graphic code including the authentication information (PIN) displayed on the second device may be referred to as a second graphic code, and for example, the two-dimensional code 321 shown in the user interface 32 of fig. 3A may be referred to as a second graphic code. The key corresponding to the authentication information in the second graphic code may be referred to as a second key.
The camera-initiated event of the first device may be referred to as a first event, e.g., the event that the first device displays the user interface shown in fig. 2A may be referred to as a camera-initiated first event of the first device; an event in which the first device displays the user interface shown in fig. 4A may be referred to as a camera-initiated first event of the first device. The first device detecting a user operation acting on the external camera 223 in the user interface 22 may be referred to as a first event. The event shown in fig. 4A in which the user's requirement for the focal length of the camera of the first device is beyond the focal length range of the first device may be referred to as a first event.
Fig. 9 is a schematic hardware structure of an electronic device according to an embodiment of the present application. The hardware configuration shown in fig. 9 may be the hardware configuration of the first device or the hardware configuration of the second device. It will be appreciated that the first device and the second device may also include more or fewer structural modules.
As shown in fig. 9, the first device may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, and the like.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
Wherein the controller may be the nerve center and command center of the first device. The controller can generate operation control signals according to the instruction operation code and timing signals, to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the connection relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the first device. In other embodiments of the present application, the first device may also use different interfacing manners in the foregoing embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charging input from a charger. The charger may be a wireless charger or a wired charger. The power management module 141 is used to connect the battery 142, the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the external memory 120, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor parameters such as battery capacity, battery cycle count, and battery health (leakage, impedance).
The wireless communication function of the first device may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the first device may be used to cover a single communication band or multiple communication bands. Different antennas may also be multiplexed to improve antenna utilization.
The mobile communication module 150 may provide solutions for wireless communication including 2G/3G/4G/5G and the like applied on the first device. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves via the antenna 1, perform processing such as filtering and amplifying on the received electromagnetic waves, and transmit the processed signals to the modem processor for demodulation. The mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves for radiation via the antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. The low-frequency baseband signal is processed by the baseband processor and then passed to the application processor. The application processor outputs sound signals through an audio output device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be independent of the processor 110 and disposed in the same device as the mobile communication module 150 or other functional module.
The wireless communication module 160 may provide solutions for wireless communication applied on the first device, including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, Wi-Fi) network), Bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (IR) technology, etc. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency-modulate and amplify it, and convert it into electromagnetic waves for radiation via the antenna 2. Illustratively, the wireless communication module 160 may include a Bluetooth module, a Wi-Fi module, or the like.
In some embodiments, the antenna 1 of the first device is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160 so that the first device can communicate with the network and other devices through wireless communication technology. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
In the embodiment of the present application, through the services and support provided by the mobile communication module 150 and/or the wireless communication module 160, the first device may discover other external electronic devices and authenticate with the other electronic devices, so as to invoke the capabilities of the other electronic devices and use the services provided by the other electronic devices.
In the process in which the first device uses the image captured by the second device, the image captured by the second device is transmitted to the first device through the mobile communication module 150 and/or the wireless communication module 160.
The first device implements display functions via a GPU, a display screen 194, and an application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the first device may include 1 or N display screens 194, N being a positive integer greater than 1.
In the present embodiment, the user interfaces illustrated in FIGS. 2A-2F, 3A-3B, and 4A-4D are implemented relying on the GPU and the display 194.
The first device may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the first device may include 1 or N cameras 193, N being a positive integer greater than 1.
In embodiments of the present application, the first device may display the image captured by the native camera on the display screen 194. The above process relies on passing through the ISP, camera 193, video codec, GPU, display 194, among others.
The digital signal processor is used for processing digital signals, and can process other digital signals in addition to digital image signals. For example, when the first device selects a frequency point, the digital signal processor is configured to perform a Fourier transform on the frequency point energy, and so on.
Video codecs are used to compress or decompress digital video. The first device may support one or more video codecs. In this way, the first device may play or record video in multiple encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. The application such as intelligent cognition of the first device can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
In the embodiment of the application, when the first device (or the second device) recognizes the two-dimensional code, the image and the text which are presented by the second device (or the first device) and contain the verification information, the first device (or the second device) can realize the function of extracting the verification information through the functions of image recognition, text understanding and the like provided by the NPU.
The first device may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert an audio electrical signal into a sound signal. The first device can play music or a hands-free call through the speaker 170A.
The receiver 170B, also referred to as an "earpiece," is used to convert an audio electrical signal into a sound signal. When the first device answers a call or plays a voice message, the voice can be heard by placing the receiver 170B close to the ear.
The microphone 170C, also referred to as a "mic," is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C to input a sound signal. The first device may be provided with at least one microphone 170C. In other embodiments, the first device may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the first device may be provided with three, four, or more microphones 170C to implement sound signal collection, noise reduction, sound source identification, directional recording, and the like.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, or a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A is of various types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. When a touch operation is applied to the display 194, the first device detects the intensity of the touch operation according to the pressure sensor 180A. The first device may also calculate the location of the touch based on the detection signal of the pressure sensor 180A.
The gyro sensor 180B may be used to determine a motion gesture of the first device. In some embodiments, the angular velocity of the first device about three axes (i.e., x, y, and z axes) may be determined by the gyro sensor 180B.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the first device calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a hall sensor. The first device may detect the opening and closing of the flip holster using the magnetic sensor 180D.
The acceleration sensor 180E may detect the magnitude of the acceleration of the first device in various directions (typically three axes), and can detect the magnitude and direction of gravity when the first device is stationary. It can also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
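The landscape/portrait switching mentioned here typically compares the gravity components reported by the accelerometer. The sketch below is illustrative only; the axis conventions and the two-way classification are assumptions, not taken from the patent.

```python
# Hedged sketch: classify device posture from the gravity components of a
# 3-axis accelerometer reading, as described for the acceleration sensor 180E.
def detect_orientation(ax, ay, az):
    """Classify device posture from gravity components in m/s^2."""
    if abs(ax) > abs(ay):
        return "landscape"  # gravity lies mostly along the device's x axis
    return "portrait"       # gravity lies mostly along the device's y axis

print(detect_orientation(0.2, 9.7, 0.4))  # portrait  (device held upright)
print(detect_orientation(9.6, 0.3, 0.5))  # landscape (device on its side)
```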
The distance sensor 180F is used to measure distance. The first device may measure distance by infrared or laser. In some shooting scenarios, the first device may use the distance sensor 180F to measure distance to achieve fast focusing.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode. The light emitting diode may be an infrared light emitting diode. The first device emits infrared light outwards through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects, so it can determine whether an object is near the first device. The ambient light sensor 180L is used to sense the ambient light level. The first device may adaptively adjust the brightness of the display screen 194 based on the perceived ambient light level. The fingerprint sensor 180H is used to collect a fingerprint. The first device can use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint photographing, fingerprint-based call answering, and the like. The temperature sensor 180J is used to detect temperature.
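The adaptive brightness adjustment driven by the ambient light sensor 180L can be sketched as a mapping from measured illuminance to a backlight level. The curve below is a hypothetical policy, not the device's actual one; the logarithmic shape is chosen only because perceived brightness is roughly logarithmic in luminance.

```python
import math

def adaptive_brightness(lux, min_level=10, max_level=255):
    """Map ambient illuminance (lux) to a clamped display brightness level."""
    span = max_level - min_level
    # Hypothetical log curve saturating at ~10,000 lux (bright daylight).
    level = min_level + span * math.log10(1 + lux) / math.log10(1 + 10000)
    return round(min(max(level, min_level), max_level))

print(adaptive_brightness(0))      # 10   (dark room -> minimum backlight)
print(adaptive_brightness(10000))  # 255  (direct sunlight -> maximum)
```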
The touch sensor 180K is also referred to as a "touch panel." The touch sensor 180K may be disposed on the display screen 194; together they form a touchscreen, also called a "touch control screen." The touch sensor 180K is used to detect a touch operation acting on or near it, and may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 194.
In the embodiments of the present application, the first device relies on the touch sensor 180K to detect the user operations on fig. 2A-2F, fig. 3A-3B, and fig. 4A-4D. These operations include touch, double-tap, and long-press operations applied to a control, as well as gesture operations applied to a certain area, such as the gesture operations shown in fig. 4A and fig. 4B.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The first device may detect a key input and generate a key signal input related to user settings and function control of the first device.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming-call vibration alerts as well as touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects. Touch operations on different areas of the display screen 194 may also correspond to different vibration feedback effects. Different application scenarios (e.g., time reminders, received information, alarm clocks, games) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, and may be used to indicate a charging status, a change in battery level, a message, a missed call, a notification, and the like.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be inserted into or removed from the SIM card interface 195 to connect to or disconnect from the first device. The first device may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 simultaneously; the types of these cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, and with external memory cards. The first device interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the first device employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the first device and cannot be separated from it.
The foregoing is a specific description of an embodiment of the present application using a first device as an example. It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the first apparatus. The first device may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
By implementing the method provided by the embodiments of the present application, an electronic device such as a mobile phone, when it can connect to an external camera and needs to obtain permission to use the external camera, can negotiate a suitable authentication mode according to the specific capabilities of the electronic device to which the external camera belongs, and then establish a trusted connection. In this way, when a certain capability or service of the master device or the slave device is unavailable or currently occupied, the master device and the slave device can still determine to authenticate using another authentication mode. During authentication in the other mode, the master device or the slave device does not need to provide the unavailable or currently occupied capability or service, which prevents the authentication process from being blocked.
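The capability-based negotiation summarized above can be sketched as a simple decision on the master side. The capability-report keys and mode names below are illustrative assumptions; the patent itself only requires that the chosen mode avoid any capability that is absent or occupied.

```python
# Hedged sketch of the negotiation: the master device picks an authentication
# mode that the slave device can actually carry out. Capability keys and mode
# names are illustrative, not taken from the patent.
def choose_auth_mode(capabilities):
    """Select a key-exchange mode from the slave's capability report.

    capabilities: e.g. {"camera": True, "display": True, "display_busy": False}
    """
    has_camera = capabilities.get("camera", False)
    display_usable = (capabilities.get("display", False)
                      and not capabilities.get("display_busy", False))
    if has_camera and display_usable:
        # The slave shows a graphic code; the master scans it for the key.
        return "slave_displays_code"
    if has_camera:
        # No usable screen on the slave: the master shows the code instead,
        # and the slave scans it with its camera.
        return "master_displays_code"
    return "unsupported"

print(choose_auth_mode({"camera": True, "display": False}))  # master_displays_code
print(choose_auth_mode({"camera": True, "display": True}))   # slave_displays_code
```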
The term "user interface (UI)" in the description, claims, and drawings of the present application is a media interface for interaction and information exchange between an application program or operating system and a user; it converts between an internal form of information and a form acceptable to the user. The user interface of an application program is source code written in a specific computer language such as Java or extensible markup language (XML); the interface source code is parsed and rendered on the terminal device and finally presented as content the user can recognize, such as pictures, text, and buttons. Controls, also known as widgets, are the basic elements of a user interface; typical controls include toolbars, menu bars, text boxes, buttons, scrollbars, pictures, and text. The properties and content of the controls in an interface are defined by tags or nodes; for example, XML specifies the controls contained in an interface through nodes such as < Textview >, < ImgView >, and < VideoView >. A node corresponds to a control or a property in the interface, and after being parsed and rendered, the node is presented as content visible to the user. In addition, the interfaces of many applications, such as hybrid applications, typically contain web pages.
A web page, also referred to as a page, can be understood as a special control embedded in an application program interface. A web page is source code written in a specific computer language, such as hypertext markup language (HTML), cascading style sheets (CSS), and JavaScript (JS); the web page source code can be loaded and displayed as user-recognizable content by a browser or by a web page display component with browser-like functionality. The specific content contained in a web page is likewise defined by tags or nodes in the web page source code; for example, HTML defines the elements and attributes of the web page through < p >, < img >, < video >, and < canvas >.
A commonly used presentation form of the user interface is a graphical user interface (GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, or a control displayed on the display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, or a widget.
As used in the specification and the appended claims, the singular forms "a," "an," "the," and "said" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application refers to and encompasses any and all possible combinations of one or more of the listed items. As used in the above embodiments, the term "when …" may be interpreted to mean "if …" or "after …" or "in response to determining …" or "in response to detecting …", depending on the context. Similarly, the phrase "upon determining …" or "if (a stated condition or event) is detected" may be interpreted to mean "if it is determined …" or "in response to determining …" or "upon detecting (a stated condition or event)" or "in response to detecting (a stated condition or event)", depending on the context.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk), among others.
Those of ordinary skill in the art will appreciate that all or part of the flows of the above method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when executed, the program may include the flows of the above method embodiments. The aforementioned storage medium includes: a ROM, a random access memory (RAM), a magnetic disk, an optical disc, or the like.

Claims (18)

1. A method of connecting an external camera, the method being applied to a first device having a processor, a camera, a display screen, the method comprising:
receiving capability information of a second device, wherein the second device is an electronic device in the same network with the first device;
if the capability information indicates that the second device is provided with a camera but is not provided with an available display screen, the first device displays a first graphic code, wherein the first graphic code corresponds to a first key, and the first graphic code is used for the second device to determine the first key;
receiving an image sent by the second device, wherein the image is acquired by the second device in real time through a camera and is processed by using the first key;
analyzing the image by using the first key to obtain an analyzed image;
displaying the analyzed image;
if the capability information indicates that the second device is provided with a camera and an available display screen, the first device scans a second graphic code to obtain a second key, and the second graphic code is generated by the second device and displayed on the display screen of the second device;
receiving an image sent by the second device, wherein the image is acquired by the second device in real time through a camera and is processed by using the second key;
and analyzing the image by using the second key to obtain an analyzed image.
2. The method of claim 1, wherein the capability information comprises: the capabilities and the use status of the capabilities provided by the device.
3. The method according to claim 2, wherein the capability information indicates that the second device is provided with a camera but not with a usable display screen, in particular comprising:
the capability of the device indicates that the second device is provided with a camera but not a display screen;
or, the capability of the device indicates that the second device has a camera and a display screen, but the use state of the capability indicates that the display screen cannot be invoked.
4. A method according to any of claims 1-3, characterized in that before receiving the capability information of the second device, the method further comprises: a first event is detected and the capability information is requested from the second device.
5. The method of claim 4, wherein the first event comprises:
starting a camera of the first device;
or, user operation on the first control;
or, the requirement of the user on the camera capability of the first device is beyond the capability range of the first device.
6. The method of claim 5, wherein the user's requirement for camera capabilities of the first device is beyond the capabilities of the first device, specifically comprising:
the zoom range provided by the camera of the first device cannot meet the requirements of the user;
or, the modes which can be provided by the camera of the first device and adapt to different shooting scenes cannot meet the requirements of users.
7. The method according to claim 1, wherein displaying the parsed image specifically comprises: displaying the parsed image in a first area, the first area comprising: all of the area originally displaying the image acquired by the camera of the first device, or part of the area originally displaying the image acquired by the camera of the first device.
8. A method of connecting an external camera, the method being applied to a second device having a processor, a camera, the method comprising:
transmitting capability information of the second device to a first device, wherein the first device is an electronic device in the same network with the second device;
when the capability information indicates that the second device is provided with a camera but not provided with an available display screen, scanning a first graphic code to obtain a first key, wherein the first graphic code is displayed on the display screen of the first device by the first device;
processing an image acquired by the second device in real time through a camera by using the first key to obtain a processed image;
transmitting the image processed by the first key to the first device;
displaying a second graphic code when the capability information indicates that the second device is provided with a camera and an available display screen, wherein the second graphic code corresponds to a second key, and the second graphic code is displayed on the display screen of the second device by the second device;
processing an image acquired by the second device in real time through a camera by using the second key to obtain a processed image;
and sending the image processed by the second key to the first device.
9. The method of claim 8, wherein the capability information comprises: the capabilities and the use status of the capabilities provided by the device.
10. The method according to claim 8, wherein the lack of an available display screen, in particular, comprises:
the display screen is not provided;
or, a display screen is provided, but the display screen cannot be invoked.
11. An electronic device, the electronic device comprising: one or more processors and a memory; the memory is coupled with the one or more processors and is used for storing computer program code, the computer program code comprising computer instructions; the one or more processors invoke the computer instructions to cause the electronic device to perform:
receiving capability information of a second device, wherein the second device is an electronic device in the same network with the first device;
if the capability information indicates that the second device is provided with a camera but is not provided with an available display screen, the first device displays a first graphic code, wherein the first graphic code corresponds to a first key, and the first graphic code is used for the second device to determine the first key;
receiving an image sent by the second device, wherein the image is acquired by the second device in real time through a camera and is processed by using the first key;
analyzing the image by using the first key to obtain an analyzed image;
displaying the analyzed image;
if the capability information indicates that the second device is provided with a camera and an available display screen, the first device scans a second graphic code to obtain a second key, and the second graphic code is generated by the second device and displayed on the display screen of the second device;
receiving an image sent by the second device, wherein the image is acquired by the second device in real time through a camera and is processed by using the second key;
and analyzing the image by using the second key to obtain an analyzed image.
12. The electronic device of claim 11, wherein the capability information comprises: the capabilities and the use status of the capabilities provided by the device.
13. The electronic device of claim 12, wherein the capability information indicates that the second device is provided with a camera but not a display screen available, and specifically comprises:
the capability of the device indicates that the second device is provided with a camera but not a display screen;
or, the capability of the device indicates that the second device has a camera and a display screen, but the use state of the capability indicates that the display screen cannot be invoked.
14. The electronic device of any one of claims 11-13, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: a first event is detected and the capability information is requested from the second device.
15. The electronic device of claim 14, wherein the first event comprises:
starting a camera of the first device;
or, user operation on the first control;
or, the requirement of the user on the camera capability of the first device is beyond the capability range of the first device.
16. The electronic device of claim 15, wherein the user's requirement for camera capabilities of the first device is beyond the capabilities of the first device, specifically comprising:
the zoom range provided by the camera of the first device cannot meet the requirements of the user;
or, the modes which can be provided by the camera of the first device and adapt to different shooting scenes cannot meet the requirements of users.
17. The electronic device of claim 11, wherein the one or more processors invoke the computer instructions to cause the electronic device to display the parsed image, specifically comprising:
displaying the parsed image in a first area, the first area comprising: all of the area originally displaying the image acquired by the camera of the first device, or part of the area originally displaying the image acquired by the camera of the first device.
18. A computer readable storage medium comprising instructions which, when run on an electronic device, cause the method of any one of claims 1-7 to be performed.
CN202110740201.5A 2020-12-31 2021-06-30 Method and system for connecting external camera Active CN114697960B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011645224 2020-12-31
CN202011645224X 2020-12-31

Publications (2)

Publication Number Publication Date
CN114697960A CN114697960A (en) 2022-07-01
CN114697960B true CN114697960B (en) 2024-01-02

Family

ID=82135583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110740201.5A Active CN114697960B (en) 2020-12-31 2021-06-30 Method and system for connecting external camera

Country Status (1)

Country Link
CN (1) CN114697960B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101483654A (en) * 2009-02-09 2009-07-15 北京华大智宝电子系统有限公司 Method and system for implementing authentication and data safe transmission
CN102571702A (en) * 2010-12-22 2012-07-11 中兴通讯股份有限公司 Key generation method, system and equipment in Internet of things
CN110062362A (en) * 2018-01-19 2019-07-26 尹寅 A kind of Bluetooth pairing connection method of intelligent glasses
CN110086634A (en) * 2019-05-16 2019-08-02 济南浪潮高新科技投资发展有限公司 A kind of system and method for intelligent video camera head safety certification and access
CN111914268A (en) * 2020-07-05 2020-11-10 中信银行股份有限公司 Encryption device, control method thereof, and computer-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10691788B2 (en) * 2017-02-03 2020-06-23 Ademco Inc. Systems and methods for provisioning a camera with a dynamic QR code and a BLE connection


Also Published As

Publication number Publication date
CN114697960A (en) 2022-07-01

Similar Documents

Publication Publication Date Title
CN113885759B (en) Notification message processing method, device, system and computer readable storage medium
US11683850B2 (en) Bluetooth reconnection method and related apparatus
CN114173204B (en) Message prompting method, electronic equipment and system
EP4113415A1 (en) Service recommending method, electronic device, and system
CN113497909B (en) Equipment interaction method and electronic equipment
US20220179827A1 (en) File Sharing Method of Mobile Terminal and Device
WO2020216098A1 (en) Method for providing forwarding service across electronic apparatuses, apparatus, and system
CN112130788A (en) Content sharing method and device
CN117014859A (en) Address book-based device discovery method, audio and video communication method and electronic device
US20230208790A1 (en) Content sharing method, apparatus, and system
US20230362296A1 (en) Call method and apparatus
CN113973398B (en) Wireless network connection method, electronic equipment and chip system
EP3883299A1 (en) Method for smart home appliance to access network and related device
CN113676879A (en) Method, electronic device and system for sharing information
CN114356195B (en) File transmission method and related equipment
WO2021027623A1 (en) Device capability discovery method and p2p device
US20230350629A1 (en) Double-Channel Screen Mirroring Method and Electronic Device
CN115119048B (en) Video stream processing method and electronic equipment
CN114697960B (en) Method and system for connecting external camera
CN117425227A (en) Method and device for establishing session based on WiFi direct connection
CN114489876A (en) Text input method, electronic equipment and system
CN114071055B (en) Method for rapidly joining conference and related equipment
EP4290375A1 (en) Display method, electronic device and system
WO2023016347A1 (en) Voiceprint authentication response method and system, and electronic devices
EP4339770A1 (en) Screen sharing method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant