WO2022001191A1 - Camera invocation method, electronic device, and camera - Google Patents

Camera invocation method, electronic device, and camera

Info

Publication number: WO2022001191A1
Authority: WO (WIPO PCT)
Prior art keywords: type, module, application, message, interface
Application number: PCT/CN2021/081092
Other languages: English (en), French (fr)
Inventors: 张新功, 吕斐
Original assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司
Priority to JP2022581629A (JP2023532741A), EP21834685.6A (EP4161060A4), and US18/003,652 (US20230254575A1)
Publication of WO2022001191A1


Classifications

    • H04N 23/66: Remote control of cameras or camera parts, e.g. by remote control devices
    • G06F 13/4282: Bus transfer protocol, e.g. handshake, synchronisation, on a serial bus, e.g. I2C bus, SPI bus
    • H04N 23/617: Upgrading or updating of programs or applications for camera control
    • H04N 23/661: Transmitting camera control signals through networks, e.g. control via the Internet

Definitions

  • the present application relates to the field of electronic device control, and in particular, to a method for invoking a camera, an electronic device and a camera.
  • the camera of the electronic device can be remotely invoked by another electronic device to implement corresponding functions. For example, after a remote housekeeping application is installed on both the mobile device and the large screen, the camera on the large screen can be remotely called by the mobile device through the remote housekeeping application to realize the remote housekeeping function.
  • the camera of the electronic device can only be called exclusively by one application; if another application wants to call the camera at that moment, it can do so only after the current application exits. Therefore, how to allow multiple applications to call the camera has become a real requirement.
  • the present application proposes a method for invoking a camera, an electronic device and a camera.
  • during remote invocation, the camera of the electronic device can be invoked by at least two applications, and can even serve at least two applications at the same time, which improves usage efficiency and the user experience.
  • In a first aspect, a camera is provided.
  • the camera is connected to the electronic device through a first interface, and the camera includes: one or more processors; a memory; and one or more computer programs, where the one or more computer programs are stored in the memory and, when executed by the one or more processors, cause the camera to perform the following steps: receiving a first message containing an application ID or an application sub-function ID; in response to the first message, when it is detected that the type corresponding to the application ID or the application sub-function ID is the first type, outputting a first processing result of the first message type through the first interface along the first path; when it is detected that the type corresponding to the application ID or the application sub-function ID is the second type, outputting a second processing result of the second message type through the first interface along the second path or the third path; receiving a second message containing another application ID or another application sub-function ID; and in response to the second message, when it is detected that the type corresponding to the other application ID or the other application sub-function ID is the first type, outputting a third processing result of the first message type through the first interface along the first path. (A schematic sketch of this type-based routing is given after the type definitions below.)
  • the other application sub-function may be another sub-function under one application, or may be a sub-function under another application.
  • the camera is connected to the electronic device through an interface, and the camera can implement a dynamic invocation method based on the type of application, and can satisfy the invocation request of at least two applications, at least one application sub-function, and at least two application sub-functions.
  • the problem of exclusive use of the camera is solved, the use efficiency is improved, and the user experience is improved.
  • the camera further performs the following steps: in response to the second message, when it is detected that the type corresponding to the other application ID or the other application sub-function ID is the second type, outputting the fourth processing result of the second message type through the first interface along the second path or the third path.
  • In this way, a processing method is provided for messages of the second type when another application or another application sub-function calls the camera.
  • the camera further performs the following steps: in response to the first message, when it is detected that the type corresponding to the application ID or the application sub-function ID is the third type, outputting the first processing result of the first message type through the first interface along the first path, and outputting the second processing result of the second message type through the first interface along the second path or the third path; in response to the second message, when it is detected that the type corresponding to the other application ID or the other application sub-function ID is the third type, outputting the third processing result of the first message type through the first interface along the first path, and outputting the fourth processing result of the second message type through the first interface along the second path or the third path; the third type is the first type + the second type.
  • In this way, a processing method is provided for messages of the third type, both for the first call and when another application or another application sub-function calls the camera.
  • the camera further includes: one or more sensor modules, a video input module, a video processing subsystem module, an artificial intelligence module, a video encoding module, and a video graphics system module;
  • the sensor module is used to collect images and output the collected images to the video input module;
  • the video input module is used to preprocess the images collected by the sensor module;
  • the video processing subsystem module is used to perform noise reduction processing on the images preprocessed by the video input module;
  • the artificial intelligence module is used to perform artificial intelligence recognition on the images processed by the video processing subsystem module, and to output artificial intelligence events of the first message type through the first interface;
  • the video graphics system module is used to zoom the images processed by the video processing subsystem module, and to output the zoomed images to the video encoding module;
  • the video encoding module is used to encode the images processed by the video processing subsystem module or the zoomed images from the video graphics system module, generate a video stream, and output the video stream of the second message type through the first interface.
  • the first path includes the sensor module, the video input module, the video processing subsystem module, and the artificial intelligence module;
  • the second path includes the sensor module, the video input module, the video processing subsystem module, the video graphics system module, and the video encoding module;
  • the third path includes the sensor module, the video input module, the video processing subsystem module, and the video encoding module.
  • the first type is an artificial intelligence type
  • the second type is a video stream type
  • the third type is an artificial intelligence type + a video stream type
  • the first message type is Socket message type
  • the second message type is UVC message type
  • the first interface is a USB interface.
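  • To make the routing above easier to follow, here is a minimal C sketch of the type-based dispatch described in the first aspect. It is an illustration only: the numeric type codes, the lookup function, and the output helpers are assumptions, not part of the claims.

```c
#include <stdio.h>

/* Hypothetical type codes; the claims name the types only abstractly. */
typedef enum { TYPE_AI = 1, TYPE_VIDEO = 2, TYPE_AI_PLUS_VIDEO = 3 } app_type_t;

/* Assumed lookup from application ID (or sub-function ID) to type;
 * a stand-in for whatever registry the camera firmware would keep. */
static app_type_t lookup_type(int app_id)
{
    return (app_type_t)(app_id % 3 + 1);
}

/* Assumed output helpers: Socket messages and UVC streams, both leaving
 * through the first interface in the single-interface variant. */
static void output_ai_event(void)  { puts("path 1 -> AI event, Socket message, interface 1"); }
static void output_video(int path) { printf("path %d -> video stream, UVC message, interface 1\n", path); }

/* Route one received message by the type bound to its application ID; a
 * second message from another application is handled the same way, so two
 * callers can be served concurrently. */
void handle_message(int app_or_subfunction_id, int use_third_path)
{
    app_type_t t = lookup_type(app_or_subfunction_id);
    if (t == TYPE_AI || t == TYPE_AI_PLUS_VIDEO)
        output_ai_event();                    /* first type: first path */
    if (t == TYPE_VIDEO || t == TYPE_AI_PLUS_VIDEO)
        output_video(use_third_path ? 3 : 2); /* second type: second or third path */
}

int main(void)
{
    handle_message(1001, 0); /* first message  */
    handle_message(2002, 1); /* second message */
    return 0;
}
```

  • In the dual-interface variant of the second aspect below, the same routing applies, except that the video stream leaves through the second interface while AI events still use the first interface.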
  • In a second aspect, a camera is provided.
  • the camera is connected to the electronic device through a first interface and a second interface, and the camera includes: one or more processors; a memory; and one or more computer programs, where the one or more computer programs are stored in the memory and, when executed by the one or more processors, cause the camera to perform the following steps: receiving a first message containing an application ID or an application sub-function ID; in response to the first message, when it is detected that the type corresponding to the application ID or the application sub-function ID is the first type, outputting the first processing result of the first message type through the first interface along the first path; when it is detected that the type corresponding to the application ID or the application sub-function ID is the second type, outputting the second processing result of the second message type through the second interface along the second path or the third path; receiving a second message containing another application ID or another application sub-function ID; and in response to the second message, when it is detected that the type corresponding to the other application ID or the other application sub-function ID is the first type, outputting the third processing result of the first message type through the first interface along the first path.
  • the other application sub-function may be another sub-function under one application, or may be a sub-function under another application.
  • the camera is connected to the electronic device through two interfaces, and the camera can implement a dynamic invocation method based on the type of application, and can satisfy the invocation request of at least two applications, at least one application sub-function, and at least two application sub-functions. Without changing the internal structure of the camera, the problem of exclusive use of the camera is solved, the use efficiency is improved, and the user experience is improved.
  • the camera further performs the following steps: in response to the second message, when it is detected that the type corresponding to the other application ID or the other application sub-function ID is the second type, outputting the fourth processing result of the second message type through the second interface along the second path or the third path. In this way, a processing method is provided for messages of the second type when another application or another application sub-function calls the camera.
  • the camera further performs the following steps: in response to the first message, when it is detected that the type corresponding to the application ID or the application sub-function ID is the third type, outputting the first processing result of the first message type through the first interface along the first path, and outputting the second processing result of the second message type through the second interface along the second path or the third path; in response to the second message, when it is detected that the type corresponding to the other application ID or the other application sub-function ID is the third type, outputting the third processing result of the first message type through the first interface along the first path, and outputting the fourth processing result of the second message type through the second interface along the second path or the third path; the third type is the first type + the second type.
  • the camera further includes: one or more sensor modules, a video input module, a video processing subsystem module, an artificial intelligence module, a video encoding module, and a video graphics system module;
  • the sensor module is used to collect images and output the collected images to the video input module;
  • the video input module is used to preprocess the images collected by the sensor module;
  • the video processing subsystem module is used to perform noise reduction processing on the images preprocessed by the video input module;
  • the artificial intelligence module is used to perform artificial intelligence recognition on the images processed by the video processing subsystem module, and to output artificial intelligence events of the first message type through the first interface;
  • the video graphics system module is used to zoom the images processed by the video processing subsystem module, and to output the zoomed images to the video encoding module;
  • the video encoding module is used to encode the images processed by the video processing subsystem module or the zoomed images from the video graphics system module, generate a video stream, and output the video stream of the second message type through the second interface.
  • the first path includes the sensor module, the video input module, the video processing subsystem module, and the artificial intelligence module;
  • the second path includes the sensor module, the video input module, the video processing subsystem module, the video graphics system module, and the video encoding module;
  • the third path includes the sensor module, the video input module, the video processing subsystem module, and the video encoding module.
  • the first type is an artificial intelligence type
  • the second type is a video stream type
  • the third type is an artificial intelligence type + a video stream type
  • the first message type is Socket message type
  • the second message type is UVC message type
  • at least one of the first interface and the second interface is a USB interface.
  • In a third aspect, a method for invoking a camera is provided.
  • the method is applied to a camera connected to an electronic device through a first interface, and includes: receiving a first message containing an application ID or an application sub-function ID; in response to the first message, when it is detected that the type corresponding to the application ID or the application sub-function ID is the first type, outputting the first processing result of the first message type through the first interface along the first path; when it is detected that the type corresponding to the application ID or the application sub-function ID is the second type, outputting the second processing result of the second message type through the first interface along the second path or the third path; receiving a second message containing another application ID or another application sub-function ID; and in response to the second message, when it is detected that the type corresponding to the other application ID or the other application sub-function ID is the first type, outputting the third processing result of the first message type through the first interface along the first path.
  • the method further includes: in response to the second message, when it is detected that the type corresponding to the other application ID or the other application sub-function ID is the second type, outputting the fourth processing result of the second message type through the first interface along the second path or the third path.
  • the method further includes: in response to the first message, when it is detected that the type corresponding to the application ID or the application sub-function ID is the third type, outputting the first processing result of the first message type through the first interface along the first path, and outputting the second processing result of the second message type through the first interface along the second path or the third path; in response to the second message, when it is detected that the type corresponding to the other application ID or the other application sub-function ID is the third type, outputting the third processing result of the first message type through the first interface along the first path, and outputting the fourth processing result of the second message type through the first interface along the second path or the third path; the third type is the first type + the second type.
  • the camera includes: one or more sensor modules, a video input module, a video processing subsystem module, an artificial intelligence module, a video encoding module, and a video graphics system module; the sensor module is used to collect images and output the collected images to the video input module; the video input module is used to preprocess the images collected by the sensor module; the video processing subsystem module is used to perform noise reduction processing on the images preprocessed by the video input module;
  • the artificial intelligence module is used to perform artificial intelligence recognition on the images processed by the video processing subsystem module, and to output artificial intelligence events of the first message type through the first interface;
  • the video graphics system module is used to zoom the images processed by the video processing subsystem module, and to output the zoomed images to the video encoding module;
  • the video encoding module is used to encode the images processed by the video processing subsystem module or the zoomed images from the video graphics system module, generate a video stream, and output the video stream of the second message type through the first interface.
  • the first path includes the sensor module, the video input module, the video processing subsystem module, and the artificial intelligence module;
  • the second path includes the sensor module, the video input module, the video processing subsystem module, the video graphics system module, and the video encoding module;
  • the third path includes the sensor module, the video input module, the video processing subsystem module, and the video encoding module.
  • the first type is an artificial intelligence type
  • the second type is a video stream type
  • the third type is an artificial intelligence type + a video stream type
  • the first message type is Socket message type
  • the second message type is UVC message type
  • the first interface is a USB interface.
  • the third aspect and any implementation manner of the third aspect correspond to the first aspect and any implementation manner of the first aspect, respectively.
  • For the technical effects corresponding to the third aspect and any implementation manner of the third aspect, reference may be made to the technical effects corresponding to the first aspect and any implementation manner of the first aspect, which will not be repeated here.
  • In a fourth aspect, a method for invoking a camera is provided.
  • the method is applied to a camera connected to an electronic device through a first interface and a second interface, and includes: receiving a first message containing an application ID or an application sub-function ID; in response to the first message, when it is detected that the type corresponding to the application ID or the application sub-function ID is the first type, outputting the first processing result of the first message type through the first interface along the first path; when it is detected that the type corresponding to the application ID or the application sub-function ID is the second type, outputting the second processing result of the second message type through the second interface along the second path or the third path; receiving a second message containing another application ID or another application sub-function ID; and in response to the second message, when it is detected that the type corresponding to the other application ID or the other application sub-function ID is the first type, outputting the third processing result of the first message type through the first interface along the first path.
  • the method further includes: in response to the second message, when it is detected that the type corresponding to the other application ID or the other application sub-function ID is the second type, outputting the fourth processing result of the second message type through the second interface along the second path or the third path.
  • the method further includes: in response to the first message, when it is detected that the type corresponding to the application ID or the application sub-function ID is the third type, outputting the first processing result of the first message type through the first interface along the first path, and outputting the second processing result of the second message type through the second interface along the second path or the third path; in response to the second message, when it is detected that the type corresponding to the other application ID or the other application sub-function ID is the third type, outputting the third processing result of the first message type through the first interface along the first path, and outputting the fourth processing result of the second message type through the second interface along the second path or the third path; the third type is the first type + the second type.
  • the camera includes: one or more sensor modules, a video input module, a video processing subsystem module, an artificial intelligence module, a video encoding module, and a video graphics system module; the sensor module is used to collect images and output the collected images to the video input module; the video input module is used to preprocess the images collected by the sensor module; the video processing subsystem module is used to perform noise reduction processing on the images preprocessed by the video input module;
  • the artificial intelligence module is used to perform artificial intelligence recognition on the images processed by the video processing subsystem module, and to output artificial intelligence events of the first message type through the first interface;
  • the video graphics system module is used to zoom the images processed by the video processing subsystem module, and to output the zoomed images to the video encoding module;
  • the video encoding module is used to encode the images processed by the video processing subsystem module or the zoomed images from the video graphics system module, generate a video stream, and output the video stream of the second message type through the second interface.
  • the first path includes the sensor module, the video input module, the video processing subsystem module, and the artificial intelligence module;
  • the second path includes the sensor module, the video input module, the video processing subsystem module, the video graphics system module, and the video encoding module;
  • the third path includes the sensor module, the video input module, the video processing subsystem module, and the video encoding module.
  • the first type is an artificial intelligence type
  • the second type is a video stream type
  • the third type is an artificial intelligence type + a video stream type
  • the first message type is Socket message type
  • the second message type is UVC message type
  • at least one of the first interface and the second interface is a USB interface.
  • the fourth aspect and any implementation manner of the fourth aspect correspond to the second aspect and any implementation manner of the second aspect, respectively.
  • For the technical effects corresponding to the fourth aspect and any implementation manner of the fourth aspect, reference may be made to the technical effects corresponding to the second aspect and any implementation manner of the second aspect, which will not be repeated here.
  • In a fifth aspect, an electronic device is provided.
  • the electronic device is connected to the camera through a first interface, and the electronic device includes: one or more processors; a memory; and one or more computer programs, where the one or more computer programs are stored in the memory and, when executed by the one or more processors, cause the electronic device to perform the following steps: when detecting that an application related to the camera is opened, or when detecting that an application sub-function of an application is opened, sending a first message containing an application ID or an application sub-function ID to the camera, where the application ID corresponds to the application, or the application sub-function ID corresponds to the application sub-function; receiving the first processing result of the first message type through the first interface, and/or receiving the second processing result of the second message type through the first interface; and when detecting that another application related to the camera is opened, or when detecting that another application sub-function is opened, sending a second message containing another application ID or another application sub-function ID to the camera.
  • the electronic device and the camera are connected through an interface, so that the electronic device and the camera cooperate with each other to meet the calling requests of at least two applications, at least one application sub-function, and at least two application sub-functions, without changing the internal structure of the camera.
  • the problem of exclusive use of the camera is thus solved, usage efficiency is improved, and the user experience is improved.
  • the first message type is a Socket message type; the second message type is a UVC message type; and the first interface is a USB interface.
  • In this way, the message types and the interface are made concrete; an illustrative device-side sketch follows.
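  • As a rough, hypothetical illustration of the electronic-device side of this exchange, the sketch below composes a first or second message carrying an application ID (or application sub-function ID) and hands it to a stubbed transport. The wire format and function names are invented for the example; the application does not specify an actual message layout.

```c
#include <stdint.h>
#include <stdio.h>

/* Invented wire format: one byte of message kind plus a 32-bit ID. */
enum { MSG_FIRST = 1, MSG_SECOND = 2 };

typedef struct {
    uint8_t  kind; /* MSG_FIRST or MSG_SECOND */
    uint32_t id;   /* application ID or application sub-function ID */
} call_request_t;

/* Stubbed transport; in practice this would travel over the first
 * interface, e.g. via the CameraProxy/CameraHAL drivers and USB. */
static int channel_send(const call_request_t *req)
{
    printf("send kind=%u id=%u\n", (unsigned)req->kind, (unsigned)req->id);
    return 0;
}

/* Called when the device detects that a camera-related application (or an
 * application sub-function) has been opened. */
static int notify_camera(uint8_t kind, uint32_t app_or_subfunction_id)
{
    call_request_t req = { kind, app_or_subfunction_id };
    return channel_send(&req);
}

int main(void)
{
    notify_camera(MSG_FIRST, 42);  /* an application is opened      */
    notify_camera(MSG_SECOND, 43); /* another application is opened */
    return 0;
}
```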
  • In a sixth aspect, an electronic device is provided.
  • the electronic device is connected to the camera through a first interface and a second interface, and the electronic device includes: one or more processors; a memory; and one or more computer programs, where the one or more computer programs are stored in the memory and, when executed by the one or more processors, cause the electronic device to perform the following steps: when detecting that an application related to the camera is opened, or when detecting that an application sub-function of an application is opened, sending a first message containing an application ID or an application sub-function ID to the camera, where the application ID corresponds to the application, or the application sub-function ID corresponds to the application sub-function; receiving the first processing result of the first message type through the first interface, and/or receiving the second processing result of the second message type through the second interface; and when detecting that another application related to the camera is opened, or when detecting that another application sub-function is opened, sending a second message containing another application ID or another application sub-function ID to the camera.
  • the electronic device and the camera are connected through two interfaces, so that the electronic device and the camera cooperate to meet the calling requests of at least two applications, at least one application sub-function, and at least two application sub-functions. Without changing the internal structure of the camera, the problem of exclusive use of the camera is solved, usage efficiency is improved, and the user experience is improved.
  • the first message type is a Socket message type; the second message type is a UVC message type; and at least one of the first interface and the second interface is a USB interface.
  • In this way, the message types and the interfaces are made concrete.
  • In a seventh aspect, a method for invoking a camera is provided.
  • the method is applied to an electronic device, and the electronic device is connected to a camera through a first interface.
  • the method includes: when it is detected that an application related to the camera is opened, or when it is detected that an application sub-function of an application is opened, sending a first message containing an application ID or an application sub-function ID to the camera, where the application ID corresponds to the application, or the application sub-function ID corresponds to the application sub-function; receiving the first processing result of the first message type through the first interface, and/or receiving the second processing result of the second message type through the first interface; when it is detected that another application related to the camera is opened, or when it is detected that another application sub-function is opened, sending a second message containing another application ID or another application sub-function ID to the camera, where the other application ID corresponds to the other application, or the other application sub-function ID corresponds to the other application sub-function; and receiving the third processing result of the first message type through the first interface and/or the fourth processing result of the second message type through the first interface.
  • the first message type is a Socket message type; the second message type is a UVC message type; and the first interface is a USB interface.
  • the seventh aspect and any implementation manner of the seventh aspect correspond to the fifth aspect and any implementation manner of the fifth aspect, respectively.
  • For the technical effects corresponding to the seventh aspect and any implementation manner of the seventh aspect, reference may be made to the technical effects corresponding to the fifth aspect and any implementation manner of the fifth aspect, which will not be repeated here.
  • In an eighth aspect, a method for invoking a camera is provided.
  • the method is applied to an electronic device, and the electronic device is connected to a camera through a first interface and a second interface.
  • the method includes: when it is detected that an application related to the camera is opened, or when it is detected that an application sub-function of an application is opened, sending a first message containing an application ID or an application sub-function ID to the camera, where the application ID corresponds to the application, or the application sub-function ID corresponds to the application sub-function; receiving the first processing result of the first message type through the first interface, and/or receiving the second processing result of the second message type through the second interface; and when it is detected that another application related to the camera is opened, or when it is detected that another application sub-function is opened, sending a second message containing another application ID or another application sub-function ID to the camera.
  • the first message type is a Socket message type; the second message type is a UVC message type; and at least one of the first interface and the second interface is a USB interface.
  • the eighth aspect and any implementation manner of the eighth aspect correspond to the sixth aspect and any implementation manner of the sixth aspect, respectively.
  • For the technical effects corresponding to the eighth aspect and any implementation manner of the eighth aspect, reference may be made to the technical effects corresponding to the sixth aspect and any implementation manner of the sixth aspect, which will not be repeated here.
  • In a ninth aspect, a computer-readable storage medium is provided. The computer-readable storage medium includes a computer program; when the computer program runs on the camera, the camera is caused to execute the method for invoking a camera according to any one of the third aspect, the fourth aspect, and any implementation manner of the third aspect or the fourth aspect.
  • the ninth aspect and any implementation manner of the ninth aspect correspond to the third aspect, the fourth aspect, and any implementation manner of the third aspect, and any implementation manner of the fourth aspect, respectively.
  • For the technical effects corresponding to the ninth aspect and any implementation manner of the ninth aspect, reference may be made to the technical effects corresponding to the third aspect, the fourth aspect, and any of their implementation manners, which will not be repeated here.
  • In a tenth aspect, a computer-readable storage medium is provided. The computer-readable storage medium includes a computer program that, when run on the electronic device, causes the electronic device to execute the method for invoking a camera according to any one of the seventh aspect, the eighth aspect, and any implementation manner of the seventh aspect or the eighth aspect.
  • the tenth aspect and any implementation manner of the tenth aspect correspond to the seventh aspect, the eighth aspect, and any implementation manner of the seventh aspect, and any implementation manner of the eighth aspect, respectively.
  • For the technical effects corresponding to the tenth aspect and any implementation manner of the tenth aspect, reference may be made to the technical effects corresponding to the seventh aspect, the eighth aspect, and any of their implementation manners, which will not be repeated here.
  • In an eleventh aspect, a computer system is provided. The computer system includes the electronic device according to any one of the fifth aspect, the sixth aspect, and their implementation manners, and the camera according to any one of the first aspect, the second aspect, and their implementation manners; the electronic device performs the method according to any one of the seventh aspect, the eighth aspect, and their implementation manners, and the camera performs the method according to any one of the third aspect, the fourth aspect, and their implementation manners.
  • The eleventh aspect and any implementation manner of the eleventh aspect correspond, respectively, to combinations of the fifth, sixth, first, second, seventh, eighth, third, and fourth aspects and any of their implementation manners.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a software structure of an electronic device provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a connection between a camera and an electronic device according to Embodiment 1 of the present application;
  • FIGS. 5a-5d are schematic flowcharts of a camera calling method according to Embodiment 1 of the present application.
  • FIG. 6 is a schematic structural diagram of a connection between a camera and an electronic device according to Embodiment 2 of the present application;
  • FIGS. 7a-7d are schematic flowcharts of a camera calling method according to Embodiment 2 of the present application.
  • FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • The terms "first" and "second" in the description and claims of the embodiments of the present application are used to distinguish different objects, rather than to describe a specific order of the objects.
  • For example, a first target object and a second target object are used to distinguish different target objects, rather than to describe a specific order of the target objects.
  • words such as "exemplary" or "for example" are used to represent examples, illustrations, or explanations. Any embodiment or design described in the embodiments of the present application as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present the related concepts in a concrete manner.
  • the first application and the second application may be any application in the electronic device that needs to call the camera.
  • the first application and the second application may be installed by the electronic device before leaving the factory, or may be downloaded by a user during use of the electronic device, which is not limited in this application.
  • the first application and the second application are only used for example, and are not used to limit the specific number of applications.
  • FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • the first electronic device 100 can be called by the second electronic device 200 .
  • the first electronic device 100 has a camera (not shown), or the first electronic device 100 is connected to the camera through various interfaces such as a universal serial bus (USB) interface.
  • the second electronic device 200 remotely calls and controls the camera of the first electronic device 100 .
  • the first electronic device 100 and the second electronic device 200 are both installed with the same application, such as a "remote housekeeping" application.
  • the second electronic device 200 first opens its own "remote housekeeping" application, and then sends a call request to the first electronic device 100 through that application; after receiving the request, the first electronic device 100 opens its own "remote housekeeping" application.
  • Both the first electronic device 100 and the second electronic device 200 include, but are not limited to, large screens, laptop computers, desktop computers, palmtop computers (such as tablet computers and smart phones), smart wearable devices (such as smart bracelets, smart watches, smart glasses, and smart rings), and other computing devices.
  • the first electronic device 100 is a large screen equipped with a camera; the second electronic device 200 is a smart phone.
  • the second electronic device 200 may or may not be configured with a camera.
  • Although FIG. 1 shows only one first electronic device 100 and one second electronic device 200, the number of first electronic devices 100 and/or second electronic devices 200 may be multiple.
  • FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of the application.
  • Although FIG. 2 uses the first electronic device 100 in FIG. 1 as an example to illustrate the structure of the electronic device, those skilled in the art will understand that the structure in FIG. 2 is also applicable to the second electronic device 200 in FIG. 1.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a USB interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural-network processing unit (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like; it can support USB 1.0, USB 2.0, USB 3.0, and USB 4.0 or higher USB standard specifications.
  • the USB interface 130 may include one or more USB interfaces.
  • the interface connection relationship between the modules illustrated in the embodiments of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code Division Multiple Access (WCDMA), Time Division Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (quasi -zenith satellite system, QZSS) and/or satellite based augmentation systems (SBAS).
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, to save files such as music and videos in the external memory card.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • FIG. 3 is a block diagram of a software structure of an electronic device 100 according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into five layers, which are, from top to bottom, an application layer, a system framework layer, a system library and runtime layer, a hardware abstraction layer, and a kernel layer.
  • the application layer may include remote housekeeping applications, home camera applications, video calling applications, artificial intelligence (Artificial Intelligence, AI) fitness applications, child mode applications and other programs.
  • the remote housekeeping application is used for devices other than the electronic device 100 to turn on the camera on the electronic device 100 by means of remote calling, and obtain video images and/or pictures captured by the camera.
  • the applications included in the application layer shown in FIG. 3 are only illustrative, and are not limited in this application. It can be understood that the applications included in the application layer do not constitute a specific limitation on the electronic device 100 . In other embodiments of the present application, compared with the applications included in the application layer shown in FIG. 3 , the electronic device 100 may include more or less applications, and the electronic device 100 may also include completely different applications.
  • the system framework layer provides application programming interfaces (APIs) and programming frameworks for applications in the application layer, including various components and services to support developers' Android development.
  • the system framework layer includes some predefined functions.
  • the system framework layer may include a view system, a window manager, a resource manager, a content provider, and the like.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • a window manager is used to manage window programs. The window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on. Content providers are used to store and retrieve data and make these data accessible to applications. The data may include video, images, audio, and the like.
  • the system library and runtime layer includes the system library and the Android Runtime.
  • a system library can include multiple functional modules. For example: browser kernel, 3D graphics library (eg: OpenGL ES), font library, etc.
  • the browser kernel is responsible for interpreting web page syntax (such as HTML, an application under the standard generalized markup language, and JavaScript) and for rendering (displaying) the web page.
  • the 3D graphics library is used to implement 3D graphics drawing, image rendering, compositing and layer processing, etc.
  • the font library is used to implement the input of different fonts.
  • the Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other is the Android core library.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • the components included in the system framework layer, system library and runtime layer shown in FIG. 3 do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the hardware abstraction layer (HAL) includes the CameraHAL driver, the camera proxy (CameraProxy) driver, the display driver, the audio driver, and so on.
  • the above driving is only a schematic example, and is not limited in this application.
  • HAL is the foundation of the system; the final realization of system functions is completed through HAL.
  • both the CameraHAL driver and the CameraProxy driver are used to abstract the camera, so as to hide a specific channel of the camera, so that the application can access (or call) the camera.
  • the CameraHAL driver can communicate with cameras based on the universal serial bus video class (UVC) protocol.
  • the UVC protocol can also be understood as a protocol based on the UVC channel, that is, the camera 400 and the HAL establish a UVC connection (communication connection) through the UVC channel, and transmit messages conforming to the UVC protocol based on the UVC connection.
  • the CameraProxy driver can communicate with the camera based on the remote network driver interface specification (RNDIS) protocol.
  • the RNDIS protocol can also be understood as a Socket channel-based protocol, that is, the camera 400 and the HAL establish a Socket connection (communication connection) through the Socket channel, and transmit messages conforming to the RNDIS protocol based on the Socket connection.
  • the UVC channel can be used to transmit control commands and video streams;
  • the Socket channel can be used to transmit information such as AI events and logs.
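  • As a concrete, purely illustrative sketch of the Socket channel, the C fragment below connects to the host over the RNDIS network link and pushes one AI event as a line of text. The host address, port, and payload format are assumptions invented for the example; the application does not specify them.

```c
/* Hedged sketch: camera-side reporting of an AI event over the Socket
 * channel.  Address, port, and payload format are assumptions. */
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int report_ai_event(const char *event_name)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0); /* TCP socket */
    if (fd < 0)
        return -1;

    struct sockaddr_in host = { 0 };
    host.sin_family = AF_INET;
    host.sin_port   = htons(9000);                     /* assumed port */
    inet_pton(AF_INET, "192.168.8.1", &host.sin_addr); /* assumed RNDIS host address */

    if (connect(fd, (struct sockaddr *)&host, sizeof host) < 0) {
        close(fd);
        return -1;
    }

    char msg[128];
    int n = snprintf(msg, sizeof msg, "AI_EVENT %s\n", event_name);
    ssize_t sent = write(fd, msg, (size_t)n);
    close(fd);
    return (sent == n) ? 0 : -1;
}
```

  • A real implementation would keep the connection open and frame messages according to whatever protocol the camera firmware and the CameraProxy driver agree on; only the channel's role (side-band AI events, separate from the UVC video path) is taken from the description above.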
  • the camera of the electronic device 100 may be an external camera and/or a built-in camera.
  • the external camera can be connected to the USB interface of the electronic device 100 through a USB cable.
  • the built-in camera can be embedded in the electronic device 100, and in the electronic device 100, the built-in camera is connected to a USB interface of the electronic device 100 through a USB cable.
  • FIG. 4 is a schematic structural diagram of a connection between a camera and an electronic device according to Embodiment 1 of the present application.
  • the camera 400 is connected to the USB interface of the electronic device 410 through a USB cable, and further connected to the electronic device 410 .
  • the number of USB ports and their distribution on the side of the electronic device 410 in FIG. 4 are only illustrative examples, and do not limit the scope of the present application.
  • Other types of interfaces such as UART, USART, etc. may also be used for the connection of the camera 400 and the electronic device 410 .
  • the above-mentioned interface may be located on the side of the electronic device 410 , or on the side of the camera 400 , or on both sides of the electronic device 410 and the camera 400 .
  • the number of USB interfaces can be one, two, or even more.
  • the USB interface is located on the hardware layer 411 on the side of the electronic device 410 .
  • the HAL 412 at least includes a CameraProxy driver and a CameraHAL driver.
  • the CameraProxy driver is a proxy program located between the Android application package (android application package, APK) application and the camera, located in the Android HAL layer, and serves as a standard HAL interface definition language (HIDL) service.
  • the CameraHAL driver is an agent program located between the APK application and the camera, and is located in the Android HAL layer. It provides standard data structures and interface definition specifications, and defines standard interfaces for system services across different camera hardware. Different camera hardware manufacturers only need to implement the corresponding interfaces: the device-related implementation is done in the HAL layer and provided in the form of a shared library (.so), so that the device can be used by the Android system.
  • the CameraProxy driver and the CameraHAL driver are used to respectively receive data input from the AI module 423 and the VENC module 424 through two channels.
  • the camera 400 includes an ISP 420 , a sensor module 430 , a CPU 440 and a memory 450 .
  • the ISP 420 is used to process images and video streams, and outputs the processed video streams and images along two channels.
  • the CPU 440 is only a schematic example; various microcontrollers such as a microcontroller unit (MCU), or other devices that function as processors or microcontrollers, can be alternative forms of the aforementioned CPU.
  • the sensor module 430 is the photosensitive element of the camera 400; it collects optical signals, converts the collected optical signals into electrical signals, and then transmits the electrical signals to the ISP 420 for processing and conversion into images or video streams.
  • the ISP 420 includes a video input (VI) module 421, a video process sub-system (VPSS) module 422, an AI module 423, a video encoder (VENC) module 424 and a video graphics system (VGS) module 425.
  • the VI module 421 is used for preprocessing the images collected by the sensor module 430, and the preprocessing includes noise reduction, color correction, shading, and the like.
  • the VPSS module 422 is used to perform 3D noise reduction processing on the image processed by the VI module 421.
  • the 3D noise reduction performed by the VPSS module 422 builds on the two-dimensional noise reduction already performed by the VI module 421.
  • the AI module 423 is configured to perform AI recognition on the image and report AI events.
  • the AI module 423 can identify features in the image to detect whether they conform to the specific feature of an AI event; if it detects that the specific feature exists in the image, it can determine that the corresponding AI event exists and report the AI event.
  • For example, when the AI module 423 recognizes an image processed by other modules (including the sensor module 430, the VI module 421, etc.) and detects the features of a child, the AI module 423 can determine from those features that a child viewing event exists, and the child viewing event is reported to the electronic device 410.
  • the AI module 423 transmits the AI event recognition result to the CameraProxy driver of the HAL 412 through the Socket channel, and then sends the AI event recognition result to the electronic device side.
  • AI events include AI gestures, portrait tracking, child recognition, gesture detection, etc.
  • a Socket channel is a channel for transmitting data based on the TCP connection protocol.
  • the Socket channel is a channel used by the camera to transmit the AI event recognition result to the USB interface on the side of the electronic device through the USB cable.
  • the VENC module 424 is used to encode the image, generate a video stream (also called video data, video information, etc.), and transmit the video stream to the CameraHAL driver of the HAL 412 through the UVC channel, and then send the video stream to the electronic device 410. side.
  • the UVC channel is the channel through which the camera transmits video data to the USB interface on the electronic device side through the USB cable.
  • the VENC module 424 may perform encoding (also referred to as video encoding) based on multiple images.
  • the VGS module 425 is configured to perform zoom processing on the image, and output the zoomed image to the VENC module 424 .
  • zoom processing enlarges or reduces the image while ensuring that the image is not distorted.
  • the VI module 421, the VPSS module 422, the AI module 423, the VENC module 424, the VGS module 425, and the sensor module 430 are connected to the CPU 440, respectively.
  • the CPU 440 may be connected to the sensor module 430, the VI module 421, the VPSS module 422, the AI module 423, the VENC module 424, and the VGS module 425 through CNG0, CNG1, CNG2, CNG3, CNG4, and CNG5, respectively.
  • CNG0-CNG5 are used for CPU 440 to provide configuration parameters for each module.
  • After receiving, through CNG2, the configuration parameters provided by the CPU 440, the VPSS module 422 can determine, according to those parameters, to which of the following modules its processing result is output: the AI module 423, the VENC module 424, or the VGS module 425.
  • the AI module 423 can determine whether to start according to the provided configuration parameters: for example, if the configuration parameter received by the AI module 423 is "0", the AI module 423 does not start; if it is "1", the AI module 423 starts.
  • the configuration parameter may also be used only to indicate whether each module is activated, and each module after activation may determine the transmission object of the processing result according to the circuit connection between the modules.
  • VPSS module 422 is connected (eg, electrically connected) to AI module 423 , VGS module 425 , and VENC module 424 .
  • the CPU 440 may instruct the AI module 423 to be activated through configuration parameters, and the VGS module 425 and the VENC module 424 to be deactivated.
  • the VPSS module 422 can transmit the processed result through the three connection circuits according to the connection relationship (i.e., the electrical connection, which can also be understood as the actual physical connection relationship).
  • the AI module 423, which is in the activated state, receives the processing result of the VPSS module 422, while the VGS module 425 and the VENC module 424, which are not activated, do not receive it.
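  • A minimal sketch of this configuration mechanism is given below; the CNG0-CNG5 channel names come from the description above, while the field layout and the writeChannel() helper are illustrative assumptions:

```cpp
// Sketch (assumed layout): the CPU pushes one configuration parameter per
// module over its CNG channel; each parameter carries a start flag and a
// mask naming the downstream module(s) that receive the processing result.
#include <array>
#include <cstdint>
#include <cstdio>

enum Module { SENSOR = 0, VI, VPSS, AI, VENC, VGS };  // CNG0..CNG5

struct ModuleConfig {
    bool    enable;      // "1" = start the module, "0" = do not start
    uint8_t outputMask;  // one bit per downstream module
};

void writeChannel(int cng, const ModuleConfig& cfg) {
    // Stand-in for the real CNGx register/bus write.
    std::printf("CNG%d <- enable=%d outputMask=0x%02x\n",
                cng, cfg.enable, static_cast<unsigned>(cfg.outputMask));
}

int main() {
    // Example: an AI-only call. VENC and VGS stay off; VPSS forwards its
    // result to the AI module only, matching the activation example above.
    std::array<ModuleConfig, 6> cfg{};
    cfg[SENSOR] = {true,  1u << VI};
    cfg[VI]     = {true,  1u << VPSS};
    cfg[VPSS]   = {true,  1u << AI};
    cfg[AI]     = {true,  0};   // AI reports over the Socket channel
    cfg[VENC]   = {false, 0};
    cfg[VGS]    = {false, 0};

    for (int cng = SENSOR; cng <= VGS; ++cng)
        writeChannel(cng, cfg[cng]);
    return 0;
}
```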
  • The way in which an application on the electronic device 410 invokes the camera is illustrated below with an example.
  • an application at the application layer of the electronic device 410 may send an instruction to the CameraHAL driver in the HAL 412 to call the camera 400 .
  • the CameraHAL driver can send UVC commands (or messages) to the camera 400 through the USB cable based on the instructions of the application to call the camera 400 .
  • the UVC command (or message) refers to a command (or message) sent through the UVC channel.
  • the camera 400, based on the instructions from the CameraHAL driver, starts the CPU 440 and some or all of the modules in the camera 400 (such as the sensor module 430, the VI module 421, the VPSS module 422, the VGS module 425 and the VENC module 424), and each module executes its respective processing.
  • the VENC module 424 encodes the captured image to generate a video stream.
  • the CameraHAL driver transmits video streams to the application layer.
  • An application in the application layer, such as a home camera application, can process the video stream, for example by rendering and displaying it.
  • the application layer in the first electronic device includes an AI application, such as an AI fitness application
  • the first electronic device may perform AI processing, through the AI fitness application, on the video stream received from the CameraHAL driver, to obtain corresponding AI events.
  • In the description above, the modules are treated as the subjects that realize each function; in fact, the functions of each module are realized by the processing circuits in the ISP, which will not be repeated below.
  • There may be multiple USB interfaces in FIG. 4.
  • the AI module 423 and the VENC module 424 can be respectively connected to the two USB ports of the electronic device 410 through two USB cables.
  • the UVC channel and the Socket channel described below are both logical channels, each reflecting a type of message transmitted over USB.
  • The connection relationship and processing flow of the camera modules shown in this embodiment and the subsequent embodiments are only schematic examples; in fact, the internal connections (including hardware connections and logical connections) of cameras produced by different manufacturers may differ. For example, the images processed by the VPSS module can be transmitted to the VENC module for encoding without going through the VGS module; this application does not limit this.
  • the camera may further include a transmission motor for adjusting the angle and/or position of the camera, such as raising or lowering the camera.
  • the CameraHAL driver inputs a first message including an application ID to the camera through the UVC channel.
  • the CameraHAL driver receives a call request message from the application, where the call request message is used to indicate that the application needs to call the camera.
  • the invocation request message carries the application ID.
  • the CameraHAL driver sends a first message to the camera through the UVC channel for requesting to call the camera.
  • the first message carries the application ID.
  • the first message is a UVC message.
  • the UVC message may specifically be a SET_CUR message, and a specified field in the message carries an application ID.
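  • As a sketch of how such a SET_CUR message might be issued from the host side (not a verified implementation), the fragment below uses libusb to send a UVC class-specific SET_CUR request whose payload carries the application ID; the extension-unit ID, control selector and interface number are assumptions that a real camera would define in its UVC descriptors:

```cpp
// Sketch only: issuing a UVC class-specific SET_CUR request whose payload
// carries the application ID. The extension-unit ID, control selector and
// interface number are assumptions; a real camera defines them in its UVC
// descriptors.
#include <libusb-1.0/libusb.h>
#include <cstdint>
#include <cstring>

int sendAppId(libusb_device_handle* h, uint32_t appId) {
    const uint8_t SET_CUR        = 0x01;  // UVC class-specific request code
    const uint8_t kInterface     = 0;     // VideoControl interface (assumed)
    const uint8_t kExtensionUnit = 4;     // vendor extension unit ID (assumed)
    const uint8_t kSelector      = 1;     // control selector (assumed)

    uint8_t payload[4];
    std::memcpy(payload, &appId, sizeof payload);  // app ID in the specified field

    // bmRequestType 0x21 = host-to-device | class request | recipient interface
    return libusb_control_transfer(
        h, 0x21, SET_CUR,
        static_cast<uint16_t>(kSelector << 8),                      // wValue
        static_cast<uint16_t>((kExtensionUnit << 8) | kInterface),  // wIndex
        payload, sizeof payload, /*timeout_ms=*/1000);
}
```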
  • the CPU receives the first message, determines the type and the modules to be activated according to the application ID, outputs an instruction to the sensor module 430, and outputs respective configuration parameters to the sensor module 430, the VI module 421, the VPSS module 422, the AI module 423, the VENC module 424 and the VGS module 425; the instruction indicates what function the sensor module 430 is to perform, and the configuration parameters are used to configure the sensor module 430, the VI module 421, the VPSS module 422, the AI module 423, the VENC module 424 and the VGS module 425.
  • In one implementation, the CPU obtains the application ID in response to the received first message. The memory of the camera pre-stores a database, and the database stores the application ID, the type (or type information) corresponding to the application ID, and the module calling mode corresponding to the type. The CPU matches the acquired application ID with the pre-stored application IDs in the database and extracts the type corresponding to the successfully matched application ID. The CPU further matches the acquired type with the pre-stored types and extracts the module calling mode corresponding to the successfully matched type, where the module calling mode indicates one or more modules to be activated.
  • In another implementation, the CPU obtains the application ID in response to the received first message. A database such as a data storage matching table is provided in the program run by the CPU, and the table stores the application ID, the type (or type information) corresponding to the application ID, and the module calling mode corresponding to the type. The CPU matches the obtained application ID with the application IDs in the database, extracts the type corresponding to the successfully matched application ID and the corresponding module calling mode, and activates one or more modules according to the module calling mode.
  • the database may be updated or modified by means of an upgrade, or modified by authorized administrators; in this way, the risk of leakage of the database can be reduced.
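  • A minimal sketch of such a matching table is shown below, assuming illustrative application IDs, type names and module bitmasks (none of these concrete values come from the description above):

```cpp
// Minimal sketch of the pre-stored matching table: application ID -> type ->
// module calling mode. The IDs, type names and bitmasks are illustrative.
#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>

enum class Type { AI, VideoStream, AIAndVideoStream };

struct InvocationMode {
    Type     type;
    uint32_t modulesToStart;  // bitmask over sensor/VI/VPSS/AI/VGS/VENC
};

// Pre-stored database (hypothetical contents).
static const std::unordered_map<std::string, InvocationMode> kDatabase = {
    {"child_mode",       {Type::AI,               0b001111}},  // no VGS/VENC
    {"remote_housekeep", {Type::VideoStream,      0b110111}},  // no AI
    {"smart_screen",     {Type::AIAndVideoStream, 0b111111}},
};

std::optional<InvocationMode> lookup(const std::string& appId) {
    auto it = kDatabase.find(appId);
    if (it == kDatabase.end()) return std::nullopt;  // no match: do not start
    return it->second;
}

int main() {
    auto mode = lookup("child_mode");
    return (mode && mode->type == Type::AI) ? 0 : 1;
}
```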
  • the CPU may output an instruction to the sensor module through its connection channel CNG0 with the sensor module, so as to instruct the sensor module to start up and collect images.
  • the CPU outputs, through the channel corresponding to each module (e.g., CNG0 to CNG5), the configuration parameters for that module; the configuration parameters are used for functions including, but not limited to, instructing the module to start or not to start.
  • the CPU outputs the configuration parameters corresponding to each module to the sensor module, VI module, VPSS module, AI module, VENC module and VGS module, so as to instruct some modules to start and to make each started module clear about the output target of its processing result; for example, the VPSS module is made clear that its processing result is to be output to the AI module.
  • the sensor module performs corresponding processing according to the instruction of the CPU, and outputs the processing result and the application ID to the VI module; the VI module performs corresponding processing according to the input of the sensor module, and outputs the processing result and the application ID to the VPSS module; the VPSS module performs corresponding processing to obtain its processing result.
  • the sensor module may perform corresponding processing based on the instruction of the CPU; for example, it collects images and outputs the collected images to the VI module.
  • the VI module performs corresponding processing on the image from the sensor module based on the configuration parameters sent by the CPU, such as noise reduction processing, and outputs the processed image to the VPSS module.
  • the VPSS module can perform corresponding processing on the image from the VI module, such as 3D noise reduction processing, and obtain the processing result.
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the AI module.
  • the VPSS module determines whether the configuration parameter output by the CPU to the VPSS module indicates that the processing result needs to be output to the AI module. If the type is the AI type, the application needs the camera to implement the AI function; correspondingly, the configuration parameters output by the CPU to the VPSS module instruct the VPSS module to output the processing result to the AI module, and the AI module also receives a configuration parameter instructing it to start. In this case, the VPSS module determines that the configuration parameter output by the CPU indicates that the processing result is output to the AI module, and S105 is executed.
  • If the type is the video stream type, the application needs the camera to implement the video stream function; correspondingly, the configuration parameters output by the CPU to the VPSS module instruct the VPSS module to output the processing result to the VGS module or the VENC module, and S108 is executed.
  • If the type is the AI type plus the video stream type, the application needs the camera to implement both the AI function and the video stream function; correspondingly, the configuration parameters output by the CPU to the VPSS module instruct the VPSS module to output the processing result to the AI module and the VENC module, or to the AI module and the VGS module, and S105 to S109 are executed accordingly.
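  • The three type-dependent branches above can be summarized in a short sketch from the VPSS module's point of view; the Frame type and the output functions are placeholders standing in for the real hardware paths:

```cpp
// Sketch of the type-dependent branch above from the VPSS module's point of
// view. Frame and the output functions are placeholders for the real paths.
#include <cstdio>

struct Frame { /* processed image data */ };

struct VpssConfig {
    bool toAi;    // type is AI, or AI + video stream
    bool toVgs;   // video stream path that needs zooming first
    bool toVenc;  // video stream path encoded directly
};

void outputToAi(const Frame&)   { std::puts("VPSS -> AI (S105)"); }
void outputToVgs(const Frame&)  { std::puts("VPSS -> VGS -> VENC (S108)"); }
void outputToVenc(const Frame&) { std::puts("VPSS -> VENC (S108)"); }

void vpssForward(const Frame& f, const VpssConfig& cfg) {
    if (cfg.toAi) outputToAi(f);           // AI branch
    if (cfg.toVgs) outputToVgs(f);         // zoom, then encode
    else if (cfg.toVenc) outputToVenc(f);  // encode without zoom
}

int main() {
    Frame f;
    vpssForward(f, {true, false, true});   // AI type + video stream type
    return 0;
}
```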
  • the VPSS module outputs the processing result and the application ID to the AI module, and the AI module performs corresponding processing to obtain the processing result.
  • the VPSS module outputs its processing result to the AI module based on the indication of the configuration parameters sent by the CPU, and the AI module performs corresponding processing on the input from the VPSS module, that is, the processed image: it performs AI recognition (or detection) on the image and obtains a processing result, which can also be referred to as an AI detection result.
  • AI detection results include the presence of AI events and the absence of AI events.
  • the AI module outputs the processing result and the application ID to the CameraProxy driver through the Socket channel.
  • the application ID is used to indicate to which application the processing result is fed back.
  • the AI module can output the acquired AI event and application ID to the CameraProxy driver through the Socket channel based on the configuration parameters sent by the CPU; for example, the AI module sends a Socket message to the CameraProxy driver, and the message carries the AI event and the application ID.
  • If the AI module does not detect an AI event after performing AI detection, it may not perform any further processing; that is, it does not need to send a Socket message to the CameraProxy driver.
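  • A camera-side sketch of this reporting step is given below, assuming a plain TCP socket, an arbitrary port/address and a trivial two-field message layout; the description above only requires that the Socket message carry the AI event and the application ID:

```cpp
// Camera-side sketch: report an AI event plus the application ID to the
// CameraProxy driver over the Socket channel. The address, port and message
// layout are assumptions; the description only requires that the Socket
// message carry the AI event and the application ID.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

struct AiEventMsg {
    uint32_t appId;    // which application the result is fed back to
    uint32_t eventId;  // e.g. 1 = child lying down (illustrative code)
};

int reportAiEvent(uint32_t appId, uint32_t eventId) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(9000);                       // assumed proxy port
    inet_pton(AF_INET, "192.168.42.1", &addr.sin_addr);  // assumed host address

    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    AiEventMsg msg{appId, eventId};
    ssize_t n = send(fd, &msg, sizeof msg, 0);  // no AI event -> nothing is sent
    close(fd);
    return n == static_cast<ssize_t>(sizeof msg) ? 0 : -1;
}

int main() { return reportAiEvent(/*appId=*/42, /*eventId=*/1) == 0 ? 0 : 1; }
```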
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the VENC module or the VGS module.
  • the VPSS module can further determine whether to output the processing result to the VENC module or the VGS module.
  • the steps of S107 and S104 are in no particular order.
  • the VPSS module can first determine whether the processing result needs to be output to the AI module, or first determine whether it needs to be output to the VENC module or the VGS module, or determine both at the same time; this application does not limit the order.
  • If the configuration parameters output by the CPU to the VPSS module indicate that the processing result and the application ID are to be output to the AI module, then in this step (S107) the VPSS module determines, based on the configuration parameters, that it does not need to output the processing result and the application ID to the VENC module or the VGS module.
  • If the configuration parameters output by the CPU to the VPSS module instruct the VPSS module to output the processing result and the application ID to the VENC module or the VGS module, then in this step (S107) the VPSS module determines, based on the configuration parameters, that the processing result and the application ID need to be output to the VENC module or the VGS module.
  • If the configuration parameters output by the CPU to the VPSS module indicate that the processing result and the application ID are to be output to both branches, the VPSS module determines, based on the configuration parameters, that the processing result and the application ID need to be output to the AI module and the VENC module, or to the AI module and the VGS module.
  • the VPSS module outputs the processing result and the application ID to the VENC module or the VGS module, and the VENC module or the VGS module performs corresponding processing to obtain the processing result.
  • the VENC module encodes the image to generate the video stream.
  • Alternatively, the VGS module zooms the image and, based on the indication of the configuration parameters sent by the CPU, outputs the processing result and the application ID to the VENC module; the image processed by the VGS module is then encoded by the VENC module to generate a video stream.
  • the VENC module outputs the processing result and the application ID to the CameraHAL driver through the UVC channel.
  • the application ID is used to indicate to which application the processing result is fed back.
  • the VENC module can output the generated video stream to the CameraHAL driver through the UVC channel based on the instructions of the configuration parameters sent by the CPU.
  • the VENC module sends a UVC message to the CameraHAL driver, where the UVC message includes the generated video stream.
  • the processing of each module is performed only according to the processing result input by the previous module.
  • the application ID is used to identify which application the processing result corresponds to.
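  • On the electronic-device side, this per-result application ID naturally suggests a dispatch table inside the driver; the callback registry below is an assumed mechanism, not taken from the description:

```cpp
// Sketch of a host-side dispatch table keyed by application ID; the callback
// registry is an assumed mechanism, not taken from the description.
#include <cstdint>
#include <cstdio>
#include <functional>
#include <unordered_map>
#include <vector>

using Result   = std::vector<uint8_t>;  // a video frame or an AI event payload
using Callback = std::function<void(const Result&)>;

class ResultDispatcher {
public:
    void registerApp(uint32_t appId, Callback cb) {
        callbacks_[appId] = std::move(cb);
    }
    // Called when a UVC or Socket message arrives carrying (appId, payload).
    void dispatch(uint32_t appId, const Result& payload) {
        auto it = callbacks_.find(appId);
        if (it != callbacks_.end()) it->second(payload);
        else std::printf("no application registered for ID %u\n", appId);
    }
private:
    std::unordered_map<uint32_t, Callback> callbacks_;
};

int main() {
    ResultDispatcher d;
    d.registerApp(1, [](const Result& r) {
        std::printf("application 1 received %zu bytes\n", r.size());
    });
    d.dispatch(1, Result{0x01, 0x02});  // routed to application 1
    d.dispatch(2, Result{});            // no registration: dropped with a log
    return 0;
}
```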
  • the application ID may also be replaced with a sub-function ID under the application.
  • remote housekeeping can be integrated as a sub-function under the "smart screen” application.
  • the "Smart Screen” application includes several sub-functions related to cameras.
  • the remote housekeeping function is just a sub-function under the "Smart Screen” application.
  • the "Smart Screen” application may also include other camera-related sub-functions.
  • When the user clicks the remote housekeeping sub-function under the "Smart Screen" application, the smart screen (large screen) calls the camera; when the user clicks other camera-related sub-functions under the "Smart Screen" application, the smart screen (large screen) will also call the camera.
  • For example, the user's mobile phone has a "Smart Screen" application and a "Children Mode" application, and the "Smart Screen" application has a remote housekeeping function. After the user clicks the "Children Mode" application, the smart screen (large screen) will call the camera; after the user clicks the remote housekeeping function, the smart screen (large screen) will also call the camera.
  • The content of this paragraph is also applicable to the embodiments of FIGS. 5b-5d and FIGS. 7a-7d, and will not be repeated below.
  • Both the first electronic device and the second electronic device are installed with the same application, and the account of the application running on the first electronic device is the same as the account of the application running on the second electronic device, or the two accounts belong to the same group, such as a family group. When the application runs on the second electronic device, it starts the same application on the first electronic device. The first electronic device can be in an off-screen state, that is, displaying no content, or in a bright-screen state; in either case, the same application is launched on the first electronic device. The specific steps are as follows:
  • the first application inputs a call request message including the first application ID to the CameraHAL driver.
  • After the first application is started, it obtains the first application ID and inputs a call request message including the first application ID to the CameraHAL driver, so as to request to call the camera.
  • the user may remotely trigger the start of the first application through the second electronic device, or the user may directly trigger the start of the first application on the first electronic device, which is not limited in this application.
  • the first application ID may be an ID of the first application, or may be an ID of a sub-function under the first application.
  • the CameraHAL driver inputs the first message including the first application ID to the camera through the UVC channel.
  • the CPU receives the first message, determines the type and the modules to be activated according to the first application ID, outputs an instruction to the sensor module, and outputs respective configuration parameters to the sensor module, VI module, VPSS module, AI module, VENC module and VGS module; the instruction indicates what function the sensor module is to perform, and the configuration parameters are used to configure the sensor module, VI module, VPSS module, AI module, VENC module and VGS module.
  • the sensor module performs corresponding processing according to the instruction of the CPU, and outputs the processing result and the first application ID to the VI module; the VI module performs corresponding processing according to the input of the sensor module, and outputs the processing result and the first application ID to the VPSS module; the VPSS module performs corresponding processing to obtain its processing result.
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the AI module.
  • the VPSS module outputs the processing result and the first application ID to the AI module, and the AI module performs corresponding processing on the processing result input by the VPSS module to obtain the processing result.
  • the AI module outputs the processing result and the first application ID to the CameraProxy driver through the Socket channel.
  • the CameraProxy driver returns the processing result to the first application.
  • the CameraProxy driver can report the AI event to the first application, so that the first application processes the AI event accordingly.
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the VENC module or the VGS module.
  • the VPSS module outputs the processing result to the VENC module or the VGS module, and the VENC module or the VGS module performs corresponding processing to obtain the processing result.
  • the VENC module outputs the processing result to the CameraHAL driver through the UVC channel.
  • S202-S207 and S209-S211 are respectively the same as the contents of S101-S109, and are not repeated here.
  • the CameraHAL driver returns the processing result to the first application.
  • the CameraHAL driver can send the video stream to the first application, so that the first application processes the video stream accordingly, for example by rendering and displaying it.
  • the second application inputs a call request message including the second application ID to the CameraHAL driver.
  • Alternatively, the first application may input to the CameraHAL driver a call request message containing the application sub-function ID of another sub-function of the first application, or the second application may input to the CameraHAL driver a call request message containing the application sub-function ID of a sub-function of the second application; this embodiment takes the call request message of the second application as an example for description.
  • the CameraHAL driver inputs the second message including the second application ID to the camera through the UVC channel.
  • the CPU receives the second message, determines the type and the modules to be activated according to the second application ID, outputs an instruction to the sensor module, and outputs respective configuration parameters to the sensor module, VI module, VPSS module, AI module, VENC module and VGS module; the instruction indicates what function the sensor module is to perform, and the configuration parameters are used to configure the sensor module, VI module, VPSS module, AI module, VENC module and VGS module.
  • the VENC module is an exclusive module, that is, it can execute only one video process at a time. If the VENC module is being used by the first application, the second application can use the VENC module only after the first application finishes using it; if the VENC module is not being used by any application, the second application can use it directly.
  • the AI module is an inclusive module, that is, it can execute one or more processes. Regardless of whether the AI module has been called, the second application can directly use the AI module.
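  • The exclusive/inclusive distinction can be sketched as follows, assuming a try-lock for the single-process VENC module and a reference count for the multi-process AI module (the arbitration mechanism itself is an illustrative assumption):

```cpp
// Sketch of the exclusive/inclusive distinction: the VENC module serves one
// video process at a time, while the AI module serves any number of callers.
// The try-lock / reference-count arbitration is an illustrative assumption.
#include <atomic>
#include <cstdio>
#include <mutex>

class VencModule {  // exclusive: only one video process
public:
    bool acquire() { return busy_.try_lock(); }  // fails while in use
    void release() { busy_.unlock(); }
private:
    std::mutex busy_;
};

class AiModule {    // inclusive: one or more processes
public:
    void acquire() { ++users_; }  // always succeeds
    void release() { --users_; }
private:
    std::atomic<int> users_{0};
};

int main() {
    VencModule venc;
    AiModule ai;

    bool first  = venc.acquire();  // first application gets the encoder
    bool second = venc.acquire();  // second application must wait
    std::printf("VENC: first=%d second=%d\n", first, second);

    ai.acquire();  // both applications may use the AI module at once
    ai.acquire();
    ai.release();
    ai.release();
    if (first) venc.release();
    return 0;
}
```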
  • the sensor module performs corresponding processing according to the instruction of the CPU, and outputs the processing result and the second application ID to the VI module; the VI module performs corresponding processing according to the input of the sensor module, and outputs the processing result and the second application ID to the VPSS module; the VPSS module performs corresponding processing to obtain its processing result.
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the AI module.
  • the VPSS module outputs the processing result and the second application ID to the AI module, and the AI module performs corresponding processing to obtain the processing result.
  • the AI module outputs the processing result and the second application ID to the CameraProxy driver through the Socket channel.
  • the CameraProxy driver returns the processing result to the second application.
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the VENC module or the VGS module.
  • the VPSS module outputs the processing result and the second application ID to the VENC module or the VGS module, and the VENC module or the VGS module performs corresponding processing to obtain the processing result.
  • the VENC module outputs the processing result and the second application ID to the CameraHAL driver through the UVC channel.
  • the CameraHAL driver returns the processing result to the second application.
  • The camera calling method is further illustrated below with specific applications as examples.
  • the "remote housekeeping” application uses the camera to shoot or record video at home when calling the camera, so that the user can learn about the situation at home remotely through the first electronic device.
  • the "Kids Mode” application uses the camera to dynamically capture the child's image, and recognizes the child's status through AI recognition, so that the user can remotely learn the child's situation through the first electronic device. Exemplarily, if the child is in a lying state, it is determined that there is a lying state AI event.
  • the steps of the method for remotely calling the camera of the first electronic device by the second electronic device include:
  • the "remote housekeeping" application inputs a call request message including the remote housekeeping application ID to the CameraHAL driver.
  • the "remote housekeeping” application is a “remote housekeeping” application installed on the first electronic device. Both the first electronic device and the second electronic device are installed with a “remote housekeeping” application. After the “remote housekeeping” application is started, it obtains the remote housekeeping application ID, and sends a call request message to the CameraHAL driver to request to call the camera. The message carries the remote housekeeping application ID.
  • the "remote housekeeping” application may be a "remote housekeeping" application on the first electronic device.
  • the "Remote Housekeeping” application specifically includes three sub-functions: AI function, video streaming function, AI function and video streaming function.
  • the "remote housekeeping" application IDs corresponding to different sub-functions are also different.
  • the "remote housekeeping" application IDs corresponding to the AI function, video streaming function, AI function, and video streaming function are ID11, ID12, and ID13, respectively.
  • After the "remote housekeeping" application is started, a selection interface pops up, allowing the user to select one of the above three functions; according to the user's selection, the corresponding application ID is obtained. For example, if the user selects the video streaming function, the obtained application ID is ID12.
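  • A trivial sketch of this selection-to-ID mapping follows; the enum and the string forms of ID11/ID12/ID13 are illustrative:

```cpp
// Trivial sketch mapping the user's sub-function choice to the application
// IDs named above; the enum and the string forms of the IDs are illustrative.
#include <cstdio>
#include <string>

enum class SubFunction { Ai, VideoStream, AiAndVideoStream };

std::string subFunctionId(SubFunction f) {
    switch (f) {
        case SubFunction::Ai:               return "ID11";
        case SubFunction::VideoStream:      return "ID12";
        case SubFunction::AiAndVideoStream: return "ID13";
    }
    return {};
}

int main() {
    // The user picks the video streaming function in the pop-up interface.
    std::printf("selected application ID: %s\n",
                subFunctionId(SubFunction::VideoStream).c_str());
    return 0;
}
```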
  • the CameraHAL driver inputs the first message including the remote housekeeping application ID to the camera through the UVC channel.
  • the CPU receives the first message, determines the type and the modules to be activated according to the remote housekeeping application ID, outputs an instruction to the sensor module, and outputs respective configuration parameters to the sensor module, VI module, VPSS module, AI module, VENC module and VGS module; the instruction indicates what function the sensor module is to perform, and the configuration parameters are used to configure the sensor module, VI module, VPSS module, AI module, VENC module and VGS module.
  • the sensor module performs corresponding processing according to the instruction of the CPU, and outputs the processing result and the remote housekeeping application ID to the VI module; the VI module performs corresponding processing according to the input of the sensor module, and outputs the processing result and the remote housekeeping application ID to the VPSS module; the VPSS module performs corresponding processing to obtain its processing result.
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the AI module.
  • the VPSS module outputs the processing result and the remote housekeeping application ID to the VGS module; the VGS module performs corresponding processing according to the input of the VPSS module and outputs the processing result and the remote housekeeping application ID to the VENC module; the VENC module performs corresponding processing to obtain its processing result.
  • the VENC module outputs the processing result and the remote housekeeping application ID to the CameraHAL driver through the UVC channel.
  • the CameraHAL driver returns the processing result to the "remote housekeeping" application.
  • the "remote housekeeping" application of the first electronic device transmits the acquired processing result, that is, the video stream, to the "remote housekeeping" application of the second electronic device (such as a mobile phone).
  • the user can view the picture of the home captured by the camera of the first electronic device through the "remote housekeeping" application on the mobile phone.
  • the "child mode" application inputs a call request message including the child mode application ID to the CameraHAL driver.
  • the user can put the "remote housekeeping" application of the second electronic device in the background, that is, the "remote housekeeping” application is still calling the camera of the first electronic device remotely, and the user can use the second electronic device (for example, phone) triggers the "Kids Mode” app launch.
  • the "Kids Mode” application has only AI functionality and no other sub-functions.
  • the child mode application ID may be the application package name of the child mode application.
  • the CameraHAL driver inputs the second message including the ID of the child mode application to the camera through the UVC channel.
  • the CPU receives the second message, determines the type and the modules to be activated according to the child mode application ID, outputs an instruction to the sensor module, and outputs respective configuration parameters to the sensor module, VI module, VPSS module, AI module, VENC module and VGS module; the instruction indicates what function the sensor module is to perform, and the configuration parameters are used to configure the sensor module, VI module, VPSS module, AI module, VENC module and VGS module.
  • the CPU outputs an instruction to the sensor module to instruct the sensor module to collect images.
  • According to the configuration parameters provided by the CPU, the sensor module, the VI module, the VPSS module and the AI module are activated; the configuration parameters of the sensor module instruct the sensor module to output the processing result to the VI module.
  • the configuration parameters of the VI module instruct the VI module to output the processing results to the VPSS module.
  • the configuration parameters of the VPSS module instruct the VPSS module to output the processing results to the AI module.
  • the configuration parameters of the AI module instruct the AI module to output the processing results to the CameraProxy driver.
  • the configuration parameter of the VGS module indicates that the VGS module does not need to be started, and the configuration parameter of the VENC module indicates that the VENC module does not need to be started.
  • the sensor module performs corresponding processing according to the instruction of the CPU, and outputs the processing result and the child mode application ID to the VI module; the VI module performs corresponding processing according to the input of the sensor module, and outputs the processing result and the child mode application ID to the VPSS module; the VPSS module performs corresponding processing to obtain its processing result.
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the AI module.
  • the VPSS module outputs the processing result and the child mode application ID to the AI module, and the AI module performs corresponding processing according to the input of the VPSS module to obtain the processing result.
  • the AI module performs AI recognition on the image based on the received VPSS-processed image, and detects whether there is a corresponding AI event according to the recognized features.
  • Here, the AI event is the child-lying AI event. If the child-lying AI event is detected, S315 is executed; if no child-lying AI event is detected, the AI module continues to perform AI detection on the images processed by the VPSS module.
  • the AI module outputs the processing result to the CameraProxy driver through the Socket channel.
  • the AI module sends a Socket message to the CameraProxy driver, and the message carries the AI event of the child lying down.
  • the CameraProxy driver returns the processing result to the "child mode" application according to the child mode application ID.
  • the CameraProxy driver reports the AI event of the child lying down to the "Kids Mode” application.
  • the "Kids Mode” application can send a child lying AI event to the user's second electronic device, so as to notify the user that there is a child lying AI event through the "Kids Mode” application of the second electronic device, and the user can learn that the child is lying down at home.
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the VENC module or the VGS module.
  • the module invocation manner corresponding to the "Kids Mode" application indicates that the VENC module or the VGS module does not need to be activated. Therefore, the VPSS module can determine that it is not necessary to output the processed result to the VENC module or the VGS module based on the configuration parameters sent by the CPU.
  • the first application may be an "AI fitness” application and the second application may be a "Kids Mode” application.
  • the "AI fitness” application uses the camera to capture the image of the current user when calling the camera, and recognizes it through AI to determine whether the user's fitness action is standard. Exemplarily, if it is determined that the user's fitness action is not standard, it is determined that there is an AI event of non-standard action.
  • the method steps for remotely calling the camera of the first electronic device by the second electronic device include:
  • the "AI fitness" application inputs a call request message including an AI fitness application ID to the CameraHAL driver.
  • the "AI Fitness" application has only AI functions and no other sub-functions.
  • the AI fitness application ID may be the application package name of the AI fitness application.
  • the CameraHAL driver inputs the first message including the AI fitness application ID to the camera through the UVC channel.
  • the CPU receives the first message, determines the type and the modules to be activated according to the AI fitness application ID, outputs an instruction to the sensor module, and outputs respective configuration parameters to the sensor module, VI module, VPSS module, AI module, VENC module and VGS module; the instruction indicates what function the sensor module is to perform, and the configuration parameters are used to configure the sensor module, VI module, VPSS module, AI module, VENC module and VGS module.
  • the sensor module performs corresponding processing according to the instruction of the CPU, and outputs the processing result and the AI fitness application ID to the VI module; the VI module performs corresponding processing according to the input of the sensor module, and outputs the processing result and the AI fitness application ID to the VPSS module; the VPSS module performs corresponding processing to obtain its processing result.
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the AI module.
  • the VPSS module outputs the processing result and the AI fitness application ID to the AI module, and the AI module performs corresponding processing according to the input of the VPSS module to obtain the processing result.
  • the AI module outputs the processing result and the AI fitness application ID to the CameraProxy driver through the Socket channel.
  • the CameraProxy driver returns the processing result to the "AI fitness” application.
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the VENC module or the VGS module; if the configuration parameter indicates not to output to the VENC module or the VGS module, S410 is executed.
  • the "child mode" application inputs a call request message including the child mode application ID to the CameraHAL driver.
  • the CameraHAL driver inputs the second message including the ID of the child mode application to the camera through the UVC channel.
  • the CPU receives the second message, determines the type and the modules to be activated according to the child mode application ID, outputs an instruction to the sensor module, and outputs respective configuration parameters to the sensor module, VI module, VPSS module, AI module, VENC module and VGS module; the instruction indicates what function the sensor module is to perform, and the configuration parameters are used to configure the sensor module, VI module, VPSS module, AI module, VENC module and VGS module.
  • the sensor module performs corresponding processing according to the instruction of the CPU, and outputs the processing result and the child mode application ID to the VI module; the VI module performs corresponding processing according to the input of the sensor module, and outputs the processing result and the child mode application ID to the VPSS module; the VPSS module performs corresponding processing to obtain its processing result.
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the AI module; if the configuration parameter indicates output to the AI module, S415 is executed.
  • the VPSS module outputs the processing result and the child mode application ID to the AI module, and the AI module performs corresponding processing according to the input of the VPSS module to obtain the processing result.
  • the AI module outputs the processing result and the child mode application ID to the CameraProxy driver through the Socket channel.
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the VENC module or the VGS module.
  • FIG. 6 is a schematic structural diagram of a camera in an electronic device according to Embodiment 2 of the present application.
  • the components included in the camera 600 in FIG. 6 are the same as those included in the camera 400 in FIG. 4, except that the reference numerals are adjusted accordingly.
  • the VPSS module 622 in FIG. 6 and the VPSS module 422 in FIG. 4 have the same function and purpose. Therefore, for each component included in the camera 600, reference may be made to the introduction of the corresponding component in FIG. 4, and details are not repeated here.
  • the camera 600 is also connected to the electronic device 610 through a USB interface.
  • the USB interface is only an example, and other interfaces such as UART and USART can also be used for the connection between the two.
  • the difference between FIG. 6 and FIG. 4 is that in the HAL 612 of the electronic device 610, the HAL 612 at least includes the CameraProxy driver.
  • the CameraProxy driver is used to receive the data input by the AI module 623 through the Socket channel and the data input by the VENC module 624 through the UVC channel.
  • the CameraProxy driver is the proxy of the camera on the electronic device side; it receives the two channels of data uploaded from the camera and continues to transmit them in two channels to the upper layers of the electronic device, and it also receives data from the upper layers of the electronic device and transmits it through the hardware layer to the camera in two channels. It should be noted that, if the camera 600 is connected to the electronic device 610 through a USB interface, the Socket messages and the UVC messages share the USB cable for transmission; during transmission, the camera's AI module and VENC module can occupy the USB cable by preemption or balancing to transfer their respective data. An example of inputting data through the Socket channel is sending a Socket message; an example of inputting data through the UVC channel is sending a UVC message, such as a SET_CUR message.
  • the CameraProxy driver may acquire the application identification information and/or type of the application started in the electronic device 610 and send the acquired application identification information and/or type to the camera 600. The CPU 640 of the camera 600 determines the configuration parameters of each module according to the received application identification information and/or type, and sends the configuration parameters to each module respectively. Each module determines, according to the received configuration parameters, whether to start, run and operate, and to which branch the processing result is sent.
  • the memory 650 stores the corresponding relationship among the application identification information (i.e., the application ID), the type, and the module calling mode.
  • the CPU 640 of the camera 600 obtains the corresponding type and module calling method based on the received application identification information, and starts (or calls) the corresponding module.
  • the memory 650 may not store the corresponding relationship between the application identification information (ie, the application ID), the type, and the module calling mode.
  • the related content involved in the second embodiment of the present application is the same as or similar to the related content of the first embodiment of the present application, and details are not repeated here.
  • the invoking process of the camera by the CameraProxy driver of the electronic device in Fig. 7a is basically the same as the invoking process of the camera by the CameraHAL driver and the CameraProxy driver in Fig. 5a.
  • the difference is that in Fig. 5a the first message is sent by the CameraHAL driver, the processing result of the AI module is received by the CameraProxy driver, and the processing result of the VENC module is received by the CameraHAL driver, whereas in Fig. 7a the sending of the first message and the receiving of the processing results of the AI module and the VENC module are all performed by the CameraProxy driver.
  • the specific steps of the calling process of the camera driven by the CameraProxy of the electronic device in FIG. 7a are as follows.
  • the CameraProxy driver inputs the first message including the application ID to the camera through the UVC channel.
  • the CPU receives the first message, determines the type and the modules to be activated according to the application ID, outputs an instruction to the sensor module, and outputs respective configuration parameters to the sensor module, VI module, VPSS module, AI module, VENC module and VGS module; the instruction indicates what function the sensor module is to perform, and the configuration parameters are used to configure the sensor module, VI module, VPSS module, AI module, VENC module and VGS module.
  • the sensor module performs corresponding processing according to the instruction of the CPU, and outputs the processing result and the application ID to the VI module; the VI module performs corresponding processing according to the input of the sensor module, and outputs the processing result and the application ID to the VPSS module; the VPSS module performs corresponding processing to obtain its processing result.
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the AI module; if the configuration parameter indicates output to the AI module, S605 is executed; otherwise, S608 is executed.
  • the VPSS module outputs the processing result to the AI module, and the AI module performs corresponding processing to obtain the processing result.
  • the AI module outputs the processing result and the application ID to the CameraProxy driver through the Socket channel.
  • the configuration parameter output by the CPU to the VPSS module indicates whether to output to the VENC module or the VGS module.
  • the VPSS module outputs the processing result and the application ID to the VENC module or the VGS module, and the VENC module or the VGS module performs corresponding processing to obtain the processing result.
  • the VENC module outputs the processing result to the CameraProxy driver through the UVC channel.
  • FIG. 7b further illustrates the method steps for remotely calling the camera of the first electronic device by the second electronic device; FIG. 7b is basically the same as FIG. 5b.
  • the difference is that both the first application and the second application in Figure 5b send a message containing the application ID to the camera through the CameraHAL driver, and receive the processing result and application ID through the CameraHAL driver or the CameraProxy driver according to the application ID.
  • in Figure 7b, the first application and the second application both send a message containing the application ID to the camera through the CameraProxy driver, and both receive the processing result and the application ID through the CameraProxy driver.
  • the specific steps in FIG. 7b will not be repeated here.
  • Fig. 7c and Fig. 7d further illustrate the method steps for the second electronic device to remotely call the camera of the first electronic device in combination with specific applications.
  • the first application in FIG. 7c is a "remote housekeeping" application
  • the second application is a "child mode” application.
  • the first application is an "AI fitness” application
  • the second application is a "Kids Mode” application.
  • Fig. 7c and Fig. 7d are basically the same as Fig. 5c and Fig. 5d, respectively.
  • In the above embodiments, the corresponding modules of the camera can be activated based on different types, implementing a type-based dynamic calling method, so that multiple applications can use the camera.
  • the electronic device includes corresponding hardware and/or software modules for executing each function.
  • The present application can be implemented in hardware, or in a combination of hardware and computer software, in conjunction with the algorithm steps of the examples described in the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functionality for each particular application, but such implementations should not be considered beyond the scope of this application.
  • the electronic device can be divided into functional modules according to the above method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware. It should be noted that, the division of modules in this embodiment is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • FIG. 8 shows a schematic block diagram of an apparatus 800 according to an embodiment of the present application.
  • the apparatus 800 may include: a processor 801 , a transceiver/transceiver pin 802 , and optionally, a memory 803 .
  • bus 804 includes a power bus, a control bus and a status signal bus in addition to a data bus.
  • the various buses are referred to as bus 804 in the figures.
  • the memory 803 may be used to store instructions used in the foregoing method embodiments.
  • the processor 801 can be used to execute the instructions in the memory 803, and control the receive pins to receive signals, and control the transmit pins to transmit signals.
  • the apparatus 800 may be the first electronic device, the second electronic device, or the camera in the above method embodiments.
  • This embodiment also provides a computer storage medium, where computer instructions are stored; when the computer instructions are executed on the electronic device, the electronic device executes the above-mentioned relevant method steps to realize the method for invoking the camera in the above-mentioned embodiments.
  • This embodiment also provides a computer program product, when the computer program product runs on the computer, the computer executes the above-mentioned relevant steps, so as to realize the calling method of the camera in the above-mentioned embodiment.
  • the embodiments of the present application also provide an apparatus, which may specifically be a chip, a component or a module; the apparatus may include a processor and a memory connected to each other, where the memory is used for storing computer-executable instructions, and when the apparatus is running, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the method for invoking the camera in the foregoing method embodiments.
  • the electronic device, computer storage medium, computer program product or chip provided in this embodiment are all used to execute the corresponding method provided above; therefore, for the beneficial effects that can be achieved, reference may be made to the beneficial effects of the corresponding method provided above, which will not be repeated here.
  • the disclosed apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of modules or units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the mutual coupling, direct coupling or communication connection shown or discussed may be implemented through some interfaces, or as indirect coupling or communication connections between devices or units, and may be in electrical, mechanical or other forms.
  • Units described as separate components may or may not be physically separated, and components shown as units may be one physical unit or multiple physical units, that is, may be located in one place, or may be distributed in multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium.
  • The readable storage medium includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods in the various embodiments of the present application.
  • the aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Abstract

The present application relates to a camera invoking method, an electronic device, and a camera. A camera connected to an electronic device through a first interface includes a processor, a memory, and a computer program stored in the memory. When the computer program is executed by the processor, the camera performs the following: receiving a first message containing an application ID or an application sub-function ID; when detecting that the corresponding type is a first type, outputting a first processing result of a first message type along a first path through the first interface; when detecting that the corresponding type is a second type, outputting a second processing result of a second message type along a second path or a third path through the first interface; receiving a second message containing another application ID or another application sub-function ID; and, when detecting that the corresponding type is the first type, outputting a third processing result of the first message type along the first path through the first interface. The present application enables the camera to be invoked by multiple applications and/or multiple functions, improving user experience.

Description

Camera invocation method, electronic device, and camera
This application claims priority to Chinese Patent Application No. 202010618161.2, filed with the China National Intellectual Property Administration on June 30, 2020 and entitled "Camera invocation method, electronic device, and camera", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of electronic device control, and in particular to a camera invocation method, an electronic device, and a camera.
Background
A camera of an electronic device can be remotely invoked by another electronic device to implement corresponding functions. For example, after a remote housekeeping application is installed on both a mobile device and a large screen, the mobile device can remotely invoke the camera of the large screen through the remote housekeeping application to implement the remote housekeeping function. During such remote invocation, however, the camera can be invoked exclusively by only one application at a time; if another application wants to invoke the camera, it can do so only after the current application exits. How to enable multiple applications to invoke the camera has therefore become a requirement.
Summary
To resolve the foregoing technical problem, this application provides a camera invocation method, an electronic device, and a camera. In this method, during remote invocation, the camera of an electronic device can be invoked by at least two applications, and can even serve the invocations of at least two applications at the same time, which improves usage efficiency and user experience.
According to a first aspect, a camera is provided. The camera is connected to an electronic device through a first interface and includes one or more processors, a memory, and one or more computer programs stored on the memory. When the computer programs are executed by the one or more processors, the camera performs the following steps: receiving a first message containing an application ID or an application sub-function ID; in response to the first message, when detecting that the type corresponding to the application ID or application sub-function ID is a first type, outputting a first processing result of a first message type along a first path through the first interface; when detecting that the type corresponding to the application ID or application sub-function ID is a second type, outputting a second processing result of a second message type along a second path or a third path through the first interface; receiving a second message containing another application ID or another application sub-function ID; and in response to the second message, when detecting that the type corresponding to the other application ID or other application sub-function ID is the first type, outputting a third processing result of the first message type along the first path through the first interface. The other application sub-function may be another sub-function of the same application, or a sub-function of another application. In this way, the camera is connected to the electronic device through a single interface and can implement a dynamic, type-based invocation method, satisfying invocation requests from at least two applications, from at least one application plus one application sub-function, or from at least two application sub-functions. Without changing the camera's internal architecture, this resolves the camera-exclusivity problem and improves usage efficiency and user experience.
According to the first aspect, the camera further performs the following step: in response to the second message, when detecting that the type corresponding to the other application ID or other application sub-function ID is the second type, outputting a fourth processing result of the second message type along the second path or the third path through the first interface. This provides the handling for a second-type message when another application or another application sub-function invokes the camera.
According to the first aspect or any implementation of the first aspect above, the camera further performs the following steps: in response to the first message, when detecting that the type corresponding to the application ID or application sub-function ID is a third type, outputting the first processing result of the first message type along the first path through the first interface, and outputting the second processing result of the second message type along the second path or the third path through the first interface, where the third type is the first type plus the second type; and in response to the second message, when detecting that the type corresponding to the other application ID or other application sub-function ID is the third type, outputting the third processing result of the first message type along the first path through the first interface, and outputting the fourth processing result of the second message type along the second path or the third path through the first interface, where the third type is the first type plus the second type. This provides the handling for a third-type message both when one application or application sub-function invokes the camera and when another application or application sub-function invokes the camera.
According to the first aspect or any implementation of the first aspect above, the camera further includes one or more sensor modules, a video input module, a video processing sub-system module, an artificial intelligence module, a video encoding module, and a video graphics system module. The sensor module is configured to capture images and output the captured images to the video input module; the video input module is configured to pre-process the images captured by the sensor module; the video processing sub-system module is configured to perform noise reduction on the images pre-processed by the video input module; the artificial intelligence module is configured to perform artificial-intelligence recognition on the images processed by the video processing sub-system module and output artificial-intelligence events of the first message type through the first interface; the video graphics system module is configured to perform zoom processing on the images processed by the video processing sub-system module and output the zoomed images to the video encoding module; and the video encoding module is configured to encode the images processed by the video processing sub-system module or the images zoomed by the video graphics system module, generate a video stream, and output the video stream of the second message type through the first interface. This sets out a specific camera architecture.
According to the first aspect or any implementation of the first aspect above, the first path includes the sensor module, the video input module, the video processing sub-system module, and the artificial intelligence module; the second path includes the sensor module, the video input module, the video processing sub-system module, the video graphics system module, and the video encoding module; and the third path includes the sensor module, the video input module, the video processing sub-system module, and the video encoding module. Different paths are thus provided based on the specific camera architecture.
According to the first aspect or any implementation of the first aspect above, the first type is an artificial-intelligence type; the second type is a video-stream type; the third type is the artificial-intelligence type plus the video-stream type; the first message type is a Socket message type; the second message type is a UVC message type; and the first interface is a USB interface. Specific types, message types, and interfaces are thus provided.
According to a second aspect, a camera is provided. The camera is connected to an electronic device through a first interface and a second interface, and includes one or more processors, a memory, and one or more computer programs stored on the memory. When the computer programs are executed by the one or more processors, the camera performs the following steps: receiving a first message containing an application ID or an application sub-function ID; in response to the first message, when detecting that the type corresponding to the application ID or application sub-function ID is a first type, outputting a first processing result of a first message type along a first path through the first interface; when detecting that the type corresponding to the application ID or application sub-function ID is a second type, outputting a second processing result of a second message type along a second path or a third path through the second interface; receiving a second message containing another application ID or another application sub-function ID; and in response to the second message, when detecting that the type corresponding to the other application ID or other application sub-function ID is the first type, outputting a third processing result of the first message type along the first path through the first interface. The other application sub-function may be another sub-function of the same application, or a sub-function of another application. In this way, the camera is connected to the electronic device through two interfaces and can implement a dynamic, type-based invocation method, satisfying invocation requests from at least two applications, from at least one application plus one application sub-function, or from at least two application sub-functions, resolving the camera-exclusivity problem without changing the camera's internal architecture and improving usage efficiency and user experience.
According to the second aspect, the camera further performs the following step: in response to the second message, when detecting that the type corresponding to the other application ID or other application sub-function ID is the second type, outputting a fourth processing result of the second message type along the second path or the third path through the second interface. This provides the handling for a second-type message when another application or another application sub-function invokes the camera.
According to the second aspect or any implementation of the second aspect above, the camera further performs the following steps: in response to the first message, when detecting that the type corresponding to the application ID or application sub-function ID is a third type, outputting the first processing result of the first message type along the first path through the first interface, and outputting the second processing result of the second message type along the second path or the third path through the second interface, where the third type is the first type plus the second type; and in response to the second message, when detecting that the type corresponding to the other application ID or other application sub-function ID is the third type, outputting the third processing result of the first message type along the first path through the first interface, and outputting the fourth processing result of the second message type along the second path or the third path through the second interface, where the third type is the first type plus the second type. This provides the handling for a third-type message both when one application or application sub-function invokes the camera and when another application or application sub-function invokes the camera.
According to the second aspect or any implementation of the second aspect above, the camera further includes one or more sensor modules, a video input module, a video processing sub-system module, an artificial intelligence module, a video encoding module, and a video graphics system module. The sensor module is configured to capture images and output the captured images to the video input module; the video input module is configured to pre-process the images captured by the sensor module; the video processing sub-system module is configured to perform noise reduction on the images pre-processed by the video input module; the artificial intelligence module is configured to perform artificial-intelligence recognition on the images processed by the video processing sub-system module and output artificial-intelligence events of the first message type through the first interface; the video graphics system module is configured to perform zoom processing on the images processed by the video processing sub-system module and output the zoomed images to the video encoding module; and the video encoding module is configured to encode the images processed by the video processing sub-system module or the images zoomed by the video graphics system module, generate a video stream, and output the video stream of the second message type through the second interface. This sets out a specific camera architecture.
According to the second aspect or any implementation of the second aspect above, the first path includes the sensor module, the video input module, the video processing sub-system module, and the artificial intelligence module; the second path includes the sensor module, the video input module, the video processing sub-system module, the video graphics system module, and the video encoding module; and the third path includes the sensor module, the video input module, the video processing sub-system module, and the video encoding module. Different paths are thus provided based on the specific camera architecture.
According to the second aspect or any implementation of the second aspect above, the first type is an artificial-intelligence type; the second type is a video-stream type; the third type is the artificial-intelligence type plus the video-stream type; the first message type is a Socket message type; the second message type is a UVC message type; and at least one of the first interface and the second interface is a USB interface. Specific types, message types, and interfaces are thus provided.
According to a third aspect, a camera invocation method is provided. The method is applied to a camera connected to an electronic device through a first interface, and includes: receiving a first message containing an application ID or an application sub-function ID; in response to the first message, when detecting that the type corresponding to the application ID or application sub-function ID is a first type, outputting a first processing result of a first message type along a first path through the first interface; when detecting that the type corresponding to the application ID or application sub-function ID is a second type, outputting a second processing result of a second message type along a second path or a third path through the first interface; receiving a second message containing another application ID or another application sub-function ID; and in response to the second message, when detecting that the type corresponding to the other application ID or other application sub-function ID is the first type, outputting a third processing result of the first message type along the first path through the first interface.
According to the third aspect, the method further includes: in response to the second message, when detecting that the type corresponding to the other application ID or other application sub-function ID is the second type, outputting a fourth processing result of the second message type along the second path or the third path through the first interface.
According to the third aspect or any implementation of the third aspect above, the method further includes: in response to the first message, when detecting that the type corresponding to the application ID or application sub-function ID is a third type, outputting the first processing result of the first message type along the first path through the first interface, and outputting the second processing result of the second message type along the second path or the third path through the first interface, where the third type is the first type plus the second type; and in response to the second message, when detecting that the type corresponding to the other application ID or other application sub-function ID is the third type, outputting the third processing result of the first message type along the first path through the first interface, and outputting the fourth processing result of the second message type along the second path or the third path through the first interface, where the third type is the first type plus the second type.
According to the third aspect or any implementation of the third aspect above, the camera includes one or more sensor modules, a video input module, a video processing sub-system module, an artificial intelligence module, a video encoding module, and a video graphics system module. The sensor module is configured to capture images and output the captured images to the video input module; the video input module is configured to pre-process the images captured by the sensor module; the video processing sub-system module is configured to perform noise reduction on the images pre-processed by the video input module; the artificial intelligence module is configured to perform artificial-intelligence recognition on the images processed by the video processing sub-system module and output artificial-intelligence events of the first message type through the first interface; the video graphics system module is configured to perform zoom processing on the images processed by the video processing sub-system module and output the zoomed images to the video encoding module; and the video encoding module is configured to encode the images processed by the video processing sub-system module or the images zoomed by the video graphics system module, generate a video stream, and output the video stream of the second message type through the first interface.
According to the third aspect or any implementation of the third aspect above, the first path includes the sensor module, the video input module, the video processing sub-system module, and the artificial intelligence module; the second path includes the sensor module, the video input module, the video processing sub-system module, the video graphics system module, and the video encoding module; and the third path includes the sensor module, the video input module, the video processing sub-system module, and the video encoding module.
According to the third aspect or any implementation of the third aspect above, the first type is an artificial-intelligence type; the second type is a video-stream type; the third type is the artificial-intelligence type plus the video-stream type; the first message type is a Socket message type; the second message type is a UVC message type; and the first interface is a USB interface.
The third aspect and any implementation thereof correspond to the first aspect and any implementation thereof, respectively. For the technical effects corresponding to the third aspect and any implementation thereof, refer to the technical effects of the first aspect and its implementations; details are not repeated here.
According to a fourth aspect, a camera invocation method is provided. The method is applied to a camera connected to an electronic device through a first interface and a second interface, and includes: receiving a first message containing an application ID or an application sub-function ID; in response to the first message, when detecting that the type corresponding to the application ID or application sub-function ID is a first type, outputting a first processing result of a first message type along a first path through the first interface; when detecting that the type corresponding to the application ID or application sub-function ID is a second type, outputting a second processing result of a second message type along a second path or a third path through the second interface; receiving a second message containing another application ID or another application sub-function ID; and in response to the second message, when detecting that the type corresponding to the other application ID or other application sub-function ID is the first type, outputting a third processing result of the first message type along the first path through the first interface.
According to the fourth aspect, the method further includes: in response to the second message, when detecting that the type corresponding to the other application ID or other application sub-function ID is the second type, outputting a fourth processing result of the second message type along the second path or the third path through the second interface.
According to the fourth aspect or any implementation of the fourth aspect above, the method further includes: in response to the first message, when detecting that the type corresponding to the application ID or application sub-function ID is a third type, outputting the first processing result of the first message type along the first path through the first interface, and outputting the second processing result of the second message type along the second path or the third path through the second interface, where the third type is the first type plus the second type; and in response to the second message, when detecting that the type corresponding to the other application ID or other application sub-function ID is the third type, outputting the third processing result of the first message type along the first path through the first interface, and outputting the fourth processing result of the second message type along the second path or the third path through the second interface, where the third type is the first type plus the second type.
According to the fourth aspect or any implementation of the fourth aspect above, the camera includes one or more sensor modules, a video input module, a video processing sub-system module, an artificial intelligence module, a video encoding module, and a video graphics system module. The sensor module is configured to capture images and output the captured images to the video input module; the video input module is configured to pre-process the images captured by the sensor module; the video processing sub-system module is configured to perform noise reduction on the images pre-processed by the video input module; the artificial intelligence module is configured to perform artificial-intelligence recognition on the images processed by the video processing sub-system module and output artificial-intelligence events of the first message type through the first interface; the video graphics system module is configured to perform zoom processing on the images processed by the video processing sub-system module and output the zoomed images to the video encoding module; and the video encoding module is configured to encode the images processed by the video processing sub-system module or the images zoomed by the video graphics system module, generate a video stream, and output the video stream of the second message type through the second interface.
According to the fourth aspect or any implementation of the fourth aspect above, the first path includes the sensor module, the video input module, the video processing sub-system module, and the artificial intelligence module; the second path includes the sensor module, the video input module, the video processing sub-system module, the video graphics system module, and the video encoding module; and the third path includes the sensor module, the video input module, the video processing sub-system module, and the video encoding module.
According to the fourth aspect or any implementation of the fourth aspect above, the first type is an artificial-intelligence type; the second type is a video-stream type; the third type is the artificial-intelligence type plus the video-stream type; the first message type is a Socket message type; the second message type is a UVC message type; and at least one of the first interface and the second interface is a USB interface.
The fourth aspect and any implementation thereof correspond to the second aspect and any implementation thereof, respectively. For the technical effects corresponding to the fourth aspect and any implementation thereof, refer to the technical effects of the second aspect and its implementations; details are not repeated here.
According to a fifth aspect, an electronic device is provided. The electronic device is connected to a camera through a first interface and includes one or more processors, a memory, and one or more computer programs stored on the memory. When the computer programs are executed by the one or more processors, the electronic device performs the following steps: when detecting that an application related to the camera is opened, or when detecting that an application sub-function of an application is opened, sending to the camera a first message containing an application ID or an application sub-function ID, where the application ID corresponds to the application, or the application sub-function ID corresponds to the application sub-function; receiving a first processing result of a first message type through the first interface, and/or receiving a second processing result of a second message type through the first interface; when detecting that another application related to the camera is opened, or when detecting that another application sub-function is opened, sending to the camera a second message containing another application ID or another application sub-function ID, where the other application ID corresponds to the other application, or the other application sub-function ID corresponds to the other application sub-function; and receiving a third processing result of the first message type through the first interface, and/or receiving a fourth processing result of the second message type through the first interface. In this way, the electronic device and the camera are connected through one interface and cooperate to satisfy invocation requests from at least two applications, from at least one application plus one application sub-function, or from at least two application sub-functions, resolving the camera-exclusivity problem without changing the camera's internal architecture and improving usage efficiency and user experience.
According to the fifth aspect, the first message type is a Socket message type, the second message type is a UVC message type, and the first interface is a USB interface. The message types and interface are thus made concrete.
According to a sixth aspect, an electronic device is provided. The electronic device is connected to a camera through a first interface and a second interface, and includes one or more processors, a memory, and one or more computer programs stored on the memory. When the computer programs are executed by the one or more processors, the electronic device performs the following steps: when detecting that an application related to the camera is opened, or when detecting that an application sub-function of an application is opened, sending to the camera a first message containing an application ID or an application sub-function ID, where the application ID corresponds to the application, or the application sub-function ID corresponds to the application sub-function; receiving a first processing result of a first message type through the first interface, and/or receiving a second processing result of a second message type through the second interface; when detecting that another application related to the camera is opened, or when detecting that another application sub-function is opened, sending to the camera a second message containing another application ID or another application sub-function ID, where the other application ID corresponds to the other application, or the other application sub-function ID corresponds to the other application sub-function; and receiving a third processing result of the first message type through the first interface, and/or receiving a fourth processing result of the second message type through the second interface. In this way, the electronic device and the camera are connected through two interfaces and cooperate to satisfy invocation requests from at least two applications, from at least one application plus one application sub-function, or from at least two application sub-functions, resolving the camera-exclusivity problem without changing the camera's internal architecture and improving usage efficiency and user experience.
According to the sixth aspect, the first message type is a Socket message type, the second message type is a UVC message type, and at least one of the first interface and the second interface is a USB interface. The message types and interfaces are thus made concrete.
According to a seventh aspect, a camera invocation method is provided. The method is applied to an electronic device connected to a camera through a first interface, and includes: when detecting that an application related to the camera is opened, or when detecting that an application sub-function of an application is opened, sending to the camera a first message containing an application ID or an application sub-function ID, where the application ID corresponds to the application, or the application sub-function ID corresponds to the application sub-function; receiving a first processing result of a first message type through the first interface, and/or receiving a second processing result of a second message type through the first interface; when detecting that another application related to the camera is opened, or when detecting that another application sub-function is opened, sending to the camera a second message containing another application ID or another application sub-function ID, where the other application ID corresponds to the other application, or the other application sub-function ID corresponds to the other application sub-function; and receiving a third processing result of the first message type through the first interface, and/or receiving a fourth processing result of the second message type through the first interface.
According to the seventh aspect, the first message type is a Socket message type, the second message type is a UVC message type, and the first interface is a USB interface.
The seventh aspect and any implementation thereof correspond to the fifth aspect and any implementation thereof, respectively. For the technical effects corresponding to the seventh aspect and any implementation thereof, refer to the technical effects of the fifth aspect and its implementations; details are not repeated here.
According to an eighth aspect, a camera invocation method is provided. The method is applied to an electronic device connected to a camera through a first interface and a second interface, and includes: when detecting that an application related to the camera is opened, or when detecting that an application sub-function of an application is opened, sending to the camera a first message containing an application ID or an application sub-function ID, where the application ID corresponds to the application, or the application sub-function ID corresponds to the application sub-function; receiving a first processing result of a first message type through the first interface, and/or receiving a second processing result of a second message type through the second interface; when detecting that another application related to the camera is opened, or when detecting that another application sub-function is opened, sending to the camera a second message containing another application ID or another application sub-function ID, where the other application ID corresponds to the other application, or the other application sub-function ID corresponds to the other application sub-function; and receiving a third processing result of the first message type through the first interface, and/or receiving a fourth processing result of the second message type through the second interface.
According to the eighth aspect, the first message type is a Socket message type, the second message type is a UVC message type, and at least one of the first interface and the second interface is a USB interface.
The eighth aspect and any implementation thereof correspond to the sixth aspect and any implementation thereof, respectively. For the technical effects corresponding to the eighth aspect and any implementation thereof, refer to the technical effects of the sixth aspect and its implementations; details are not repeated here.
According to a ninth aspect, a computer-readable storage medium is provided. The medium includes a computer program; when the computer program runs on a camera, the camera performs the camera invocation method of the third aspect, the fourth aspect, or any implementation of the third or fourth aspect.
The ninth aspect and any implementation thereof correspond to the third aspect, the fourth aspect, and their respective implementations. For the corresponding technical effects, refer to those of the third and fourth aspects and their implementations; details are not repeated here.
According to a tenth aspect, a computer-readable storage medium is provided. The medium includes a computer program; when the computer program runs on an electronic device, the electronic device performs the camera invocation method of the seventh aspect, the eighth aspect, or any implementation of the seventh or eighth aspect.
The tenth aspect and any implementation thereof correspond to the seventh aspect, the eighth aspect, and their respective implementations. For the corresponding technical effects, refer to those of the seventh and eighth aspects and their implementations; details are not repeated here.
According to an eleventh aspect, a computer system is provided. The computer system includes the electronic device of the fifth aspect, the sixth aspect, or any implementation thereof, and the camera of the first aspect, the second aspect, or any implementation thereof, so that the electronic device performs the method of the seventh aspect, the eighth aspect, or any implementation thereof, and the camera performs the method of the third aspect, the fourth aspect, or any implementation thereof.
The eleventh aspect and any implementation thereof correspond to the combination of the fifth and sixth aspects and their implementations, the first and second aspects and their implementations, the seventh and eighth aspects and their implementations, and the third and fourth aspects and their implementations. For the corresponding technical effects, refer to those of the aforementioned aspects and their implementations; details are not repeated here.
Invocations of the camera by further applications and/or application sub-functions in this application are similar to the invocation manners described above and are not repeated here.
Brief Description of Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings used in describing the embodiments. Evidently, the drawings below show only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application scenario according to an embodiment of this application;
FIG. 2 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of this application;
FIG. 3 is a schematic diagram of the software structure of an electronic device according to an embodiment of this application;
FIG. 4 is a schematic structural diagram of a camera connected to an electronic device according to Embodiment 1 of this application;
FIGS. 5a-5d are schematic flowcharts of a camera invocation method according to Embodiment 1 of this application;
FIG. 6 is a schematic structural diagram of a camera connected to an electronic device according to Embodiment 2 of this application;
FIGS. 7a-7d are schematic flowcharts of a camera invocation method according to Embodiment 2 of this application;
FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of this application.
Detailed Description of Embodiments
The following clearly and completely describes the technical solutions in the embodiments of this application with reference to the accompanying drawings. Evidently, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
In this document, the term "and/or" merely describes an association between associated objects and indicates that three relationships may exist. For example, A and/or B may indicate three cases: A alone, both A and B, or B alone.
The terms "first" and "second" in the specification and claims of the embodiments of this application distinguish different objects rather than describing a particular order of objects. For example, a first target object and a second target object distinguish different target objects rather than describing a particular order of target objects.
In the embodiments of this application, words such as "exemplary" or "for example" indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" should not be construed as more preferred or advantageous than other embodiments or designs; rather, such words present related concepts in a concrete manner.
In the description of the embodiments of this application, unless otherwise stated, "multiple" means two or more, where two or more includes two.
In the description of the embodiments of this application, the first application and the second application may be any applications in the electronic device that need to invoke the camera. Optionally, the first application and the second application may be installed before the electronic device leaves the factory, or downloaded by the user during use of the electronic device; this application does not limit this. The first application and the second application are only examples and do not limit the specific number of applications.
Before the technical solutions of the embodiments of this application are described, the application scenario of the embodiments is described with reference to the accompanying drawings. FIG. 1 is a schematic diagram of an application scenario according to an embodiment of this application. As shown in FIG. 1, a first electronic device 100 can be invoked by a second electronic device 200. The first electronic device 100 has a camera (not shown), or is connected to a camera through any of various interfaces, such as a universal serial bus (USB) interface. The second electronic device 200 remotely invokes and controls the camera of the first electronic device 100. Specifically, the same application, for example a "remote housekeeping" application, is installed on both the first electronic device 100 and the second electronic device 200. The second electronic device 200 first opens its own "remote housekeeping" application and then sends an invocation request to the first electronic device 100 through that application; after receiving the request, the first electronic device 100 opens its own "remote housekeeping" application. The first electronic device 100 and the second electronic device 200 each include, but are not limited to, various computing devices such as a large screen, a laptop computer, a desktop computer, a handheld computer (such as a tablet or a smartphone), and a smart wearable device (such as a smart band, a smart watch, smart glasses, or a smart ring). For example, the first electronic device 100 is a large screen configured with a camera, and the second electronic device 200 is a smartphone. Alternatively, the second electronic device 200 may or may not be configured with a camera. In addition, although FIG. 1 shows only one first electronic device 100 and one second electronic device 200, there may be multiple first electronic devices 100 and/or second electronic devices 200.
FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of this application. Although FIG. 2 uses the first electronic device 100 in FIG. 1 as an example, a person skilled in the art will understand that the structure in FIG. 2 also applies to the second electronic device 200 in FIG. 1. As shown in FIG. 2, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a USB interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, an optical proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in this embodiment of this application does not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be independent devices or integrated into one or more processors.
The controller may generate operation control signals based on instruction operation codes and timing signals to control instruction fetching and execution.
A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache, which may store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instructions or data again, it can call them directly from the memory, avoiding repeated accesses, reducing the waiting time of the processor 110, and thus improving system efficiency.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. It can support various USB specifications, including USB 1.0, USB 2.0, USB 3.0, USB 4.0, and higher. For example, the USB interface 130 may include one or more USB interfaces.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of this application are only schematic and do not constitute a structural limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may also use interface connection manners different from those in the foregoing embodiments, or a combination of multiple interface connection manners.
The charging management module 140 is configured to receive charging input from a charger. The power management module 141 is configured to connect the battery 142 and the charging management module 140 to the processor 110. The wireless communication function of the electronic device 100 may be implemented through antenna 1, antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
Antenna 1 and antenna 2 are configured to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 may cover one or more communication frequency bands, and different antennas may be multiplexed to improve antenna utilization. For example, antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 150 can provide wireless communication solutions applied to the electronic device 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves through antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 may also amplify signals modulated by the modem processor and convert them into electromagnetic waves radiated through antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be disposed in the processor 110, or disposed in the same device as at least some modules of the processor 110.
The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR). The wireless communication module 160 may be one or more devices integrating at least one communication processing module. It receives electromagnetic waves via antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive signals to be sent from the processor 110, perform frequency modulation and amplification on them, and convert them into electromagnetic waves radiated through antenna 2.
In some embodiments, antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
The electronic device 100 can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example saving files such as music and videos on the external memory card.
The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
FIG. 3 is a block diagram of the software structure of the electronic device 100 according to an embodiment of this application. The layered architecture divides the software into several layers, each with a clear role and division of labor; the layers communicate through software interfaces. In some embodiments, the Android system is divided into five layers: from top to bottom, the application layer, the system framework layer, the system library and runtime layer, the hardware abstraction layer, and the kernel layer. The application layer may include programs such as a remote housekeeping application, a home camera application, a video call application, an artificial intelligence (AI) fitness application, and a child-mode application. The remote housekeeping application allows a device other than the electronic device 100 to open, by remote invocation, the camera on the electronic device 100 and obtain video images and/or pictures captured by the camera. Note that the applications included in the application layer shown in FIG. 3 are only illustrative and not limiting; the applications included in the application layer do not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer applications than those shown in FIG. 3, or entirely different applications.
The system framework layer provides an application programming interface (API) and a programming framework for applications in the application layer, including various components and services to support Android development. The system framework layer includes some predefined functions. As shown in FIG. 3, the system framework layer may include a view system, a window manager, a resource manager, a content provider, and the like. The view system includes visual controls, such as controls for displaying text and controls for displaying pictures, and can be used to build applications; a display interface may consist of one or more views. The window manager is used to manage window programs; it can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, and so on. The resource manager provides applications with various resources, such as localized strings, icons, pictures, layout files, and video files. The content provider is used to store and retrieve data and make the data accessible to applications; the data may include videos, images, audio, and the like.
The system library and runtime layer includes system libraries and the Android Runtime. The system libraries may include multiple functional modules, for example a browser kernel, a 3D graphics library (for example, OpenGL ES), and a font library. The browser kernel is responsible for interpreting web page syntax (such as HTML and JavaScript, applications of the standard generalized markup language) and rendering (displaying) web pages. The 3D graphics library is used for three-dimensional graphics drawing, image rendering, compositing, and layer processing. The font library is used to support input in different fonts. The Android Runtime includes the core libraries and the virtual machine and is responsible for the scheduling and management of the Android system. The core libraries consist of two parts: functions to be called by the Java language, and the Android core libraries. The application layer and the application framework layer run in the virtual machine, which executes the Java files of those layers as binary files. The virtual machine performs functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
It can be understood that the components included in the system framework layer and the system library and runtime layer shown in FIG. 3 do not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently.
The hardware abstraction layer (HAL) is the layer between hardware and software. The HAL contains the CameraHAL driver, the camera proxy (CameraProxy) driver, the display driver, the audio driver, and so on; the foregoing drivers are only illustrative examples, and this application is not limited thereto. The HAL is the foundation of the Android system, and the final functionality of the Android system is ultimately implemented through the HAL.
For example, both the CameraHAL driver and the CameraProxy driver are used to abstract the camera so as to hide the camera's specific channels and make the camera accessible to (or invocable by) applications. The CameraHAL driver may communicate with the camera based on the universal serial bus video class (UVC) protocol. The UVC protocol can also be understood as a protocol over the UVC channel: the camera 400 and the HAL establish a UVC (communication) connection through the UVC channel and transmit UVC-compliant messages over that connection. The CameraProxy driver may communicate with the camera based on the remote network driver interface specification (RNDIS) protocol. Note that the RNDIS protocol can also be understood as a protocol over the Socket channel: the camera 400 and the HAL establish a Socket (communication) connection through the Socket channel and transmit RNDIS-compliant messages over that connection.
Optionally, the UVC channel may be used to transmit control instructions and video streams, and the Socket channel may be used to transmit information such as AI events and logs.
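The following is a minimal sketch, in C, of how the two message types travelling over these channels might be laid out. All field names, widths, and struct layouts are illustrative assumptions; the document does not specify the actual message formats.

    #include <stdint.h>

    /* Hypothetical AI-event report carried over the Socket channel.
       Field names and widths are assumptions for illustration only. */
    typedef struct {
        uint32_t app_id;        /* ID of the application the event belongs to */
        uint32_t event_code;    /* e.g. AI gesture, portrait tracking, child detected */
        uint64_t timestamp_ms;  /* when the event was detected */
    } ai_event_msg_t;

    /* Hypothetical video chunk carried over the UVC channel. */
    typedef struct {
        uint32_t app_id;        /* ID of the destination application */
        uint32_t frame_len;     /* length of the encoded frame in bytes */
        uint8_t  payload[];     /* encoded bitstream (flexible array member) */
    } video_chunk_msg_t;

Keeping the application ID in both message types mirrors the design choice described later, where each processing result is tagged with the ID of the application it belongs to.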
The camera of the electronic device 100 may be an external camera and/or a built-in camera. An external camera may be connected to a USB interface of the electronic device 100 through a USB cable. A built-in camera may be embedded in the electronic device 100 and connected, inside the device, to a USB interface of the electronic device 100 through a USB cable.
Embodiment 1
FIG. 4 is a schematic structural diagram of a camera connected to an electronic device according to Embodiment 1 of this application. As shown in FIG. 4, the camera 400 is connected to a USB interface of the electronic device 410 through a USB cable and is thereby connected to the electronic device 410. Note that the number of USB interfaces in FIG. 4 and their distribution on the electronic device 410 side are only illustrative and do not limit the scope of this application. Other types of interfaces, such as UART and USART, may also be used to connect the camera 400 and the electronic device 410. The foregoing interfaces (including USB interfaces) may be located on the electronic device 410 side, on the camera 400 side, or on both sides, and there may be one, two, or even more USB interfaces.
In FIG. 4, the USB interface is located on the hardware layer 411 on the electronic device 410 side. Above the hardware layer 411 is the HAL 412, which includes at least the CameraProxy driver and the CameraHAL driver.
The CameraProxy driver is a proxy program between Android application package (APK) applications and the camera. It resides in the Android HAL layer as a standard HAL interface definition language (HIDL) service whose purpose is to abstract the camera. It hides the hardware interface details of the specific camera device and provides APK applications with lighter, more convenient camera access.
The CameraHAL driver is a proxy program between APK applications and the camera. It resides in the Android HAL layer and provides standard data structures and interface definition specifications, defining standard interfaces for the system services of different camera hardware. A camera hardware vendor only needs to implement the corresponding interfaces, place the device-specific implementation in the HAL layer, and provide it in the form of a shared library (.so) for the device to be usable by the Android system.
In this embodiment of this application, the CameraProxy driver and the CameraHAL driver receive, over two separate routes, the data input by the AI module 423 and the VENC module 424, respectively.
The camera 400 includes an ISP 420, a sensor module 430, a CPU 440, and a memory 450. The ISP 420 processes images and video streams and outputs the processed video streams and images over two routes. The CPU 440 is only an illustrative example; various microcontrollers, such as a microcontroller unit (MCU), or any device functioning as a processor or microcontroller, may substitute for the CPU.
The sensor module 430 is the photosensitive element of the camera 400. It collects light signals, converts the collected light signals into electrical signals, and passes the electrical signals to the ISP 420 for processing into images or video streams.
The ISP 420 includes a video input (VI) module 421, a video process sub-system (VPSS) module 422, an AI module 423, a video encoder (VENC) module 424, and a video graphic system (VGS) module 425.
The VI module 421 pre-processes the images collected by the sensor module 430; the pre-processing includes noise reduction, color correction, shading, and so on.
The VPSS module 422 performs 3D noise reduction and similar processing on the images processed by the VI module 421. The 3D noise reduction of the VPSS module 422 builds on the two-dimensional noise reduction of the VI module 421 by additionally reducing image noise in the time domain.
The AI module 423 performs AI recognition on images and reports AI events. For example, the AI module 423 may recognize features in an image to detect whether a specific feature of an AI event is present; if that feature is detected in the image, the module can determine that the corresponding AI event exists and report the AI event. For instance, suppose a child is watching TV: the AI module 423 recognizes the child's features in the images processed by the other modules (including the sensor module 430 and the VI module 421), determines based on the recognized features that a child-viewing event exists, and reports the child-viewing event to the electronic device 410. Specifically, after completing AI event detection, the AI module 423 transmits the AI event recognition result over the Socket channel to the CameraProxy driver of the HAL 412, which forwards the result to the electronic device side. AI events include AI gestures, portrait tracking, child recognition, posture detection, and the like. The Socket channel is a channel for transmitting data based on the TCP connection protocol; in this embodiment, it is the channel through which the camera transmits AI event recognition results over the USB cable to the USB interface on the electronic device side.
The VENC module 424 encodes images to generate a video stream (also called video data or video information) and transmits the video stream over the UVC channel to the CameraHAL driver of the HAL 412, which forwards the video stream to the electronic device 410 side. The UVC channel is the channel through which the camera transmits video data over the USB cable to the USB interface on the electronic device side. Optionally, the VENC module 424 may encode based on multiple images (also called video encoding).
The VGS module 425 performs zoom processing on images and outputs the zoomed images to the VENC module 424. Zoom processing enlarges or reduces an image while keeping the image undistorted.
The VI module 421, the VPSS module 422, the AI module 423, the VENC module 424, the VGS module 425, and the sensor module 430 are each connected to the CPU 440. Specifically, the CPU 440 may be connected to the sensor module 430, the VI module 421, the VPSS module 422, the AI module 423, the VENC module 424, and the VGS module 425 through CNG0, CNG1, CNG2, CNG3, CNG4, and CNG5, respectively. CNG0-CNG5 are used by the CPU 440 to provide configuration parameters to the modules. Taking the VPSS module 422 as an example: after receiving the configuration parameters provided by the CPU 440 through CNG2, the VPSS module 422 can determine, based on those parameters, which of the following modules its processing result is output to: the AI module 423, the VENC module 424, and/or the VGS module 425. Taking the AI module 423 as another example, the AI module 423 can determine from its configuration parameters whether to start. For example, if the AI module 423 receives the configuration parameter "0", it determines not to start; if it receives "1", it determines to start. This way of indicating whether a module starts is only an illustrative example and is not limiting. Optionally, in other embodiments, the configuration parameters may only indicate whether each module starts, and a started module may determine the destination of its processing result from the circuit connections between modules. For example, the VPSS module 422 is connected (for example, electrically) to the AI module 423, the VGS module 425, and the VENC module 424. The CPU 440 may use configuration parameters to instruct the AI module 423 to start while the VGS module 425 and the VENC module 424 remain stopped. The VPSS module 422 may then transmit its processing result over all three connection circuits according to the connection relationships (that is, the actual physical connections); in effect, only the started AI module 423 receives the VPSS module 422's processing result, while the stopped VGS module 425 and VENC module 424 do not.
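As a rough illustration of this configuration mechanism, the sketch below models the CPU pushing an enable/disable parameter to each module over its CNG channel, using the "0"/"1" convention given above. The cng_write() helper and the channel numbering are assumptions for illustration, not the actual firmware interface.

    #include <stdint.h>
    #include <stdio.h>

    enum { CNG_SENSOR, CNG_VI, CNG_VPSS, CNG_AI, CNG_VENC, CNG_VGS, CNG_COUNT };

    /* Stand-in for the per-module configuration write; real firmware would
       program the module's configuration registers instead of printing. */
    static void cng_write(int channel, uint8_t value) {
        printf("CNG%d <= %u\n", channel, value);
    }

    static void configure_pipeline(const uint8_t enable[CNG_COUNT]) {
        for (int ch = 0; ch < CNG_COUNT; ch++)
            cng_write(ch, enable[ch]);
    }

    int main(void) {
        /* AI-only invocation: sensor -> VI -> VPSS -> AI started, VENC/VGS off. */
        static const uint8_t ai_only[CNG_COUNT] = { 1, 1, 1, 1, 0, 0 };
        configure_pipeline(ai_only);
        return 0;
    }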
The following illustrates how an application on the electronic device 410 invokes the camera.
For example, an application in the application layer of the electronic device 410, such as the home camera application, may send an instruction to the CameraHAL driver in the HAL 412 to invoke the camera 400. Based on the application's instruction, the CameraHAL driver may send a UVC command (or message) to the camera 400 through the USB cable to invoke the camera 400. A UVC command (or message) is a command (or message) sent over the UVC channel.
For example, based on the CameraHAL driver's instruction, the camera 400 starts its CPU 440 and some or all of its modules (such as the sensor module 430, the VI module 421, the VPSS module 422, the VGS module 425, and the VENC module 424), which perform their respective functions. For instance, after the images collected by the sensor module 430 pass through the VI module 421, the VPSS module 422, and the VGS module 425 for noise reduction, 3D noise reduction, and zoom processing, the VENC module 424 encodes the obtained images to generate a video stream and sends the CameraHAL driver a UVC message over the UVC channel carrying the generated video stream.
For example, the CameraHAL driver transfers the video stream to the application layer. An application in the application layer, such as the home camera application, can process the video stream, for example render and display it.
For example, if the application layer of the first electronic device includes an AI application, such as the AI fitness application, the first electronic device may use the AI fitness application to perform AI processing on the video stream transferred by the CameraHAL driver to obtain the corresponding AI events.
Note that this document describes the modules as the subjects that implement each function; in practice, the functions of the modules are implemented by processing circuits in the ISP, which is not repeated below.
In addition, there may be multiple USB interfaces in FIG. 4. The AI module 423 and the VENC module 424 may be connected to two USB interfaces of the electronic device 410 through two USB cables.
Note that the UVC channel and the Socket channel described below are both logical channels that reflect message types carried over USB. Further note that the module connection relationships and processing flows of the camera shown in this and subsequent embodiments are only illustrative. In practice, the internal connections (including hardware and logical connections) of cameras produced by different vendors may differ; for example, the images processed by the VPSS module may be transmitted to the VENC module for encoding without passing through the VGS module. This application is not limited in this respect.
Optionally, the camera may further include a drive motor for adjusting the camera's angle and/or position, for example raising or lowering the camera.
The following describes the technical solution of this application in detail with reference to FIG. 5a, further explaining how the CameraHAL driver and CameraProxy driver of the electronic device invoke the camera.
S101: The CameraHAL driver inputs a first message containing an application ID to the camera through the UVC channel.
Specifically, the CameraHAL driver receives an invocation request message from an application, indicating that the application needs to invoke the camera; for example, the invocation request message carries the application ID. In response to the received invocation request message, the CameraHAL driver sends the camera a first message over the UVC channel to request invoking the camera, where the first message carries the application ID. For example, the first message is a UVC message, specifically a SET_CUR message whose designated field carries the application ID.
S102: The CPU receives the first message, determines the type and the modules to start based on the application ID, outputs an instruction to the sensor module 430, and outputs respective configuration parameters to the sensor module 430, the VI module 421, the VPSS module 422, the AI module 423, the VENC module 424, and the VGS module 425. The instruction indicates what function the sensor module 430 performs, and the configuration parameters configure the six modules.
Optionally, in response to the received first message, the CPU obtains the application ID. The camera's memory pre-stores a database containing application IDs, the type (or type information) corresponding to each application ID, and the module invocation manner corresponding to each type. The CPU matches the obtained application ID against the application IDs in the pre-stored database and extracts the type corresponding to the successfully matched application ID. Specifically, the CPU further matches the obtained type against the pre-stored types and extracts the module invocation manner corresponding to the successfully matched type; the module invocation manner indicates the module or modules that need to start.
Optionally, in response to the received first message, the CPU obtains the application ID. A database such as a data-storage matching table is set in the program run by the CPU, storing application IDs, the type (or type information) corresponding to each application ID, and the module invocation manner corresponding to each type. The CPU matches the obtained application ID against the application IDs in the database, extracts the type and module invocation manner corresponding to the successfully matched ID, and starts one or more modules according to the module invocation manner. The database may subsequently be updated or modified, for example through upgrades or by authorized administrators; this reduces the risk of the database being leaked.
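A minimal sketch of such a pre-stored matching table follows. The application IDs (package-style names), the type encoding, and the module bitmask are all hypothetical; the document only requires that each ID map to a type, and each type to a module invocation manner.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    typedef enum { TYPE_AI = 1, TYPE_STREAM = 2, TYPE_AI_PLUS_STREAM = 3 } call_type_t;

    #define M_AI   (1u << 0)   /* start the AI module */
    #define M_VGS  (1u << 1)   /* start the VGS module */
    #define M_VENC (1u << 2)   /* start the VENC module */

    typedef struct {
        const char *app_id;    /* application or sub-function ID (hypothetical names) */
        call_type_t type;      /* type corresponding to the ID */
        uint8_t     modules;   /* module invocation manner, as a bitmask */
    } app_entry_t;

    static const app_entry_t table[] = {
        { "com.example.kidsmode",  TYPE_AI,             M_AI           },
        { "com.example.homewatch", TYPE_STREAM,         M_VGS | M_VENC },
        { "com.example.fitness",   TYPE_AI_PLUS_STREAM, M_AI  | M_VENC },
    };

    /* Match an incoming ID against the table; NULL means an unknown caller. */
    static const app_entry_t *lookup(const char *app_id) {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (strcmp(table[i].app_id, app_id) == 0)
                return &table[i];
        return NULL;
    }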
The process of configuring application IDs, types, and module invocation manners is described in detail in the embodiments below.
For example, the CPU may output an instruction to the sensor module through its connection channel CNG0 to instruct the sensor module to start and collect images. The CPU outputs the corresponding configuration parameters through the channels to the modules (for example, CNG0-CNG5). The configuration parameters serve, among other things, to indicate whether a module starts or not. The CPU outputs each module's configuration parameters to the sensor module, the VI module, the VPSS module, the AI module, the VENC module, and the VGS module, instructing some of the modules to start and making each started module clear about the destination of its processing result, for example making the VPSS module clear that its processing result is to be output to the AI module.
S103: The sensor module performs its processing as instructed by the CPU and outputs the processing result and the application ID to the VI module; the VI module performs its processing based on the sensor module's input and outputs the processing result and the application ID to the VPSS module; the VPSS module performs its processing to obtain a processing result.
Specifically, the sensor module may perform its processing as instructed by the CPU, for example collecting images with the camera and outputting the collected images to the VI module. The VI module processes the images from the sensor module based on the configuration parameters sent by the CPU, for example performing noise reduction, and outputs the processed images to the VPSS module. The VPSS module may process the images from the VI module, for example performing 3D noise reduction, to obtain a processing result.
S104: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the AI module.
Specifically, after the VPSS module finishes processing the images and obtains the processing result, it determines whether the configuration parameters output to it by the CPU indicate that the result should be output to the AI module. If the type is the AI type, that is, the application needs the camera to implement the AI function, the configuration parameters output by the CPU instruct the VPSS module to output the result to the AI module, and the AI module also receives configuration parameters instructing it to start; the VPSS module determines that the configuration parameters indicate output to the AI module, and S105 is performed.
If the type is the video-stream type, that is, the application needs the camera to implement the video-stream function, the configuration parameters output by the CPU instruct the VPSS module to output the result to the VGS module or the VENC module, and S108 is performed.
If the type is the AI type plus the video-stream type, that is, the application needs the camera to implement both the AI function and the video-stream function, the configuration parameters output by the CPU instruct the VPSS module to output the result to the AI module and the VENC module, or to the AI module and the VGS module, and S105-S109 are performed accordingly.
S105: The VPSS module outputs the processing result and the application ID to the AI module; the AI module performs its processing to obtain a processing result.
Specifically, as indicated by the configuration parameters sent by the CPU, the VPSS module outputs its processing result to the AI module, and the AI module processes the VPSS input, that is, the processed images. For example, the AI module performs AI recognition (or detection) on the images and obtains a processing result, which may also be called an AI detection result. An AI detection result is either that an AI event exists or that no AI event exists.
S106: The AI module outputs the processing result and the application ID to the CameraProxy driver through the Socket channel.
Specifically, the application ID indicates which application the processing result is fed back to. Based on the configuration parameters sent by the CPU, the AI module may output the obtained AI event and the application ID to the CameraProxy driver through the Socket channel; for example, the AI module sends the CameraProxy driver a Socket message carrying the AI event and the application ID. Optionally, if the AI module detects no AI event, it may take no action after performing AI detection, that is, it need not send a Socket message to the CameraProxy driver.
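A sketch of this conditional reporting step is shown below, assuming a simple fixed-size message written to a socket descriptor; the message layout and the use of write() are illustrative assumptions.

    #include <stdint.h>
    #include <unistd.h>

    typedef struct { uint32_t app_id; uint32_t event_code; } ai_event_msg_t;

    /* Report only when an AI event was actually detected; otherwise stay
       silent, matching the optional behavior described above. */
    static void report_ai_result(int sock_fd, uint32_t app_id,
                                 const ai_event_msg_t *event) {
        if (event == NULL)
            return;                        /* no AI event: send no Socket message */
        ai_event_msg_t msg = *event;
        msg.app_id = app_id;               /* tag the result with the caller's ID */
        (void)write(sock_fd, &msg, sizeof msg);  /* stand-in for the Socket send */
    }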
S107: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the VENC module or the VGS module.
Specifically, the VPSS module may further determine whether to output the processing result to the VENC module or the VGS module. S107 and S104 may occur in any order: the VPSS module may first determine whether the result needs to be output to the AI module, first determine whether it needs to be output to the VENC or VGS module, or determine both at the same time. This application is not limited in this respect.
In one example, if the application currently requesting to invoke the camera is of the AI type, that is, the application only needs the camera to implement the AI function, the configuration parameters output by the CPU to the VPSS module instruct it to output the processing result and the application ID to the AI module; in this step (S107), the VPSS module determines from the configuration parameters that the result and the application ID need not be output to the VENC or VGS module.
In another example, if the application is of the video-stream type, that is, the application only needs the camera to implement the video-stream function, the configuration parameters output by the CPU to the VPSS module instruct it to output the processing result and the application ID to the VENC module or the VGS module; in this step (S107), the VPSS module determines from the configuration parameters that the result and the application ID need to be output to the VENC or VGS module.
In yet another example, if the application is of the video-stream type plus the AI type, that is, the application needs the camera to implement both the AI function and the video-stream function, the configuration parameters output by the CPU to the VPSS module instruct it to output the processing result and the application ID to the AI module and the VENC module, or to the AI module and the VGS module; in this step (S107), the VPSS module determines accordingly from the configuration parameters.
S108: The VPSS module outputs the processing result and the application ID to the VENC module or the VGS module, which performs its processing to obtain a processing result.
In one example, if the VPSS module outputs the result and the application ID to the VENC module, the VENC module encodes the images to generate a video stream. In another example, if the VPSS module outputs them to the VGS module, the VGS module zooms the images and, as indicated by the configuration parameters sent by the CPU, outputs the result and the application ID to the VENC module, which encodes the images processed by the VGS module to generate a video stream.
S109: The VENC module outputs the processing result and the application ID to the CameraHAL driver through the UVC channel.
Specifically, the application ID indicates which application the processing result is fed back to. As indicated by the configuration parameters sent by the CPU, the VENC module may output the generated video stream to the CameraHAL driver through the UVC channel; for example, the VENC module sends the CameraHAL driver a UVC message including the generated video stream.
In S102-S108, each module performs its processing based only on the processing result input by the previous module; the application ID identifies which application the processing result corresponds to.
In other embodiments, the application ID may also be replaced by the ID of a sub-function under an application. For example, remote housekeeping may be a sub-function integrated under a "Smart Screen" application. The "Smart Screen" application includes multiple camera-related sub-functions, of which the remote housekeeping function is only one; the "Smart Screen" application may also include other camera-related sub-functions. When the user taps the remote housekeeping sub-function under the "Smart Screen" application, the smart screen (large screen) invokes the camera; when the user taps another camera-related sub-function under the "Smart Screen" application, the smart screen (large screen) also invokes the camera. As another example, the user's phone has a "Smart Screen" application and a "child mode" application, and the "Smart Screen" application has a remote housekeeping sub-function. Likewise, after the "child mode" application is tapped, the smart screen (large screen) invokes the camera; after the remote housekeeping sub-function is tapped, the smart screen (large screen) also invokes the camera. Unless otherwise specified, this paragraph also applies to the embodiments of FIGS. 5b-5d and FIGS. 7a-7d and is not repeated below.
Building on the embodiment shown in FIG. 5a, the following further describes, with reference to FIG. 5b, the method steps by which the second electronic device remotely invokes the camera of the first electronic device. The same application is installed on both the first and second electronic devices, and the account under which that application runs on the first electronic device is the same as the account under which it runs on the second electronic device, or the accounts belong to the same group, such as a family group. When the application runs on the second electronic device, it starts the same application on the first electronic device. The first electronic device may be in a screen-off state, that is, displaying nothing, or in a screen-on state; in either case, the first electronic device has in effect started the application. The specific steps are as follows:
S201: The first application inputs an invocation request message containing the first application ID to the CameraHAL driver.
Specifically, after the first application starts, it obtains the first application ID sent by the CameraHAL driver and inputs an invocation request message containing the first application ID to the CameraHAL driver to request invoking the camera. For example, the user may remotely trigger the first application to start through the second electronic device, or trigger the first application to start directly on the first electronic device; this application is not limited in this respect.
For example, the first application ID may be the ID of the first application or the ID of a sub-function under the first application.
S202: The CameraHAL driver inputs a first message containing the first application ID to the camera through the UVC channel.
S203: The CPU receives the first message, determines the type and the modules to start based on the first application ID, outputs an instruction to the sensor module, and outputs respective configuration parameters to the sensor module, the VI module, the VPSS module, the AI module, the VENC module, and the VGS module. The instruction indicates what function the sensor module performs, and the configuration parameters configure the modules.
S204: The sensor module performs its processing as instructed by the CPU and outputs the processing result and the first application ID to the VI module; the VI module performs its processing based on the sensor module's input and outputs the processing result and the first application ID to the VPSS module; the VPSS module performs its processing to obtain a processing result.
S205: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the AI module.
S206: The VPSS module outputs the processing result and the first application ID to the AI module; the AI module processes the VPSS module's input to obtain a processing result.
S207: The AI module outputs the processing result and the first application ID to the CameraProxy driver through the Socket channel.
S208: Based on the first application ID, the CameraProxy driver returns the processing result to the first application.
For example, after the CameraProxy driver receives the processing result and the first application ID input by the AI module, that is, the Socket message carrying the AI event and the first application ID, the CameraProxy driver may report the AI event to the first application so that the first application handles the AI event accordingly.
S209: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the VENC module or the VGS module.
S210: The VPSS module outputs the processing result to the VENC module or the VGS module, which performs its processing to obtain a processing result.
S211: The VENC module outputs the processing result to the CameraHAL driver through the UVC channel.
The specific content of S202-S207 and S209-S211 is the same as that of S101-S109 and is not repeated here.
S212: Based on the first application ID, the CameraHAL driver returns the processing result to the first application.
For example, after the CameraHAL driver receives the processing result and the first application ID input by the VENC module, that is, the UVC message carrying the video stream and the first application ID, the CameraHAL driver may send the video stream to the first application so that the first application processes the video stream, for example renders and displays it.
S213: The second application inputs an invocation request message containing the second application ID to the CameraHAL driver.
For example, this embodiment only uses the second application ID as an example. In other embodiments, the first application may also input to the CameraHAL driver an invocation request message containing the application sub-function ID of another sub-function of the first application, or the second application may input to the CameraHAL driver an invocation request message containing the application sub-function ID corresponding to a sub-function under the second application.
S214: The CameraHAL driver inputs a second message containing the second application ID to the camera through the UVC channel.
S215: The CPU receives the second message, determines the type and the modules to start based on the second application ID, outputs an instruction to the sensor module, and outputs respective configuration parameters to the sensor module, the VI module, the VPSS module, the AI module, the VENC module, and the VGS module. The instruction indicates what function the sensor module performs, and the configuration parameters configure the modules.
In one possible implementation, the VENC module is an exclusive module, that is, it can run only one video process. If the VENC module is already in use by the first application, the second application can use the VENC module only after the first application finishes using it; if the VENC module is not in use by any application, the second application can use it directly. In another possible implementation, the AI module is a non-exclusive module, that is, it can run one or more processes; the second application can use the AI module directly regardless of whether the AI module has already been invoked.
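The difference between the two sharing policies can be sketched as follows; the owner-tracking scheme is an assumption made for illustration.

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t venc_owner = 0;          /* 0 means the VENC module is free */

    /* Exclusive module: only one video process at a time. */
    static bool acquire_venc(uint32_t app_id) {
        if (venc_owner != 0 && venc_owner != app_id)
            return false;                    /* busy: wait until the owner finishes */
        venc_owner = app_id;
        return true;
    }

    static void release_venc(uint32_t app_id) {
        if (venc_owner == app_id)
            venc_owner = 0;
    }

    /* Non-exclusive module: any number of concurrent callers is accepted. */
    static bool acquire_ai(uint32_t app_id) {
        (void)app_id;
        return true;
    }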
S216: The sensor module performs its processing as instructed by the CPU and outputs the processing result and the second application ID to the VI module; the VI module performs its processing based on the sensor module's input and outputs the processing result and the second application ID to the VPSS module; the VPSS module performs its processing to obtain a processing result.
S217: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the AI module.
S218: The VPSS module outputs the processing result and the second application ID to the AI module; the AI module performs its processing to obtain a processing result.
S219: The AI module outputs the processing result and the second application ID to the CameraProxy driver through the Socket channel.
S220: Based on the second application ID, the CameraProxy driver returns the processing result to the second application.
S221: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the VENC module or the VGS module.
S222: The VPSS module outputs the processing result and the second application ID to the VENC module or the VGS module, which performs its processing to obtain a processing result.
S223: The VENC module outputs the processing result and the second application ID to the CameraHAL driver through the UVC channel.
S224: Based on the second application ID, the CameraHAL driver returns the processing result to the second application.
For the specific content of S213-S224, refer to S201-S212; details are not repeated here.
The following further clarifies, with reference to FIG. 5c, the method flow by which the second electronic device remotely invokes the camera of the first electronic device. As shown in FIG. 5c, the invocation is illustrated with the first application in the first electronic device being the "remote housekeeping" application and the second application being the "child mode" application. When invoking the camera, the "remote housekeeping" application uses the camera to shoot or record video of the home so that the user can learn about the situation at home remotely through the first electronic device. When invoking the camera, the "child mode" application uses the camera to dynamically capture images of a child and, through AI recognition, determine the child's state so that the user can learn about the child's situation remotely through the first electronic device. For example, if the child is lying down, it is determined that a lying-down AI event exists.
As shown in FIG. 5c, after the second electronic device starts the "remote housekeeping" application and causes the first electronic device to start its "remote housekeeping" application (the first electronic device may be in a screen-off or screen-on state), the method steps by which the second electronic device remotely invokes the camera of the first electronic device include:
S301: The "remote housekeeping" application inputs an invocation request message containing the remote housekeeping application ID to the CameraHAL driver.
Here, the "remote housekeeping" application is the one installed on the first electronic device; both the first and second electronic devices have the "remote housekeeping" application installed. After the "remote housekeeping" application starts, it obtains the remote housekeeping application ID and sends the CameraHAL driver an invocation request message to request invoking the camera, with the message carrying the remote housekeeping application ID.
Note that the "remote housekeeping" application specifically includes three sub-functions: the AI function, the video-stream function, and the AI-plus-video-stream function. Different sub-functions correspond to different "remote housekeeping" application IDs; for example, the AI function, the video-stream function, and the AI-plus-video-stream function correspond to IDs ID11, ID12, and ID13, respectively. For example, when the user opens the "remote housekeeping" application, a selection interface pops up for the user to choose one of the three functions, and the corresponding application ID is obtained based on the user's choice; for instance, if the user chooses the video-stream function, the obtained application ID is ID12.
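A small sketch of resolving the user's menu choice to one of these sub-function IDs follows; the enum layout is an assumption, while the ID strings ID11, ID12, and ID13 are the ones named above.

    /* Sub-function choices of the "remote housekeeping" application. */
    typedef enum { SUB_AI = 0, SUB_STREAM = 1, SUB_AI_PLUS_STREAM = 2 } sub_choice_t;

    static const char *HOUSEKEEPING_IDS[] = { "ID11", "ID12", "ID13" };

    /* Map the selection made in the pop-up interface to the ID carried in the
       invocation request, e.g. SUB_STREAM yields "ID12". */
    static const char *sub_function_id(sub_choice_t choice) {
        return HOUSEKEEPING_IDS[choice];
    }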
S302: The CameraHAL driver inputs a first message containing the remote housekeeping application ID to the camera through the UVC channel.
S303: The CPU receives the first message, determines the type and the modules to start based on the remote housekeeping application ID, outputs an instruction to the sensor module, and outputs respective configuration parameters to the sensor module, the VI module, the VPSS module, the AI module, the VENC module, and the VGS module. The instruction indicates what function the sensor module performs, and the configuration parameters configure the modules.
S304: The sensor module performs its processing as instructed by the CPU and outputs the processing result and the remote housekeeping application ID to the VI module; the VI module performs its processing based on the sensor module's input and outputs the processing result and the remote housekeeping application ID to the VPSS module; the VPSS module performs its processing to obtain a processing result.
S305: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the AI module.
S306: The VPSS module outputs the processing result and the remote housekeeping application ID to the VGS module; the VGS module performs its processing based on the VPSS module's input and outputs the processing result and the remote housekeeping application ID to the VENC module; the VENC module performs its processing to obtain a processing result.
S307: The VENC module outputs the processing result and the remote housekeeping application ID to the CameraHAL driver through the UVC channel.
S308: Based on the remote housekeeping application ID, the CameraHAL driver returns the processing result to the "remote housekeeping" application.
For example, after the "remote housekeeping" application of the first electronic device finally receives the processing result, it transmits the obtained processing result, that is, the video stream, to the "remote housekeeping" application of the second electronic device (for example, a phone); the user can then view, through the phone's "remote housekeeping" application, the home scenes shot by the camera of the first electronic device.
S309: The "child mode" application inputs an invocation request message containing the child mode application ID to the CameraHAL driver.
For example, the user may put the second electronic device's "remote housekeeping" application in the background, that is, with the "remote housekeeping" application still remotely invoking the camera of the first electronic device, and trigger the "child mode" application to start through the second electronic device (for example, a phone). For example, the "child mode" application has only the AI function and no other sub-functions; the child mode application ID may be the application package name of the child mode application.
S310: The CameraHAL driver inputs a second message containing the child mode application ID to the camera through the UVC channel.
S311: The CPU receives the second message, determines the type and the modules to start based on the child mode application ID, outputs an instruction to the sensor module, and outputs respective configuration parameters to the sensor module, the VI module, the VPSS module, the AI module, the VENC module, and the VGS module. The instruction indicates what function the sensor module performs, and the configuration parameters configure the modules.
Specifically, the CPU outputs an instruction to the sensor module to instruct it to collect images. According to the configuration parameters provided by the CPU, the sensor module, the VI module, the VPSS module, and the AI module start. The sensor module's configuration parameters instruct it to output its processing result to the VI module; the VI module's configuration parameters instruct it to output its processing result to the VPSS module; the VPSS module's configuration parameters instruct it to output its processing result to the AI module; and the AI module's configuration parameters instruct it to output its processing result to the CameraProxy driver. The VGS module's configuration parameters indicate that it need not start, and the VENC module's configuration parameters indicate that it need not start.
S312: The sensor module performs its processing as instructed by the CPU and outputs the processing result and the child mode application ID to the VI module; the VI module performs its processing based on the sensor module's input and outputs the processing result and the child mode application ID to the VPSS module; the VPSS module performs its processing to obtain a processing result.
S313: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the AI module.
S314: The VPSS module outputs the processing result and the child mode application ID to the AI module; the AI module performs its processing based on the VPSS module's input to obtain a processing result.
For example, based on the received VPSS-processed images, the AI module performs AI recognition on the images and detects, from the recognized features, whether the corresponding AI event exists. In this embodiment, the AI event is the child-lying-down AI event. If the child-lying-down AI event is detected, S315 is performed; if not, the AI module continues AI detection on the VPSS-processed images.
S315: Based on the child mode application ID, the AI module outputs the processing result to the CameraProxy driver through the Socket channel.
For example, the AI module sends the CameraProxy driver a Socket message carrying the child-lying-down AI event.
S316: Based on the child mode application ID, the CameraProxy driver returns the processing result to the "child mode" application.
For example, the CameraProxy driver reports the child-lying-down AI event to the "child mode" application. The "child mode" application may send the child-lying-down AI event to the user's second electronic device to notify the user of the event through the second electronic device's "child mode" application; the user thus learns that a child is lying down at home.
S317: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the VENC module or the VGS module.
For example, the module invocation manner corresponding to the "child mode" application indicates that the VENC and VGS modules need not start; the VPSS module can therefore determine from the configuration parameters sent by the CPU that the processed result need not be output to the VENC or VGS module.
Matters not described in S301-S317 are the same as or similar to S201-S222 and are not repeated here.
In another example, the first application may be the "AI fitness" application and the second application may be the "child mode" application. When invoking the camera, the "AI fitness" application uses the camera to capture images of the current user and, through AI recognition, determines whether the user's workout movements are standard. For example, if a movement is judged non-standard, it is determined that a non-standard-movement AI event exists.
As shown in FIG. 5d, the method steps by which the second electronic device remotely invokes the camera of the first electronic device include:
S401: The "AI fitness" application inputs an invocation request message containing the AI fitness application ID to the CameraHAL driver.
For example, the "AI fitness" application has only the AI function and no other sub-functions; the AI fitness application ID may be the application package name of the AI fitness application.
S402: The CameraHAL driver inputs a first message containing the AI fitness application ID to the camera through the UVC channel.
S403: The CPU receives the first message, determines the type and the modules to start based on the AI fitness application ID, outputs an instruction to the sensor module, and outputs respective configuration parameters to the sensor module, the VI module, the VPSS module, the AI module, the VENC module, and the VGS module. The instruction indicates what function the sensor module performs, and the configuration parameters configure the modules.
S404: The sensor module performs its processing as instructed by the CPU and outputs the processing result and the AI fitness application ID to the VI module; the VI module performs its processing based on the sensor module's input and outputs the processing result and the AI fitness application ID to the VPSS module; the VPSS module performs its processing to obtain a processing result.
S405: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the AI module.
S406: The VPSS module outputs the processing result and the AI fitness application ID to the AI module; the AI module performs its processing based on the VPSS module's input to obtain a processing result.
S407: The AI module outputs the processing result and the AI fitness application ID to the CameraProxy driver through the Socket channel.
S408: Based on the AI fitness application ID, the CameraProxy driver returns the processing result to the "AI fitness" application.
S409: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the VENC module or the VGS module. After the configuration parameters indicate no output to the VENC or VGS module, S410 is performed.
S410: The "child mode" application inputs an invocation request message containing the child mode application ID to the CameraHAL driver.
S411: The CameraHAL driver inputs a second message containing the child mode application ID to the camera through the UVC channel.
S412: The CPU receives the second message, determines the type and the modules to start based on the child mode application ID, outputs an instruction to the sensor module, and outputs respective configuration parameters to the sensor module, the VI module, the VPSS module, the AI module, the VENC module, and the VGS module. The instruction indicates what function the sensor module performs, and the configuration parameters configure the modules.
S413: The sensor module performs its processing as instructed by the CPU and outputs the processing result and the child mode application ID to the VI module; the VI module performs its processing based on the sensor module's input and outputs the processing result and the child mode application ID to the VPSS module; the VPSS module performs its processing to obtain a processing result.
S414: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the AI module. After the configuration parameters indicate output to the AI module, S415 is performed.
S415: The VPSS module outputs the processing result and the child mode application ID to the AI module; the AI module performs its processing based on the VPSS module's input to obtain a processing result.
S416: The AI module outputs the processing result and the child mode application ID to the CameraProxy driver through the Socket channel.
S417: Based on the child mode application ID, the CameraProxy driver returns the processing result to the "child mode" application.
S418: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the VENC module or the VGS module.
Matters not described in S401-S418 are the same as or similar to S301-S317 and are not repeated here.
Embodiment 2
FIG. 6 is a schematic structural diagram of the camera in an electronic device according to Embodiment 2 of this application. The camera 600 in FIG. 6 includes the same components as the camera 400 in FIG. 4, with only the reference numerals adjusted accordingly; for example, the VPSS module 622 in FIG. 6 has the same function and purpose as the VPSS module 422 in FIG. 4. For the components included in the camera 600, refer to the descriptions of the corresponding components in FIG. 4; details are not repeated here.
In FIG. 6, the camera 600 is likewise connected to the electronic device 610 through a USB interface; the USB interface is only an example, and other interfaces such as UART and USART may also be used for the connection. FIG. 6 differs from FIG. 4 in that, in the HAL 612 of the electronic device 610, the HAL 612 includes at least the CameraProxy driver. The CameraProxy driver receives the data input by the AI module 623 through the Socket channel and the data input by the VENC module 624 through the UVC channel. The CameraProxy driver is the camera's proxy on the electronic device side: it receives the two routes of data uploaded from the camera and continues to forward them, in two routes, to the higher layers of the electronic device, and it receives data from the higher layers of the electronic device and transmits it to the camera's two routes through the hardware layer. Note that if the camera 600 is connected to the electronic device 610 through one USB interface, Socket messages and UVC messages share the USB cable for transmission; during transmission, the camera's AI module or VENC module may occupy the USB cable in a preemptive or balanced manner to transmit its respective data. An example of inputting data through the Socket channel is sending a Socket message; an example of inputting data through the UVC channel is sending a UVC message, for example a SET_CUR message.
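As an illustration of the "balanced" occupation mentioned above, the sketch below alternates fairly between pending Socket and UVC traffic on the shared cable; a preemptive policy would instead always favor one side. The queue type and send stubs are assumptions made for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { bool has_data; } queue_t;

    /* Stubs standing in for the actual USB transfers. */
    static void send_socket_chunk(queue_t *q) { q->has_data = false; puts("Socket chunk sent"); }
    static void send_uvc_chunk(queue_t *q)    { q->has_data = false; puts("UVC chunk sent"); }

    /* One arbitration round over the single shared USB cable. */
    static void arbitrate_once(queue_t *socket_q, queue_t *uvc_q) {
        static bool socket_turn = true;
        if (socket_turn) {
            if (socket_q->has_data)      send_socket_chunk(socket_q);
            else if (uvc_q->has_data)    send_uvc_chunk(uvc_q);
        } else {
            if (uvc_q->has_data)         send_uvc_chunk(uvc_q);
            else if (socket_q->has_data) send_socket_chunk(socket_q);
        }
        socket_turn = !socket_turn;
    }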
In one possible implementation, the CameraProxy driver may obtain the application identification information and/or type of an application started on the electronic device 610 and send the obtained application identification information and/or type to the camera 600. Based on the received application identification information and/or type, the CPU 640 of the camera 600 determines the configuration parameters of each module and sends each module's configuration parameters to that module. Based on the received configuration parameters, each module determines whether to start, how to run and operate, and which branch its processing result is sent to.
Optionally, the memory 650 stores the correspondence among application identification information (that is, application IDs), types, and module invocation manners. Based on the received application identification information, the CPU 640 of the camera 600 obtains the corresponding type and module invocation manner and starts (or invokes) the corresponding modules.
Optionally, the memory 650 may instead not store the correspondence among application identification information (that is, application IDs), types, and module invocation manners.
Unless otherwise specified, the related content of Embodiment 2 of this application is the same as or similar to the related content of Embodiment 1 and is not repeated here.
The process in FIG. 7a by which the electronic device's CameraProxy driver invokes the camera is basically the same as the process in FIG. 5a by which the CameraHAL and CameraProxy drivers invoke the camera. The difference is that in FIG. 5a the sending of the first message is performed by the CameraHAL driver while the receiving of the AI module's or VENC module's processing results is performed by the CameraProxy driver, whereas in FIG. 7a both the sending of the first message and the receiving of the AI module's or VENC module's processing results are performed by the CameraProxy driver. The specific steps in FIG. 7a by which the CameraProxy driver invokes the camera are as follows.
S601: The CameraProxy driver inputs a first message containing an application ID to the camera through the UVC channel.
S602: The CPU receives the first message, determines the type and the modules to start based on the application ID, outputs an instruction to the sensor module, and outputs respective configuration parameters to the sensor module, the VI module, the VPSS module, the AI module, the VENC module, and the VGS module. The instruction indicates what function the sensor module performs, and the configuration parameters configure the modules.
S603: The sensor module performs its processing as instructed by the CPU and outputs the processing result and the application ID to the VI module; the VI module performs its processing based on the sensor module's input and outputs the processing result and the application ID to the VPSS module; the VPSS module performs its processing to obtain a processing result.
S604: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the AI module. If they indicate output to the AI module, S605 is performed; otherwise, S608 is performed.
S605: The VPSS module outputs the processing result to the AI module; the AI module performs its processing to obtain a processing result.
S606: The AI module outputs the processing result and the application ID to the CameraProxy driver through the Socket channel.
S607: The configuration parameters output by the CPU to the VPSS module indicate whether the result is output to the VENC module or the VGS module. If they indicate output to the VENC or VGS module, S608 is performed.
S608: The VPSS module outputs the processing result and the application ID to the VENC module or the VGS module, which performs its processing to obtain a processing result.
S609: The VENC module outputs the processing result to the CameraProxy driver through the UVC channel.
FIG. 7b further describes the method steps by which the second electronic device remotely invokes the camera of the first electronic device. It is basically the same as FIG. 5b. The difference is that in FIG. 5b both the first and second applications send messages containing application IDs to the camera through the CameraHAL driver and, depending on the application ID, receive the processing results and application IDs through the CameraHAL or CameraProxy driver, whereas in FIG. 7b both applications send messages containing application IDs to the camera through the CameraProxy driver and receive the processing results and application IDs through the CameraProxy driver. The specific steps in FIG. 7b are not repeated here.
FIGS. 7c and 7d each further describe, with specific applications, the method steps by which the second electronic device remotely invokes the camera of the first electronic device. In FIG. 7c, the first application is the "remote housekeeping" application and the second application is the "child mode" application; in FIG. 7d, the first application is the "AI fitness" application and the second application is the "child mode" application. FIGS. 7c and 7d are basically the same as FIGS. 5c and 5d, respectively. The difference is that in FIGS. 5c and 5d both specific applications send messages containing application IDs to the camera through the CameraHAL driver and, depending on the application ID, receive the processing results and application IDs through the CameraHAL or CameraProxy driver, whereas in FIGS. 7c and 7d both specific applications send messages containing application IDs to the camera through the CameraProxy driver and receive the processing results and application IDs through the CameraProxy driver. The specific steps in FIGS. 7c and 7d are not repeated here.
In summary, in this application, the corresponding modules of the camera can be started based on different types, implementing a type-based dynamic invocation method that allows multiple applications to use the camera.
It can be understood that, to implement the foregoing functions, the electronic device includes corresponding hardware and/or software modules for performing each function. With reference to the algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
In this embodiment, the electronic device may be divided into functional modules according to the foregoing method examples. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. Note that the division of modules in this embodiment is illustrative and is only a division by logical function; other division manners are possible in actual implementation.
In another example, FIG. 8 is a schematic block diagram of an apparatus 800 according to an embodiment of this application. The apparatus 800 may include a processor 801 and transceiver/transceiver pins 802, and optionally a memory 803.
The components of the apparatus 800 are coupled together through a bus 804, which includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of description, however, the various buses are all referred to as the bus 804 in the figure.
Optionally, the memory 803 may be used for the instructions in the foregoing method embodiments. The processor 801 may be used to execute the instructions in the memory 803, control the receive pin to receive signals, and control the transmit pin to send signals.
The apparatus 800 may be the first electronic device, the second electronic device, or the camera in the foregoing method embodiments.
All related content of the steps in the foregoing method embodiments may be cited in the functional descriptions of the corresponding functional modules; details are not repeated here.
This embodiment further provides a computer storage medium storing computer instructions; when the computer instructions run on an electronic device, the electronic device performs the foregoing related method steps to implement the camera invocation method in the foregoing embodiments.
This embodiment further provides a computer program product; when the computer program product runs on a computer, the computer performs the foregoing related steps to implement the camera invocation method in the foregoing embodiments.
In addition, the embodiments of this application further provide an apparatus, which may specifically be a chip, a component, or a module, and which may include a processor and a memory that are connected. The memory stores computer-executable instructions; when the apparatus runs, the processor may execute the computer-executable instructions stored in the memory, so that the chip performs the camera invocation method in the foregoing method embodiments.
The electronic device, computer storage medium, computer program product, and chip provided in this embodiment are all configured to perform the corresponding methods provided above; for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods provided above, which are not repeated here.
From the description of the foregoing implementations, a person skilled in the art can understand that, for convenience and brevity of description, only the division of the foregoing functional modules is used as an example. In practical applications, the foregoing functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or some of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are only illustrative; for example, the division into modules or units is only a division by logical function, and there may be other division manners in actual implementation, for example multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, or as indirect couplings or communication connections between apparatuses or units, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate; components shown as units may be one physical unit or multiple physical units, that is, located in one place or distributed across multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of this embodiment.
In addition, functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
Any content of the embodiments of this application, and any content within the same embodiment, may be freely combined; any combination of the foregoing falls within the scope of this application.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or some of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions that cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or some of the steps of the methods in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is only the specific implementation of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (34)

  1. A camera, wherein the camera is connected to an electronic device through a first interface and comprises:
    one or more processors;
    a memory;
    and one or more computer programs, wherein the one or more computer programs are stored on the memory, and when the computer programs are executed by the one or more processors, the camera is caused to perform the following steps:
    receiving a first message containing an application ID or an application sub-function ID;
    in response to the first message,
    when detecting that the type corresponding to the application ID or application sub-function ID is a first type, outputting a first processing result of a first message type along a first path through the first interface;
    when detecting that the type corresponding to the application ID or application sub-function ID is a second type, outputting a second processing result of a second message type along a second path or a third path through the first interface;
    receiving a second message containing another application ID or another application sub-function ID;
    in response to the second message,
    when detecting that the type corresponding to the other application ID or other application sub-function ID is the first type, outputting a third processing result of the first message type along the first path through the first interface.
  2. The camera according to claim 1, wherein the camera further performs the following steps:
    in response to the second message,
    when detecting that the type corresponding to the other application ID or other application sub-function ID is the second type, outputting a fourth processing result of the second message type along the second path or the third path through the first interface.
  3. The camera according to claim 1 or 2, wherein the camera further performs the following steps:
    in response to the first message,
    when detecting that the type corresponding to the application ID or the application sub-function ID is a third type,
    outputting the first processing result of the first message type along the first path through the first interface; and
    outputting the second processing result of the second message type along the second path or the third path through the first interface; the third type being the first type plus the second type;
    in response to the second message,
    when detecting that the type corresponding to the other application ID or the other application sub-function ID is the third type,
    outputting the third processing result of the first message type along the first path through the first interface; and
    outputting the fourth processing result of the second message type along the second path or the third path through the first interface; the third type being the first type plus the second type.
  4. The camera according to any one of claims 1 to 3, wherein the camera further comprises:
    one or more sensor modules, a video input module, a video processing sub-system module, an artificial intelligence module, a video encoding module, and a video graphics system module; wherein
    the sensor module is configured to capture images and output the captured images to the video input module;
    the video input module is configured to pre-process the images captured by the sensor module;
    the video processing sub-system module is configured to perform noise reduction on the images pre-processed by the video input module;
    the artificial intelligence module is configured to perform artificial-intelligence recognition on the images processed by the video processing sub-system module, and output artificial-intelligence events of the first message type through the first interface;
    the video graphics system module is configured to perform zoom processing on the images processed by the video processing sub-system module, and output the zoomed images to the video encoding module;
    the video encoding module is configured to encode the images processed by the video processing sub-system module or the images zoomed by the video graphics system module, generate a video stream, and output the video stream of the second message type through the first interface.
  5. The camera according to claim 4, wherein
    the first path comprises the sensor module, the video input module, the video processing sub-system module, and the artificial intelligence module;
    the second path comprises the sensor module, the video input module, the video processing sub-system module, the video graphics system module, and the video encoding module;
    the third path comprises the sensor module, the video input module, the video processing sub-system module, and the video encoding module.
  6. The camera according to any one of claims 1 to 5, wherein the first type is an artificial-intelligence type; the second type is a video-stream type; the third type is the artificial-intelligence type plus the video-stream type; the first message type is a Socket message type; the second message type is a UVC message type; and the first interface is a USB interface.
  7. A camera, wherein the camera is connected to an electronic device through a first interface and a second interface and comprises:
    one or more processors;
    a memory;
    and one or more computer programs, wherein the one or more computer programs are stored on the memory, and when the computer programs are executed by the one or more processors, the camera is caused to perform the following steps:
    receiving a first message containing an application ID or an application sub-function ID;
    in response to the first message,
    when detecting that the type corresponding to the application ID or application sub-function ID is a first type, outputting a first processing result of a first message type along a first path through the first interface;
    when detecting that the type corresponding to the application ID or application sub-function ID is a second type, outputting a second processing result of a second message type along a second path or a third path through the second interface;
    receiving a second message containing another application ID or another application sub-function ID;
    in response to the second message,
    when detecting that the type corresponding to the other application ID or other application sub-function ID is the first type, outputting a third processing result of the first message type along the first path through the first interface.
  8. The camera according to claim 7, wherein the camera further performs the following steps:
    in response to the second message,
    when detecting that the type corresponding to the other application ID or other application sub-function ID is the second type, outputting a fourth processing result of the second message type along the second path or the third path through the second interface.
  9. The camera according to claim 7 or 8, wherein the camera further performs the following steps:
    in response to the first message,
    when detecting that the type corresponding to the application ID or the application sub-function ID is a third type,
    outputting the first processing result of the first message type along the first path through the first interface; and
    outputting the second processing result of the second message type along the second path or the third path through the second interface; the third type being the first type plus the second type;
    in response to the second message,
    when detecting that the type corresponding to the other application ID or the other application sub-function ID is the third type,
    outputting the third processing result of the first message type along the first path through the first interface; and
    outputting the fourth processing result of the second message type along the second path or the third path through the second interface; the third type being the first type plus the second type.
  10. The camera according to any one of claims 7 to 9, wherein the camera further comprises:
    one or more sensor modules, a video input module, a video processing sub-system module, an artificial intelligence module, a video encoding module, and a video graphics system module; wherein
    the sensor module is configured to capture images and output the captured images to the video input module;
    the video input module is configured to pre-process the images captured by the sensor module;
    the video processing sub-system module is configured to perform noise reduction on the images pre-processed by the video input module;
    the artificial intelligence module is configured to perform artificial-intelligence recognition on the images processed by the video processing sub-system module, and output artificial-intelligence events of the first message type through the first interface;
    the video graphics system module is configured to perform zoom processing on the images processed by the video processing sub-system module, and output the zoomed images to the video encoding module;
    the video encoding module is configured to encode the images processed by the video processing sub-system module or the images zoomed by the video graphics system module, generate a video stream, and output the video stream of the second message type through the second interface.
  11. The camera according to claim 10, wherein
    the first path comprises the sensor module, the video input module, the video processing sub-system module, and the artificial intelligence module;
    the second path comprises the sensor module, the video input module, the video processing sub-system module, the video graphics system module, and the video encoding module;
    the third path comprises the sensor module, the video input module, the video processing sub-system module, and the video encoding module.
  12. The camera according to any one of claims 7 to 11, wherein the first type is an artificial-intelligence type; the second type is a video-stream type; the third type is the artificial-intelligence type plus the video-stream type; the first message type is a Socket message type; the second message type is a UVC message type; and at least one of the first interface and the second interface is a USB interface.
  13. A camera invocation method, applied to a camera connected to an electronic device through a first interface, wherein the method comprises:
    receiving a first message containing an application ID or an application sub-function ID;
    in response to the first message,
    when detecting that the type corresponding to the application ID or application sub-function ID is a first type, outputting a first processing result of a first message type along a first path through the first interface;
    when detecting that the type corresponding to the application ID or application sub-function ID is a second type, outputting a second processing result of a second message type along a second path or a third path through the first interface;
    receiving a second message containing another application ID or another application sub-function ID;
    in response to the second message,
    when detecting that the type corresponding to the other application ID or other application sub-function ID is the first type, outputting a third processing result of the first message type along the first path through the first interface.
  14. The method according to claim 13, wherein the method further comprises:
    in response to the second message,
    when detecting that the type corresponding to the other application ID or other application sub-function ID is the second type, outputting a fourth processing result of the second message type along the second path or the third path through the first interface.
  15. The method according to claim 13 or 14, wherein the method further comprises:
    in response to the first message,
    when detecting that the type corresponding to the application ID or the application sub-function ID is a third type,
    outputting the first processing result of the first message type along the first path through the first interface; and
    outputting the second processing result of the second message type along the second path or the third path through the first interface; the third type being the first type plus the second type;
    in response to the second message,
    when detecting that the type corresponding to the other application ID or the other application sub-function ID is the third type,
    outputting the third processing result of the first message type along the first path through the first interface; and
    outputting the fourth processing result of the second message type along the second path or the third path through the first interface; the third type being the first type plus the second type.
  16. The method according to any one of claims 13 to 15, wherein the camera comprises:
    one or more sensor modules, a video input module, a video processing sub-system module, an artificial intelligence module, a video encoding module, and a video graphics system module; wherein
    the sensor module is configured to capture images and output the captured images to the video input module;
    the video input module is configured to pre-process the images captured by the sensor module;
    the video processing sub-system module is configured to perform noise reduction on the images pre-processed by the video input module;
    the artificial intelligence module is configured to perform artificial-intelligence recognition on the images processed by the video processing sub-system module, and output artificial-intelligence events of the first message type through the first interface;
    the video graphics system module is configured to perform zoom processing on the images processed by the video processing sub-system module, and output the zoomed images to the video encoding module;
    the video encoding module is configured to encode the images processed by the video processing sub-system module or the images zoomed by the video graphics system module, generate a video stream, and output the video stream of the second message type through the first interface.
  17. The method according to claim 16, wherein
    the first path comprises the sensor module, the video input module, the video processing sub-system module, and the artificial intelligence module;
    the second path comprises the sensor module, the video input module, the video processing sub-system module, the video graphics system module, and the video encoding module;
    the third path comprises the sensor module, the video input module, the video processing sub-system module, and the video encoding module.
  18. The method according to any one of claims 13 to 17, wherein the first type is an artificial-intelligence type; the second type is a video-stream type; the third type is the artificial-intelligence type plus the video-stream type; the first message type is a Socket message type; the second message type is a UVC message type; and the first interface is a USB interface.
  19. A camera invocation method, applied to a camera connected to an electronic device through a first interface and a second interface, wherein the method comprises:
    receiving a first message containing an application ID or an application sub-function ID;
    in response to the first message,
    when detecting that the type corresponding to the application ID or application sub-function ID is a first type, outputting a first processing result of a first message type along a first path through the first interface;
    when detecting that the type corresponding to the application ID or application sub-function ID is a second type, outputting a second processing result of a second message type along a second path or a third path through the second interface;
    receiving a second message containing another application ID or another application sub-function ID;
    in response to the second message,
    when detecting that the type corresponding to the other application ID or other application sub-function ID is the first type, outputting a third processing result of the first message type along the first path through the first interface.
  20. The method according to claim 19, wherein the method further comprises:
    in response to the second message,
    when detecting that the type corresponding to the other application ID or other application sub-function ID is the second type, outputting a fourth processing result of the second message type along the second path or the third path through the second interface.
  21. The method according to claim 19 or 20, wherein the method further comprises:
    in response to the first message,
    when detecting that the type corresponding to the application ID or the application sub-function ID is a third type,
    outputting the first processing result of the first message type along the first path through the first interface; and
    outputting the second processing result of the second message type along the second path or the third path through the second interface; the third type being the first type plus the second type;
    in response to the second message,
    when detecting that the type corresponding to the other application ID or the other application sub-function ID is the third type,
    outputting the third processing result of the first message type along the first path through the first interface; and
    outputting the fourth processing result of the second message type along the second path or the third path through the second interface; the third type being the first type plus the second type.
  22. The method according to any one of claims 19 to 21, wherein the camera comprises:
    one or more sensor modules, a video input module, a video processing sub-system module, an artificial intelligence module, a video encoding module, and a video graphics system module; wherein
    the sensor module is configured to capture images and output the captured images to the video input module;
    the video input module is configured to pre-process the images captured by the sensor module;
    the video processing sub-system module is configured to perform noise reduction on the images pre-processed by the video input module;
    the artificial intelligence module is configured to perform artificial-intelligence recognition on the images processed by the video processing sub-system module, and output artificial-intelligence events of the first message type through the first interface;
    the video graphics system module is configured to perform zoom processing on the images processed by the video processing sub-system module, and output the zoomed images to the video encoding module;
    the video encoding module is configured to encode the images processed by the video processing sub-system module or the images zoomed by the video graphics system module, generate a video stream, and output the video stream of the second message type through the second interface.
  23. The method according to any one of claims 19 to 22, wherein
    the first path comprises the sensor module, the video input module, the video processing sub-system module, and the artificial intelligence module;
    the second path comprises the sensor module, the video input module, the video processing sub-system module, the video graphics system module, and the video encoding module;
    the third path comprises the sensor module, the video input module, the video processing sub-system module, and the video encoding module.
  24. The method according to any one of claims 19 to 23, wherein
    the first type is an artificial-intelligence type; the second type is a video-stream type; the third type is the artificial-intelligence type plus the video-stream type; the first message type is a Socket message type; the second message type is a UVC message type; and at least one of the first interface and the second interface is a USB interface.
  25. An electronic device, wherein the electronic device is connected to a camera through a first interface and comprises:
    one or more processors;
    a memory;
    and one or more computer programs, wherein the one or more computer programs are stored on the memory, and when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps:
    when detecting that an application related to the camera is opened, or when detecting that an application sub-function of an application is opened, sending to the camera a first message containing an application ID or an application sub-function ID, the application ID corresponding to the application, or the application sub-function ID corresponding to the application sub-function;
    receiving a first processing result of a first message type through the first interface; and/or,
    receiving a second processing result of a second message type through the first interface;
    when detecting that another application related to the camera is opened, or when detecting that another application sub-function is opened, sending to the camera a second message containing another application ID or another application sub-function ID, the other application ID corresponding to the other application, or the other application sub-function ID corresponding to the other application sub-function;
    receiving a third processing result of the first message type through the first interface; and/or,
    receiving a fourth processing result of the second message type through the first interface.
  26. The electronic device according to claim 25, wherein
    the first message type is a Socket message type; the second message type is a UVC message type; and the first interface is a USB interface.
  27. An electronic device, wherein the electronic device is connected to a camera through a first interface and a second interface and comprises:
    one or more processors;
    a memory;
    and one or more computer programs, wherein the one or more computer programs are stored on the memory, and when the computer programs are executed by the one or more processors, the electronic device is caused to perform the following steps:
    when detecting that an application related to the camera is opened, or when detecting that an application sub-function of an application is opened, sending to the camera a first message containing an application ID or an application sub-function ID, the application ID corresponding to the application, or the application sub-function ID corresponding to the application sub-function;
    receiving a first processing result of a first message type through the first interface; and/or,
    receiving a second processing result of a second message type through the second interface;
    when detecting that another application related to the camera is opened, or when detecting that another application sub-function is opened, sending to the camera a second message containing another application ID or another application sub-function ID, the other application ID corresponding to the other application, or the other application sub-function ID corresponding to the other application sub-function;
    receiving a third processing result of the first message type through the first interface; and/or,
    receiving a fourth processing result of the second message type through the second interface.
  28. The electronic device according to claim 27, wherein
    the first message type is a Socket message type; the second message type is a UVC message type; and at least one of the first interface and the second interface is a USB interface.
  29. A camera invocation method, applied to an electronic device connected to a camera through a first interface, wherein the method comprises:
    when detecting that an application related to the camera is opened, or when detecting that an application sub-function of an application is opened, sending to the camera a first message containing an application ID or an application sub-function ID, the application ID corresponding to the application, or the application sub-function ID corresponding to the application sub-function;
    receiving a first processing result of a first message type through the first interface; and/or,
    receiving a second processing result of a second message type through the first interface;
    when detecting that another application related to the camera is opened, or when detecting that another application sub-function is opened, sending to the camera a second message containing another application ID or another application sub-function ID, the other application ID corresponding to the other application, or the other application sub-function ID corresponding to the other application sub-function;
    receiving a third processing result of the first message type through the first interface; and/or,
    receiving a fourth processing result of the second message type through the first interface.
  30. The method according to claim 29, wherein
    the first message type is a Socket message type; the second message type is a UVC message type; and the first interface is a USB interface.
  31. A camera invocation method, applied to an electronic device connected to a camera through a first interface and a second interface, wherein the method comprises:
    when detecting that an application related to the camera is opened, or when detecting that an application sub-function of an application is opened, sending to the camera a first message containing an application ID or an application sub-function ID, the application ID corresponding to the application, or the application sub-function ID corresponding to the application sub-function;
    receiving a first processing result of a first message type through the first interface; and/or,
    receiving a second processing result of a second message type through the second interface;
    when detecting that another application related to the camera is opened, or when detecting that another application sub-function is opened, sending to the camera a second message containing another application ID or another application sub-function ID, the other application ID corresponding to the other application, or the other application sub-function ID corresponding to the other application sub-function;
    receiving a third processing result of the first message type through the first interface; and/or,
    receiving a fourth processing result of the second message type through the second interface.
  32. The method according to claim 31, wherein
    the first message type is a Socket message type; the second message type is a UVC message type; and at least one of the first interface and the second interface is a USB interface.
  33. A computer-readable storage medium comprising a computer program, wherein when the computer program runs on a camera, the camera is caused to perform the camera invocation method according to any one of claims 13 to 24.
  34. A computer-readable storage medium comprising a computer program, wherein when the computer program runs on an electronic device, the electronic device is caused to perform the camera invocation method according to any one of claims 29 to 31.
PCT/CN2021/081092 2020-06-30 2021-03-16 Camera invocation method, electronic device, and camera WO2022001191A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2022581629A JP2023532741A (ja) 2020-06-30 2021-03-16 カメラ使用方法、電子デバイス、およびカメラ
EP21834685.6A EP4161060A4 (en) 2020-06-30 2021-03-16 CAMERA CALLING METHOD, ELECTRONIC DEVICE AND CAMERA
US18/003,652 US20230254575A1 (en) 2020-06-30 2021-03-16 Camera use method, electronic device, and camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010618161.2 2020-06-30
CN202010618161.2A CN113873140A (zh) 2020-06-30 2020-06-30 一种摄像头的调用方法、电子设备和摄像头

Publications (1)

Publication Number Publication Date
WO2022001191A1 true WO2022001191A1 (zh) 2022-01-06

Family

ID=76275594

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/081092 WO2022001191A1 (zh) Camera invocation method, electronic device, and camera

Country Status (5)

Country Link
US (1) US20230254575A1 (zh)
EP (1) EP4161060A4 (zh)
JP (1) JP2023532741A (zh)
CN (2) CN113873140A (zh)
WO (1) WO2022001191A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115733884B (zh) * 2021-08-25 2023-10-24 荣耀终端有限公司 Request processing method and related apparatus
CN116156311A (zh) * 2021-11-16 2023-05-23 华为终端有限公司 Camera control method and apparatus
CN114302040B (zh) * 2021-12-24 2024-03-19 展讯半导体(成都)有限公司 Method for sharing a single camera among multiple applications and related products
CN117714854A (zh) * 2022-09-02 2024-03-15 华为技术有限公司 Camera invocation method, electronic device, readable storage medium, and chip
CN116074622B (zh) * 2022-12-17 2023-08-29 珠海视熙科技有限公司 Implementation method, apparatus, device, and medium for multi-protocol control of a USB camera

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102883139A (zh) * 2011-07-12 2013-01-16 北京中星微电子有限公司 Camera application system and method
CN108322640A (zh) * 2017-12-27 2018-07-24 武汉长江通信智联技术有限公司 Method and system for multiple applications to invoke a camera simultaneously based on a broadcast mechanism
WO2019198941A1 (ko) * 2018-04-11 2019-10-17 삼성전자 주식회사 Method for displaying usage history and electronic device performing same
CN110753187A (zh) * 2019-10-31 2020-02-04 芋头科技(杭州)有限公司 Camera control method and device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833520B (zh) * 2009-03-11 2013-01-16 凹凸电子(武汉)有限公司 System and method for multiple applications to use one camera simultaneously
KR101383529B1 (ko) * 2012-02-29 2014-04-08 주식회사 팬택 Mobile terminal device for application sharing and method for sharing applications in a mobile terminal device
US8799900B1 (en) * 2012-04-17 2014-08-05 Parallels IP Holdings GmbH Sharing webcam between guest and host OS
CN105808353A (zh) * 2016-03-08 2016-07-27 珠海全志科技股份有限公司 Camera resource sharing method and apparatus
US10212326B2 (en) * 2016-11-18 2019-02-19 Microsoft Technology Licensing, Llc Notifications for control sharing of camera resources
CN109462726B (zh) * 2017-09-06 2021-01-19 比亚迪股份有限公司 Camera control method and apparatus
CN110457987A (zh) * 2019-06-10 2019-11-15 中国刑事警察学院 Face recognition method based on an unmanned aerial vehicle
CN110505390B (zh) * 2019-09-24 2021-02-05 深圳创维-Rgb电子有限公司 Television, camera invocation method therefor, control apparatus, and readable storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4161060A4

Also Published As

Publication number Publication date
EP4161060A1 (en) 2023-04-05
CN113873140A (zh) 2021-12-31
EP4161060A4 (en) 2023-11-22
US20230254575A1 (en) 2023-08-10
CN112969024B (zh) 2022-03-11
CN112969024A (zh) 2021-06-15
JP2023532741A (ja) 2023-07-31

Similar Documents

Publication Publication Date Title
WO2022001191A1 (zh) Camera invocation method, electronic device, and camera
WO2021013158A1 (zh) Display method and related apparatus
CN116360725B (zh) Display interaction system, display method, and device
US20230422154A1 (en) Method for using cellular communication function, and related apparatus and system
CN112130788A (zh) Content sharing method and apparatus
WO2022127661A1 (zh) Application sharing method, electronic device, and storage medium
CN114489350B (zh) Input method invocation method and related device
CN113703894A (zh) Notification message display method and display apparatus
CN116027997A (zh) File opening method and device
WO2022222773A1 (zh) Photographing method, related apparatus, and system
EP4258099A1 (en) Double-channel screen projection method and electronic device
WO2022127632A1 (zh) Resource management and control method and device
WO2022002213A1 (zh) Translation result display method and apparatus, and electronic device
WO2024037542A1 (zh) Touch input method and system, electronic device, and storage medium
CN116366957B (zh) Virtualized camera enabling method, electronic device, and collaborative working system
WO2022161058A1 (zh) Panoramic image photographing method and electronic device
WO2022160999A1 (zh) Display method and electronic device
EP4239464A1 (en) Method for invoking capabilities of other devices, electronic device, and system
WO2022228214A1 (zh) Device discovery method and system, and electronic device
WO2023045966A1 (zh) Capability sharing method, electronic device, and computer-readable storage medium
CN116560536A (zh) Application component setting method and related device
CN117193583A (zh) Cursor display method and electronic device
CN116777740A (zh) Screenshot method, electronic device, and system
CN115048193A (zh) Multi-device distributed scheduling method and related device
CN116560769A (zh) Application component sharing method and related device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21834685

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022581629

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021834685

Country of ref document: EP

Effective date: 20221229

NENP Non-entry into the national phase

Ref country code: DE