CN115484404B - Camera control method based on distributed control and terminal equipment


Info

Publication number
CN115484404B
CN115484404B (application CN202210973225.XA)
Authority
CN
China
Prior art keywords
camera
local
virtual
task
authorized
Prior art date
Legal status
Active
Application number
CN202210973225.XA
Other languages
Chinese (zh)
Other versions
CN115484404A (en)
Inventor
占航
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202210973225.XA
Publication of CN115484404A
Application granted
Publication of CN115484404B
Status: Active
Anticipated expiration



Classifications

    • G05B 19/0423 — Programme control other than numerical control using digital processors; Input/output
    • H04N 23/60 — Control of cameras or camera modules comprising electronic image sensors
    • H04N 7/181 — Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • Y02P 90/02 — Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application relates to a camera control method and terminal equipment based on distributed control. The method, applied to a first device, comprises the following steps: displaying cameras to be selected which can be controlled by the first device, wherein the cameras to be selected comprise a local camera of the first device and a local camera of a second device mapped by a first virtual camera in the first device; determining a selected camera among the cameras to be selected and a target task to be executed; and generating a first task command according to a first camera identification of the selected camera in the first device and the target task, and sending the first task command to the selected camera, so that the selected camera executes the target task according to the first task command, wherein each first virtual camera of the first device is used for controlling the local camera of the second device it maps, with at least one level of mapping relation between them. The method and device provided by the application realize direct and/or indirect control of the local cameras in the second devices through one first device, and meet the camera control requirements of different application scenes.

Description

Camera control method based on distributed control and terminal equipment
Technical Field
The application relates to the technical field of terminals, in particular to a camera control method based on distributed control and terminal equipment.
Background
With the development of cameras, the types of camera-equipped devices are increasing, for example, smart televisions with cameras, Bluetooth cameras, home cameras, road monitoring cameras, unmanned aerial vehicles with cameras, and the like. In the related art, a device provided with a camera can be remotely controlled through a terminal device or a system such as a mobile phone, so that tasks such as photographing and video shooting are executed. Taking a control terminal that controls photographing by a plurality of unmanned aerial vehicles as an example, in the related art each unmanned aerial vehicle communicates with the control terminal over an Ethernet network: the control terminal transmits control commands such as a photographing command to each unmanned aerial vehicle over the network, the unmanned aerial vehicle parses the command and performs the photographing action, and then sends the photo data back to the control terminal over the network. In order to achieve the above control, each unmanned aerial vehicle needs to contain a network module and be connected with the control terminal through the network to achieve direct control of the unmanned aerial vehicle; moreover, the control terminal can only control the unmanned aerial vehicles directly connected to it through the network, and cannot control unmanned aerial vehicles that cannot be directly connected with the control terminal through the network. How to realize indirect control of camera equipment on the basis of direct control of camera equipment, so as to meet the requirements of different camera equipment use scenarios, is a technical problem to be solved.
Disclosure of Invention
In view of this, a camera control method and a terminal device based on distributed control are provided.
In a first aspect, embodiments of the present application provide a camera control method based on distributed control, applied to a first device, the method including:
displaying a to-be-selected camera which can be controlled by the first device, wherein the to-be-selected camera comprises a local camera of the first device and a local camera of a second device mapped by a first virtual camera in the first device;
according to the detected task creating operation aiming at the camera to be selected, determining the selected camera and a target task required to be executed by the selected camera from the camera to be selected;
generating a first task command according to a first camera identification of the selected camera in the first device and the target task;
sending the first task command to the selected camera so that the selected camera can control the target task to be executed according to the first task command,
wherein the first device comprises at least one first virtual camera, each first virtual camera is used for realizing control of the local camera of the mapped second device, at least one level of mapping relation exists between each first virtual camera and the local camera of the mapped second device,
When the first virtual camera and the mapped local camera of the second device are in a multi-level mapping relationship, the second device and the first device are in different local area networks.
By the method provided by the first aspect, the control of the local cameras in the plurality of second devices can be realized through one first device, the second devices can be directly connected with the first device through the same local area network, or can be indirectly connected with the first device in different local area networks through an intermediate device, so that the distributed and hierarchical control of the local cameras of different second devices is realized, and the camera control requirements of different application scenes can be met.
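As an illustration of the flow summarized above, the following Java sketch models how a first device might list its candidate cameras, generate a first task command from the selected camera's identifier and the target task, and dispatch it either to its own local camera or through a first virtual camera. All class, field, and method names here are assumptions for illustration; the patent does not define a concrete API.

```java
// Illustrative sketch only: CandidateCamera, TaskCommand, etc. are not names from the patent.
import java.util.ArrayList;
import java.util.List;

public class FirstDeviceController {

    /** A camera the first device can control: its own local camera or a first virtual camera. */
    static class CandidateCamera {
        final String firstCameraId;   // identifier of the camera inside the first device
        final boolean isVirtual;      // true if it maps a local camera of some second device
        final int mappingLevel;       // 0 = local, 1 = directly connected, >= 2 = multi-level
        CandidateCamera(String id, boolean isVirtual, int mappingLevel) {
            this.firstCameraId = id; this.isVirtual = isVirtual; this.mappingLevel = mappingLevel;
        }
    }

    /** First task command: the selected camera's identifier in the first device plus the target task. */
    static class TaskCommand {
        final String firstCameraId;
        final String targetTask;      // e.g. "PHOTO", "VIDEO", "PREVIEW"
        TaskCommand(String firstCameraId, String targetTask) {
            this.firstCameraId = firstCameraId; this.targetTask = targetTask;
        }
    }

    private final List<CandidateCamera> candidates = new ArrayList<>();

    void register(CandidateCamera camera) { candidates.add(camera); }

    /** Step S11: enumerate the cameras the first device can control (for display). */
    List<CandidateCamera> listCandidates() { return candidates; }

    /** Steps S13/S14: generate the first task command and hand it to the selected camera. */
    TaskCommand createAndSend(CandidateCamera selected, String targetTask) {
        TaskCommand command = new TaskCommand(selected.firstCameraId, targetTask);
        if (selected.mappingLevel == 0) {
            // Local camera of the first device: execute directly.
            System.out.println("Execute locally: " + command.targetTask);
        } else {
            // First virtual camera: forward toward the mapped second device's local camera.
            System.out.println("Forward via virtual camera " + command.firstCameraId
                    + " over " + selected.mappingLevel + " mapping level(s)");
        }
        return command;
    }
}
```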
In a first possible implementation manner of the method according to the first aspect, the method further includes:
when a device connection request is detected, searching for a third device which can be connected with the first device and meets connection conditions, wherein the connection conditions comprise: the third device is provided with a local camera and/or at least one second virtual camera is created in the third device;
sending a first authorization control request to the third device, and receiving a first authorization instruction returned by the third device in response to the first authorization control request;
After determining an authorized first authorization camera according to the first authorization instruction, acquiring a second camera identification of the first authorization camera in the third device, wherein the first authorization camera comprises a local camera of the third device and/or a local camera of a fourth device mapped by the second virtual camera;
determining a first camera identification of the first authorized camera in the first device according to the second camera identification, creating a first virtual camera for controlling the first authorized camera according to the first camera identification of the first authorized camera,
each second virtual camera is used for realizing control of a local camera of the fourth device mapped by the second virtual camera, and at least one level of mapping relation exists between the second virtual camera and the mapped local camera of the fourth device.
With a first possible implementation, a first virtual camera is created that is capable of controlling an authorized camera.
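A minimal sketch of this implementation is given below, assuming a simple data model: the first device checks the connection condition of a discovered third device, and creates one first virtual camera for each camera named in the first authorization indication. Class names, the connection-condition check, and the identifier scheme are illustrative, not taken from the patent.

```java
// Hypothetical sketch of the first possible implementation; all names are illustrative.
import java.util.ArrayList;
import java.util.List;

class AuthorizationFlow {

    static class ThirdDevice {
        final boolean hasLocalCamera;
        final int secondVirtualCameraCount;
        ThirdDevice(boolean hasLocalCamera, int secondVirtualCameraCount) {
            this.hasLocalCamera = hasLocalCamera;
            this.secondVirtualCameraCount = secondVirtualCameraCount;
        }
        /** Connection condition: a local camera and/or at least one second virtual camera. */
        boolean meetsConnectionCondition() {
            return hasLocalCamera || secondVirtualCameraCount > 0;
        }
    }

    /** One authorized camera as described by the third device (its second camera identifier). */
    static class AuthorizedCamera {
        final String secondCameraId;  // identifier of the camera inside the third device
        AuthorizedCamera(String secondCameraId) { this.secondCameraId = secondCameraId; }
    }

    /** Create one first virtual camera per camera named in the first authorization indication. */
    List<String> onAuthorizationIndication(List<AuthorizedCamera> firstAuthorizedCameras) {
        List<String> createdVirtualCameras = new ArrayList<>();
        for (AuthorizedCamera camera : firstAuthorizedCameras) {
            // Derive the first camera identifier from the second camera identifier
            // (see the identifier sketch further below); the prefix scheme is a placeholder.
            String firstCameraId = "virtual:" + camera.secondCameraId;
            createdVirtualCameras.add(firstCameraId);
        }
        return createdVirtualCameras;
    }
}
```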
According to the first possible implementation manner, in a second possible implementation manner of the method, sending a first authorization control request to the third device, and receiving a first authorization indication returned by the third device in response to the first authorization control request, includes:
selecting a first request camera according to a request operation for the local camera of the third device and/or the local camera of the fourth device mapped by the second virtual camera, both of which can be controlled by the third device;
and generating the first authorization control request according to the first request camera, and sending the first authorization control request to the third device, so that the third device generates the first authorization indication according to the detected authorization operation for the first authorization control request.
Through a second possible implementation manner, the first request camera can be selected according to the needs of the user, so that the authorization control requirements of different users are met.
According to a first possible implementation manner, in a third possible implementation manner of the method, determining a first camera identifier of the first authorized camera in the first device according to the second camera identifier, and creating a first virtual camera for controlling the first authorized camera according to the first camera identifier of the first authorized camera includes:
when the first authorized camera comprises a local camera of a fourth device mapped by a second virtual camera, determining a first mapping relation level between the first authorized camera and a first virtual camera which needs to be created and controls the first authorized camera according to the mapping relation level indicating the mapping relation between the first authorized camera and the second virtual camera in the second camera identifier;
determining a first identity identifier of the first authorized camera in the first device according to the identity identifier in the second camera identifier and the identity identifiers of the existing cameras corresponding to the first mapping relation level in the first device;
and determining a first camera identification of the first authorized camera in the first device according to the first mapping relation level and the first identity identification, and creating a first virtual camera for controlling the first authorized camera according to the first camera identification.
Through a third possible implementation manner, after the first virtual camera is created, the first device may directly determine a mapping relationship level between the first device and the first authorized camera according to the first camera identifier and the first mapping relationship level of the first virtual camera, so as to facilitate sending of the command and receiving of the data.
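The following sketch illustrates one plausible reading of the identifier structure described above, in which a camera identifier carries a mapping relation level plus an identity identifier, and the level recorded in the second camera identifier grows by one when the camera is re-mapped into the first device. The "+1" arithmetic and all names are assumptions; the patent only states that the first mapping relation level is determined from the level carried in the second camera identifier.

```java
// Illustrative sketch of the third possible implementation; not an API defined by the patent.
class CameraIdMapper {

    /** A camera identifier made of a mapping relation level and an identity identifier. */
    static class CameraId {
        final int mappingLevel;
        final String identity;
        CameraId(int mappingLevel, String identity) {
            this.mappingLevel = mappingLevel;
            this.identity = identity;
        }
        @Override public String toString() { return "level-" + mappingLevel + ":" + identity; }
    }

    /**
     * Derive the first camera identifier from the second camera identifier when the first
     * authorized camera is a local camera of a fourth device mapped by a second virtual camera.
     */
    CameraId toFirstCameraId(CameraId secondCameraId, String firstIdentity) {
        int firstMappingLevel = secondCameraId.mappingLevel + 1;   // assumed: one more forwarding hop
        return new CameraId(firstMappingLevel, firstIdentity);
    }
}
```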
According to a third possible implementation manner, in a fourth possible implementation manner of the method, determining a first camera identifier of the first authorized camera in the first device according to the second camera identifier, and creating a first virtual camera for controlling the first authorized camera according to the first camera identifier of the first authorized camera includes:
And when the first authorized camera comprises the local camera of the third device, determining a first-level mapping relation as a first mapping relation level between the first authorized camera and a first virtual camera which needs to be created and controls the first authorized camera.
According to a third possible implementation manner, in a fifth possible implementation manner of the method, determining, according to an identity identifier in the second camera identifier and an identity identifier of an existing camera corresponding to the first mapping relationship level in the first device, a first identity identifier of the first authorized camera in the first device includes:
when the identity identifier in the second camera identifier already exists among the identity identifiers of the existing cameras corresponding to the first mapping relation level in the first device, creating a first identity identifier of the first authorized camera in the first device according to a preset identity identifier creation rule; or
and when the identity identifier in the second camera identifier is different from the identity identifiers of the existing cameras corresponding to the first mapping relation level in the first device, determining the identity identifier in the second camera identifier as the first identity identifier of the first authorized camera in the first device.
Through a fifth possible implementation manner, the uniqueness of the first identity identifier of the first authorized camera in the identities of all cameras corresponding to the first mapping relationship level, which can be controlled by the first device, can be ensured, so as to facilitate the discrimination of the cameras with the same mapping relationship level, which can be controlled by the first device.
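A minimal sketch of this identity rule follows, assuming a set of existing identity identifiers at the given mapping relation level; the concrete creation rule used on collision (a numeric suffix here) is purely illustrative, since the patent leaves the preset creation rule open.

```java
// Sketch of the fifth possible implementation's identity rule; the suffix rule is an assumption.
import java.util.Set;

class IdentityAssigner {
    String firstIdentity(String identityInSecondCameraId, Set<String> existingAtLevel) {
        if (!existingAtLevel.contains(identityInSecondCameraId)) {
            // No clash with existing cameras at this mapping relation level: reuse the identity.
            return identityInSecondCameraId;
        }
        // Clash: create a new, unique identity; the concrete creation rule is device-defined.
        int suffix = 1;
        String candidate;
        do {
            candidate = identityInSecondCameraId + "-" + suffix++;
        } while (existingAtLevel.contains(candidate));
        return candidate;
    }
}
```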
In a sixth possible implementation manner of the method according to the first aspect, determining, according to the detected task creation operation for the cameras to be selected, the selected camera and a target task to be executed by the selected camera from the cameras to be selected includes:
determining a selected camera according to the detected selection operation for the camera to be selected;
determining a target task to be executed by the selected camera according to a task setting operation for the selected camera,
the task parameters of the target task comprise at least one of task type, execution time information and camera parameter setting when the selected camera executes the target task, and the task type comprises at least one of the following: a photographing task, a shooting task and an image previewing task.
By a sixth possible implementation, the target task may be set precisely, so that the selected camera performs the target task.
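The task parameters listed above could be modelled roughly as in the following sketch; the field names and the string-based parameter map are assumptions for illustration only.

```java
// Illustrative model of the target task parameters: task type, execution time information,
// and camera parameter settings. Names are placeholders, not defined by the patent.
class TargetTask {
    enum TaskType { PHOTO, VIDEO, PREVIEW }   // photographing, shooting, image previewing

    final TaskType type;
    final String executionTime;                            // e.g. shooting time, start/stop time
    final java.util.Map<String, String> cameraParameters;  // e.g. resolution, exposure, mode

    TargetTask(TaskType type, String executionTime,
               java.util.Map<String, String> cameraParameters) {
        this.type = type;
        this.executionTime = executionTime;
        this.cameraParameters = cameraParameters;
    }
}
```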
In a seventh possible implementation manner of the method according to the first aspect, the sending the first task command to the selected camera, so that the selected camera performs the target task according to the first task command control, includes at least one of the following operations:
when the selected camera comprises a local camera of the first device, sending the first task command to the local camera of the first device so that the local camera of the first device executes a target task indicated by the first task command;
when the selected camera comprises a local camera of a second device mapped by a first virtual camera and a first virtual camera corresponding to the selected camera is in a first-level mapping relation, forwarding the first task command to the local camera of the second device through a first virtual camera corresponding to the selected camera in the first device, so that the local camera of the second device executes a target task indicated by the first task command;
when the selected camera comprises a local camera of the second device mapped by the first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a multi-level mapping relationship, determining at least one intermediate device which completes forwarding of the first task command according to a first mapping relationship level between the selected camera and the corresponding first virtual camera, and forwarding the first task command to the local camera of the second device through the virtual camera corresponding to the selected camera in each intermediate device in sequence, so that the local camera of the second device executes the target task indicated by the first task command.
By a seventh possible implementation, the first task command may be forwarded to the selected camera by means of at least one intermediate device, enabling control of the local camera of the device in a different local area network than the first device.
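A minimal sketch of this hop-by-hop forwarding is shown below, assuming each intermediate device simply relays the unchanged first task command toward the local camera of the second device. The chain data model and names are illustrative assumptions.

```java
// Sketch of the seventh possible implementation's forwarding; the chain model is assumed.
class ForwardingChain {

    /** One hop in the chain: an intermediate device's virtual camera, or the target local camera. */
    static class Node {
        final String name;
        final Node next;   // null when this node is the second device's local camera

        Node(String name, Node next) {
            this.name = name;
            this.next = next;
        }

        void receive(String firstTaskCommand) {
            if (next == null) {
                // The second device's local camera executes the target task.
                System.out.println(name + " executes: " + firstTaskCommand);
            } else {
                // An intermediate device relays the command unchanged toward the target.
                System.out.println(name + " forwards to " + next.name);
                next.receive(firstTaskCommand);
            }
        }
    }

    public static void main(String[] args) {
        // Two-level mapping: one intermediate device (device C1) relays the command to device B2.
        Node localCameraB2 = new Node("local camera of device B2", null);
        Node virtualCameraC1 = new Node("virtual camera in device C1", localCameraB2);
        virtualCameraC1.receive("PHOTO task");
    }
}
```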
In an eighth possible implementation manner of the method according to the first aspect or the seventh possible implementation manner, the method further includes:
when receiving target task data obtained by the selected camera executing the target task, performing image and/or video display according to the target task data,
wherein receiving the target task data comprises at least one of:
directly receiving target task data sent by a local camera of the first device when the selected camera comprises the local camera of the first device;
when the selected camera comprises a local camera of a second device mapped by a first virtual camera and the first virtual camera corresponding to the selected camera is in a first-level mapping relation, the first virtual camera corresponding to the selected camera is utilized to directly receive target task data sent by the local camera of the second device;
When the selected camera comprises a local camera of the second device mapped by the first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a multi-level mapping relation, receiving target task data which are sent by the local camera of the second device and are forwarded by at least one intermediate device by utilizing the first virtual camera corresponding to the selected camera.
By way of an eighth possible implementation manner, the first device may receive the target task data after the selected camera performs the target task, and perform image and/or video presentation according to the target task data.
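The following sketch illustrates the receiving side under the same assumptions: the first device accepts the returned target task data, whether it came from its local camera, from a directly connected second device, or via intermediate devices, and hands it to a display layer. The listener interface is illustrative and not an Android API.

```java
// Sketch of the eighth possible implementation's data reception; names are placeholders.
class TargetTaskReceiver {

    interface DisplaySink { void show(byte[] imageOrVideoData); }

    private final DisplaySink sink;
    TargetTaskReceiver(DisplaySink sink) { this.sink = sink; }

    /** Called when target task data arrives from the local camera or via a first virtual camera. */
    void onTargetTaskData(byte[] data, int mappingLevel) {
        // Level 0: local camera; level 1: directly connected second device;
        // level >= 2: data has been forwarded by at least one intermediate device.
        sink.show(data);
    }
}
```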
In a ninth possible implementation manner of the method according to the first aspect, the method further includes:
when a second authorization control request from a fifth device is received, displaying an authorization prompt according to a second request camera in the second authorization control request;
determining an authorized second authorized camera according to the detected authorization operation for the second request camera;
generating a second authorization indication according to a first camera identification of the second authorization camera in the first device, and sending the second authorization indication to the fifth device, so that the fifth device creates a virtual camera for controlling the second authorization camera according to the second authorization indication.
Through the ninth possible implementation manner, the first device not only directly controls its own local camera and the local camera of the second device mapped by the configured first virtual camera according to commands issued by the user, but can also establish a control relationship with a fifth device so as to be controlled by the fifth device.
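The authorizing side of this implementation could look roughly like the sketch below: on receiving a second authorization control request, the first device prompts the user for each requested camera and returns a second authorization indication listing the approved first camera identifiers. The prompt interface and names are assumptions.

```java
// Sketch of the ninth possible implementation from the authorizing side; names are illustrative.
import java.util.List;
import java.util.stream.Collectors;

class AuthorizationResponder {

    static class AuthorizationIndication {
        final List<String> firstCameraIds;   // identifiers of the authorized cameras in the first device
        AuthorizationIndication(List<String> ids) { this.firstCameraIds = ids; }
    }

    /** User decision callback; in a real device this would be an authorization prompt UI. */
    interface UserPrompt { boolean approve(String requestedCameraId); }

    AuthorizationIndication handleRequest(List<String> secondRequestCameras, UserPrompt prompt) {
        List<String> authorized = secondRequestCameras.stream()
                .filter(prompt::approve)           // keep only cameras the user authorizes
                .collect(Collectors.toList());
        return new AuthorizationIndication(authorized);
    }
}
```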
In a second aspect, embodiments of the present application provide a terminal device, which may perform the camera control method based on distributed control of the above first aspect or of one or several of the multiple possible implementation manners of the first aspect.
In a third aspect, embodiments of the present application provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which, when run in an electronic device, causes a processor in the electronic device to perform the camera control method based on distributed control of the above first aspect or of one or several of the multiple possible implementation manners of the first aspect.
These and other aspects of the application will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present application and together with the description, serve to explain the principles of the present application.
Fig. 1 shows a schematic structural diagram of a terminal device according to an embodiment of the present application.
Fig. 2 shows a software architecture block diagram of a terminal device according to an embodiment of the present application.
Fig. 3 shows a flowchart of a distributed control-based camera control method according to an embodiment of the present application.
Fig. 4 shows an application scenario diagram of a camera control method based on distributed control according to an embodiment of the present application.
FIG. 5 illustrates a process diagram of determining a target task according to an embodiment of the present application.
Fig. 6 shows a flowchart of a distributed control-based camera control method according to an embodiment of the present application.
Fig. 7 illustrates a schematic diagram of selecting an authorized camera according to an embodiment of the application.
Fig. 8 is a schematic diagram of an implementation process of a camera control method based on distributed control according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits have not been described in detail as not to unnecessarily obscure the present application.
In order to solve the above technical problem, the application provides a camera control method based on distributed control. The camera control method based on distributed control in the embodiments of the application can realize multi-stage indirect control of camera equipment, is applicable to the use scenarios of different camera equipment, and can be applied to devices such as terminal devices and control devices that implement camera control.
The devices involved in the application (including the first device, the second device, the third device, the fourth device, and the like) may be devices with a wireless connection function, that is, devices that can be connected with other devices through wireless connection modes such as Wi-Fi and Bluetooth; the devices of the application may also have the function of communicating through wired connections. A device may have a touch screen, a non-touch screen, or no screen. A device with a touch screen can be controlled by clicking, sliding and the like on the display screen with a finger, a stylus or the like; a device with a non-touch screen can be connected with input devices such as a mouse, a keyboard and a touch panel and be controlled through the input devices; a device without a screen may be, for example, a Bluetooth speaker without a screen.
For example, the terminal device among the devices referred to in the present application may be a smart phone, a netbook, a tablet computer, a notebook computer, a wearable electronic device (such as a smart bracelet, a smart watch, etc.), a TV, a virtual reality device, a speaker, an electronic ink reader, etc.
Fig. 1 shows a schematic structural diagram of a terminal device according to an embodiment of the present application. Taking the example that the terminal device is a mobile phone, fig. 1 shows a schematic structural diagram of a mobile phone 200.
The handset 200 may include a processor 210, an external memory interface 220, an internal memory 221, a USB interface 230, a charge management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 251, a wireless communication module 252, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, keys 290, a motor 291, an indicator 292, a camera 293, a display 294, a SIM card interface 295, and the like. The sensor module 280 may include a gyroscope sensor 280A, an acceleration sensor 280B, a proximity sensor 280G, a fingerprint sensor 280H, and a touch sensor 280K (of course, the mobile phone 200 may also include other sensors such as a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, an air pressure sensor, a bone conduction sensor, etc., which are not shown).
It should be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the mobile phone 200. In other embodiments of the present application, the cell phone 200 may include more or less components than illustrated, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units, such as: the processor 210 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (neural-network processing unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors. The controller may be a neural center and command center of the mobile phone 200. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that the processor 210 has just used or recycled. If the processor 210 needs to reuse the instruction or data, it may be called directly from the memory. Repeated accesses are avoided and the latency of the processor 210 is reduced, thereby improving the efficiency of the system.
The processor 210 may run the camera control method based on distributed control provided in the embodiments of the present application, so as to facilitate multi-stage indirect control of camera devices, and be applicable to use scenarios of different camera devices. The processor 210 may include different devices, such as an integrated CPU and a GPU, where the CPU and the GPU may cooperate to execute the distributed control-based camera control method provided in the embodiments of the present application, such as a part of the algorithm in the distributed control-based camera control method is executed by the CPU, and another part of the algorithm is executed by the GPU, so as to obtain a faster processing efficiency.
The display 294 is used to display images, videos, and the like. The display 294 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the cell phone 200 may include 1 or N displays 294, N being a positive integer greater than 1. The display 294 may be used to display information entered by a user or provided to a user as well as various graphical user interfaces (graphical user interface, GUI). For example, the display 294 may display photographs, videos, web pages, or files, etc. For another example, the display 294 may display a graphical user interface. The graphical user interface includes a status bar, a hidden navigation bar, time and weather gadgets (widgets), and icons of applications, such as a browser icon, etc. The status bar includes the name of the operator (e.g., China Mobile), the mobile network (e.g., 4G), the time, and the remaining power. The navigation bar includes a back (back) key icon, a home screen (home) key icon, and a forward key icon. Further, it is to be appreciated that in some embodiments, Bluetooth icons, Wi-Fi icons, external device icons, etc. may also be included in the status bar. It will also be appreciated that in other embodiments, a Dock may be included in the graphical user interface, commonly used application icons may be included in the Dock, and the like. When the processor 210 detects a touch event of a user's finger (or a stylus, etc.) for a certain application icon, a user interface of the application corresponding to the application icon is opened in response to the touch event, and the user interface of the application is displayed on the display 294.
In the embodiment of the present application, the display 294 may be an integral flexible display, or a tiled display formed of two rigid screens and a flexible screen located between the two rigid screens may be used.
After the processor 210 runs the camera control method based on distributed control provided in the embodiment of the present application, the terminal device may establish a communication connection with the second device capable of being directly connected through the antenna 1 and the antenna 2 as the first device, and according to the camera control method based on distributed control provided in the embodiment of the present application, the first device controls the local camera of the second device, and controls the local camera of the second device incapable of directly establishing a communication connection with the first device.
The camera 293 (front camera or rear camera, or one camera may be used as either a front camera or a rear camera) is used to capture still images or video. In general, the camera 293 may include a photosensitive element such as a lens group including a plurality of lenses (convex lenses or concave lenses) for collecting optical signals reflected by an object to be photographed and transmitting the collected optical signals to an image sensor. The image sensor generates an original image of the object to be photographed according to the optical signal.
Internal memory 221 may be used to store computer executable program code that includes instructions. The processor 210 executes various functional applications of the cellular phone 200 and data processing by executing instructions stored in the internal memory 221. The internal memory 221 may include a storage program area and a storage data area. The storage program area may store, among other things, code for an operating system, an application program (e.g., a camera application, a WeChat application, etc.), and so on. The storage data area may store data created during use of the handset 200 (e.g., images, video, etc. captured by the camera application), etc.
The internal memory 221 may also store one or more computer programs 1310 corresponding to the distributed control-based camera control method provided in embodiments of the present application. The one or more computer programs 1310 are stored in the internal memory 221 and configured to be executed by the one or more processors 210. The one or more computer programs 1310 comprise instructions that can be used to perform the steps in the respective embodiments of fig. 3 and fig. 6, and the computer programs 1310 can comprise one or more modules that perform these steps to implement multi-level indirect control of camera devices, suitable for use in different camera device usage scenarios.
In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
Of course, the codes of the camera control method based on distributed control provided in the embodiment of the present application may also be stored in an external memory. In this case, the processor 210 may run code of a camera control method based on distributed control stored in the external memory through the external memory interface 220.
The function of the sensor module 280 is described below.
The gyro sensor 280A may be used to determine the motion gesture of the cell phone 200. In some embodiments, the angular velocity of the cell phone 200 about three axes (i.e., x, y, and z axes) may be determined by the gyro sensor 280A. I.e., gyro sensor 280A may be used to detect the current motion state of the handset 200, such as shaking or being stationary.
When the display screen in the embodiment of the present application is a foldable screen, the gyro sensor 280A may be used to detect a folding or unfolding operation acting on the display screen 294. The gyro sensor 280A may report the detected folding operation or unfolding operation to the processor 210 as an event to determine the folding state or unfolding state of the display screen 294.
The acceleration sensor 280B can detect the magnitude of acceleration of the mobile phone 200 in various directions (typically three axes), i.e., the acceleration sensor 280B may also be used to detect the current motion state of the handset 200, such as shaking or being stationary. When the display screen in the embodiment of the present application is a foldable screen, the acceleration sensor 280B may be used to detect a folding or unfolding operation acting on the display screen 294. The acceleration sensor 280B may report the detected folding operation or unfolding operation as an event to the processor 210 to determine the folding state or unfolding state of the display screen 294.
Proximity light sensor 280G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The mobile phone emits infrared light outwards through the light emitting diode. The cell phone uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object in the vicinity of the handset. When insufficient reflected light is detected, the handset may determine that there is no object in the vicinity of the handset. When the display screen in the embodiment of the present application is a foldable screen, the proximity light sensor 280G may be disposed on the first screen of the foldable display screen 294, and the proximity light sensor 280G may detect the folding angle or the unfolding angle of the first screen and the second screen according to the optical path difference of the infrared signal.
The gyro sensor 280A (or the acceleration sensor 280B) may transmit detected motion state information (such as angular velocity) to the processor 210. The processor 210 determines whether it is currently in a handheld state or a foot rest state based on the motion state information (e.g., when the angular velocity is not 0, it is indicated that the mobile phone 200 is in a handheld state).
The fingerprint sensor 280H is used to collect a fingerprint. The mobile phone 200 can utilize the collected fingerprint characteristics to realize fingerprint unlocking, access an application lock, fingerprint photographing, fingerprint incoming call answering and the like.
The touch sensor 280K, also referred to as a "touch panel". The touch sensor 280K may be disposed on the display screen 294, and the touch sensor 280K and the display screen 294 form a touch screen, which is also referred to as a "touch screen". The touch sensor 280K is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 294. In other embodiments, the touch sensor 280K may also be disposed on the surface of the mobile phone 200 at a different location than the display 294.
Illustratively, the display 294 of the handset 200 displays a main interface that includes icons of a plurality of applications (e.g., camera applications, weChat applications, etc.). The user clicks on the icon of the camera application in the main interface by touching the sensor 280K, triggering the processor 210 to launch the camera application, opening the camera 293. The display 294 displays an interface of the camera application, such as a viewfinder interface.
The wireless communication function of the mobile phone 200 can be implemented by the antenna 1, the antenna 2, the mobile communication module 251, the wireless communication module 252, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset 200 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 251 may provide a solution including 2G/3G/4G/5G wireless communication applied to the cell phone 200. The mobile communication module 251 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 251 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 251 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 251 may be provided in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 251 may be disposed in the same device as at least some of the modules of the processor 210. In this embodiment of the present application, the mobile communication module 251 may also be used to interact with other terminal devices to perform information such as the first task command, the target task data, and the like.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to speaker 270A, receiver 270B, etc.), or displays images or video through display screen 294. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 251 or other functional module, independent of the processor 210.
The wireless communication module 252 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc. applied to the handset 200. The wireless communication module 252 may be one or more devices that integrate at least one communication processing module. The wireless communication module 252 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 252 may also receive a signal to be transmitted from the processor 210, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2. In this embodiment of the present application, the wireless communication module 252 is configured to transmit data between the processor 210 and other terminal devices under the control of the processor 210, for example, when the processor 210 runs the camera control method based on distributed control provided in the embodiment of the present application, the processor in the first device may control the wireless communication module 252 to send a first task command to a local camera of a second device that is directly connected to the first device, and send the first task command to a local camera of the second device that is unable to directly establish communication connection with the first device through mapping of the first virtual camera, so as to implement multi-stage indirect control of the camera device, and be applicable to use scenarios of different camera devices.
In addition, the mobile phone 200 may implement audio functions through an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, an application processor, and the like. Such as music playing, recording, etc. The handset 200 may receive key 290 inputs, generating key signal inputs related to user settings and function control of the handset 200. The cell phone 200 may use the motor 291 to generate a vibration alert (e.g., an incoming call vibration alert). The indicator 292 in the mobile phone 200 may be an indicator light, which may be used to indicate a state of charge, a change in power, an indication message, a missed call, a notification, etc. The SIM card interface 295 in the handset 200 is used to connect to a SIM card. The SIM card may be inserted into the SIM card interface 295 or removed from the SIM card interface 295 to allow contact and separation from the handset 200.
It should be understood that in practical applications, the mobile phone 200 may include more or fewer components than shown in fig. 1, and embodiments of the present application are not limited. The illustrated cell phone 200 is only one example, and cell phone 200 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The software system of the terminal device can adopt a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture or a cloud architecture. In the embodiment of the application, taking an Android system with a layered architecture as an example, a software structure of a terminal device is illustrated.
Fig. 2 shows a software architecture block diagram of a terminal device according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labour. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, the Android runtime (Android runtime) and system libraries, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as phone, camera, gallery, calendar, talk, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is arranged to provide communication functions for the terminal device. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as notification manager is used to inform that the download is complete, message alerts, etc. The notification manager may also be a notification in the form of a chart or scroll bar text that appears on the system top status bar, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, a text message is prompted in a status bar, a prompt tone is emitted, the terminal equipment vibrates, and an indicator light blinks.
The Android runtime includes a core library and virtual machines. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the function libraries that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio video encoding formats, such as: MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The embodiment of the application provides a camera control method based on distributed control, which can realize the control of local cameras in a plurality of second devices through one first device, and the second devices can directly establish connection with the first devices through the same local area network, or can establish indirect connection with the first devices in different local area networks through intermediate devices, so that the distributed and hierarchical control of the local cameras of different second devices is realized, and the camera control requirements of different application scenes can be met.
Fig. 3 shows a flowchart of a distributed control-based camera control method according to an embodiment of the present application. Fig. 4 shows an application scenario diagram of a camera control method based on distributed control according to an embodiment of the present application. As shown in fig. 3, the method is applied to a first device, and the method includes steps S11 to S14.
In step S11, a candidate camera controllable by the first device is displayed, where the candidate camera includes a local camera of the first device and a local camera of a second device mapped by a first virtual camera in the first device. The first device comprises at least one first virtual camera, each first virtual camera is used for realizing control of the local camera of the mapped second device, and at least one level of mapping relation exists between each first virtual camera and the local camera of the mapped second device. When the first virtual camera and the mapped local camera of the second device are in a multi-level mapping relationship, the second device and the first device are in different local area networks.
In this embodiment, the mapping relationship between the first virtual camera and the mapped local camera of the second device may indicate the mapping times of the first virtual camera for controlling the mapped local camera of the second device under the control of the first device, where the data transmission process obtained by the command and the task executed by the camera needs to be completed through several forwarding. The mapping relationship between the first virtual camera and the local camera of the second device mapped by the first virtual camera may be considered as the mapping relationship between the first device and the local camera of the second device mapped by the first virtual camera. The lower the level of the mapping relationship, the fewer the number of mappings between the first virtual camera and the local camera of the mapped second device, and the fewer the number of transmission forwarding. The mapping relation level comprises a zero-level mapping relation, a first-level mapping relation and a second-level mapping relation … N-level mapping relation, the level of the first-level mapping relation is smaller than that of the second-level mapping relation, and the second-level mapping relation and more than two-level mapping relations are multi-level mapping relations.
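For illustration, the level semantics described above can be summarised in a small helper: a camera at mapping relation level n is reached through n hops, so a command addressed to it is forwarded n − 1 times by intermediate devices. This arithmetic is an interpretation of the description, not a formula stated in the patent.

```java
// Minimal illustrative helper; the forwarding-count arithmetic is an assumption.
final class MappingLevel {
    private MappingLevel() {}

    /** 0 = local camera, 1 = first-level mapping, >= 2 = multi-level mapping. */
    static boolean isMultiLevel(int level) { return level >= 2; }

    /** Number of intermediate forwardings assumed for a command to reach a level-n camera. */
    static int intermediateForwards(int level) {
        return Math.max(0, level - 1);
    }
}
```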
To further explain the mapping relationship, an application scenario example is described below in conjunction with fig. 4. As shown in fig. 4, the first device A includes a local camera 100 and at least one first virtual camera 200. The mapping relationship between the local camera 100 of the first device A and the first device A is a zero-level mapping relationship. The second device mapped by a first virtual camera 200 may be a device capable of being directly connected to the first device through a network, such as the device B1 shown in fig. 4; a first-level mapping relationship is formed between the local camera in the device B1 and the corresponding first virtual camera 200, and the device B1 and the first device A may be in the same local area network. The second device mapped by a first virtual camera 200 may also be a device that cannot be directly connected to the first device through a network and communicates with the first device A indirectly through other devices, such as the devices B2 and B3 shown in fig. 4. The device B2 is mapped to the first virtual camera 202, and the first device A controls the local camera of the device B2 through the first virtual camera 202 along the communication path "first virtual camera 202 → second virtual camera 301 → local camera of the device B2", including sending the first task command and receiving the target task data; the device B2 and the first virtual camera 202 are in a two-level mapping relationship, the device B2 and the first device A are located in different local area networks, the device B2 and the device C1 may be located in the same local area network, and the device C1 and the first device A may be located in the same local area network. The device B3 is mapped to the first virtual camera 203, and the first device A controls the local camera of the device B3 through the first virtual camera 203 along the communication path "first virtual camera 203 → second virtual camera 303 → … → local camera of the device B3", including sending the first task command and receiving the target task data; the device B3 and the first virtual camera 203 are in an x-level mapping relationship (x = the number of virtual cameras in the path), the device B3 and the first device A are in different local area networks, the device C2 and the device C3 may be in the same local area network, and the device C2 and the first device A may be in the same local area network.
The devices B1, B2, and B3 may be of the same or different device types; for example, the device B1 is a mobile phone, the device B2 is a monitoring camera device, and the device B3 is an unmanned aerial vehicle with a camera. The local cameras of the devices C1, C2, and C3 may also be mapped into the first device A as virtual cameras after authorization; for simplicity of illustration, these mapping relationships are not shown in fig. 4, and those skilled in the art may implement them with reference to the examples of mapping the devices B1, B2, and B3 into the first device A according to the method provided in the present application, which is not repeated here.
In step S12, according to the detected task creation operation for the candidate cameras, a selected camera and a target task to be executed by the selected camera are determined from the candidate cameras.
In one possible implementation, step S12 may include: determining a selected camera according to the detected selection operation for the camera to be selected; and determining a target task to be executed by the selected camera according to the task setting operation aiming at the selected camera. The task parameters of the target task may include at least one of a task type, execution time information, and camera parameter settings when the selected camera executes the target task, where the task type includes at least one of the following: a photographing task, a shooting task and an image previewing task. In this way, the target task can be accurately set so that the selected camera performs the target task.
The execution time information and the camera parameters to be set differ according to the task type. For a photographing task, the execution time information may include the number of photographs and the photographing time, and the camera parameters may include parameters for taking a photograph, such as the photographing resolution, the photographing exposure duration, and the photographing mode (e.g., a daytime mode, a night mode, a panoramic mode, etc.). For a shooting task (video capture), the execution time information may include the start and stop times and the duration of the capture, and the camera parameters may include parameters for video capture, such as the capture resolution and the capture mode (e.g., a daytime mode, a night mode, etc.). For an image preview task, the execution time information may include the shooting time of the photograph to be previewed (a photograph taken in real time through the camera viewfinder or a photograph taken earlier by the camera), or the shooting time and duration of the video to be previewed (a video captured in real time or captured earlier by the camera), and the camera parameters may include the corresponding photographing or capture parameters, such as the resolution, the exposure duration, and the shooting mode (e.g., a daytime mode, a night mode, a panoramic mode, etc.).
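As an illustrative, non-limiting sketch of the task parameters described above, the following Kotlin types group the task type, the execution time information, and the camera parameter settings; the type and field names are hypothetical.

```kotlin
// Illustrative sketch of the task parameters; names are hypothetical.
enum class TaskType { PHOTOGRAPH, SHOOT_VIDEO, IMAGE_PREVIEW }

data class ExecutionTimeInfo(
    val startTime: String? = null,      // e.g. photographing time or video start time
    val durationSeconds: Long? = null,  // e.g. shooting duration
    val photoCount: Int? = null         // number of photographs for a photographing task
)

data class CameraParameters(
    val resolution: String? = null,     // e.g. "4000x3000"
    val exposureMillis: Long? = null,   // photographing exposure duration
    val mode: String? = null            // e.g. "day", "night", "panorama"
)

data class TargetTask(
    val type: TaskType,
    val time: ExecutionTimeInfo = ExecutionTimeInfo(),
    val params: CameraParameters = CameraParameters()
)

// Example: a 30-second night-mode shooting task.
val exampleTask = TargetTask(
    TaskType.SHOOT_VIDEO,
    ExecutionTimeInfo(durationSeconds = 30),
    CameraParameters(mode = "night")
)
```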
In this embodiment, when the first device is provided with a display screen, the candidate cameras may be displayed on the display screen as pictures and/or text. A speaker or the like provided in the first device may also be used to prompt the user with the candidate cameras by voice. Fig. 5 illustrates a process diagram of determining a target task according to an embodiment of the present application. As shown in fig. 4 and fig. 5, the first device is the first device A shown in fig. 4. The candidate cameras that the user can select, namely the local camera 100, the first virtual camera 201 mapped to the local camera of the device B1, the first virtual camera 202 mapped to the local camera of the device B2, and the first virtual camera 203 mapped to the local camera of the device B3, are displayed in the interface T1, and the selected camera, for example the local camera of the device B1 mapped by the first virtual camera 201, is determined according to detected triggering operations such as clicking and sliding. The interface T1 on the display screen may then be switched to the interface T2, in which the task types of photographing, shooting, and image preview are displayed, and the task type selected by the user, for example the shooting task, is determined according to detected triggering operations such as clicking and sliding on the selection control K in the interface T2. The interface T2 may then be switched to the interface T3, in which a camera parameter setting prompt corresponding to the selected task type is displayed; as shown in fig. 5, "duration" and the blank box behind it prompt the user to determine the shooting duration by direct input, pull-down selection, or the like, and "camera parameters" and the following blank boxes prompt the user to determine the camera parameters of the capture in the same way. The implementation manner of step S11 and step S12 may be set by those skilled in the art according to actual needs, which is not limited in this application.
In step S13, a first task command is generated according to a first camera identification of the selected camera in the first device and the target task.
In this embodiment, the generated first task command includes the target task that the selected camera needs to execute and the first camera identification. The first camera identification includes a first mapping relationship level and a first identity identifier. The first mapping relationship level may be used to represent the mapping relationship between the selected camera and the first virtual camera corresponding to the selected camera, or the mapping relationship between the selected camera and the first device. The first identity identifier is used to distinguish the selected camera from the other existing cameras in the first device that correspond to the same first mapping relationship level. The manner of expressing the mapping relationship level and the identity identifier can be set by those skilled in the art according to actual needs, which is not limited in this application. In this way, the different cameras controllable by the first device can be distinguished through the first identity identifier, ensuring accurate delivery of the first task command.
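As a purely illustrative sketch, the following Kotlin code models a first camera identification and a first task command, reusing the TargetTask type from the earlier sketch. The string encoding shown ("level digit followed by a padded identity") only mirrors the numbering example given later in this description and is an assumption, not a mandated format.

```kotlin
// Illustrative sketch; names and the encoding are hypothetical.
data class CameraIdentification(
    val mappingLevel: Int,    // first mapping relationship level
    val identityId: String    // first identity identifier
) {
    // e.g. level 1 + identity "002" -> "1002"
    fun encode(): String = "$mappingLevel${identityId.padStart(3, '0')}"
}

data class FirstTaskCommand(
    val cameraId: CameraIdentification,  // identifies the selected camera in the first device
    val task: TargetTask                 // target task to be executed
)

fun buildFirstTaskCommand(selected: CameraIdentification, task: TargetTask): FirstTaskCommand =
    FirstTaskCommand(selected, task)
```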
In step S14, the first task command is sent to the selected camera, so that the selected camera controls the target task to be executed according to the first task command.
In one possible implementation, step S14 may include at least one of the following operations one, two, and three:
Operation one: when the selected camera includes the local camera of the first device, the first task command is sent to the local camera of the first device so that the local camera of the first device executes the target task indicated by the first task command. For example, as shown in fig. 4, the first device A may send the first task command directly to its own local camera 100.
Operation two: when the selected camera includes the local camera of a second device mapped by a first virtual camera, and the selected camera and its corresponding first virtual camera are in a first-level mapping relationship, the first task command is forwarded to the local camera of the second device through the first virtual camera corresponding to the selected camera in the first device, so that the local camera of the second device executes the target task indicated by the first task command. For example, as shown in fig. 4, the first device A may control the first virtual camera 201 to transmit the first task command to the local camera of the device B1.
Operation three: when the selected camera includes the local camera of a second device mapped by a first virtual camera, and the selected camera is in a multi-level mapping relationship with its corresponding first virtual camera, at least one intermediate device that completes forwarding of the first task command is determined according to the first mapping relationship level between the selected camera and the corresponding first virtual camera, and the first task command is forwarded to the local camera of the second device in sequence through the virtual camera corresponding to the selected camera in each intermediate device, so that the local camera of the second device executes the target task indicated by the first task command. In the process of forwarding the first task command, and of forwarding the target task data and the command result described below, each intermediate device may forward directly in a transparent (pass-through) manner, or may encrypt the first task command, the target task data, and the command result according to a preset encryption manner, which is not limited in this application. For example, as shown in fig. 4, the first device A may control the first virtual camera 202 to send the first task command to the second virtual camera 301 of the device C1, and the second virtual camera 301 of the device C1 forwards the first task command to the local camera of the device B2; here the intermediate device is the device C1. Likewise, the first device A may control the first virtual camera 203 to send the first task command to the second virtual camera 302 of the device C2, the second virtual camera 302 of the device C2 forwards it to the third virtual camera 400 of the device C3, and so on, until the first task command reaches the local camera of the device B3; in this process, the intermediate devices are the devices C2, C3, and so on. In this way, by forwarding the first task command to the selected camera by means of at least one intermediate device, control of the local camera of a device that is in a different local area network from the first device can be achieved. A minimal dispatch sketch illustrating these three operations is given after this list.
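The following Kotlin sketch, building on the FirstTaskCommand type sketched above, illustrates how the three delivery operations could be dispatched on the mapping relationship level; the transport callbacks are hypothetical placeholders for the local camera, the first virtual camera, and the intermediate-device forwarding chain, respectively.

```kotlin
// Illustrative dispatch sketch for operations one to three; callbacks are hypothetical.
fun dispatchFirstTaskCommand(
    command: FirstTaskCommand,
    executeOnLocalCamera: (FirstTaskCommand) -> Unit,        // operation one
    sendViaFirstVirtualCamera: (FirstTaskCommand) -> Unit,   // operation two (one-level mapping)
    forwardViaIntermediates: (FirstTaskCommand, Int) -> Unit // operation three (multi-level mapping)
) {
    when (command.cameraId.mappingLevel) {
        0 -> executeOnLocalCamera(command)
        1 -> sendViaFirstVirtualCamera(command)
        else -> {
            // For an x-level mapping relationship, x - 1 intermediate devices forward the command.
            val intermediateCount = command.cameraId.mappingLevel - 1
            forwardViaIntermediates(command, intermediateCount)
        }
    }
}
```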
In one possible implementation, when the selected camera needs to execute conflicting target tasks issued by different devices in the same time period, the target task of the device with the higher priority may be selected for execution according to the priorities of the devices that issued the target tasks. Alternatively, one of the plurality of conflicting target tasks may be randomly selected for execution. Or, to avoid such conflicts altogether, the device where the selected camera is located may, when granting authorization, allow each camera it can control to be controlled, via a virtual camera, by only one controlling device at a time.
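The following short Kotlin sketch illustrates the priority-based resolution just described; the structure and the priority field are hypothetical.

```kotlin
// Illustrative sketch of resolving conflicting target tasks from different controlling devices.
data class PendingTask(val fromDevice: String, val devicePriority: Int, val description: String)

fun resolveConflict(conflicting: List<PendingTask>): PendingTask =
    // Prefer the task from the device with the highest priority; picking one of the
    // conflicting tasks at random would be the alternative mentioned above.
    conflicting.maxByOrNull { it.devicePriority }
        ?: throw IllegalArgumentException("no conflicting task to resolve")
```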
Fig. 6 shows a flowchart of a distributed control-based camera control method according to an embodiment of the present application. In one possible implementation, as shown in fig. 6, the method may further include "virtual camera creation step" steps S15 to S18. The "virtual camera creation step" may be performed before step S11 (as shown in fig. 6), or may be performed after step S11 (not shown in the figure). When the third device described below is the same device as the above-described second device, the "virtual camera creation step" may be performed before step S11. When the third device described below is a different device from the above-described second device, the "virtual camera creation step" may be performed before, after, or simultaneously with step S11, which is not limited in this application. A first virtual camera capable of controlling an authorized camera is created by the "virtual camera creation step".
In step S15, when a device connection request is detected, a third device that can connect with the first device and satisfies a connection condition is searched for, where the connection condition may include: the third device is provided with a local camera and/or at least one second virtual camera has been created in the third device. Each second virtual camera is used for realizing control of the local camera of the fourth device mapped by it, and at least one level of mapping relationship exists between the second virtual camera and the mapped local camera of the fourth device. Setting the connection condition ensures that the determined third device has a camera that the first device can control by creating a virtual camera. The third device may be any device such as a mobile phone, an unmanned aerial vehicle, etc., which is not limited in this application.
For example, as shown in fig. 4, assume that the third devices determined after the first device A searches, and which satisfy the connection condition, are the device B1 and the device C1. The first device A may then access these third devices to determine the cameras controllable in each third device. For example, it may be determined by access that the device B1 can only control its own local camera, while the device C1 can control its own local camera and the local camera of the device B2 mapped by the second virtual camera 301.
In step S16, a first authorization control request is sent to the third device, and a first authorization indication returned by the third device in response to the first authorization control request is received.
In one possible implementation, step S16 may include: selecting a first request camera, according to a detected request operation, from the local camera of the third device and/or the local camera of the fourth device mapped by the second virtual camera that the third device can control; and generating the first authorization control request according to the first request camera and sending the first authorization control request to the third device, so that the third device generates the first authorization indication according to the detected authorization operation for the first authorization control request. In this way, the first request camera can be selected according to the needs of the user, meeting the authorization control requirements of different users.
In this implementation, information about the third devices to which the first device can connect, and about the cameras that each third device can control, may be displayed on the display screen of the first device with reference to the display manner of the interface T1 in fig. 5. The third device that the user wants to connect to, and the first request camera controllable by that third device, are then determined according to the detected click operation (i.e., the request operation); the first authorization control request is generated accordingly and sent to the corresponding third device. After receiving the first authorization control request, the third device generates the first authorization indication according to the user's authorization operation (this process may refer to the implementation of the first device responding to the second authorization control request described below). Alternatively, speech uttered by the user may be recognized, and the third device to be connected and the first request camera controllable by it may be determined according to the recognition result. The information about the third devices and the cameras they can control may also be played to the user by voice, and the third device to be connected and the first request camera controllable by it may then be determined according to the speech uttered by the user. The manner of generating the first authorization control request according to the request operation may be set by those skilled in the art according to actual needs, which is not limited in this application.
In one possible implementation, after the third devices are determined, a first authorization control request covering all cameras controlled by each detected third device may also be sent directly, without generating the first authorization control request from a user request operation. This simplifies the operations required of the user during the creation process and speeds up the creation of virtual cameras.
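As a purely illustrative sketch of the authorization handshake described in step S16, the following Kotlin structures mirror the request and indication messages; the structures and field names are hypothetical assumptions, not a defined protocol.

```kotlin
// Illustrative sketch of the authorization handshake messages; names are hypothetical.
data class FirstAuthorizationControlRequest(
    val requestingDevice: String,          // e.g. the first device A
    val requestedCameraIds: List<String>   // second camera identifications in the third device
)

data class FirstAuthorizationIndication(
    val authorizedCameraIds: List<String>  // the subset actually authorized by the user
)

// Variant in which all controllable cameras of the third device are requested
// directly, without a per-camera request operation by the user.
fun requestAllControllableCameras(
    requester: String,
    controllableCameraIds: List<String>
): FirstAuthorizationControlRequest =
    FirstAuthorizationControlRequest(requester, controllableCameraIds)
```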
In step S17, after determining an authorized first authorized camera according to the first authorization indication, a second camera identifier of the first authorized camera in the third device is obtained, where the first authorized camera includes a local camera of the third device and/or a local camera of a fourth device mapped by the second virtual camera.
In this implementation manner, the second camera identifier of the first authorized camera in the third device likewise includes a mapping relationship level and an identity identifier; for their meaning, refer to the description of the first mapping relationship level and the first identity identifier above, which is not repeated here.
In step S18, a first camera identifier of the first authorized camera in the first device is determined according to the second camera identifier, and a first virtual camera for controlling the first authorized camera is created according to the first camera identifier of the first authorized camera.
In one possible implementation, step S18 may include: when the first authorized camera includes the local camera of a fourth device mapped by a second virtual camera, determining, according to the mapping relationship level in the second camera identifier that indicates the mapping relationship between the first authorized camera and the second virtual camera, a first mapping relationship level between the first authorized camera and the first virtual camera to be created for controlling the first authorized camera; determining a first identity identifier of the first authorized camera in the first device according to the identity identifier in the second camera identifier and the identity identifiers of the existing cameras corresponding to the first mapping relationship level in the first device; and determining a first camera identification of the first authorized camera in the first device according to the first mapping relationship level and the first identity identifier, and creating a first virtual camera for controlling the first authorized camera according to the first camera identification. In this way, after the first virtual camera is created, the first device can directly determine the mapping relationship level between itself and the first authorized camera according to the first camera identification and the first mapping relationship level of the first virtual camera, which facilitates command sending and data receiving.
In a possible implementation manner, determining the first identity identifier of the first authorized camera in the first device according to the identity identifier in the second camera identifier and the identity identifiers of the existing cameras corresponding to the first mapping relationship level in the first device may include: when the identity identifier in the second camera identifier is the same as the identity identifier of an existing camera corresponding to the first mapping relationship level in the first device, creating a first identity identifier of the first authorized camera in the first device according to a preset identity identifier creation rule; or, when the identity identifier in the second camera identifier differs from the identity identifiers of all existing cameras corresponding to the first mapping relationship level in the first device, determining the identity identifier in the second camera identifier as the first identity identifier of the first authorized camera in the first device. In this way, the uniqueness of the first identity identifier among the identity identifiers of all cameras of the first mapping relationship level controllable by the first device is ensured, so that cameras of the same mapping relationship level controllable by the first device can be distinguished.
In one possible implementation, step S18 may further include:
and when the first authorized camera comprises the local camera of the third device, determining a first-level mapping relation as a first mapping relation level between the first authorized camera and a first virtual camera which needs to be created and controls the first authorized camera.
In this implementation, the first mapping relationship level may be obtained by adding one level to the mapping relationship level in the second camera identification that indicates the mapping relationship between the first authorized camera and the second virtual camera. Different mapping relationship levels may be distinguished by different characters such as numbers or letters, and different cameras of the same mapping relationship level may be distinguished by different characters such as numbers or letters, or by the identity identifier of the camera in the device where the camera is located.
For example, referring to the first device A in fig. 4, assume that the identity identifier of the local camera of the device B1 is "2" in the device B1, the identity identifier of the local camera of the device C1 is "2" in the device C1, the identity identifier of the local camera of the device B2 is "1" in the device B2, and the identity identifier of the local camera of the first device A is "3" in the first device A. The zero-level, first-level, and second-level mapping relationships, and so on, are represented by 0000, 1000, 2000, and so on, respectively. Then,
In device B1, the camera identity of its local camera may be 0002.
In device B2, the camera identity of its local camera may be 0001.
In the device C1, the camera identification of its local camera may be 0002, and the camera identification of the local camera of the device B2 to which the second virtual camera 301 is mapped may be 1001.
In the first device A, since there is no identity conflict for them, the first camera identification of its local camera is 0003, the first camera identification of the local camera of the device C1 may be 1002, and the first camera identification of the local camera of the device B2 is 2001. However, the identity "2" of the local camera of the device B1 is already occupied at the first mapping relationship level by the local camera of the device C1, so the identity identifier of the local camera of the device B1 is adjusted, for example from "2" to "4", giving the local camera of the device B1 the first camera identification 1004.
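The following Kotlin sketch illustrates how a first camera identification could be derived from a second camera identification under the numbering scheme above ("level digit" followed by a padded identity). The collision rule used here (increment the identity until it is free) is only one possible "preset identity identifier creation rule" and need not produce exactly the value 1004 chosen in the example; all helper names are hypothetical.

```kotlin
// Illustrative sketch of deriving a first camera identification; names and the
// collision rule are assumptions.
fun deriveFirstCameraIdentification(
    secondCameraId: String,                                      // e.g. "0002" in the device B1
    usedIdentitiesPerLevel: MutableMap<Int, MutableSet<String>>  // identities already used per level in the first device
): String {
    val levelInThirdDevice = secondCameraId.take(1).toInt()
    val identity = secondCameraId.drop(1)
    val firstLevel = levelInThirdDevice + 1                      // one more hop as seen from the first device
    val used = usedIdentitiesPerLevel.getOrPut(firstLevel) { mutableSetOf() }
    var candidate = identity
    var next = identity.toInt()
    while (candidate in used) {                                  // collision with an existing camera at this level
        next += 1
        candidate = next.toString().padStart(identity.length, '0')
    }
    used += candidate
    return "$firstLevel$candidate"
}

fun main() {
    val used = mutableMapOf(0 to mutableSetOf("003"))            // local camera of the first device A: 0003
    println(deriveFirstCameraIdentification("0002", used))      // local camera of C1 -> 1002
    println(deriveFirstCameraIdentification("1001", used))      // local camera of B2 via C1 -> 2001
    println(deriveFirstCameraIdentification("0002", used))      // local camera of B1 collides -> 1003 under this rule
}
```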
In one possible implementation, the method may further include: receiving a command result returned after the selected camera receives the first task command. According to the command result, it can be determined whether the selected camera has successfully executed the target task, whether the selected camera has received the first task command, and other information. Receiving the command result may include at least one of the following:
When the selected camera comprises a local camera of the first device, directly receiving a command result sent by the local camera of the first device;
when the selected camera comprises a local camera of a second device mapped by a first virtual camera and the first virtual camera corresponding to the selected camera is in a first-level mapping relation, the first virtual camera corresponding to the selected camera is utilized to directly receive a command result sent by the local camera of the second device;
when the selected camera comprises a local camera of the second device mapped by the first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a multi-level mapping relation, receiving a command result sent by the local camera of the second device and forwarded by at least one intermediate device by utilizing the first virtual camera corresponding to the selected camera.
In one possible implementation, the method may further include: and when target task data obtained by the selected camera executing the target task is received, displaying images and/or videos according to the target task data. The first device may store target task data in addition to the presentation of images and/or video.
Wherein receiving the target task data comprises at least one of:
directly receiving target task data sent by a local camera of the first device when the selected camera comprises the local camera of the first device;
when the selected camera comprises a local camera of a second device mapped by a first virtual camera and the first virtual camera corresponding to the selected camera is in a first-level mapping relation, the first virtual camera corresponding to the selected camera is utilized to directly receive target task data sent by the local camera of the second device;
when the selected camera comprises a local camera of the second device mapped by the first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a multi-level mapping relation, receiving target task data which are sent by the local camera of the second device and are forwarded by at least one intermediate device by utilizing the first virtual camera corresponding to the selected camera.
In one possible implementation, the method may further include:
when a second authorization control request from a fifth device is received, displaying an authorization prompt according to a second request camera in the second authorization control request;
Determining an authorized second authorized camera according to the detected authorization operation for the second request camera;
generating a second authorization indication according to a first camera identification of the second authorization camera in the first device, and sending the second authorization indication to the fifth device, so that the fifth device creates a virtual camera for controlling the second authorization camera according to the second authorization indication.
In this way, the first device can directly control the local camera of the first device and the local camera of the second device mapped by the created first virtual camera according to the command issued by the user, and can also establish a control relationship with the fifth device so as to be controlled by the fifth device. The fifth device may or may not be provided with a local camera of its own.
For example, fig. 7 shows a schematic diagram of selectively authorizing cameras according to an embodiment of the present application. As shown in fig. 7, assume that the second request cameras in the second authorization control request are the local camera 100 of the first device A, the local camera of the device B1 mapped by the first virtual camera 201, the local camera of the device B2 mapped by the first virtual camera 202, and the local camera of the device B3 mapped by the first virtual camera 203 in fig. 4. An interface T4 may then be displayed on the display screen of the first device A, and the second authorized camera authorized by the user, for example the local camera of the device B1 mapped by the first virtual camera 201, is determined according to the detected triggering operations such as clicking and sliding on the selection control K of each second request camera. The first device A may then generate a second authorization indication from the first camera identification, in the first device A, of the local camera of the device B1 mapped by the first virtual camera 201, and send the second authorization indication to the fifth device D, so that the fifth device D creates a virtual camera 401 for controlling the second authorized camera according to the second authorization indication (refer to the implementation of step S17 and step S18 above).
Fig. 8 is a schematic diagram of an implementation process of a camera control method based on distributed control according to an embodiment of the present application. As shown in fig. 8, a process in which the first device a controls the local camera of the second device B2 through the intermediate device C1 is shown, wherein,
in the first task command issuing process:
the camera application in the application layer of the first device a detects task setting operations for a preview task, a photographing task and a photographing task sent by a user, and then a camera service module in the service layer of the first device a generates a photographing command 1 (hereinafter also referred to as a command 1 in the figure), a photographing command 2 (hereinafter also referred to as a command 2 in the figure) and a preview command 3 (hereinafter also referred to as a command 3 in the figure) according to the detected task device operations, wherein each command carries task parameters required for executing the task (the process of determining the command and the task parameters is described above, and is not repeated here). The generation time of the photographing command 1, the photographing command 2 and the preview command 3, or the time when the user issues the three commands are the same or different, and in fig. 8, in order to show the issuing process of the target task of three different task types, all three commands are shown, but in reality, the issuing time of the three commands does not affect each other. The virtual camera device in the virtual camera HAL in the HAL layer of the first device a (hardware abstraction layer, english Hardware Abstraction Layer) (i.e. the second virtual camera 301 in fig. 4) sends "command 1, command 2 and/or command 3" to the "distributed device virtualization platform service" in the service layer of the first device a, which sends "command 1, command 2 and/or command 3" to the intermediate device C1 in a transparent manner through its own transparent pipe.
The multi-device virtualization module in the application layer of the intermediate device C1 receives command 1, command 2, and/or command 3 through its transparent transmission pipeline and sends them, in a transparent manner, to the distributed device virtualization platform service of its service layer. That distributed device virtualization platform service in turn sends command 1, command 2, and/or command 3 to the second device B2 in a transparent manner through its transparent transmission pipeline.
The "multi-device virtualization module" of the second device B2 application layer receives "command 1, command 2, and/or command 3" through the pass-through pipe and then sends "command 1, command 2, and/or command 3" to the "camera service" of the service layer. The "camera service" sends specific target tasks to be executed to its camera device (i.e. the local camera of B2 shown in fig. 4) according to the "command 1, command 2 and/or command 3", so that the camera device can control the sensor of the camera hardware to perform specific operations such as shooting, image capturing, viewing preview and the like, and process the data collected by the sensor through the image signal processor ISP (Image Signal Processing) of the camera hardware to generate target task data. The target task data may include data corresponding to the command 1, data corresponding to the command 2, and/or data corresponding to the command 3.
In the uploading process of the target task data:
The camera device of the second device B2 sends the target task data of the different task types to the corresponding buffers of the camera service, and the multi-device virtualization module of the second device B2 then retrieves the target task data and sends it to the distributed device virtualization platform service of the intermediate device in a transparent manner through the transparent transmission pipeline.
After the distributed device virtualization platform service of the intermediate device C1 receives the target task data through the transparent pipeline, the target task data is sent to the first device a in a transparent manner through the transparent pipeline of the multi-device virtualization module of the intermediate device C1.
The distributed device virtualization platform service of the first device A receives the target task data through its transparent transmission pipeline, processes the target task data to determine which task type each piece of data corresponds to, and sends the target task data of the different task types to the corresponding buffers of the camera service in the service layer of the first device A. The camera application of the first device A can then perform preview display, photo display, and/or video display when it determines that target task data has been stored in the corresponding buffer and/or a display instruction of the user is received.
In the above manner, the first device A achieves control of the local camera of the second device B2, which is not in the same local area network as the first device A.
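The following Kotlin sketch abstracts the transparent (pass-through) forwarding chain of fig. 8, device A to the intermediate device C1 to the second device B2; all component names are hypothetical stand-ins for the camera application, camera service, virtual camera HAL, and distributed device virtualization platform service described above, and the payload handling is deliberately simplified.

```kotlin
// Illustrative sketch of the pass-through forwarding chain; names are hypothetical.
fun interface PassThroughPipe {
    fun forward(payload: ByteArray)  // relays bytes without interpreting them
}

class IntermediateDevice(private val nextHop: PassThroughPipe) {
    // An intermediate device only relays the payload in pass-through mode
    // (optionally encrypting it according to a preset scheme before relaying).
    fun relay(payload: ByteArray) = nextHop.forward(payload)
}

fun main() {
    // Terminal hop: the second device B2's camera service receiving a command.
    val deviceB2 = PassThroughPipe { payload -> println("B2 camera service got ${payload.size} bytes") }
    // Intermediate device C1 relays whatever it receives towards B2.
    val deviceC1 = IntermediateDevice(deviceB2)
    // First device A sends the serialized first task command into the chain.
    val firstTaskCommand = "command 1".toByteArray()
    deviceC1.relay(firstTaskCommand)
}
```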
The embodiment of the application provides a terminal device, which comprises: a local camera; a first virtual camera; a processor and a memory for storing processor-executable instructions; wherein the processor is configured to implement the above-described distributed control-based camera control method when executing the instructions.
Embodiments of the present application provide a non-transitory computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
Embodiments of the present application provide a computer program product comprising a computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, performs the above method.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical coding device such as a punch card or an in-groove protrusion structure having instructions stored thereon, and any suitable combination of the foregoing.
The computer readable program instructions or code described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (e.g., through the internet using an internet service provider). In some embodiments, aspects of the present application are implemented by personalizing electronic circuitry, such as programmable logic circuitry, a field programmable gate array (FPGA), or a programmable logic array (PLA), with state information of computer readable program instructions.
Various aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by hardware (e.g., circuits or ASICs (Application Specific Integrated Circuit, application specific integrated circuits)) which perform the corresponding functions or acts, or combinations of hardware and software, such as firmware, etc.
Although the invention is described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The embodiments of the present application have been described above, the foregoing description is exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. A camera control method based on distributed control, applied to a first device, the method comprising:
displaying a to-be-selected camera which can be controlled by the first device, wherein the to-be-selected camera comprises a local camera of the first device and a local camera of a second device mapped by a first virtual camera in the first device;
according to the detected task creating operation aiming at the camera to be selected, determining the selected camera and a target task required to be executed by the selected camera from the camera to be selected;
generating a first task command according to a first camera identification of the selected camera in the first device and the target task;
sending the first task command to the selected camera so that the selected camera can control the target task to be executed according to the first task command,
wherein the first device comprises at least one first virtual camera, each first virtual camera is used for realizing control of the local camera of the mapped second device, at least one level of mapping relation exists between each first virtual camera and the local camera of the mapped second device,
when a first-level mapping relation is formed between a first virtual camera and a local camera of a mapped second device, the first device and the second device are directly connected;
When the first virtual camera and the mapped local camera of the second device are in a multi-level mapping relationship, the second device and the first device are in different local area networks, the second device establishes indirect connection with the first device by means of at least one intermediate device, and each intermediate device in the at least one intermediate device utilizes the set virtual camera to forward data between the first virtual camera and the mapped local camera of the second device.
2. The method according to claim 1, wherein the method further comprises:
when a device connection request is detected, searching for a third device which can be connected with the first device and meets connection conditions, wherein the connection conditions comprise: the third device is provided with a local camera and/or at least one second virtual camera is created in the third device;
sending a first authorization control request to the third device, and receiving a first authorization instruction returned by the third device in response to the first authorization control request;
after determining an authorized first authorization camera according to the first authorization instruction, acquiring a second camera identification of the first authorization camera in the third device, wherein the first authorization camera comprises a local camera of the third device and/or a local camera of a fourth device mapped by the second virtual camera;
Determining a first camera identification of the first authorized camera in the first device according to the second camera identification, creating a first virtual camera for controlling the first authorized camera according to the first camera identification of the first authorized camera,
each second virtual camera is used for realizing control of a local camera of the fourth device mapped by the second virtual camera, and at least one level of mapping relation exists between the second virtual camera and the mapped local camera of the fourth device.
3. The method of claim 2, wherein sending a first authorization control request to the third device and receiving a first authorization indication returned by the third device in response to the first authorization control request comprises:
selecting a first request camera according to request operation of a local camera of the third device and/or a local camera of a fourth device mapped by the second virtual camera, wherein the request operation is controlled by the third device;
and generating the first authorization control request according to the first request camera, and sending the first authorization control request to the third device, so that the third device generates the first authorization indication according to the detected authorization operation for the first authorization control request.
4. The method of claim 2, wherein determining a first camera identity of the first authorized camera in the first device from the second camera identity and creating a first virtual camera for controlling the first authorized camera from the first camera identity of the first authorized camera comprises:
when the first authorized camera comprises a local camera of a fourth device mapped by a second virtual camera, determining a first mapping relation level between the first authorized camera and a first virtual camera which needs to be created and controls the first authorized camera according to the mapping relation level indicating the mapping relation between the first authorized camera and the second virtual camera in the second camera identifier;
determining a first identity of the first authorized camera in the first device according to the identity identifier in the second camera identifier and the identity of the existing camera corresponding to the first mapping relation level in the first device;
and determining a first camera identification of the first authorized camera in the first device according to the first mapping relation level and the first identity identification, and creating a first virtual camera for controlling the first authorized camera according to the first camera identification.
5. The method of claim 4, wherein determining a first camera identity of the first authorized camera in the first device based on the second camera identity, and creating a first virtual camera for controlling the first authorized camera based on the first camera identity of the first authorized camera, comprises:
and when the first authorized camera comprises the local camera of the third device, determining a first-level mapping relation as a first mapping relation level between the first authorized camera and a first virtual camera which needs to be created and controls the first authorized camera.
6. The method of claim 4, wherein determining the first identity of the first authorized camera in the first device based on the identity identifier in the second camera identifier and the identity of the existing camera in the first device corresponding to the first mapping relation level comprises:
when the identity in the second camera identifier exists in the identity of the existing camera corresponding to the first mapping relation level in the first equipment, a first identity identifier of the first authorized camera in the first equipment is created according to a preset identity identifier creation rule; or alternatively
And when the identity identifier in the second camera identifier is different from the identity identifier of the existing camera corresponding to the first mapping relation level in the first equipment, determining the identity identifier in the second camera identifier as the first identity identifier of the first authorized camera in the first equipment.
7. The method of claim 1, wherein determining a selected camera and a target task to be performed by the selected camera from among the candidate cameras according to the detected task creation operation for the camera to be selected, comprises:
determining a selected camera according to the detected selection operation for the camera to be selected;
determining a target task to be executed by the selected camera according to a task setting operation for the selected camera,
the task parameters of the target task comprise at least one of task type, execution time information and camera parameter setting when the selected camera executes the target task, and the task type comprises at least one of the following: a photographing task, a shooting task and an image previewing task.
8. The method of claim 1, wherein sending the first task command to the selected camera to cause the selected camera to control execution of the target task in accordance with the first task command comprises at least one of:
When the selected camera comprises a local camera of the first device, sending the first task command to the local camera of the first device so that the local camera of the first device executes a target task indicated by the first task command;
when the selected camera comprises a local camera of a second device mapped by a first virtual camera and a first virtual camera corresponding to the selected camera is in a first-level mapping relation, forwarding the first task command to the local camera of the second device through a first virtual camera corresponding to the selected camera in the first device, so that the local camera of the second device executes a target task indicated by the first task command;
when the selected camera comprises a local camera of the second device mapped by the first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a multi-level mapping relationship, determining at least one intermediate device which completes forwarding of the first task command according to a first mapping relationship level between the selected camera and the corresponding first virtual camera, and forwarding the first task command to the local camera of the second device through the virtual camera corresponding to the selected camera in each intermediate device in sequence, so that the local camera of the second device executes the target task indicated by the first task command.
9. The method according to claim 1 or 8, characterized in that the method further comprises:
when receiving target task data obtained by the selected camera executing the target task, performing image and/or video display according to the target task data,
wherein receiving the target task data comprises at least one of:
directly receiving target task data sent by a local camera of the first device when the selected camera comprises the local camera of the first device;
when the selected camera comprises a local camera of a second device mapped by a first virtual camera and the first virtual camera corresponding to the selected camera is in a first-level mapping relation, the first virtual camera corresponding to the selected camera is utilized to directly receive target task data sent by the local camera of the second device;
when the selected camera comprises a local camera of the second device mapped by the first virtual camera and the selected camera and the first virtual camera corresponding to the selected camera are in a multi-level mapping relation, receiving target task data which are sent by the local camera of the second device and are forwarded by at least one intermediate device by utilizing the first virtual camera corresponding to the selected camera.
10. The method according to claim 1, wherein the method further comprises:
when a second authorization control request from a fifth device is received, displaying an authorization prompt according to a second request camera in the second authorization control request;
determining an authorized second authorized camera according to the detected authorization operation for the second request camera;
generating a second authorization indication according to a first camera identification of the second authorization camera in the first device, and sending the second authorization indication to the fifth device, so that the fifth device creates a virtual camera for controlling the second authorization camera according to the second authorization indication.
11. A terminal device, comprising:
a local camera;
a first virtual camera;
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any of claims 1-10 when executing the instructions.
12. A non-transitory computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1-10.
CN202210973225.XA 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment Active CN115484404B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210973225.XA CN115484404B (en) 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210973225.XA CN115484404B (en) 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment
CN202011308989.4A CN114520867B (en) 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202011308989.4A Division CN114520867B (en) 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment

Publications (2)

Publication Number Publication Date
CN115484404A CN115484404A (en) 2022-12-16
CN115484404B true CN115484404B (en) 2023-06-02

Family

ID=81594926

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202011308989.4A Active CN114520867B (en) 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment
CN202210973225.XA Active CN115484404B (en) 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202011308989.4A Active CN114520867B (en) 2020-11-20 2020-11-20 Camera control method based on distributed control and terminal equipment

Country Status (2)

Country Link
CN (2) CN114520867B (en)
WO (1) WO2022105716A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116366957B (en) * 2022-07-21 2023-11-14 荣耀终端有限公司 Virtualized camera enabling method, electronic equipment and cooperative work system


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPQ412899A0 (en) * 1999-11-18 1999-12-09 Prescient Networks Pty Ltd A gateway system for interconnecting wireless ad-hoc networks
US8429630B2 (en) * 2005-09-15 2013-04-23 Ca, Inc. Globally distributed utility computing cloud
JP5429882B2 (en) * 2010-12-08 2014-02-26 Necアクセステクニカ株式会社 Camera synchronization system, control device, and camera synchronization method used therefor
US9720508B2 (en) * 2012-08-30 2017-08-01 Google Technology Holdings LLC System for controlling a plurality of cameras in a device
CN104639418B (en) * 2015-03-06 2018-04-27 北京深思数盾科技股份有限公司 The method and system that structure LAN is transmitted into row information
AU2016228525B2 (en) * 2015-03-12 2021-01-21 Alarm.Com Incorporated Virtual enhancement of security monitoring
US20170332009A1 (en) * 2016-05-11 2017-11-16 Canon Canada Inc. Devices, systems, and methods for a virtual reality camera simulator
EP4085593A2 (en) * 2020-02-20 2022-11-09 Huawei Technologies Co., Ltd. Integration of internet of things devices

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107154868A (en) * 2017-04-24 2017-09-12 北京小米移动软件有限公司 Smart machine control method and device
CN109600549A (en) * 2018-12-14 2019-04-09 北京小米移动软件有限公司 Photographic method, device, equipment and storage medium
CN111083364A (en) * 2019-12-18 2020-04-28 华为技术有限公司 Control method, electronic equipment, computer readable storage medium and chip

Also Published As

Publication number Publication date
WO2022105716A1 (en) 2022-05-27
CN114520867B (en) 2023-02-03
CN115484404A (en) 2022-12-16
CN114520867A (en) 2022-05-20

Similar Documents

Publication Publication Date Title
JP2023514631A (en) Interface layout method, apparatus and system
CN111666055B (en) Data transmission method and device
CN112286477B (en) Screen projection display method and related product
CN114520868B (en) Video processing method, device and storage medium
CN111221845A (en) Cross-device information searching method and terminal device
CN115297405A (en) Audio output method and terminal equipment
CN114885328B (en) Vehicle-computer connection method and device
EP4213026A1 (en) Fault detection method and electronic terminal
CN114065706A (en) Multi-device data cooperation method and electronic device
CN114666433B (en) Howling processing method and device in terminal equipment and terminal
CN115484404B (en) Camera control method based on distributed control and terminal equipment
WO2022105793A1 (en) Image processing method and device
CN116954409A (en) Application display method and device and storage medium
CN115114607A (en) Sharing authorization method, device and storage medium
CN116974496A (en) Multi-screen interaction method and electronic equipment
CN114615362B (en) Camera control method, device and storage medium
EP4273679A1 (en) Method and apparatus for executing control operation, storage medium, and control
CN114519935B (en) Road identification method and device
WO2022179471A1 (en) Card text recognition method and apparatus, and storage medium
WO2022194005A1 (en) Control method and system for synchronous display across devices
CN114513760B (en) Font library synchronization method, device and storage medium
CN117850718A (en) Display screen selection method and electronic equipment
CN116108118A (en) Method and terminal equipment for generating thermal map
CN115994051A (en) Picture backup system, method and equipment
CN116801346A (en) Data transmission method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant