CN113055599A - Camera switching method and device, electronic equipment and readable storage medium - Google Patents
- Publication number
- CN113055599A (application CN202110336564.2A)
- Authority
- CN
- China
- Prior art keywords
- target
- camera
- calibration pattern
- target image
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
- H04N23/661—Transmitting camera control signals through networks, e.g. control via the Internet
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
Abstract
The application discloses a camera switching method and device, electronic equipment and a readable storage medium, and belongs to the technical field of image processing. The camera switching method is applied to electronic equipment, and the electronic equipment comprises a host and a plurality of cameras separated from the host; the switching method of the camera comprises the following steps: receiving a target image acquired by the camera, wherein the target image comprises a target calibration pattern; determining position information of the camera based on the target image; and determining a target camera to be switched from the plurality of cameras based on the position information.
Description
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a camera switching method and device, electronic equipment and a readable storage medium.
Background
With the development of technology, the shooting performance of mobile terminals has become stronger, and shooting playability and functionality have grown increasingly rich. Mobile terminals with a plurality of separable cameras have now appeared; when shooting with such a terminal, the cameras need to be numbered, and only after the user memorizes each camera's number and corresponding position information can the user manually switch between camera feeds to shoot. In the process of implementing the present application, the inventor found that the prior art has at least the following problems: this switching mode places high technical demands on the user, cameras are easily switched at the wrong moment, and the shooting success rate is low.
Disclosure of Invention
An object of the embodiments of the present application is to provide a camera switching method, a camera switching device, an electronic device, and a readable storage medium, which can solve the problem that a plurality of separate cameras are difficult to switch during shooting.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a method for switching cameras, where the method is applied to an electronic device, where the electronic device includes a host and a plurality of cameras separated from the host; the method comprises the following steps:
receiving a target image acquired by the camera, wherein the target image comprises a target calibration pattern;
determining position information of the camera based on the target image;
and determining a target camera to be switched from the plurality of cameras based on the position information.
In a second aspect, an embodiment of the present application provides a switching apparatus for cameras, where the apparatus is applied to an electronic device, where the electronic device includes a host and a plurality of cameras separated from the host; the device comprises:
the first receiving module is used for receiving a target image acquired by the camera, and the target image comprises a target calibration pattern;
the first determining module is used for determining the position information of the camera based on the target image;
and the second determining module is used for determining the target camera to be switched from the plurality of cameras according to the position information.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, the position information of the multiple cameras is first determined through image recognition, and the target camera is then determined based on the position information, which can reduce the difficulty of continuous multi-camera shooting and editing and improve the shooting success rate.
Drawings
Fig. 1 is a flowchart of a switching method of a camera provided in an embodiment of the present application;
fig. 2 is a schematic diagram illustrating a principle of a switching method of a camera according to an embodiment of the present application;
fig. 3 is one of schematic diagrams of target images of a switching method of a camera provided in an embodiment of the present application;
fig. 4 is a second schematic diagram of a target image of a switching method of a camera according to the embodiment of the present application;
fig. 5 is an interface schematic diagram of a switching method of a camera provided in the embodiment of the present application;
fig. 6 is a structural diagram of a switching device of a camera provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 8 is a second hardware schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application; it is obvious that the described embodiments are only some, rather than all, of the embodiments of the present application.
The terms first, second and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, such that the embodiments of the application are capable of operation in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The switching method of the camera, the switching device of the camera, the electronic device and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The switching method of the camera can be applied to the electronic equipment, and can specifically be executed by hardware or software in the electronic equipment, although execution is not limited to such hardware or software. The execution subject of the camera switching method may be the host 210, or a control device of the host 210, or the like.
In the following embodiments, a host 210 including a display screen and a touch-sensitive surface is described. However, it should be understood that host 210 may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The embodiment of the application provides a camera switching method; the execution subject of the camera switching method can be electronic equipment, and the electronic equipment can be a mobile terminal.
The camera switching method is applied to electronic equipment, the electronic equipment comprises a host 210 and a plurality of cameras, the cameras can be separated from the host 210, the cameras are in communication connection with the host 210, the communication connection mode can be a wireless connection mode, and for example, the cameras and the host 210 can be connected through a Bluetooth module or an infrared module.
The host 210 may control the operating state of each camera, the image collected by each camera may be sent to the host 210, the host 210 may include a display screen, and the host 210 may determine a target camera from a plurality of cameras as needed to display the image collected by the target camera.
The execution subject of the camera switching method may be the host 210 or a control device of the host 210; alternatively, the execution subject may be a server communicatively connected to the host 210.
As shown in fig. 1, the switching method of the camera includes: step 110, step 120 and step 130.
In the method for switching the cameras, the plurality of cameras need to be arranged first.
The shooting object corresponding to the target calibration pattern 211b can take various forms:
first, the shooting object corresponding to the target calibration pattern 211b may be a physical object.
In this embodiment, the physical object is placed at the target position, and each camera can capture the physical object when at the target position.
The camera takes a picture of the physical object, and forms a target calibration pattern 211b in the target image 310.
In some embodiments, the physical object may be the host 210 itself. The outer profile parameters of the host 210 are known information, and since both the host 210 and the cameras are already on hand during shooting, calibrating with the host 210 can reduce the difficulty of calibration.
Second, the shooting object corresponding to the target calibration pattern 211b may be a virtual pattern.
In this embodiment, before receiving the target image 310 captured by the camera in step 110, the method may further include:
the display screen of the control host 210 displays the first calibration pattern 211a, and the target calibration pattern 211b is obtained by the camera shooting the first calibration pattern 211 a.
It is understood that the shooting object corresponding to the target calibration pattern 211b may be the first calibration pattern 211a displayed on the display screen of the host 210.
On one hand, the first calibration pattern 211a displayed on the display screen of the host 210 has a more distinct outline, and correspondingly, the generated target calibration pattern 211b has a more distinct outline, which facilitates image processing and recognition in subsequent steps.
In the actual implementation process, the first calibration pattern 211a can be designed to be relatively regular and to contrast strongly with the environment, making it easier to recognize.
For example, as shown in fig. 2, the first calibration pattern 211a may be designed in a grid shape.
On the other hand, the first calibration pattern 211a is displayed through the display screen of the host 210, so that different hosts 210 can each conveniently display the same first calibration pattern 211a, in other words, different users can use the same first calibration pattern 211a in different scenes.
Therefore, the subsequent image processing has smaller calculation amount and higher accuracy.
In an actual implementation, the method may further include:
receiving a first input of a user; and in response to the first input, controlling the display screen of the host 210 to display the first calibration pattern 211a and sending a shooting instruction to the camera.
In this step, the first input is used to control the display screen of the host 210 to display the first calibration pattern 211 a.
Wherein the first input may be expressed in at least one of the following ways:
first, the first input may be represented as a touch input, including but not limited to a click input, a slide input, a press input, and the like.
In this embodiment, receiving the first input of the user may be represented by receiving a touch operation of the user in a display area of the display screen of the host 210.
In order to reduce the user's misoperation rate, the active area of the first input can be limited to a specific region, such as the upper middle area of the shooting preview interface; alternatively, while the shooting preview interface is displayed, a target control can be shown on the current interface, and touching the target control triggers the first input; or the first input can be set as multiple consecutive taps on the display area within a target time interval.
Second, the first input may be represented as a physical key input.
In this embodiment, the body of the host 210 is provided with a physical key for triggering camera positioning before shooting, and receiving a first input of the user may be expressed as receiving a first input of the user pressing the corresponding physical key; the first input may also be a combined operation of pressing a plurality of physical keys simultaneously.
Third, the first input may be represented as a voice input.
In this embodiment, the host 210 may trigger the display of the first calibration pattern 211a upon receiving a voice command such as "display the first calibration pattern".
Of course, in other embodiments, the first input may also be in other forms, including but not limited to character input, and the like, which may be determined according to actual needs, and this is not limited in this application.
The host 210 may control the display screen of the host 210 to display the first calibration pattern 211a in response to the first input after receiving the first input, and send a shooting instruction to the camera in a case where the display screen displays the first calibration pattern 211 a. Thus, the camera can capture the target image 310, and the target image 310 includes the target calibration pattern 211 b.
It can be understood that, before the main shooting, each of the multiple cameras collects a target image 310 in step 110; the target image 310 collected by each camera includes the target calibration pattern 211b, and the target calibration patterns 211b in the multiple target images 310 correspond to the same shooting object.
The distances and angles from the cameras to the shooting object differ from camera to camera, as do the cameras' mounting angles, so the size, shape and position of the target calibration pattern 211b in the target image 310 acquired by each camera also differ.
In other words, the target calibration pattern 211b in each target image 310 is related to the position information of the corresponding camera, so that the calibration of the camera position information can be realized by performing image analysis on the target image 310.
Step 120, determining the position information of the camera based on the target image 310, can be implemented as follows:
analyzing the distortion and the scaling of the target calibration pattern 211b in each target image 310, as well as the position of the target calibration pattern 211b in the target image 310, to determine the position information of each camera.
The manner of determination includes, but is not limited to: the target image 310 is compared with a sample target image 310 for which position information is previously determined, or position information of the camera is obtained through a trained model (or neural network).
The position information may be position information of the camera relative to the host 210, and particularly, when the shooting object corresponding to the target calibration pattern 211b is the first calibration pattern 211a displayed on the display screen of the host 210, the position information may be determined conveniently.
In some embodiments, step 120 of determining the position information of the camera based on the target image 310 may include:
determining the size of the target object in the target image 310, the target object being determined based on the target calibration pattern 211 b;
based on the size of the target object, position information of the camera is determined.
In an embodiment, one or more target objects may be predetermined, and for each target image 310, the size of the target object may be determined from that target image 310; the size may be a length, a distance, or a region area, each measured in pixels.
The size is determined for each target image 310. Since the target object is determined based on the target calibration pattern 211b, and the target calibration patterns 211b in the plurality of target images 310 correspond to the same shooting object, the size is related to the position information of the camera, and the position information of the corresponding camera can be determined by analyzing the size of the target object.
Because the target object is defined in advance and the position information of the camera is determined from the size of the target object before actual shooting, the calculation in the positioning process is simplified, the calculation amount is reduced, the calibration time is shortened, and the whole shooting process becomes simpler.
In some embodiments, the step of determining the size of the target object in the target image 310 includes:
determining the length of the target edge of the target calibration pattern 211b in the target image 310, and determining the orientation information of the target calibration pattern 211b in the target image 310;
determining position information of the camera based on the size of the target object, including:
and determining the position information of the camera based on the length and the azimuth information of the target edge.
It will be appreciated that in this embodiment, the size of the target object is determined from two different dimensions:
first, the size of the target object itself.
The size may include the length of the target edge of the target calibration pattern 211b in the target image 310. For a single edge length, the farther the camera is from the first calibration pattern 211a, the smaller the target calibration pattern 211b obtained by capturing the first calibration pattern 211a; the closer the camera is to the first calibration pattern 211a, the larger the captured target calibration pattern 211b. For the relative lengths of the target edges, when the rotation angle of the camera or the included angle between the camera and the first calibration pattern 211a changes, the ratio between the target edge lengths also changes.
That is, the distance from the camera to the first calibration pattern 211a is highly correlated with the length of a single target edge of the target calibration pattern 211b, and the angle of the camera is highly correlated with the ratio between the lengths of multiple target edges of the target calibration pattern 211b.
By selecting the target edge which is easy to distinguish and calculate in advance from the target calibration pattern 211b, it is convenient to accurately obtain at least part of the position information of the camera in the subsequent calculation.
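The correlation described above can be illustrated with a simple pinhole-camera sketch. This is a minimal illustration under an assumed pinhole model; the focal length and physical edge length below are hypothetical constants, and the patent itself does not prescribe this model or these values.

```python
# Illustrative only: a pinhole-model sketch of why projected edge length
# tracks distance, and why the left/right length ratio tracks viewing angle.
# f_px (focal length in pixels) and edge_len_m (physical edge length in
# meters) are hypothetical calibration constants.
def estimate_distance(edge_len_px: float, f_px: float = 1000.0,
                      edge_len_m: float = 0.15) -> float:
    # Pinhole projection: edge_len_px ≈ f_px * edge_len_m / distance,
    # so distance ≈ f_px * edge_len_m / edge_len_px.
    return f_px * edge_len_m / edge_len_px

def left_right_ratio(l_px: float, r_px: float) -> float:
    # ≈ 1.0 when the camera faces the pattern head-on; the ratio deviates
    # from 1.0 as the viewing angle grows, which is why it encodes the angle.
    return l_px / r_px
```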
In an actual implementation, the length of the target edge of the target calibration pattern 211b includes: the length of the left side and the length of the right side of the target calibration pattern 211 b.
Fig. 3 shows the target calibration pattern 211b captured by the first camera 221, where l1 is the length of the left side of the target calibration pattern 211b, and r1 is the length of the right side of the target calibration pattern 211 b.
Fig. 4 shows the target calibration pattern 211b captured by the second camera 222, where l2 is the length of the left side of the target calibration pattern 211b, and r2 is the length of the right side of the target calibration pattern 211 b.
Second, the relative position relationship between the target object and the entire target image 310.
The dimensions may include orientation information of the target calibration pattern 211b in the target image 310.
It can be understood that the rotation angle of the camera and the angle between the camera and the first calibration pattern 211a cause the distortion of the target calibration pattern 211b corresponding to the first calibration pattern 211a, so that the orientation information of the target calibration pattern 211b in the whole target image 310 changes.
Correspondingly, by analyzing the orientation information of the target calibration pattern 211b in the target image 310, the rotation angle of the camera and the included angle between the camera and the first calibration pattern 211a can be obtained.
In practical implementation, the orientation information of the target calibration pattern 211b in the target image 310 includes: the distance of the left side to the left boundary of target image 310 and the distance of the right side to the right boundary of target image 310.
Fig. 3 shows the target calibration pattern 211b captured by the first camera 221, where m1 is the distance from the left side of the target calibration pattern 211b to the left boundary of the target image 310, and n1 is the distance from the right side of the target calibration pattern 211b to the right boundary of the target image 310.
Fig. 4 shows the target calibration pattern 211b captured by the second camera 222, where m2 is the distance from the left side of the target calibration pattern 211b to the left boundary of the target image 310, and n2 is the distance from the right side of the target calibration pattern 211b to the right boundary of the target image 310.
The calculations of the various dimensions described above may be based on computer vision processing.
In the actual implementation, for determining the distance m from the left side of the target calibration pattern 211b to the left boundary of the target image 310 and the distance n from the right side of the target calibration pattern 211b to the right boundary of the target image 310, the processing method may include the following steps:
(1) The received target image 310 is denoised.
It can be understood that, during acquisition and transmission, the target image 310 may pick up noise from interference by the camera and the external environment; such noise appears as regions of large gray-scale variation and is easily misidentified as false edges in the subsequent recognition process.
In some embodiments, noise reduction can be achieved with a Gaussian blur algorithm to obtain a denoised target image; performing the size calculation on the denoised target image yields higher accuracy with a smaller calculation amount.
For a target image 310 with heavy noise, the result can be improved by increasing the kernel size of the Gaussian blur.
After the denoised target image is obtained, m and n can be obtained using the Canny edge detection algorithm; the specific calculation is as follows:
(2) Non-maximum suppression is performed on the denoised target image to determine gradients and edges.
The edges of the denoised target image and their corresponding gradients are determined, where an edge is a coordinate sequence and the gradient quantifies the edge strength; by filtering out edges whose gradients are too small or too large, only the edge data between the set minimum and maximum thresholds is retained.
(3) Double-threshold limiting.
After non-maximum suppression, a great number of candidate edge points still remain; by setting a low threshold and a high threshold, edge data below the low threshold and above the high threshold are removed, and edge data between the two thresholds is marked as weak edges.
(4) The distance m from the leftmost edge to the left boundary of the picture and the distance n from the rightmost edge to the right boundary of the picture are calculated.
After the weak edges in the denoised target image are obtained, the distance from the leftmost edge to the left boundary of the picture can be calculated and denoted as m; the distance from the rightmost edge to the right boundary of the picture is denoted as n.
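As a concrete reference, the following is a minimal sketch of the m/n pipeline above, assuming OpenCV (cv2) and NumPy are available; the kernel size and Canny thresholds are illustrative values, not ones specified in this application.

```python
import cv2
import numpy as np

def left_right_margins(target_image_bgr, low_thresh=50, high_thresh=150):
    gray = cv2.cvtColor(target_image_bgr, cv2.COLOR_BGR2GRAY)
    # (1) Noise reduction via Gaussian blur; enlarge the kernel for noisier images.
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)
    # (2)-(3) cv2.Canny internally performs gradient computation,
    # non-maximum suppression and double-threshold edge tracking.
    edges = cv2.Canny(denoised, low_thresh, high_thresh)
    rows, cols = np.nonzero(edges)
    if cols.size == 0:
        return None  # no edges detected
    # (4) Distances from the leftmost/rightmost edge pixels to the image borders.
    m = int(cols.min())
    n = int(edges.shape[1] - 1 - cols.max())
    return m, n
```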
For determining the length l of the left side of the target calibration pattern 211b and the length r of the right side of the target calibration pattern 211b, the processing method may include the following steps:
(1) The received target image 310 is denoised.
It can be understood that, during acquisition and transmission, the target image 310 may pick up noise from interference by the camera and the external environment; such noise appears as regions of large gray-scale variation and is easily misidentified as false edges in the subsequent recognition process.
In some embodiments, noise reduction can be achieved with a Gaussian blur algorithm to obtain a denoised target image; performing the size calculation on the denoised target image yields higher accuracy with a smaller calculation amount.
For a target image 310 with heavy noise, the result can be improved by increasing the kernel size of the Gaussian blur.
After the denoised target image is obtained, l and r can be obtained using the Canny edge detection algorithm; the specific calculation is as follows:
(2) Non-maximum suppression is performed on the denoised target image to determine gradients and edges.
The edges of the denoised target image and their corresponding gradients are determined, where an edge is a coordinate sequence and the gradient quantifies the edge strength; by filtering out edges whose gradients are too small or too large, only the edge data between the set minimum and maximum thresholds is retained.
(3) Double-threshold limiting.
After non-maximum suppression, a great number of candidate edge points still remain; by setting a low threshold and a high threshold, edge data below the low threshold and above the high threshold are removed, and edge data between the two thresholds is marked as weak edges.
(4) The length l of the left side of the target calibration pattern 211b and the length r of the right side of the target calibration pattern 211b are calculated.
After the weak edges in the denoised target image are obtained, the length l of the left side of the target calibration pattern 211b is obtained by calculating the vertical extent of that edge, i.e., the coordinate difference between the uppermost and the lowermost continuous points, where continuity means that an edge pixel exists in one of the 8 positions surrounding the current pixel; the length r of the right side of the target calibration pattern 211b is calculated in the same manner.
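The side-length computation can likewise be sketched on top of the Canny output, assuming the same OpenCV/NumPy environment; selecting the left and right sides by their horizontal position is an illustrative heuristic rather than a step mandated by this application.

```python
import cv2
import numpy as np

def left_right_side_lengths(edges):
    """edges: binary Canny output (uint8). Returns (l, r) in pixels."""
    # Group edge pixels into 8-connected components, matching the notion of
    # continuity above (an edge pixel in one of the 8 surrounding positions).
    num_labels, labels = cv2.connectedComponents(edges, connectivity=8)
    leftmost_col, rightmost_col = None, None
    l, r = 0, 0
    for label in range(1, num_labels):  # label 0 is the background
        rows, cols = np.nonzero(labels == label)
        # Vertical extent: uppermost-to-lowermost coordinate difference.
        extent = int(rows.max() - rows.min())
        if leftmost_col is None or cols.min() < leftmost_col:
            leftmost_col, l = int(cols.min()), extent
        if rightmost_col is None or cols.max() > rightmost_col:
            rightmost_col, r = int(cols.max()), extent
    return l, r
```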
In some embodiments, the step of determining the position information of the camera based on the size of the target object comprises:
and inputting the size of the target object into the positioning model to obtain the position information output by the positioning model.
The positioning model is used for determining the position information of the camera according to the input size of the target object.
The positioning model is obtained by training with the sample size of the target object as a sample and sample position information corresponding to the sample size as a sample label.
In other words, the training samples of the positioning model are the sample sizes of the target object, and the training labels of the positioning model are the sample position information; the sample position information can be calibrated manually in advance or determined by another high-precision image positioning method.
Taking the example that the display screen of the host 210 displays the first calibration pattern 211a, the position information of the camera includes: the included angle α between the camera and the host 210, the rotation angle β of the camera itself, and the vertical distance d between the camera and the host 210 along the normal of the display screen of the host 210.
Wherein α is an included angle between a connecting line of the camera and the host 210 and a normal of the display screen of the host 210; beta is the included angle between the axis of the camera and the normal of the display screen of the host 210; d is the vertical distance of the camera to the host 210 along the normal to the display screen of the host 210.
As shown in fig. 2, the first camera 221 has an included angle α1 with the host 210, a rotation angle β1 of its own, and a vertical distance d1 to the host 210 along the normal of the display screen of the host 210; the second camera 222 correspondingly has an included angle α2, a rotation angle β2, and a vertical distance d2.
The positioning model is: (α, β, d) = f(l, r, m, n).
The above model can take many forms:
first, a, beta, d and l, r, m, n form a linear correlation relationship.
Thus, a linear correlation relationship is constructed in advance, least square fitting is carried out on the linear correlation relationship through a sample set, parameter values in the linear correlation relationship are obtained, and a trained positioning model is obtained.
The sample set comprises a plurality of samples (l, r, m, n) and corresponding sample labels (α, β, d).
The trained positioning model can be used for calibrating the position information of the camera in the actual shooting process.
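The linear case above can be sketched in a few lines. This is a minimal illustration assuming NumPy; the sample arrays below are placeholder values standing in for a pre-calibrated data set, not measured data.

```python
import numpy as np

# Placeholder calibration set: rows of (l, r, m, n) with labels (alpha, beta, d).
features = np.array([[120.0,  95.0,  40.0, 310.0],
                     [118.0,  99.0,  60.0, 290.0],
                     [110.0, 108.0, 150.0, 200.0],
                     [102.0, 117.0, 240.0, 110.0],
                     [ 98.0, 121.0, 300.0,  55.0]])
labels = np.array([[ 30.0,  10.0, 1.20],
                   [ 25.0,   8.0, 1.25],
                   [  5.0,   0.0, 1.40],
                   [-20.0,  -7.0, 1.30],
                   [-28.0, -11.0, 1.20]])

# Augment with a bias column and solve [l r m n 1] @ W ≈ [alpha beta d]
# in the least-squares sense.
X = np.hstack([features, np.ones((features.shape[0], 1))])
W, *_ = np.linalg.lstsq(X, labels, rcond=None)

def predict(l, r, m, n):
    return np.array([l, r, m, n, 1.0]) @ W  # -> (alpha, beta, d)

alpha, beta, d = predict(115.0, 103.0, 100.0, 250.0)
```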
Second, α, β, and d are nonlinearly correlated with l, r, m, and n.
In this case, a nonlinear polynomial relationship is constructed in advance and fitted to the sample set to obtain its parameter values, yielding the trained positioning model.
The sample set comprises a plurality of samples (l, r, m, n) and corresponding sample labels (α, β, d).
The trained positioning model can be used for calibrating the position information of the camera in the actual shooting process.
The model can be an objective function determined by data fitting;
alternatively, the model can be a neural network whose training is carried out in a supervised learning manner.
And step 130, determining a target camera to be switched from the plurality of cameras based on the position information.
Through the steps 110 and 120, the position information of the plurality of cameras can be obtained, so that under the condition that the position information of each camera is known, an appropriate camera can be accurately selected as a target camera according to the shooting requirement, and an image collected by the target camera is shot or clipped.
Taking the shooting of a parkour run as an example, a plurality of cameras are arranged along a preset parkour route, and the position information of each camera is determined through steps 110 and 120, so that each camera has a suitable shooting angle of view.
While the athlete runs the course, the degree of match between the athlete's position and each camera's position information is used to automatically determine which camera is the target camera at each moment, so that a coherent parkour video can be produced automatically, without the user manually clipping the video, or at least with reduced clipping difficulty.
According to the camera switching method provided by the embodiment of the application, the position information of the multiple cameras is first determined through image recognition, and the target camera is then determined based on the position information, which can reduce the difficulty of continuous multi-camera shooting and editing and improve the shooting success rate.
The following specifically describes the embodiments of the present application from two different implementation perspectives.
First, the target camera is determined automatically.
In this embodiment, the step 130 of determining the target camera to be switched from the plurality of cameras based on the position information may include:
and determining a target camera to be switched from the plurality of cameras based on the position information of the shooting object and/or the visual angle of the shooting object in the visual field of the cameras and the position information of the cameras.
In the above manner, the target camera can be identified and determined automatically by the host or a control device of the host.
In actual execution, the host receives a position signal of the shooting object and analyzes it together with the position information of the cameras, so as to determine which camera has the best shooting view at each moment; that camera is determined to be the target camera, and the host receives the image captured by the target camera and displays it on the display screen.
When the shooting object moves to a position where another camera has a better view, the target camera can be switched automatically.
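A minimal sketch of such automatic selection is given below, assuming the calibrated (α, β, d) values from the positioning step; the scoring formula favoring a small angular offset and a similar distance is an illustrative assumption, not a criterion specified by this application.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    cam_id: int
    alpha: float  # included angle with the host (degrees), from calibration
    beta: float   # camera's own rotation angle (degrees), from calibration
    d: float      # vertical distance to the host along the display normal

def view_score(cam: Camera, subject_angle: float, subject_dist: float) -> float:
    # Prefer cameras whose calibrated angle and distance best match the
    # shooting object's current position; the weights are hypothetical.
    return -(abs(cam.alpha - subject_angle) + 10.0 * abs(cam.d - subject_dist))

def pick_target_camera(cameras, subject_angle, subject_dist) -> Camera:
    return max(cameras, key=lambda c: view_score(c, subject_angle, subject_dist))
```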
Second, the target camera is determined manually.
In this embodiment, the step 130 of determining the target camera to be switched from the plurality of cameras based on the position information may include:
displaying a plurality of controls corresponding to the plurality of cameras according to the position information of the plurality of cameras;
receiving a second input of a user to a target control in the plurality of controls;
and responding to the second input, and determining that the camera corresponding to the target control is the target camera.
As shown in fig. 5, there are 4 cameras in total, and the position information of camera 1, camera 2, camera 3, and camera 4 has been determined in the manner of steps 110 to 120. On the display interface, controls corresponding to camera 1, camera 2, camera 3, and camera 4 are displayed, and the arrangement position and orientation of each control are set according to each camera's position information relative to the host.
In this way, the user can clearly know the position of each camera through the interface.
The second input is for selecting a target camera.
Wherein the second input may be expressed in at least one of the following ways:
first, the second input may be represented as a touch input, including but not limited to a click input, a slide input, a press input, and the like.
In this embodiment, receiving the second input of the user may be represented by receiving a touch operation of the user on a target control corresponding to the target camera.
For example, the user selects the currently suitable camera according to the observed position of the shooting object and the positions of the controls corresponding to the multiple cameras displayed on the interface, and clicks the corresponding target control; the image acquired by the target camera corresponding to that target control is determined to be the image required at the current moment.
Second, the second input may be represented as a physical key input.
In this embodiment, the body of the host is provided with a physical key for making the selection, and receiving the second input of the user may be expressed as receiving a second input of the user pressing the corresponding physical key; the second input may also be a combined operation of pressing a plurality of physical keys simultaneously.
Third, the second input may be presented as a voice input.
In this embodiment, the host may determine that camera 3 is the target camera upon receiving a voice command such as "third camera".
Of course, in other embodiments, the second input may also be represented in other forms, including but not limited to character input, and the like, which may be determined according to actual needs, and this is not limited in this application.
The embodiment of the application also provides a switching device of the camera, which is applied to electronic equipment, wherein the electronic equipment comprises a host and a plurality of cameras separated from the host.
As shown in fig. 6, the switching device of the camera includes: a first receiving module 610, a first determining module 620, and a second determining module 630.
A first receiving module 610, configured to receive a target image acquired by a camera, where the target image includes a target calibration pattern;
a first determining module 620, configured to determine, based on the target image, position information of the camera;
and a second determining module 630, configured to determine, according to the position information, a target camera to be switched from the multiple cameras.
According to the switching device of the cameras, the position information of the multiple cameras is first determined through image recognition, and the target camera is then determined based on the position information, which can reduce the difficulty of continuous multi-camera shooting and editing and improve the shooting success rate.
In some embodiments, the first determining module 620 is further configured to determine a size of a target object in the target image, the target object being determined based on the target calibration pattern; based on the size of the target object, position information of the camera is determined.
In some embodiments, the first determining module 620 is further configured to input the size of the target object into the positioning model, and obtain the position information output by the positioning model; the positioning model is obtained by taking the sample size of the target object as a sample and taking sample position information corresponding to the sample size as a sample label for training.
In some embodiments, the first determining module 620 is further configured to determine a length of a target edge of the target calibration pattern in the target image, and determine orientation information of the target calibration pattern in the target image; and determining the position information of the camera based on the length and the azimuth information of the target edge.
In some embodiments, the length of the target edge of the target calibration pattern comprises: the length of the left side and the length of the right side of the target calibration pattern; the orientation information of the target calibration pattern in the target image comprises: the distance of the left side to the left boundary of the target image and the distance of the right side to the right boundary of the target image.
In some embodiments, the switching device of the camera may further include:
the first control module is used for controlling a display screen of the host to display a first calibration pattern before receiving a target image acquired by the camera, and the target calibration pattern is obtained by shooting the first calibration pattern by the camera.
The switching device of the camera in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a host. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
In some embodiments of the present application, the device may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited.
The switching device for the camera provided in the embodiment of the present application can implement each process implemented by the switching device for the camera in the method embodiments of fig. 1 to fig. 5, and is not described herein again to avoid repetition.
As shown in fig. 7, an electronic device 700 is further provided in this embodiment of the present application, and includes a processor 720, a memory 710, and a program or an instruction stored in the memory 710 and executable on the processor 720, where the program or the instruction is executed by the processor 720 to implement each process of the above-mentioned embodiment of the camera switching method, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, and a processor 810.
Those skilled in the art will appreciate that the electronic device 800 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 810 via a power management system, so as to manage charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange the components differently, which is not described further here.
The network module 802 is configured to receive a target image acquired by the camera, where the target image includes a target calibration pattern;
a processor 810 for determining position information of the camera based on the target image;
a processor 810, configured to determine a target camera to be switched from the multiple cameras based on the position information.
According to the electronic equipment provided by the embodiment of the application, the position information of the multiple cameras is first determined through image recognition, and the target camera is then determined based on the position information, which can reduce the difficulty of continuous multi-camera shooting and editing and improve the shooting success rate.
Optionally, the processor 810 is further configured to determine a size of a target object in the target image, where the target object is determined based on the target calibration pattern;
the processor 810 is further configured to determine position information of the camera based on the size of the target object.
Optionally, the processor 810 is further configured to input the size of the target object into a positioning model, and obtain the position information output by the positioning model;
the positioning model is obtained by training by taking the sample size of the target object as a sample and taking sample position information corresponding to the sample size as a sample label.
Optionally, the processor 810 is further configured to determine a length of a target edge of the target calibration pattern in the target image, and determine orientation information of the target calibration pattern in the target image;
the processor 810 is further configured to determine position information of the camera based on the length of the target edge and the orientation information.
Optionally, the processor 810 is further configured to control a display screen of the host to display a first calibration pattern, and the target calibration pattern is obtained by the camera shooting the first calibration pattern.
It should be noted that, in this embodiment, the electronic device 800 may implement each process in the method embodiment in this embodiment and achieve the same beneficial effects, and for avoiding repetition, details are not described here again.
It should be understood that in the embodiment of the present application, the input unit 804 may include a Graphics Processing Unit (GPU) 8041 and a microphone 8042; the graphics processing unit 8041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 807 includes a touch panel 8071, also referred to as a touch screen, and other input devices 8072. The touch panel 8071 may include two parts: a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 809 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 810 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor may alternatively not be integrated into the processor 810.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned embodiment of the camera switching method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the embodiment of the switching method for a camera, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a host (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. The camera switching method is applied to electronic equipment, and the electronic equipment comprises a host and a plurality of cameras separated from the host; the method comprises the following steps:
receiving a target image acquired by the camera, wherein the target image comprises a target calibration pattern;
determining position information of the camera based on the target image;
and determining a target camera to be switched from the plurality of cameras based on the position information.
2. The method for switching the camera according to claim 1, wherein the determining the position information of the camera based on the target image comprises:
determining a size of a target object in the target image, the target object being determined based on the target calibration pattern;
determining position information of the camera based on the size of the target object.
3. The method for switching the camera according to claim 2, wherein the determining the size of the target object in the target image comprises:
determining the length of a target edge of the target calibration pattern in the target image, and determining the orientation information of the target calibration pattern in the target image;
the determining the position information of the camera based on the size of the target object comprises:
determining position information of the camera based on the length of the target edge and the azimuth information;
the length of the target edge of the target calibration pattern comprises: the length of the left side and the length of the right side of the target calibration pattern;
the orientation information of the target calibration pattern in the target image comprises: a distance of the left side to a left boundary of the target image and a distance of the right side to a right boundary of the target image.
4. The method for switching the camera according to any one of claims 1 to 3, wherein before the receiving the target image captured by the camera, the method further comprises:
and controlling a display screen of the host to display a first calibration pattern, wherein the target calibration pattern is obtained by shooting the first calibration pattern by the camera.
5. The switching device of the camera is characterized by being applied to electronic equipment, wherein the electronic equipment comprises a host and a plurality of cameras separated from the host; the device comprises:
the first receiving module is used for receiving a target image acquired by the camera, and the target image comprises a target calibration pattern;
the first determining module is used for determining the position information of the camera based on the target image;
and the second determining module is used for determining the target camera to be switched from the plurality of cameras according to the position information.
6. The switching device of the camera according to claim 5, wherein the first determining module is further configured to determine a size of a target object in the target image, the target object being determined based on the target calibration pattern; determining position information of the camera based on the size of the target object.
7. The switching device of the camera according to claim 5, wherein the first determining module is further configured to determine a length of a target edge of the target calibration pattern in the target image, and determine orientation information of the target calibration pattern in the target image; determining position information of the camera based on the length of the target edge and the azimuth information;
the length of the target edge of the target calibration pattern comprises: the length of the left side and the length of the right side of the target calibration pattern;
the orientation information of the target calibration pattern in the target image comprises: a distance of the left side to a left boundary of the target image and a distance of the right side to a right boundary of the target image.
8. The camera switching device according to any one of claims 5 to 7, further comprising:
a first control module, configured to control a display screen of the host to display a first calibration pattern before the target image acquired by the camera is received, wherein the target calibration pattern is obtained by the camera photographing the first calibration pattern.
9. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the camera switching method according to any one of claims 1 to 4.
10. A readable storage medium, wherein the readable storage medium stores a program or instructions which, when executed by a processor, implement the steps of the camera switching method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110336564.2A CN113055599B (en) | 2021-03-29 | 2021-03-29 | Camera switching method and device, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113055599A | 2021-06-29
CN113055599B | 2023-02-07
Family
ID=76516230
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110336564.2A Active CN113055599B (en) | 2021-03-29 | 2021-03-29 | Camera switching method and device, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113055599B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118411808A (en) * | 2024-04-24 | 2024-07-30 | 南京连务汇科技有限公司 | Fire safety alarm system based on interval big data analysis |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160134838A1 (en) * | 2014-11-06 | 2016-05-12 | Cisco Technology, Inc. | Automatic Switching Between Dynamic and Preset Camera Views in a Video Conference Endpoint |
CN108989580A (en) * | 2018-10-25 | 2018-12-11 | 努比亚技术有限公司 | Camera switching method, mobile terminal and readable storage medium storing program for executing |
CN110211187A (en) * | 2019-04-28 | 2019-09-06 | 上海小萌科技有限公司 | A kind of multi-cam position calibration method |
CN110300268A (en) * | 2019-07-26 | 2019-10-01 | 上海龙旗科技股份有限公司 | Camera switching method and equipment |
CN111131713A (en) * | 2019-12-31 | 2020-05-08 | 深圳市维海德技术股份有限公司 | Lens switching method, device, equipment and computer readable storage medium |
CN112351156A (en) * | 2019-08-06 | 2021-02-09 | 华为技术有限公司 | Lens switching method and device |
Also Published As
Publication number | Publication date |
---|---|
CN113055599B (en) | 2023-02-07 |
Similar Documents
Publication | Title
---|---
CN112162930B | Control identification method, related device, equipment and storage medium
CN110675420B | Image processing method and electronic equipment
CN108229277B | Gesture recognition method, gesture control method, multilayer neural network training method, device and electronic equipment
KR101184460B1 | Device and method for controlling a mouse pointer
CN107613202B | Shooting method and mobile terminal
EP2903256B1 | Image processing device, image processing method and program
CN110175995B | Pathological image-based image state determination method, device and system
CN103679130B | Hand tracing method, hand tracing equipment and gesture recognition system
CN113194253B | Shooting method and device for removing reflection of image and electronic equipment
CN110572636A | Camera contamination detection method and device, storage medium and electronic equipment
CN112437232A | Shooting method, shooting device, electronic equipment and readable storage medium
CN112367559B | Video display method and device, electronic equipment, server and storage medium
CN112911147A | Display control method, display control device and electronic equipment
CN114390201A | Focusing method and device thereof
CN112437231A | Image shooting method and device, electronic equipment and storage medium
CN113055599B | Camera switching method and device, electronic equipment and readable storage medium
CN111145151A | Motion area determination method and electronic equipment
CN112367486B | Video processing method and device
CN112672051B | Shooting method and device and electronic equipment
CN113794831A | Video shooting method and device, electronic equipment and medium
CN113747076A | Shooting method and device and electronic equipment
CN117152660A | Image display method and device
WO2023151527A1 | Image photographing method and apparatus
CN116261043A | Focusing distance determining method, device, electronic equipment and readable storage medium
CN112565605B | Image display method and device and electronic equipment
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||