CN115942119A - Linkage monitoring method and device, electronic equipment and readable storage medium - Google Patents

Linkage monitoring method and device, electronic equipment and readable storage medium

Publication number: CN115942119A (granted as CN115942119B)
Application number: CN202210966540.XA
Authority: CN (China)
Legal status: Granted; Active
Prior art keywords: camera, target area, coordinate, target, calibration
Inventor: 海涵
Assignee (original and current): Beijing Xiaomi Mobile Software Co Ltd
Other languages: Chinese (zh)

Landscapes

  • Studio Devices (AREA)

Abstract

The present disclosure relates to a linkage monitoring method and apparatus, an electronic device, and a readable storage medium. The method is applied to an electronic device in communication with both a first camera and a second camera, and includes at least: acquiring a target area through the first camera; determining a movement position of the second camera based on the target area and a calibration relationship between the first camera and the second camera; and controlling the second camera to monitor the target area according to the movement position. Embodiments of the present disclosure realize linkage monitoring by the first camera and the second camera, making monitoring more flexible and expanding the range of monitoring scenarios.

Description

Linkage monitoring method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of cameras, and in particular, to a linkage monitoring method and apparatus, an electronic device, and a readable storage medium.
Background
In a surveillance scenario, a single dome camera or a single bullet camera is typically employed. To enlarge the monitoring range, multiple bullet cameras can be used to capture videos that are then stitched together. However, a single camera can only capture local detail or only monitor a blurred panorama, and therefore cannot be applied universally to more monitoring scenarios. Monitoring with multiple cameras can enlarge the monitoring range, but local detail still cannot be captured at the same time.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a linkage monitoring method and apparatus, an electronic device, and a readable storage medium.
According to a first aspect of the embodiments of the present disclosure, a linkage monitoring method is provided, which is applied to an electronic device that establishes communication connection with both a first camera and a second camera, and at least includes:
acquiring a target area through the first camera;
determining the movement position of the second camera based on the target area and the calibration relationship between the first camera and the second camera;
and controlling the second camera to monitor the target area according to the movement position.
In some embodiments, the method further comprises:
acquiring the resolution width of the second camera;
determining a target zoom ratio of the second camera based on the width of the target area and the resolution width;
adjusting a focal length of the second camera based on the target zoom ratio.
In some embodiments, the adjusting the focal length of the second camera based on the target zoom ratio includes:
when the target zoom ratio is greater than or equal to a preset clear zoom ratio, adjusting the focal length of the second camera based on the target zoom ratio;
and when the target zoom ratio is smaller than the preset clear zoom ratio, adjusting the focal length of the second camera based on the preset clear zoom ratio.
In some embodiments, the method further comprises:
and determining attribute classification of the monitored object in the target area and state classification of the monitored object through a classification model.
In some embodiments, the method further comprises:
acquiring an undistorted image through the first camera;
acquiring a plurality of first sample coordinates corresponding to a plurality of calibration points from the undistorted image;
aligning the center of the second camera with the plurality of calibration points to obtain a plurality of second sample coordinates corresponding to the plurality of calibration points;
and determining the calibration relationship based on the plurality of first sample coordinates and the plurality of second sample coordinates.
In some embodiments, the determining the calibration relationship based on the plurality of first sample coordinates and the plurality of second sample coordinates includes:
associating the first sample coordinate and the second sample coordinate that correspond to the same calibration point;
and establishing a relationship among the plurality of groups of associated coordinate data through an interpolation function to obtain the calibration relationship.
In some embodiments, the acquiring an undistorted image through the first camera includes:
acquiring a sample image through the first camera;
determining the undistorted image based on the sample image and a distortion-removal model.
In some embodiments, the determining the movement position of the second camera based on the target area and the calibration relationship between the first camera and the second camera includes:
acquiring the center coordinate of the target area;
and determining the movement coordinate of the second camera based on the center coordinate and the calibration relationship between the first camera and the second camera.
In some embodiments, the obtaining the center coordinate of the target area includes:
acquiring the start coordinate of the target area, the height of the target area, and the width of the target area;
obtaining the abscissa of the center coordinate based on the abscissa of the start coordinate and the width of the target area;
and obtaining the ordinate of the center coordinate based on the ordinate of the start coordinate and the height of the target area.
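Assuming the start coordinate is the top-left corner of the target area, the center-coordinate computation above can be sketched as follows (the function and variable names are illustrative, not from the patent):

```python
def center_of_target_area(x0: float, y0: float, width: float, height: float):
    """Compute the center coordinate of a target area given its
    start (top-left) coordinate and its width and height."""
    cx = x0 + width / 2.0   # abscissa: start abscissa plus half the width
    cy = y0 + height / 2.0  # ordinate: start ordinate plus half the height
    return cx, cy

# Example: a 100x50 target area whose start coordinate is (200, 120).
print(center_of_target_area(200, 120, 100, 50))  # -> (250.0, 145.0)
```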
In some embodiments, the first camera comprises a camera for taking a panorama; the second camera comprises a camera for capturing details.
According to a second aspect of the embodiments of the present disclosure, there is provided a linkage monitoring apparatus, applied to an electronic device having communication connections established with both a first camera and a second camera, including at least:
the first acquisition module is configured to acquire a target area through the first camera;
the first determination module is configured to determine the movement position of the second camera based on the target area and the calibration relationship between the first camera and the second camera;
the control module is configured to control the second camera to monitor the target area according to the movement position;
wherein the first camera and the second camera have different monitoring functions.
In some embodiments, the apparatus further comprises:
a second obtaining module configured to obtain a resolution width of the second camera;
a second determination module configured to determine a target zoom ratio of the second camera based on the width of the target area and the resolution width;
an adjustment module configured to adjust a focal length of the second camera based on the target zoom ratio.
In some embodiments, the adjusting module is further configured to adjust the focal length of the second camera based on the target zoom ratio when the target zoom ratio is greater than or equal to a preset clear zoom ratio; and when the target zoom ratio is smaller than the preset clear zoom ratio, adjusting the focal length of the second camera based on the preset clear zoom ratio.
In some embodiments, the apparatus further comprises:
a classification module configured to determine an attribute classification of the monitored object in the target region and a state classification of the monitored object through a classification model.
In some embodiments, the apparatus further comprises:
a third acquisition module configured to acquire an undistorted image through the first camera;
a fourth obtaining module, configured to obtain a plurality of first sample coordinates corresponding to a plurality of calibration points from the undistorted image;
the alignment module is configured to align the center of the second camera with the plurality of calibration points to obtain a plurality of second sample coordinates corresponding to the plurality of calibration points;
a third determining module configured to determine the calibration relationship based on the plurality of first sample coordinates and the plurality of second sample coordinates.
In some embodiments, the third determining module is further configured to associate the first sample coordinate and the second sample coordinate that correspond to the same calibration point, and to establish a relationship among the plurality of groups of associated coordinate data through an interpolation function to obtain the calibration relationship;
the third acquisition module is further configured to acquire a sample image through the first camera and to determine the undistorted image based on the sample image and a distortion-removal model.
In some embodiments, the first determining module comprises:
a fifth obtaining module configured to obtain the center coordinate of the target area;
and a fourth determination module configured to determine the movement coordinate of the second camera based on the center coordinate and the calibration relationship between the first camera and the second camera.
In some embodiments, the fifth obtaining module is configured to obtain the start coordinate of the target area, the height of the target area, and the width of the target area; obtain the abscissa of the center coordinate based on the abscissa of the start coordinate and the width of the target area; and obtain the ordinate of the center coordinate based on the ordinate of the start coordinate and the height of the target area.
In some embodiments, the first camera comprises a camera for taking a panorama; the second camera comprises a camera for capturing details.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device including at least a processor and a memory for storing executable instructions operable on the processor, wherein:
the processor is configured to execute the executable instructions to perform the steps of the linkage monitoring method provided in the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a readable storage medium including:
the instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the linkage monitoring method as described in the first aspect above.
The technical solutions provided by the embodiments of the present disclosure can have the following beneficial effects:
Embodiments of the present disclosure determine the movement position of the second camera based on the target area obtained by the first camera and the calibration relationship, and control the second camera to monitor the target area according to that movement position. In this way, the first camera detects the target area while the second camera, linked with the first camera, automatically tracks it; linkage monitoring by the first camera and the second camera is realized, monitoring becomes more flexible, and the range of monitoring scenarios is expanded.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a first schematic diagram illustrating a linkage monitoring method according to an exemplary embodiment.
Fig. 2 is a second schematic diagram of a linkage monitoring method according to an exemplary embodiment.
FIG. 3 is a third schematic diagram illustrating a linkage monitoring method according to an exemplary embodiment.
FIG. 4 is a fourth schematic diagram of a linkage monitoring method shown in accordance with an exemplary embodiment.
FIG. 5 is a schematic diagram illustrating a linkage monitoring device according to an exemplary embodiment.
FIG. 6 is a block diagram of an electronic device shown in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The embodiment of the present disclosure provides a linkage monitoring method, which is applied to an electronic device that establishes communication connection with both a first camera and a second camera, and as shown in fig. 1, the electronic device executes the linkage monitoring method including the following steps:
s101, acquiring a target area through the first camera;
s102, determining the motion position of the second camera based on the target area and the calibration relation between the first camera and the second camera;
s103, controlling the second camera to monitor the target area according to the movement position.
In the embodiments of the present disclosure, the electronic device is communicatively connected with the first camera and the second camera, and through this connection it can control the two cameras to perform linkage monitoring. For example, the electronic device can control the second camera to monitor a target area of interest detected by the first camera.
The electronic device may be a background server, or may also be an intelligent device having a control module capable of controlling the first camera and the second camera, which is not limited in the embodiments of the present disclosure.
The linkage monitoring method is applied to a monitoring scenario in which the first camera and the second camera are linked. The two cameras have different monitoring functions: after the first camera detects a target area, the second camera is linked so that it tracks that target area. Two different monitoring functions can thus be served at the same time, rather than the scene being monitored with only a single function.
For example, suppose the first camera can monitor the panorama but not details, while the second camera can monitor details but not the panorama. Several target areas lie within the monitoring range; the first camera can detect them through its panoramic view, but the second camera cannot, because its field of view is limited. In this situation, once the first camera has determined a target area, the electronic device can link the second camera through the calibration relationship between the two cameras in the linkage monitoring method, so that the second camera also monitors the target area and captures its details, and the panorama and the details are monitored simultaneously.
The first camera may be used to determine a target area. In some embodiments, the first camera comprises a camera for taking a panorama.
In the embodiments of the present disclosure, the first camera is used for shooting a panorama; that is, the first camera has the function of monitoring the panorama. Here, the first camera may have an ultra-wide-angle lens, through which a wider and more comprehensive monitoring picture can be captured. For example, the first camera may include a fisheye camera or a bullet camera; embodiments of the present disclosure are not limited in this respect.
The second camera can be used for capturing the details of the target area along with the linkage of the first camera.
In some embodiments, the second camera comprises a camera for capturing details.
In the embodiments of the present disclosure, the second camera is used for capturing details; that is, the second camera has a zoom function, through which it can observe the details of the target area.
The second camera may also have a rotation function, by which it can be moved in the transverse and longitudinal directions so that it can be controlled to monitor the target area. For example, the second camera may comprise a dome camera; embodiments of the present disclosure are not limited in this respect. Here, the transverse and longitudinal directions are two mutually perpendicular directions.
The target area may be a region of interest in the monitored image. The target area may include: the area where a person is located, the area where an animal is located, or the area where vegetation or a building is located; embodiments of the present disclosure are not limited in this respect.
The acquiring the target area through the first camera includes: pulling the video stream of the first camera and performing image analysis on it to obtain the target area. Embodiments of the present disclosure can perform the image analysis with a deep-learning algorithm or a target detection algorithm.
For example, the deep-learning algorithm includes a deep-learning algorithm based on a convolutional neural network, a recurrent neural network, or a recursive neural network; the target detection algorithm includes a sliding-window target detection algorithm, a Two-Stage target detection algorithm, or a YOLO target detection algorithm, which is not limited in the embodiments of the present disclosure.
Here, the electronic device may perform the image analysis using a deep-learning algorithm or a target detection algorithm to obtain the target area. The target area may be described by its start coordinate, its height, its width, and so on; embodiments of the present disclosure are not limited in this respect.
In the embodiments of the present disclosure, the start coordinate, the height, and the width of the target area can be obtained through different target detection algorithms. For example, they may be obtained through a first target detection algorithm, and the center coordinate of the target area then derived from them, where the first target detection algorithm is a Two-Stage target detection algorithm; or they may be obtained through a second target detection algorithm, where the second target detection algorithm is a sliding-window target detection algorithm.
The calibration relationship between the first camera and the second camera represents the correspondence between the position of the first camera when monitoring a target area and the position of the second camera when monitoring the same target area. Based on the target area determined by the first camera, the movement position to which the second camera needs to move to monitor that target area can therefore be determined, and the second camera can be controlled according to that movement position so that it also tracks the target area of interest.
It should be noted that, because different cameras are installed at different positions and have different steering angles, the first camera and the second camera need to be associated to obtain the calibration relationship between them when they monitor the same target area. After the target area is obtained through the first camera, the movement position at which the second camera monitors the same target area can then be determined through the calibration relationship.
Here, the calibration relationship between the first camera and the second camera is preset, so the movement position of the second camera can be determined directly from it.
After the movement position of the second camera is determined, the second camera can be controlled to monitor the target area according to that movement position. For example, if the motor of the second camera is not at the movement position, the second camera needs to be controlled to rotate so that its motor moves to the movement position, enabling the second camera to monitor the target area.
In the embodiments of the present disclosure, the movement position of the second camera is determined based on the target area obtained by the first camera and the calibration relationship, and the second camera is controlled to monitor the target area according to that movement position. The first camera thus detects the target area while the second camera, linked with the first camera, automatically tracks it, realizing linkage monitoring by the first camera and the second camera.
It can be seen that, when the first camera and the second camera have different monitoring functions, for example, when the first camera monitors the panorama and the second camera captures details, the embodiments of the present disclosure can realize both monitoring functions at once through the calibration relationship between the two cameras: on the basis of detecting a target area in the panorama, the details of that target area can be monitored at the same time, making monitoring more flexible and expanding the range of monitoring scenarios.
In some embodiments, the method further comprises:
acquiring the resolution width of the second camera;
determining a target zoom ratio of the second camera based on the width of the target area and the resolution width;
adjusting a focal length of the second camera based on the target zoom ratio.
In the embodiment of the disclosure, after the target area is acquired by the first camera, the focal length of the second camera can be further adjusted based on the width of the target area, so that the second camera can clearly capture the details of the target area.
In the process of acquiring the resolution width, if the resolution of the second camera is 1280 × 960, the resolution width of the second camera is 960; if the resolution of the second camera is 1920 x 1080, the resolution width of the second camera is 1080.
The determining a target zoom ratio of the second camera based on the width of the target area and the resolution width includes: obtaining the target zoom ratio based on the quotient of the width of the target area and the resolution width.
For example, equation (1) may be employed to determine the target zoom ratio, where the width of the target area is w, the resolution width is w', and the target zoom ratio is s:
s = w / w' (1)
In the embodiments of the present disclosure, after the second camera is linked so that it tracks the target area, if the second camera still adjusted its focal length using only the preset clear zoom ratio, the captured details might not meet the clarity requirement; for example, the captured image might be smaller than the preset clear image and therefore unclear.
For this reason, the embodiments of the present disclosure adjust the focal length of the second camera based on the target zoom ratio, so the focal length can be adjusted flexibly according to the actual position of the target area and the second camera can capture the details of the target area more clearly.
In some embodiments, the adjusting the focal length of the second camera based on the target zoom ratio includes:
when the target zoom ratio is greater than or equal to a preset clear zoom ratio, adjusting the focal length of the second camera based on the target zoom ratio;
and when the target zoom ratio is smaller than the preset clear zoom ratio, adjusting the focal length of the second camera based on the preset clear zoom ratio.
In the embodiments of the present disclosure, the preset clear zoom ratio is a zoom ratio preset for the second camera. When the target zoom ratio is smaller than the preset clear zoom ratio, the image shot by the second camera would be smaller than the preset clear image, so the captured details might be too small and unclear; when the target zoom ratio is greater than or equal to the preset clear zoom ratio, the second camera can shoot a clear image. Comparing the target zoom ratio with the preset clear zoom ratio therefore lets the second camera capture details more clearly.
When the target zoom ratio is greater than or equal to the preset clear zoom ratio, the focal length is adjusted based on the target zoom ratio, so that the focal length of the second camera adapts to the current position of the target area and the details of the target area are captured better. When the target zoom ratio is smaller than the preset clear zoom ratio, the focal length of the second camera is adjusted based on the preset clear zoom ratio instead, so that the second camera still obtains clear details.
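The comparison described above can be sketched as follows; the quotient convention for the target zoom ratio and the preset value used in the example are illustrative assumptions, not values from the patent:

```python
def target_zoom_ratio(target_width: float, resolution_width: float) -> float:
    # Equation (1): quotient of the target-area width and the resolution width.
    return target_width / resolution_width

def effective_zoom_ratio(target_ratio: float, preset_clear_ratio: float) -> float:
    # Use the target zoom ratio only when it meets or exceeds the preset clear
    # zoom ratio; otherwise fall back to the preset so the captured details
    # are not too small to be clear.
    return target_ratio if target_ratio >= preset_clear_ratio else preset_clear_ratio

ratio = target_zoom_ratio(target_width=480, resolution_width=960)  # 0.5
print(effective_zoom_ratio(ratio, preset_clear_ratio=0.75))        # -> 0.75
```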
In some embodiments, the method further comprises:
and determining attribute classification of the monitored object in the target area and state classification of the monitored object through a classification model.
The classification model includes an attribute multi-classification model and a state multi-classification model; embodiments of the present disclosure are not limited in this respect.
The attribute classification of the monitoring object is used for distinguishing the attributes of the monitoring object. The attribute classification includes: object classification, color classification, and/or function classification. Wherein the object classification may include: human, animal, plant, or vehicle, etc.; the color classification may include: red, yellow, blue, or the like; the function classification may include a home appliance class, a furniture class, or a mobile terminal class, etc.
The state classification of the monitoring object is used for distinguishing the state of the monitoring object. For example, the status classification includes: a stationary state and/or a moving state. Wherein the motion state comprises: a walking state, a speaking state, a laughing state, a running state, or a flying state, embodiments of the present disclosure are not limited.
It should be noted that different monitoring scenes can be determined by attribute classification and state classification of the monitoring object. The different monitoring scenarios may include: the monitoring scene of indoor activities, the monitoring scene of outdoor trips, traffic monitoring scene or security monitoring scene, and the embodiments of the present disclosure are not limited.
In the embodiment of the disclosure, the attribute classification and the state classification of the monitored object are determined through the classification model, so that different monitoring scenes of different monitoring positions can be determined, and further, the second camera can be linked with the first camera to automatically monitor the details of the different monitoring scenes.
As shown in fig. 2, in some embodiments, the method further comprises:
s104, acquiring a distortion-removed image through the first camera;
s105, acquiring a plurality of first sample coordinates corresponding to a plurality of calibration points from the undistorted image;
s106, aligning the center of the second camera to the plurality of calibration points to obtain a plurality of second sample coordinates corresponding to the calibration points;
s107, determining the calibration relation based on the plurality of first sample coordinates and the plurality of second sample coordinates.
In the embodiments of the present disclosure, the calibration relationship needs to be obtained before the movement position of the second camera is determined from it.
In the process of obtaining the calibration relationship, distortion exists between the actual imaging and the ideal imaging on the image plane of the first camera, due to processing or assembly errors of the first camera. For this reason, the embodiments of the present disclosure determine the first sample coordinates based on an undistorted image, so that more accurate first sample coordinates can be obtained.
In some embodiments, acquiring the undistorted image through the first camera includes: acquiring a sample image through the first camera; and determining the undistorted image based on the sample image and a distortion-removal model.
In the embodiments of the present disclosure, the distortion-removal model needs to be obtained before the undistorted image is determined. In the process of obtaining the distortion-removal model, a checkerboard or dot calibration board can be used to collect a plurality of calibration images of the first camera in different poses; the plurality of calibration images are denoised to obtain a plurality of denoised images; and the distortion-removal model is determined based on the plurality of denoised images.
Here, the distortion-removal model can be obtained from the plurality of denoised images using cv::fisheye::calibrate in the open-source library OpenCV; embodiments of the present disclosure are not limited in this respect.
Determining the undistorted image based on the sample image and the distortion removal model includes: obtaining the undistorted image based on the product of the sample image and the distortion removal model.
For example, the undistorted image may be determined using equation (2), where F is the sample image, A is the distortion removal model, and F' is the undistorted image.
F'=A*F (2)
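As an illustrative, non-limiting sketch, equation (2) can be read as a plain matrix product when the sample image F and the distortion removal model A are treated as numeric matrices. The function name and the matrix form are assumptions made here for illustration only; in practice, fisheye undistortion (e.g. in OpenCV) is performed by per-pixel remapping rather than a single matrix multiplication.

```python
def apply_undistortion(model, image):
    """Equation (2), F' = A * F, treated as a plain matrix product.

    `model` (A) is an m x n matrix and `image` (F) an n x k matrix,
    both given as nested lists. This is a simplifying assumption for
    illustration; real fisheye undistortion uses per-pixel remapping.
    """
    rows, inner, cols = len(model), len(image), len(image[0])
    assert all(len(r) == inner for r in model), "A's columns must equal F's rows"
    # Standard matrix multiplication: F'[i][j] = sum_k A[i][k] * F[k][j]
    return [[sum(model[i][k] * image[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]
```

With A set to the identity matrix, F' equals F, i.e. the model leaves an already-undistorted image unchanged.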
In the embodiment of the disclosure, after the undistorted image is determined, a plurality of first sample coordinates corresponding to a plurality of calibration points may be obtained from the undistorted image.
Here, the calibration point may be a point with a distinct feature. For example, the calibration point may include a table corner or a pen tip; embodiments of the present disclosure are not limited thereto.
The number of selected calibration points may be set to 10 to 20; the embodiment of the present disclosure is not limited thereto.
In the disclosed embodiment, each calibration point has one first sample coordinate. The first sample coordinate may be the center coordinate of the area occupied by the calibration point, so that the position of the calibration point can be better determined through its center coordinate.
For example, when the calibration point is a table corner, the first sample coordinate may be the center coordinate of the area occupied by the table corner; when the calibration point is a mobile phone, the first sample coordinate may be the center coordinate of the area occupied by the mobile phone.
In the embodiment of the present disclosure, after the plurality of calibration points are obtained, the center of the second camera may be aligned with the plurality of calibration points, so as to obtain a plurality of second sample coordinates corresponding to the plurality of calibration points. Here, aligning the center of the second camera with the plurality of calibration points includes aiming the second camera at each obtained calibration point in turn, so that the center of the second camera coincides with that calibration point.
The second sample coordinate may be used to record the motion position of the pan-tilt head when the center of the second camera is aligned with a calibration point. The pan-tilt head carries the second camera, and driving the pan-tilt head to rotate drives the second camera to move.
After the first sample coordinates and the second sample coordinates are obtained, a calibration relationship may be determined based on the plurality of first sample coordinates and the plurality of second sample coordinates.
In the embodiment of the present disclosure, the calibration relationship may be obtained based on a linear function, a fitting function, or an interpolation function, which is not limited in the embodiment of the present disclosure.
In some embodiments, said determining said calibration relationship based on a plurality of said first sample coordinates and a plurality of said second sample coordinates comprises:
associating the first sample coordinate and the second sample coordinate corresponding to the same calibration point;
and establishing the relationship among the plurality of groups of associated coordinate data through an interpolation function to obtain the calibration relationship.
In the embodiment of the present disclosure, the first sample coordinate and the second sample coordinate corresponding to the same calibration point are associated, and may be input into the interpolation function as one group of coordinate data.
After the first sample coordinate and the second sample coordinate corresponding to the same calibration point are associated, associated coordinate data corresponding to the plurality of calibration points can be obtained. For example, where the calibration points are a pen tip and a table corner, the associated coordinate data may include: the first sample coordinate and the second sample coordinate corresponding to the pen tip; and the first sample coordinate and the second sample coordinate corresponding to the table corner.
Here, the calibration relationship determined based on the interpolation function may be written as formula (3), where B(p, t) is the second sample coordinate, F'(x, y) is the first sample coordinate, K is the calibration coefficient, and F0' is the calibration distance.
B(p,t)=F0'+K*F'(x,y) (3)
The calibration distance is the shortest of the distances from the selected first sample coordinate to the plurality of calibration points. The calibration coefficient can be determined by equation (4), where B(p1,t1) and B(p2,t2) are the two selected second sample coordinates, and F'(x1,y1) and F'(x2,y2) are the two selected first sample coordinates.
K=(B(p1,t1)-B(p2,t2))/(F'(x1,y1)-F'(x2,y2)) (4)
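Equations (3) and (4) above can be sketched in code as follows. The componentwise treatment of the pan (p) and tilt (t) axes and the function names are illustrative assumptions, since the disclosure does not fix a concrete implementation.

```python
def calibration_coefficient(b1, b2, f1, f2):
    """Equation (4): estimate K from two second sample coordinates
    B(p1, t1), B(p2, t2) and two first sample coordinates
    F'(x1, y1), F'(x2, y2), treated componentwise per axis
    (an assumption for illustration)."""
    kp = (b1[0] - b2[0]) / (f1[0] - f2[0])  # pan coefficient from the x components
    kt = (b1[1] - b2[1]) / (f1[1] - f2[1])  # tilt coefficient from the y components
    return (kp, kt)

def map_to_motion(f_xy, k, f0):
    """Equation (3): B(p, t) = F0' + K * F'(x, y), componentwise."""
    return (f0[0] + k[0] * f_xy[0], f0[1] + k[1] * f_xy[1])
```

Given two calibration-point pairs, `calibration_coefficient` recovers K, after which `map_to_motion` maps any first sample coordinate to a pan-tilt motion position.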
In the embodiment of the disclosure, by establishing the calibration relationship between the first camera and the second camera, the second camera can better track and monitor the target area based on the calibration relationship when linked with the first camera.
In some embodiments, the determining the motion position of the second camera based on the target region and the calibration relationship between the first camera and the second camera includes:
acquiring the central coordinates of the target area;
and determining the motion coordinate of the second camera based on the central coordinate and the calibration relation between the first camera and the second camera.
In embodiments of the present disclosure, after the target area is acquired, the center coordinate of the target area may be determined. The center coordinate of the target area is used to represent the position of the target area and consists of an abscissa and an ordinate.
In some embodiments, the obtaining the center coordinates of the target area may include:
acquiring the initial coordinate of the target area, the height of the target area and the width of the target area;
obtaining the abscissa of the central coordinate based on the abscissa of the starting coordinate and the width of the target area;
and obtaining the vertical coordinate of the central coordinate based on the vertical coordinate of the initial coordinate and the height of the target area.
In an embodiment of the present disclosure, obtaining the abscissa of the center coordinate based on the abscissa of the starting coordinate and the width of the target area may include: obtaining the abscissa of the center coordinate as the sum of the abscissa of the starting coordinate and half the width of the target area.
For example, the abscissa of the center coordinate may be determined using equation (5). Wherein u is the abscissa of the initial coordinate; w is the width of the target region; u' is the abscissa of the central coordinate.
u'=u+W/2 (5)
In the embodiment of the present disclosure, obtaining the ordinate of the center coordinate based on the ordinate of the start coordinate and the height of the target area may include: the ordinate of the center coordinate is obtained based on the difference between the ordinate of the start coordinate and half the height of the target area.
For example, the ordinate of the center coordinate may be determined using equation (6). Wherein v is the ordinate of the initial coordinate; h is the height of the target area; v' is the ordinate of the central coordinate.
v'=v-H/2 (6)
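Equations (5) and (6) can be combined into one short helper. The signs follow the text above (half the width is added, half the height is subtracted), which presumes a starting coordinate at the lower-left of the target box; this convention is an assumption taken from the description, not a limitation of the disclosure.

```python
def center_coordinate(u, v, w, h):
    """Center coordinate of the target area from its starting
    coordinate (u, v), width W and height H.
    Equation (5): u' = u + W / 2
    Equation (6): v' = v - H / 2
    """
    return (u + w / 2, v - h / 2)
```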
In the embodiment of the disclosure, after the center coordinate of the target area is determined, the motion coordinate of the second camera may be determined based on the center coordinate of the target area and the calibration relationship. Here, the motion coordinate of the second camera may be obtained by substituting the center coordinate of the target area for x and y in the first sample coordinate F'(x, y) in equation (3) above.
Here, after the first camera determines the target area, the motion coordinate of the second camera can be obtained directly from the calibration relationship, so that the second camera can be aimed at the target area and can then monitor the details of the target area.
In order to better understand one or more of the above embodiments, the embodiments of the present disclosure are described below taking the first camera as a fisheye camera and the second camera as a dome camera as an example:
as shown in fig. 3, controlling the dome camera to monitor the target area determined by the fisheye camera according to the embodiment of the disclosure may include the following steps:
S301, pulling a video stream from the fisheye camera;
S302, performing image analysis on the pulled video stream using a first target detection algorithm to obtain the center coordinate of a target area;
S303, determining the motion coordinate of the dome camera based on the center coordinate of the target area and the calibration relationship between the fisheye camera and the dome camera;
and S304, controlling the dome camera to monitor the target area based on the motion coordinate of the dome camera.
In the embodiment of the disclosure, the fisheye camera has a panoramic monitoring function and can obtain a wider, more comprehensive monitoring picture through its super wide angle; the dome camera has a zoom function through which details of the target area can be observed. Therefore, the fisheye camera can be linked with the dome camera through the calibration relationship between them: the fisheye camera performs target detection while the dome camera automatically tracks the target area of interest, so that the panorama and the details can be monitored simultaneously.
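Steps S302 and S303 above can be chained into a single routine: from a target box detected in the fisheye frame to the dome camera's motion coordinate. The box format (u, v, W, H), the componentwise mapping, and the function name are assumptions made here for illustration.

```python
def linkage_motion_coordinate(box, k, f0):
    """S302-S303 combined: compute the target area's center per
    equations (5)-(6), then map it to the dome camera's motion
    coordinate per equation (3). `k` is the calibration coefficient
    and `f0` the calibration distance, both treated per axis."""
    u, v, w, h = box
    center = (u + w / 2, v - h / 2)          # equations (5) and (6)
    return (f0[0] + k[0] * center[0],        # equation (3), pan axis
            f0[1] + k[1] * center[1])        # equation (3), tilt axis
```

In use, the returned pair would be sent to the pan-tilt head as in S304.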
As shown in fig. 4, adjusting the focal length of the dome camera, that is, adjusting the zoom view of the dome camera, according to the embodiment of the present disclosure may include the following steps:
S401, pulling a video stream from the fisheye camera;
S402, performing image analysis on the pulled video stream using a second target detection algorithm to obtain the width of a target area;
S403, determining a target zoom ratio of the dome camera based on the width of the target area and the resolution width of the dome camera;
and S404, adjusting the focal length of the dome camera based on the target zoom ratio of the dome camera.
In the embodiment of the disclosure, after the dome camera is made to monitor the target area through the calibration relationship between the fisheye camera and the dome camera, the focal length of the dome camera can be adjusted so that the dome camera can monitor the details of the target area; in this way, the dome camera monitors the details while the fisheye camera monitors the panorama.
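The zoom adjustment of S403-S404, together with the preset clear zoom ratio floor described elsewhere in this disclosure, can be sketched as follows. Taking the zoom as the ratio of the resolution width to the target width is an assumption, since the disclosure only states that the zoom is determined from these two widths; the default `clear_zoom` value is likewise illustrative.

```python
def target_zoom_ratio(target_width, resolution_width, clear_zoom=2.0):
    """Zoom so the target area fills the dome camera frame, but never
    below the preset clear zoom ratio. The ratio formula and the
    default clear_zoom value are illustrative assumptions."""
    zoom = resolution_width / target_width
    # Use the target zoom when it reaches the clear zoom ratio;
    # otherwise fall back to the preset clear zoom ratio.
    return zoom if zoom >= clear_zoom else clear_zoom
```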
The embodiment of the disclosure further provides a linkage monitoring device, which is applied to an electronic device which is in communication connection with both the first camera and the second camera. As shown in fig. 5, the linkage monitoring apparatus 500 at least includes:
a first obtaining module 501 configured to obtain a target area through the first camera;
a first determining module 502 configured to determine a motion position of the second camera based on the target area and a calibration relationship between the first camera and the second camera;
a control module 503 configured to control the second camera to monitor the target area according to the motion position.
In some embodiments, the apparatus further comprises:
a second obtaining module configured to obtain a resolution width of the second camera;
a second determination module configured to determine a target scaling of the second camera based on a width of the target region and the resolution width;
an adjustment module configured to adjust a focal length of the second camera based on the target zoom ratio.
In some embodiments, the adjusting module is further configured to adjust the focal length of the second camera based on the target zoom ratio when the target zoom ratio is greater than or equal to a preset clear zoom ratio; and when the target zoom ratio is smaller than the preset clear zoom ratio, adjusting the focal length of the second camera based on the preset clear zoom ratio.
In some embodiments, the apparatus further comprises:
a classification module configured to determine an attribute classification of the monitored object in the target region and a state classification of the monitored object through a classification model.
In some embodiments, the apparatus further comprises:
a third acquisition module configured to acquire an undistorted image through the first camera;
a fourth obtaining module configured to obtain a plurality of first sample coordinates corresponding to a plurality of calibration points from the undistorted image;
the alignment module is configured to align the center of the second camera with the plurality of calibration points to obtain a plurality of second sample coordinates corresponding to the plurality of calibration points;
a third determination module configured to determine the calibration relationship based on the plurality of first sample coordinates and the plurality of second sample coordinates.
In some embodiments, the third determining module is further configured to associate the first sample coordinate and the second sample coordinate corresponding to the same calibration point, and establish a relationship among the plurality of groups of associated coordinate data through an interpolation function to obtain the calibration relationship;
the third acquisition module is further configured to acquire a sample image through the first camera and determine the undistorted image based on the sample image and a distortion removal model.
In some embodiments, the first determining module comprises:
a fifth obtaining module configured to obtain a center coordinate of the target area;
and the fourth determination module is configured to determine the motion coordinate of the second camera based on the central coordinate and the calibration relation between the first camera and the second camera.
In some embodiments, the fifth obtaining module is configured to obtain a start coordinate of the target region, a height of the target region, and a width of the target region; obtaining the abscissa of the central coordinate based on the abscissa of the starting coordinate and the width of the target area; and obtaining the vertical coordinate of the central coordinate based on the vertical coordinate of the initial coordinate and the height of the target area.
In some embodiments, the first camera comprises a camera for taking a panorama; the second camera includes a camera for capturing details.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
FIG. 6 is a block diagram illustrating an electronic device in accordance with an example embodiment. Referring to FIG. 6, electronic device 900 includes a processing component 922, which further includes one or more processors and memory resources, represented by memory 932, for storing instructions, such as applications, that may be executed by processing component 922. The application programs stored in memory 932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 922 is configured to execute instructions to perform the method of linkage monitoring described above.
The electronic device 900 may also include a power component 926 configured to perform power management of the electronic device 900, a wired or wireless network interface 950 configured to connect the electronic device 900 to a network, and an input/output (I/O) interface 958. The electronic device 900 may operate based on an operating system stored in the memory 932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided that includes instructions, such as the memory 932 that includes instructions, that are executable by the processing component 922 of the electronic device 900 to perform the methods described above. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium having instructions stored thereon, which when executed by a processing component of an electronic device, enable the electronic device to perform a method of linkage monitoring, the method being applied to an electronic device having communication connections established with a first camera and a second camera, and comprising at least:
acquiring a target area through the first camera;
determining the motion position of the second camera based on the target area and the calibration relation between the first camera and the second camera;
and controlling the second camera to monitor the target area according to the movement position.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the method described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (20)

1. A linkage monitoring method, applied to an electronic device that has established communication connections with both a first camera and a second camera, the method comprising:
acquiring a target area through the first camera;
determining the motion position of the second camera based on the target area and the calibration relation between the first camera and the second camera;
and controlling the second camera to monitor the target area according to the movement position.
2. The method of claim 1, further comprising:
acquiring the resolution width of the second camera;
determining a target scaling of the second camera based on the width of the target area and the resolution width;
adjusting a focal length of the second camera based on the target scaling.
3. The method of claim 2, wherein said adjusting the focal length of the second camera based on the target zoom ratio comprises:
when the target zoom ratio is larger than or equal to a preset clear zoom ratio, adjusting the focal length of the second camera based on the target zoom ratio;
and when the target zoom ratio is smaller than the preset clear zoom ratio, adjusting the focal length of the second camera based on the preset clear zoom ratio.
4. The method according to any one of claims 1 to 3, further comprising:
and determining attribute classification of the monitored object in the target area and state classification of the monitored object through a classification model.
5. The method according to any one of claims 1 to 3, further comprising:
acquiring an undistorted image through the first camera;
acquiring a plurality of first sample coordinates corresponding to a plurality of calibration points from the undistorted image;
aligning the center of the second camera to the plurality of calibration points to obtain a plurality of second sample coordinates corresponding to the plurality of calibration points;
and determining the calibration relation based on the plurality of first sample coordinates and the plurality of second sample coordinates.
6. The method of claim 5, wherein said determining said calibration relationship based on a plurality of said first sample coordinates and a plurality of said second sample coordinates comprises:
associating the first sample coordinate and the second sample coordinate corresponding to the same calibration point;
and establishing the relationship among the plurality of groups of associated coordinate data through an interpolation function to obtain the calibration relationship.
7. The method of claim 5, wherein the acquiring the undistorted image by the first camera comprises:
acquiring a sample image through the first camera;
determining the undistorted image based on the sample image and a distortion removal model.
8. The method of any one of claims 1 to 3, wherein determining the motion position of the second camera based on the target region and a calibration relationship between the first camera and the second camera comprises:
acquiring a central coordinate of the target area;
and determining the motion coordinate of the second camera based on the central coordinate and the calibration relation between the first camera and the second camera.
9. The method of claim 8, wherein the obtaining the center coordinates of the target area comprises:
acquiring the initial coordinate of the target area, the height of the target area and the width of the target area;
obtaining the abscissa of the central coordinate based on the abscissa of the starting coordinate and the width of the target area;
and obtaining the ordinate of the central coordinate based on the ordinate of the starting coordinate and the height of the target area.
10. The method of any of claims 1 to 3, wherein the first camera comprises a camera for taking a panorama; the second camera includes a camera for capturing details.
11. A linkage monitoring apparatus, applied to an electronic device that has established communication connections with both a first camera and a second camera, the apparatus comprising:
the first acquisition module is configured to acquire a target area through the first camera;
the first determination module is configured to determine the motion position of the second camera based on the target area and the calibration relation between the first camera and the second camera;
and the control module is configured to control the second camera to monitor the target area according to the motion position.
12. The apparatus of claim 11, further comprising:
a second obtaining module configured to obtain a resolution width of the second camera;
a second determination module configured to determine a target scaling of the second camera based on a width of the target region and the resolution width;
an adjustment module configured to adjust a focal length of the second camera based on the target zoom ratio.
13. The apparatus of claim 12, wherein the adjustment module is further configured to adjust the focal length of the second camera based on the target zoom ratio when the target zoom ratio is greater than or equal to a preset clear zoom ratio; and when the target zoom ratio is smaller than the preset clear zoom ratio, adjusting the focal length of the second camera based on the preset clear zoom ratio.
14. The apparatus of any one of claims 11 to 13, further comprising:
a classification module configured to determine an attribute classification of the monitored object in the target region and a state classification of the monitored object through a classification model.
15. The apparatus of any one of claims 11 to 13, further comprising:
a third acquisition module configured to acquire an undistorted image through the first camera;
a fourth obtaining module configured to obtain a plurality of first sample coordinates corresponding to a plurality of calibration points from the undistorted image;
the alignment module is configured to align the center of the second camera with the plurality of calibration points to obtain a plurality of second sample coordinates corresponding to the plurality of calibration points;
a third determination module configured to determine the calibration relationship based on the plurality of first sample coordinates and the plurality of second sample coordinates.
16. The apparatus of claim 15, wherein the third determining module is further configured to associate the first sample coordinate and the second sample coordinate corresponding to the same calibration point, and establish a relationship among the plurality of groups of associated coordinate data through an interpolation function to obtain the calibration relationship;
the third acquisition module is further configured to acquire a sample image through the first camera and determine the undistorted image based on the sample image and a distortion removal model.
17. The apparatus of any of claims 11 to 13, wherein the first determining module comprises:
a fifth obtaining module configured to obtain a center coordinate of the target area;
a fourth determining module configured to determine the motion coordinate of the second camera based on the center coordinate and a calibration relationship between the first camera and the second camera.
18. The apparatus of any of claims 11 to 13, wherein the first camera comprises a camera for taking a panorama; the second camera comprises a camera for capturing details.
19. An electronic device, characterized in that the electronic device comprises at least: a processor and a memory for storing executable instructions operable on the processor, wherein:
the processor is configured to execute the executable instructions to perform the steps of the linkage monitoring method of any one of claims 1 to 10.
20. A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of an electronic device, enable the electronic device to perform the linkage monitoring method of any one of claims 1 to 10.
CN202210966540.XA 2022-08-12 2022-08-12 Linkage monitoring method and device, electronic equipment and readable storage medium Active CN115942119B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210966540.XA CN115942119B (en) 2022-08-12 2022-08-12 Linkage monitoring method and device, electronic equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN115942119A true CN115942119A (en) 2023-04-07
CN115942119B CN115942119B (en) 2023-11-21

Family

ID=86551108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210966540.XA Active CN115942119B (en) 2022-08-12 2022-08-12 Linkage monitoring method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115942119B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104125433A (en) * 2014-07-30 2014-10-29 西安冉科信息技术有限公司 Moving object video surveillance method based on multi-PTZ (pan-tilt-zoom)-camera linkage structure
CN108230397A (en) * 2017-12-08 2018-06-29 深圳市商汤科技有限公司 Multi-lens camera is demarcated and bearing calibration and device, equipment, program and medium
CN109120904A (en) * 2018-10-19 2019-01-01 武汉星巡智能科技有限公司 Binocular camera monitoring method, device and computer readable storage medium
CN110969097A (en) * 2019-11-18 2020-04-07 浙江大华技术股份有限公司 Linkage tracking control method, equipment and storage device for monitored target
CN113949814A (en) * 2021-11-09 2022-01-18 重庆紫光华山智安科技有限公司 Gun and ball linkage snapshot method, device, equipment and medium
CN114359406A (en) * 2021-12-30 2022-04-15 像工场(深圳)科技有限公司 Calibration of auto-focusing binocular camera, 3D vision and depth point cloud calculation method
CN114511639A (en) * 2020-11-16 2022-05-17 深圳市智虹慧能科技有限公司 Calibration method of multi-view camera and related device
CN114612575A (en) * 2022-03-21 2022-06-10 阿里巴巴达摩院(杭州)科技有限公司 Camera parameter calibration and three-dimensional data generation method and system


Also Published As

Publication number Publication date
CN115942119B (en) 2023-11-21

Similar Documents

Publication Publication Date Title
CN109887040B (en) Moving target active sensing method and system for video monitoring
CN109151439B (en) Automatic tracking shooting system and method based on vision
CN105744163B (en) A kind of video camera and image capture method based on depth information tracking focusing
CN103716594B (en) Panorama splicing linkage method and device based on moving target detecting
WO2017045326A1 (en) Photographing processing method for unmanned aerial vehicle
CN110910459B (en) Camera device calibration method and device and calibration equipment
CN108549413A (en) A kind of holder method of controlling rotation, device and unmanned vehicle
CN105979147A (en) Intelligent shooting method of unmanned aerial vehicle
CN112699839B (en) Automatic video target locking and tracking method under dynamic background
WO2020237565A1 (en) Target tracking method and device, movable platform and storage medium
CN112311965A (en) Virtual shooting method, device, system and storage medium
CN106713740B (en) Positioning tracking camera shooting method and system
CN112207821B (en) Target searching method of visual robot and robot
CN107038714B (en) Multi-type visual sensing cooperative target tracking method
CN109886995B (en) Multi-target tracking method in complex environment
CN110602376B (en) Snapshot method and device and camera
CN113111715B (en) Unmanned aerial vehicle target tracking and information acquisition system and method
CN112307912A (en) Method and system for determining personnel track based on camera
CN111242988A (en) Method for tracking target by using double pan-tilt coupled by wide-angle camera and long-focus camera
CN114331860A (en) Distorted image correction method and positioning method thereof
CN110991306A (en) Adaptive wide-field high-resolution intelligent sensing method and system
TWI696147B (en) Method and system for rendering a panoramic image
CN112001224A (en) Video acquisition method and video acquisition system based on convolutional neural network
CN114697528A (en) Image processor, electronic device and focusing control method
CN112839165A (en) Method and device for realizing face tracking camera shooting, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant