CN114245024B - Optical camera acquisition control system and method - Google Patents

Info

Publication number
CN114245024B
CN114245024B (application CN202210174168.9A)
Authority
CN
China
Prior art keywords
target
camera
cameras
data
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210174168.9A
Other languages
Chinese (zh)
Other versions
CN114245024A (en)
Inventor
刘瑞军
陈宝锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Leiying Photoelectric Technology Co ltd
Original Assignee
Shenzhen Leiying Photoelectric Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Leiying Photoelectric Technology Co ltd filed Critical Shenzhen Leiying Photoelectric Technology Co ltd
Priority to CN202210174168.9A
Publication of CN114245024A
Application granted
Publication of CN114245024B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/04Synchronising
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses an optical camera acquisition control system and method comprising an initial information acquisition module, a data processing center, an acquisition resource planning module, an image acquisition planning module, an image acquisition control module and an image communication module. Movement data of the target whose information must be collected in the space, together with the shooting data of the optical cameras, are collected; all collected data are stored and managed by the data processing center; the acquisition resource planning module allocates an appropriate number of optical cameras to appropriate positions for image acquisition; the image acquisition planning module generates a mapping model; and the image acquisition control module monitors the target. When the target moves, the same trigger source is set for the cameras, which are rotated synchronously according to the mapping model to control image acquisition, and the images are transmitted by the image communication module. Image acquisition efficiency is thereby improved, and the difficulty of adjusting and controlling the cameras is reduced.

Description

Optical camera acquisition control system and method
Technical Field
The invention relates to the technical field of camera control, in particular to an optical camera acquisition control system and method.
Background
Optical cameras come in many types, but their basic working principle is the same: an optical image signal is converted into an electrical signal, which is convenient to store or transmit. With the development of science and technology, the application range of optical cameras has broadened, and they are now widely used to monitor the operation of machines: images are collected and transmitted wirelessly over the Internet, realizing image acquisition and image communication so that operating anomalies are found in time and losses are reduced.
however, when the optical camera is controlled to capture images in the prior art, the following problems exist: firstly, when a target is a non-static object, a plurality of cameras are adopted to collect images which are contained in the target and need to be collected, but the number and the positions of the cameras cannot be reasonably distributed, so that the defects that complete and effective information cannot be obtained to the maximum extent, resources cannot be reasonably utilized, and the resource cost is increased exist; secondly, when the target moves, need rotate the camera and carry out information acquisition, prior art need all control every camera: the rotation angle of each camera is controlled, so that synchronous control of other cameras cannot be realized when one camera is controlled, and the working difficulty is increased.
Therefore, there is a need for an optical camera acquisition control system and method to solve the above problems.
Disclosure of Invention
The present invention aims to provide an optical camera acquisition control system and method to solve the problems raised in the background.
In order to solve the technical problems, the invention provides the following technical scheme: an optical camera acquisition control system comprising an initial information acquisition module, a data processing center, an acquisition resource planning module, an image acquisition planning module, an image acquisition control module and an image communication module;
the initial information acquisition module acquires movement data of the target whose information needs to be acquired in the space, together with the shooting data of the optical cameras;
the data processing center stores and manages all the acquired data;
the acquisition resource planning module retrieves the target movement data from the data processing center, analyzes them, and allocates an appropriate number of optical cameras to appropriate positions for image acquisition according to the analysis result;
the image acquisition planning module positions the target and the optical cameras, simulates the optical cameras shooting the target as the target's position changes, confirms the relative rotation angles of the cameras when the information to be acquired is captured, and generates a mapping model from the confirmation result;
the image acquisition control module monitors the target; when the target moves, the same trigger source is set for the cameras, the cameras are rotated and adjusted synchronously according to the mapping model, and image data are acquired after adjustment;
and the image communication module wirelessly transmits the acquired image data to a receiving terminal.
Further, the initial information acquisition module comprises a target data acquisition unit and a shooting information acquisition unit, and the target data acquisition unit is used for acquiring distribution data of a target in a space; the shooting information acquisition unit is used for acquiring the shooting range and shooting angle data of the optical camera and transmitting all the acquired data to the data processing center;
the acquisition resource planning module comprises a target data analysis unit and a resource allocation unit, wherein the target data analysis unit is used for analyzing the distribution data of a target in the space and confirming the target moving range; and the resource allocation unit is used for allocating a proper number of cameras to acquire the target images according to the confirmed target movement range.
Furthermore, the image acquisition planning module comprises a shooting space modeling unit, a GPS positioning unit, an adjustment parameter analysis unit and a mapping model generation unit, wherein the shooting space modeling unit is used for carrying out three-dimensional modeling on a shooting space; the GPS positioning unit is used for positioning a target and the distributed cameras and transmitting positioning information to the adjustment parameter analysis unit; the adjustment parameter analysis unit is used for analyzing the relative rotation angle change data of the camera when the position of the acquired target information changes; the mapping model generating unit is used for analyzing the mapping relation between the relative rotation angle change between the cameras and the position change data of the target acquired information, and generating a camera relative rotation angle adjusting model according to the analysis result.
Furthermore, the image acquisition control module comprises a target monitoring unit, a synchronous rotation adjusting unit, a synchronous trigger control unit and a target image acquisition unit; the target monitoring unit monitors the target in real time, and when target movement is detected, the synchronous rotation adjusting unit synchronously adjusts the camera angles according to the adjustment model; the synchronous trigger control unit sets the same trigger source for the cameras, so that when the rotation angle of any one camera is adjusted, the remaining cameras are triggered synchronously and their rotation angles are adjusted synchronously; the target image acquisition unit shoots the target image after adjustment is completed and transmits the shot image to the image communication module.
An optical camera acquisition control method comprises the following steps:
S01: collecting distribution data of the target whose information needs to be acquired as it moves in the space, and allocating and installing the optical cameras;
S02: calling historical data, analyzing the position changes of the information to be acquired when the target moves together with the rotation data of the installed optical cameras, and generating a mapping model relating target movement data to camera rotation data;
S03: monitoring the target in real time, and synchronously triggering and controlling camera rotation when target movement is detected.
Further, in step S01: the maximum shooting angle a single camera must cover is [formula image in source; a function of n], where n represents the number of cameras. The ground of the shooting space is set as the reference plane, and the shooting space is divided equally by m vertical planes perpendicular to the reference plane; adjacent vertical planes intersect, and the included angle between every two adjacent vertical planes is [formula image]. Each vertical plane has area s_plane. An optical camera is simulated on the perpendicular line through the center of each vertical plane, and the shooting angles of the m cameras are uniformly adjusted to [formula image]. The imaging area of a camera is S. While the target moves, the counts of the information to be acquired appearing on each imaging plane form the set k = {k1, k2, …, km}, and the effective coefficient Wi of the image acquired by the optical camera on the perpendicular of any one vertical plane is calculated by [formula image: Wi in terms of S, s_plane, ki and k], where ki represents the number of times the information to be acquired appears on that imaging plane. The set of effective coefficients over all vertical planes is W = {W1, W2, …, Wm}. The threshold of the effective coefficient is set as w; each Wi is compared with w, the coefficients exceeding the threshold are screened out, and optical cameras are installed, all at the same height, on the perpendiculars of the vertical planes corresponding to those coefficients. In short, to confirm the final number and positions of the cameras, the space is divided equally by vertical planes, cameras are simulated at the relative positions of the plane centers, the number of times target information appears in each camera's picture during target movement is acquired, and an effective coefficient is calculated for each camera from the proportion of appearances. Positions where the target information cannot appear are thus identified quickly and left without a camera, which improves the effectiveness of shooting the target information while saving camera resources.
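For concreteness, the screening logic of step S01 can be sketched in Python. The patent's exact effective-coefficient formula survives only as an image, so effective_coefficient below uses an assumed stand-in (the camera's imaging-area share S/s_plane weighted by that plane's share of target appearances); only the screening-by-threshold step is taken directly from the text.

from typing import List

def effective_coefficient(S: float, s_plane: float, k_i: int, k_total: int) -> float:
    # Assumed stand-in formula; the patent's own expression is an
    # unrecoverable image and may differ.
    return (S / s_plane) * (k_i / k_total)

def screen_planes(k: List[int], S: float, s_plane: float, w: float) -> List[int]:
    # Return the (1-based) vertical planes whose effective coefficient
    # exceeds the threshold w; cameras are installed only on these.
    k_total = sum(k)
    return [i + 1 for i, k_i in enumerate(k)
            if effective_coefficient(S, s_plane, k_i, k_total) > w]

print(screen_planes(k=[20, 5, 15], S=7.0, s_plane=8.0, w=0.24))  # [1, 3]

With the numbers of example one this stand-in also selects planes 1 and 3, matching the patent's screening result, though the intermediate coefficients differ from the reported W = {0.42, 0.01, 0.28}.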
Further, in step S02: after the optical cameras are installed, the shooting space is modeled in three dimensions, the target and the optical cameras are positioned, and the set of installed camera position coordinates (x, y, z) = {(x1, y1, z1), (x2, y2, z1), …, (xp, yp, z1)} is obtained, where p represents the number of installed cameras and the coordinates satisfy two constraints given only as formula images in the source; note that every camera shares the height z1. The corresponding cameras are numbered with the set {1, 2, …, p}. The cameras are simulated shooting the target and historical data are collected: for any single movement of the target, the movement vector of the position of the information to be acquired has coordinates (X, Y, Z), the distance between the positions of the information to be acquired before and after that movement is di, and the included angle between the movement vector and the x axis is [formula image]. The set of the cameras' rotation angles is [formula image], and the set of mean differences between each camera's rotation angle and that of camera number 1, taken over the target movements, is [formula image]. Data for M target movements are acquired, the rotation angles of camera 1 over those movements forming the set [formula image]. A mapping model between camera 1's rotation angle and the target data is established: [formula image: a model in d with coefficients A and B], where d = {d1, d2, …, di, …, dM} and A and B represent model coefficients, calculated by closed-form expressions given as formula images, in which [formula image] denotes camera 1's rotation angle for one random movement. The mapping model with confirmed coefficients A and B is then generated: historical data are collected, and big-data analysis relates the position changes of the information to be acquired during target movement to the camera rotation-angle changes, forming the data correspondence from which the mapping model is generated.
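Because the model relates a single distance variable d to camera 1's angle through exactly two coefficients A and B, a linear form theta1 = A*d + B fitted by ordinary least squares is a plausible reading; the patent's own formulas are unrecoverable images, so the following Python sketch is an assumption, exercised on hypothetical angle data.

from typing import List, Tuple

def fit_mapping_model(d: List[float], theta1: List[float]) -> Tuple[float, float]:
    # Ordinary-least-squares fit of theta1 ~ A*d + B over M historical
    # target movements (assumed model form).
    M = len(d)
    sum_d, sum_t = sum(d), sum(theta1)
    sum_dd = sum(x * x for x in d)
    sum_dt = sum(x * t for x, t in zip(d, theta1))
    A = (M * sum_dt - sum_d * sum_t) / (M * sum_dd - sum_d ** 2)
    B = (sum_t - A * sum_d) / M
    return A, B

# Hypothetical history: three movements with distances d and camera-1 angles.
A, B = fit_mapping_model(d=[5.0, 3.0, 4.0], theta1=[25.0, 15.0, 20.0])
print(A, B)  # 5.0 0.0 for this synthetic history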
Further, in step S03: the target is monitored in real time. When target movement is detected, the included angle between the movement vector of the position of the information to be acquired and the x axis is determined as [value image] and the distance between those positions is D; the angle is matched against the historical angle [formula images]. If they match, D is substituted into the mapping model to obtain the rotation angle of camera 1 for the current movement, [formula image], together with the set of rotation angles of the remaining cameras, [formula image]. The same trigger source is set for the p cameras and the rotation-angle parameters are input, so the remaining cameras rotate synchronously while camera 1 is controlled to rotate; after rotation is completed, the target image is acquired and wirelessly transmitted to the receiving terminal. Under multi-camera shooting, the relative differences between the cameras' rotation angles are obtained from big-data analysis; when the target moves, the cameras are connected to the same trigger source and the rotation-angle parameters are input synchronously, realizing synchronous rotation of the remaining cameras while one camera is controlled to rotate.
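The shared-trigger idea, one trigger source fanning a single rotation command out to all p cameras with each camera applying its stored mean offset from camera 1, can be sketched as follows; the camera interface here is hypothetical, and only the shared-trigger pattern comes from the text.

from typing import Callable, List

class TriggerSource:
    # One trigger source shared by all cameras: firing once rotates them all.
    def __init__(self) -> None:
        self._handlers: List[Callable[[float], None]] = []

    def connect(self, handler: Callable[[float], None]) -> None:
        self._handlers.append(handler)

    def fire(self, theta_1: float) -> None:
        for handler in self._handlers:  # every camera sees the same event
            handler(theta_1)

trigger = TriggerSource()
mean_offsets = [0.0, 8.0]  # offsets from camera 1 (hypothetical values)
for cam_id, off in enumerate(mean_offsets, start=1):
    trigger.connect(lambda theta, c=cam_id, o=off:
                    print(f"camera {c} -> {theta + o:.1f} degrees"))

trigger.fire(30.0)  # controlling camera 1 rotates the rest synchronously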
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the shooting space is divided equally through the vertical planes, the relative position of the center of each vertical plane is simulated to be provided with a camera, and historical data are acquired: the method comprises the steps of collecting the times of target information appearing in a camera picture when a target moves, removing the invalid setting positions of the cameras by combining the proportion of the imaging picture of the camera to a vertical plane and the proportion of the times appearing in the shooting pictures of different cameras, reasonably distributing the number and the positions of the cameras, improving the effectiveness of shooting the target information, and fully utilizing the camera resources, thereby saving the equipment cost; by collecting historical data and analyzing the change data of the information position and the change data of the rotation angle of the camera to be collected when the target moves through big data, a data corresponding relation is formed, and a mapping model is generated: when the movement of the target is monitored actually, the camera rotation angle can be confirmed according to the target movement data, the camera is rotated rapidly, and the speed of shooting the information to be collected is improved; under the condition that a plurality of cameras shoot, when a target moves, the cameras are connected to the same trigger source, and the rotation angle parameters are synchronously input, so that the function of synchronously rotating the other cameras while controlling one camera to rotate is realized, and the image acquisition efficiency is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a block diagram of an optical camera acquisition control system of the present invention;
FIG. 2 is a step diagram of the optical camera acquisition control method of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Referring to fig. 1-2, the present invention provides a technical solution: an optical camera acquisition control system comprising an initial information acquisition module S1, a data processing center S2, an acquisition resource planning module S3, an image acquisition planning module S4, an image acquisition control module S5 and an image communication module S6;
the initial information acquisition module S1 acquires movement data of the target whose information needs to be acquired in the space, together with the shooting data of the optical cameras;
the data processing center S2 stores and manages all the acquired data;
the acquisition resource planning module S3 retrieves the target movement data from the data processing center S2, analyzes them, and allocates an appropriate number of optical cameras to appropriate positions for image acquisition according to the analysis result;
the image acquisition planning module S4 positions the target and the optical cameras, simulates the optical cameras shooting the target as the target's position changes, confirms the relative rotation angles of the cameras when the information to be acquired is captured, and generates a mapping model from the confirmation result;
the image acquisition control module S5 monitors the target; when the target moves, the same trigger source is set for the cameras, the cameras are rotated and adjusted synchronously according to the mapping model, and image data are acquired after adjustment;
the image communication module S6 wirelessly transmits the acquired image data to the receiving terminal.
The initial information acquisition module S1 comprises a target data acquisition unit and a shooting information acquisition unit, wherein the target data acquisition unit is used for acquiring the distribution data of a target in the space; the shooting information acquisition unit is used for acquiring the shooting range and shooting angle data of the optical camera and transmitting all the acquired data to the data processing center S2;
the acquisition resource planning module S3 comprises a target data analysis unit and a resource allocation unit, wherein the target data analysis unit is used for analyzing the distribution data of the target in the space and confirming the target moving range; and the resource allocation unit is used for allocating a proper number of cameras to acquire the target images according to the confirmed target movement range.
The image acquisition planning module S4 comprises a shooting space modeling unit, a GPS positioning unit, an adjustment parameter analysis unit and a mapping model generation unit, wherein the shooting space modeling unit is used for carrying out three-dimensional modeling on a shooting space; the GPS positioning unit is used for positioning the target and the distributed cameras and transmitting the positioning information to the adjustment parameter analysis unit; the adjustment parameter analysis unit is used for analyzing the relative rotation angle change data of the camera when the position of the acquired target information changes; the mapping model generating unit is used for analyzing the mapping relation between the relative rotation angle change between the cameras and the position change data of the target acquired information, and generating a camera relative rotation angle adjusting model according to the analysis result.
The image acquisition control module S5 comprises a target monitoring unit, a synchronous rotation adjusting unit, a synchronous trigger control unit and a target image acquisition unit; the target monitoring unit monitors the target in real time, and when target movement is detected, the synchronous rotation adjusting unit synchronously adjusts the camera angles according to the mapping model; the synchronous trigger control unit sets the same trigger source for the cameras, so that when the rotation angle of any one camera is adjusted, the remaining cameras are triggered synchronously and their rotation angles are adjusted synchronously; the target image acquisition unit shoots the target image after adjustment is completed and transmits the shot image to the image communication module S6.
The optical camera acquisition control method comprises the following steps:
S01: collecting distribution data of the target whose information needs to be acquired as it moves in the space, and allocating and installing the optical cameras;
S02: calling historical data, analyzing the position changes of the information to be acquired when the target moves together with the rotation data of the installed optical cameras, and generating a mapping model relating target movement data to camera rotation data;
S03: monitoring the target in real time, and synchronously triggering and controlling camera rotation when target movement is detected.
In step S01: the maximum shooting angle a single camera must cover is [formula image in source], where n represents the number of cameras. The ground of the shooting space is set as the reference plane, and the shooting space is divided equally by m vertical planes perpendicular to the reference plane; adjacent vertical planes intersect, and the included angle between every two adjacent vertical planes is [formula image]. Each vertical plane has area s_plane. An optical camera is simulated on the perpendicular line through the center of each vertical plane, and the shooting angles of the m cameras are uniformly adjusted to [formula image]. The imaging area of a camera is S. While the target moves, the counts of the information to be acquired appearing on each imaging plane form the set k = {k1, k2, …, km}, and the effective coefficient Wi of the image acquired by the optical camera on the perpendicular of any one vertical plane is calculated by [formula image], where ki represents the number of times the information to be acquired appears on that imaging plane. The set of effective coefficients over all vertical planes is W = {W1, W2, …, Wm}; the threshold of the effective coefficient is set as w; each Wi is compared with w, the coefficients exceeding the threshold are screened out, and optical cameras are installed, all at the same height, on the perpendiculars of the corresponding vertical planes. This improves the effectiveness of shooting the target information while making full use of camera resources; the effective coefficients also account for target information appearing outside the imaging picture, which improves the fault tolerance of the calculation result.
In step S02: after the optical cameras are installed, the shooting space is modeled in three dimensions, the target and the optical cameras are positioned, and the set of installed camera position coordinates (x, y, z) = {(x1, y1, z1), (x2, y2, z1), …, (xp, yp, z1)} is obtained, where p represents the number of installed cameras and the coordinates satisfy two constraints given only as formula images in the source; every camera shares the height z1. The corresponding cameras are numbered with the set {1, 2, …, p}. The cameras are simulated shooting the target and historical data are collected: for any single movement of the target, the movement vector of the position of the information to be acquired has coordinates (X, Y, Z), the distance between the positions of the information to be acquired before and after that movement is di, and the included angle between the movement vector and the x axis is [formula image]. The set of the cameras' rotation angles is [formula image], and the set of mean differences between each camera's rotation angle and that of camera number 1, taken over the target movements, is [formula image]. Data for M target movements are acquired, the rotation angles of camera 1 over those movements forming the set [formula image]. A mapping model between camera 1's rotation angle and the target data is established: [formula image], where d = {d1, d2, …, di, …, dM}, A and B represent model coefficients calculated by closed-form expressions given as formula images, and [formula image] denotes camera 1's rotation angle for one random movement. Once the mapping model with confirmed coefficients A and B is generated, the camera rotation angle is confirmed from the target movement data, the camera is rotated quickly when the target moves, and the speed of shooting the information to be acquired is improved.
In step S03: the target is monitored in real time. When target movement is detected, the included angle between the movement vector of the position of the information to be acquired and the x axis is determined as [value image] and the distance between those positions is D; the angle is matched against the historical angle [formula images]. If they match, D is substituted into the mapping model to obtain the rotation angle of camera 1 for the current movement, [formula image], together with the set of rotation angles of the remaining cameras, [formula image]. The same trigger source is set for the p cameras and the rotation-angle parameters are input, so the remaining cameras rotate synchronously while camera 1 is controlled to rotate; after rotation is completed, the target image is acquired and wirelessly transmitted to the receiving terminal. Connecting the cameras to the same trigger source and inputting the rotation-angle parameters synchronously realizes synchronous rotation of the remaining cameras while one camera is controlled, which reduces the working difficulty of controlling the rotation of multiple cameras.
Example one: the ground of the shooting space is set as the reference plane and the shooting space is divided equally by m = 3 vertical planes perpendicular to it; adjacent vertical planes intersect at an included angle of [value image]. Each vertical plane has area s_plane = 8 square meters. An optical camera is placed on the perpendicular through the center of each vertical plane, and the shooting angles of the 3 cameras are uniformly adjusted to [value image]. The imaging area of a camera is S = 7. While the target moves, the counts of the information to be acquired appearing on the imaging planes are k = {k1, k2, k3} = {20, 5, 15}. Applying the effective-coefficient formula [formula image] gives the set W = {W1, W2, W3} = {0.42, 0.01, 0.28}. The threshold is set to w = 0.24; comparing each Wi with w screens out the coefficients exceeding it, 0.42 and 0.28, and optical cameras are installed, at the same height, on the perpendiculars of the vertical planes corresponding to 0.42 and 0.28.
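The screening step of example one can be checked directly from the coefficients the patent reports (the formula that produced them is an image in the source); a minimal Python check:

W = {1: 0.42, 2: 0.01, 3: 0.28}   # effective coefficients from example one
w = 0.24                          # threshold from example one
selected = [plane for plane, Wi in W.items() if Wi > w]
print(selected)  # [1, 3]: cameras go on the perpendiculars of planes 1 and 3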
Example two: after the optical cameras are installed, the shooting space is modeled in three dimensions and the target and the optical cameras are positioned; the set of camera position coordinates is (x, y, z) = {(x1, y1, z1), (x2, y2, z1)}, so p = 2, and the corresponding cameras are numbered {1, 2}. The cameras are simulated shooting the target. In the historical data, for one random movement of the target, the movement-vector coordinates of the position of the information to be acquired are confirmed; the distance between the positions of the information to be acquired before and after that movement is di = 3, and the included angle between the movement vector and the x axis is [value image]. The mean difference between the rotation angle of camera 2 and that of camera 1 is obtained as [value image]. Data for M = 3 target movements are acquired, the rotation angles of camera 1 over those movements being [value images]. The mapping model between camera 1's rotation angle and the target data is established with d = {d1, d2, d3} = {5, 3, 4}; A and B are calculated from the model formulas [formula images], and the mapping model with confirmed coefficients A and B is generated.
The target is monitored in real time. When target movement is detected, the included angle between the movement vector of the position of the information to be acquired and the x axis is [value image] and the distance between those positions is D = 6. The angle matches the historical angle, so D is substituted into the mapping model, giving the rotation angle of camera 1 for the current movement as [value image] and the rotation angle of camera 2 as [value image]. The same trigger source is set for the 2 cameras and the two rotation-angle parameters are input; camera 2 rotates synchronously while camera 1 is controlled to rotate, and after rotation is completed the target image is acquired and wirelessly transmitted to the receiving terminal.
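Under the same assumptions as the sketch in step S02 (a linear model fitted by least squares; the patent's actual angle values are unrecoverable images, so the theta values below are hypothetical), example two runs end to end roughly as follows:

d = [5.0, 3.0, 4.0]           # distances for M = 3 historical movements
theta1 = [25.0, 15.0, 20.0]   # hypothetical camera-1 angles per movement
M = len(d)
A = (M * sum(x * t for x, t in zip(d, theta1)) - sum(d) * sum(theta1)) \
    / (M * sum(x * x for x in d) - sum(d) ** 2)
B = (sum(theta1) - A * sum(d)) / M

D = 6.0                       # distance for the currently monitored movement
t1 = A * D + B                # commanded angle for camera 1
t2 = t1 + 8.0                 # camera 2: camera 1 plus its mean offset (hypothetical)
print(f"camera 1 -> {t1:.1f}, camera 2 -> {t2:.1f}")  # 30.0 and 38.0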
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. An optical camera acquisition control system, characterized by: the system comprises: an initial information acquisition module (S1), a data processing center (S2), an acquisition resource planning module (S3), an image acquisition planning module (S4), an image acquisition control module (S5) and an image communication module (S6);
collecting moving data of a target needing to be collected in space and shooting data of an optical camera through the initial information collecting module (S1);
storing and managing all the collected data by the data processing center (S2);
retrieving target movement data from the data processing center (S2) through the acquisition resource planning module (S3), and allocating a proper number of optical cameras to acquire images at proper positions according to the analysis result;
positioning the target and the optical camera through the image acquisition planning module (S4), simulating the optical camera to shoot the target when the position of the target is changed, confirming the relative rotation angle between the cameras when the information required to be acquired is shot, and generating a mapping model between the relative rotation angle between the cameras and the position change data of the information acquired by the target according to the confirmed relative rotation angle between the cameras when the information required to be acquired is shot;
monitoring the target through the image acquisition control module (S5), setting the same trigger source for the camera when the target moves, synchronously rotating and adjusting the camera according to the mapping model, and acquiring image data after adjustment;
wirelessly transmitting the acquired image data to a receiving terminal through the image communication module (S6).
2. An optical camera acquisition control system according to claim 1, characterized in that: the initial information acquisition module (S1) comprises a target data acquisition unit and a shooting information acquisition unit, wherein the target data acquisition unit is used for acquiring distribution data of a target in a space; the shooting information acquisition unit is used for acquiring the shooting range and shooting angle data of the optical camera and transmitting all the acquired data to the data processing center (S2);
the acquisition resource planning module (S3) comprises a target data analysis unit and a resource allocation unit, wherein the target data analysis unit is used for analyzing the distribution data of the target in the space and confirming the target moving range; and the resource allocation unit is used for allocating a proper number of cameras to acquire the target images according to the confirmed target movement range.
3. An optical camera acquisition control system according to claim 1, characterized in that: the image acquisition planning module (S4) comprises a shooting space modeling unit, a GPS positioning unit, an adjustment parameter analysis unit and a mapping model generation unit, wherein the shooting space modeling unit is used for carrying out three-dimensional modeling on a shooting space; the GPS positioning unit is used for positioning a target and the distributed cameras and transmitting positioning information to the adjustment parameter analysis unit; the adjustment parameter analysis unit is used for analyzing the relative rotation angle change data of the camera when the position of the acquired target information changes; the mapping model generating unit is used for analyzing the mapping relation between the relative rotation angle change between the cameras and the position change data of the target acquired information, and generating a camera relative rotation angle adjusting model according to the analysis result.
4. An optical camera acquisition control system according to claim 1, characterized in that: the image acquisition control module (S5) comprises a target monitoring unit, a synchronous rotation adjusting unit, a synchronous trigger control unit and a target image acquisition unit, wherein the target monitoring unit is used for monitoring a target in real time, and when the movement of the target is monitored, the synchronous rotation adjusting unit is used for synchronously adjusting the angle of the camera according to a mapping model; the synchronous trigger control unit is used for setting the same trigger source for the cameras, controlling the synchronous triggering of the rest cameras when the rotation angle of one random camera is adjusted, and synchronously adjusting the rotation angles of the rest cameras; the target image acquisition unit is used for shooting a target image after the adjustment is finished and transmitting the shot image to the image communication module (S6).
5. An optical camera acquisition control method is characterized in that: the method comprises the following steps:
s01: collecting distribution data of a target needing to be collected information when the target moves in a space, calling and analyzing target movement data, and distributing a proper number of optical cameras to collect images at proper positions according to the analyzed target movement data;
s02: calling historical data, analyzing position change of information to be collected when a target moves and set relative rotation angle change data among optical cameras, and generating a relative mapping model between target movement data and the relative rotation angle change data among the cameras;
s03: and monitoring the target in real time, setting the same trigger source for the camera when the monitored target moves, and synchronously rotating and adjusting the camera according to the mapping model.
6. The optical camera acquisition control method according to claim 5, characterized in that in step S01: the maximum shooting angle a single camera must cover is [formula image in source], where n represents the number of cameras; the ground of the shooting space is set as the reference plane, and the shooting space is divided equally by m vertical planes perpendicular to the reference plane; adjacent vertical planes intersect, and the included angle between every two adjacent vertical planes is [formula image]; each vertical plane has area s_plane; an optical camera is simulated on the perpendicular line through the center of each vertical plane, and the shooting angles of the m cameras are uniformly adjusted to [formula image]; the imaging area of a camera is S; while the target moves, the counts of the information to be acquired appearing on each imaging plane form the set k = {k1, k2, …, km}; the effective coefficient Wi of the image acquired by the optical camera on the perpendicular of any one vertical plane is calculated by [formula image], where ki represents the number of times the information to be acquired appears on that imaging plane; the set of effective coefficients over all vertical planes is W = {W1, W2, …, Wm}; the threshold of the effective coefficient is set as w; each Wi is compared with w, the coefficients exceeding the threshold are screened out, and optical cameras are installed, all at the same height, on the perpendiculars of the vertical planes corresponding to those coefficients.
7. The optical camera acquisition control method according to claim 5, characterized in that in step S02: after the optical cameras are installed, the shooting space is modeled in three dimensions, the target and the optical cameras are positioned, and the set of installed camera position coordinates (x, y, z) = {(x1, y1, z1), (x2, y2, z1), …, (xp, yp, z1)} is obtained, where p represents the number of installed cameras and the coordinates satisfy two constraints given only as formula images in the source; the corresponding cameras are numbered with the set {1, 2, …, p}; the cameras are simulated shooting the target and historical data are collected: for any single movement of the target, the movement vector of the position of the information to be acquired has coordinates (X, Y, Z), the distance between the positions of the information to be acquired before and after that movement is di, and the included angle between the movement vector and the x axis is [formula image]; the set of the cameras' rotation angles is [formula image], the rotation angles being clockwise; the set of mean differences between each camera's rotation angle and that of camera number 1, taken over the target movements, is [formula image]; data for M target movements are acquired, the rotation angles of camera 1 forming the set [formula image]; a mapping model between camera 1's rotation angle and the target data is established as [formula image], where d = {d1, d2, …, di, …, dM} and A and B represent model coefficients calculated by [formula images], in which [formula image] denotes camera 1's rotation angle for one random movement; the mapping model with confirmed coefficients A and B is generated, the target data referring to the distance data between the positions of the information to be acquired before and after the target moves.
8. The optical camera acquisition control method according to claim 7, characterized in that in step S03: the target is monitored in real time; when target movement is detected, the included angle between the movement vector of the position of the information to be acquired and the x axis is determined as [value image] and the distance between those positions is D; the angle is matched against the historical angle [formula images]; if they match, D is substituted into the mapping model to obtain the rotation angle of camera 1 for the current movement, [formula image], together with the set of rotation angles of the remaining cameras, [formula image]; the same trigger source is set for the p cameras, the rotation-angle parameters are input, the remaining cameras rotate synchronously while camera 1 is controlled to rotate, and after rotation is completed the target image is acquired and wirelessly transmitted to the receiving terminal.
CN202210174168.9A 2022-02-25 2022-02-25 Optical camera acquisition control system and method Active CN114245024B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210174168.9A CN114245024B (en) 2022-02-25 2022-02-25 Optical camera acquisition control system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210174168.9A CN114245024B (en) 2022-02-25 2022-02-25 Optical camera acquisition control system and method

Publications (2)

Publication Number   Publication Date
CN114245024A (en)    2022-03-25
CN114245024B (grant) 2022-05-27

Family

ID=80748139

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210174168.9A Active CN114245024B (en) 2022-02-25 2022-02-25 Optical camera acquisition control system and method

Country Status (1)

Country Link
CN (1) CN114245024B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114900671B (en) * 2022-07-12 2022-09-23 深圳市盈润科技有限公司 Comprehensive management service system and method based on agricultural Internet of things
CN116095279B (en) * 2023-04-11 2023-09-22 广东广宇科技发展有限公司 Intelligent security resource investment method and system
CN117934722B (en) * 2024-01-26 2024-09-06 武汉海德斯路科技有限公司 Inspection point image acquisition method and device based on three-dimensional model

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751677B (en) * 2008-12-17 2013-01-02 中国科学院自动化研究所 Target continuous tracking method based on multi-camera
JP2011199556A (en) * 2010-03-19 2011-10-06 Nippon Hoso Kyokai <Nhk> Stereoscopy imaging device, stereoscopic signal generator, and program
JP5325251B2 (en) * 2011-03-28 2013-10-23 株式会社日立製作所 Camera installation support method, image recognition method
JP2012249117A (en) * 2011-05-30 2012-12-13 Hitachi Ltd Monitoring camera system
CN107666601A (en) * 2017-10-23 2018-02-06 合肥时代智慧高新投资管理有限公司 A kind of multiple-camera automatic tracking method based on electronic location technology
CN108259828A (en) * 2018-01-19 2018-07-06 长沙全度影像科技有限公司 A kind of system and its image pickup method for the shooting of bullet time
CN113395449B (en) * 2021-06-16 2022-06-24 国网山东省电力公司武城县供电公司 Abnormity monitoring system with self-adaptive resource adjustment

Also Published As

Publication number Publication date
CN114245024A (en) 2022-03-25


Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant