CN114245024B - Optical camera acquisition control system and method - Google Patents
- Publication number: CN114245024B (application CN202210174168.9A)
- Authority
- CN
- China
- Prior art keywords
- target
- camera
- cameras
- data
- information
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
        - H04N23/60—Control of cameras or camera modules
          - H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N5/00—Details of television systems
        - H04N5/04—Synchronising
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N7/00—Television systems
        - H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
          - H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Closed-Circuit Television Systems (AREA)
- Studio Devices (AREA)
Abstract
The invention discloses an optical camera acquisition control system and method comprising an initial information acquisition module, a data processing center, an acquisition resource planning module, an image acquisition planning module, an image acquisition control module and an image communication module. Movement data of the target whose information is to be acquired in the space, together with shooting data of the optical cameras, are collected; all collected data are stored and managed by the data processing center; the acquisition resource planning module allocates an appropriate number of optical cameras at appropriate positions to acquire images; the image acquisition planning module generates a mapping model; the image acquisition control module monitors the target and, when the target moves, sets the same trigger source for the cameras and rotates them synchronously according to the mapping model to control image acquisition; and the image communication module transmits the images. Image acquisition efficiency is thereby improved, and the working difficulty of adjusting and controlling the cameras is reduced.
Description
Technical Field
The invention relates to the technical field of camera control, in particular to an optical camera acquisition control system and method.
Background
Optical cameras come in many types, but their basic operating principle is the same: optical image signals are converted into electrical signals, which are convenient to store or transmit. With the development of science and technology, the application range of optical cameras has become more extensive, and they are now widely used to monitor the operation of machines: images are collected and transmitted wirelessly over the Internet, realizing image acquisition and image communication so that operating abnormalities are found in time and losses are reduced.
However, when an optical camera is controlled to capture images in the prior art, the following problems exist. First, when the target is a non-static object, a plurality of cameras are used to collect the information contained in the target, but the number and positions of the cameras cannot be allocated reasonably, so complete and effective information cannot be obtained to the maximum extent, resources cannot be used reasonably, and the resource cost increases. Second, when the target moves, the cameras must be rotated to acquire information, and the prior art must control every camera individually: the rotation angle of each camera is controlled separately, so the other cameras cannot be controlled synchronously while one camera is controlled, which increases the working difficulty.
Therefore, there is a need for an optical camera acquisition control system and method to solve the above problems.
Disclosure of Invention
The present invention aims to provide an optical camera acquisition control system and method to solve the above problems.
In order to solve the technical problems, the invention provides the following technical scheme: an optical camera acquisition control system comprising an initial information acquisition module, a data processing center, an acquisition resource planning module, an image acquisition planning module, an image acquisition control module and an image communication module;
acquiring, through the initial information acquisition module, movement data of the target whose information is to be acquired in the space, together with shooting data of the optical cameras;
storing and managing all the acquired data through the data processing center;
calling and analyzing target moving data from the data processing center through the acquisition resource planning module, and distributing a proper number of optical cameras to acquire images at proper positions according to an analysis result;
positioning the target and the optical camera through the image acquisition planning module, simulating the optical camera to shoot the target when the position of the target is changed, confirming the relative rotation angle of the camera when the information required to be acquired is shot, and generating a mapping model according to the confirmation result;
monitoring the target through the image acquisition control module, setting the same trigger source for the camera when the target moves, synchronously rotating and adjusting the camera according to the mapping model, and acquiring image data after adjustment;
and wirelessly transmitting the acquired image data to a receiving terminal through the image communication module.
Further, the initial information acquisition module comprises a target data acquisition unit and a shooting information acquisition unit, and the target data acquisition unit is used for acquiring distribution data of a target in a space; the shooting information acquisition unit is used for acquiring the shooting range and shooting angle data of the optical camera and transmitting all the acquired data to the data processing center;
the acquisition resource planning module comprises a target data analysis unit and a resource allocation unit, wherein the target data analysis unit is used for analyzing the distribution data of a target in the space and confirming the target moving range; and the resource allocation unit is used for allocating a proper number of cameras to acquire the target images according to the confirmed target movement range.
Furthermore, the image acquisition planning module comprises a shooting space modeling unit, a GPS positioning unit, an adjustment parameter analysis unit and a mapping model generation unit, wherein the shooting space modeling unit is used for carrying out three-dimensional modeling on a shooting space; the GPS positioning unit is used for positioning a target and the distributed cameras and transmitting positioning information to the adjustment parameter analysis unit; the adjustment parameter analysis unit is used for analyzing the relative rotation angle change data of the camera when the position of the acquired target information changes; the mapping model generating unit is used for analyzing the mapping relation between the relative rotation angle change between the cameras and the position change data of the target acquired information, and generating a camera relative rotation angle adjusting model according to the analysis result.
Furthermore, the image acquisition control module comprises a target monitoring unit, a synchronous rotation adjusting unit, a synchronous trigger control unit and a target image acquisition unit, wherein the target monitoring unit is used for monitoring a target in real time, and when the movement of the target is monitored, the synchronous rotation adjusting unit is used for synchronously adjusting the angle of the camera according to an adjusting model; the synchronous trigger control unit is used for setting the same trigger source for the cameras, controlling the synchronous triggering of the rest cameras when the rotation angle of one random camera is adjusted, and synchronously adjusting the rotation angles of the rest cameras; the target image acquisition unit is used for shooting a target image after the adjustment is finished and transmitting the shot image to the image communication module.
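A minimal sketch of how the modules described above could be wired as a pipeline, under the assumption of a plain in-memory implementation; every class and method name below is illustrative, not part of the patent.

```python
# Illustrative sketch only: the patent specifies modules, not an API.

class DataProcessingCenter:
    """Stores and manages all collected data (target movement, camera data)."""
    def __init__(self):
        self.records = []

    def store(self, record):
        self.records.append(record)


class AcquisitionPipeline:
    """Wires the modules in the order given by the patent description."""
    def __init__(self):
        self.center = DataProcessingCenter()

    def run(self, movement_data, camera_data):
        # Initial information acquisition module: collect raw data.
        self.center.store(("movement", movement_data))
        self.center.store(("camera", camera_data))
        # Acquisition resource planning module: choose camera positions
        # from the target's movement range (placeholder logic).
        return self.plan_cameras(movement_data)

    def plan_cameras(self, movement_data):
        # Placeholder: one camera per distinct movement region observed.
        return sorted(set(movement_data))


pipeline = AcquisitionPipeline()
print(pipeline.run(["zone-A", "zone-B", "zone-A"], {"fov": 120}))
```

The remaining modules (mapping-model generation, synchronized trigger control, image communication) would hang off the same pipeline object in the same fashion.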
An optical camera acquisition control method is characterized in that: the method comprises the following steps:
S01: collecting distribution data of the target whose information is to be acquired as it moves in the space, and allocating and setting the optical cameras;
S02: calling historical data, analyzing the position change of the information to be acquired when the target moves and the rotation data of the set optical cameras, and generating a mapping model relating target movement data to camera rotation data;
S03: monitoring the target in real time, and synchronously triggering and controlling the cameras to rotate when the monitored target moves.
Further, in step S01: with n cameras, the maximum shooting angle that can be collected is denoted θmax. The ground of the shooting space is set as the reference plane, and the shooting space is divided equally by m vertical planes perpendicular to the reference plane; adjacent vertical planes intersect, and the included angle between every two adjacent vertical planes is denoted α. Each vertical plane has area s. An optical camera is simulated on the perpendicular line through the center of each vertical plane, and the shooting angles of the m cameras are uniformly adjusted to the same value. The imaging area of a camera is S. When the acquired target moves, the set of counts with which the information to be acquired appears on each imaging plane is k = {k1, k2, …, km}, and the effective coefficient Wi of the image acquired by the optical camera on the perpendicular of a given vertical plane is calculated from ki, S and s,
wherein ki represents the number of times the information to be acquired appears on a given imaging plane. The set of effective coefficients of the images acquired by optical cameras on the perpendiculars of all vertical planes is W = {W1, W2, …, Wm}. A threshold w is set for the effective coefficient; each Wi is compared with w, the coefficients exceeding the threshold are screened out, and optical cameras are arranged, at the same height, on the perpendiculars of the vertical planes corresponding to those coefficients. To confirm the final number and positions of the cameras, the space is divided equally by the vertical planes, cameras are simulated at the relative positions of the centers of the vertical planes, the number of times the target information appears in each camera's picture as the target moves is collected, and the effective coefficient of the information acquired by each camera is calculated from the proportions of those occurrence counts, so that positions where the target information does not appear are identified quickly. No camera is arranged where the target information cannot appear, which improves the effectiveness of shooting the target information while saving camera resources.
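The screening step above can be sketched in code. The exact formula for Wi is not reproduced in the text, so the sketch assumes the stated proportion-of-occurrences reading scaled by the imaging-to-plane area ratio S/s; `effective_coefficients` and `select_planes` are illustrative names, not the patent's.

```python
def effective_coefficients(k, S, s):
    """Approximate effective coefficient per vertical plane.

    k: occurrence counts of the target information on each imaging plane
    S: imaging area of a camera; s: area of a vertical plane
    Assumed form: W_i = (k_i / sum(k)) * (S / s) -- the patent's exact
    formula is given only as an image and is not reproduced here.
    """
    total = sum(k)
    return [ki / total * (S / s) for ki in k]


def select_planes(W, w):
    """Indices of vertical planes whose coefficient exceeds the threshold w;
    cameras are placed only on the perpendiculars of these planes."""
    return [i for i, wi in enumerate(W) if wi > w]


W = effective_coefficients([20, 5, 15], S=7, s=8)
print(select_planes(W, w=0.24))  # planes that keep their camera
```

The thresholding matches the patent's screening step; only the Wi formula itself is an assumption.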
Further, in step S02: after the optical cameras are set, three-dimensional modeling is performed on the shooting space, the target and the optical cameras are positioned, and the set of camera position coordinates is obtained as (x, y, z) = {(x1, y1, z1), (x2, y2, z1), …, (xp, yp, z1)}, where p represents the number of cameras set and all cameras share the same height z1. The corresponding cameras are numbered, the number set being {1, 2, …, p}. The cameras are simulated shooting the target and historical data are collected: for one random movement of the target, the movement vector of the position of the information to be acquired has coordinates (X, Y, Z); the distance between the positions of the information to be acquired before and after that movement is di; and the included angle between the movement vector and the x axis is denoted βi. The set of rotation angles of the cameras is recorded for each movement, and the set of average differences between the rotation angle of each camera and that of camera number 1 over the movements is obtained. Data for M target movements are acquired, and the rotation angles of camera number 1 for those movements are collected as a set. A mapping model between the rotation angle of camera number 1 and the target data is established, with d = {d1, d2, …, di, …, dM}, where A and B represent model coefficients calculated from the M collected samples,
wherein the rotation angle of camera number 1 for one random movement of the target is the model output. The mapping model with confirmed coefficients A and B is generated: historical data are collected, and the position-change data of the information to be acquired and the camera rotation-angle change data when the target moves are analyzed through big data to form a data correspondence, from which the mapping model is generated.
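One plausible reading of the two-coefficient model is a linear fit θ1 = A·d + B recovered from the M recorded movements by ordinary least squares; the coefficient formulas themselves are not reproduced in the text, so this is a sketch under that assumption, and the angle values in the example call are hypothetical.

```python
def fit_mapping(d, theta):
    """Least-squares fit of theta_1 = A*d + B from M recorded movements.

    d: distances moved by the position of the information to be acquired
    theta: rotation angles of camera number 1 for the same movements
    """
    M = len(d)
    d_mean = sum(d) / M
    t_mean = sum(theta) / M
    num = sum((di - d_mean) * (ti - t_mean) for di, ti in zip(d, theta))
    den = sum((di - d_mean) ** 2 for di in d)
    A = num / den
    B = t_mean - A * d_mean
    return A, B


def predict(A, B, D):
    """Rotation angle of camera number 1 for a newly observed distance D."""
    return A * D + B


# Hypothetical angles (degrees) for the distances d = {5, 3, 4}.
A, B = fit_mapping([5, 3, 4], [50.0, 30.0, 40.0])
print(round(predict(A, B, 6), 1))
```

At monitoring time, only `predict` is evaluated, which is what makes the camera rotation fast once A and B are confirmed.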
Further, in step S03: the target is monitored in real time. When the monitored target moves, the included angle between the movement vector of the position of the information to be acquired and the x axis is β, and the distance between the positions of the information to be acquired is D. β is matched against the recorded angles; on a match, D is substituted into the mapping model to obtain the rotation angle of camera number 1 for the current movement, and the set of rotation angles of the remaining cameras is obtained by adding the stored average rotation-angle differences to that angle.
The same trigger source is set for the p cameras and the rotation angle parameters are input, so that the remaining cameras rotate synchronously while camera number 1 is controlled to rotate. After the rotation is completed, the target image is acquired and transmitted wirelessly to the receiving terminal. In multi-camera shooting, the relative differences of the camera rotation angles are obtained by big-data analysis; when the target moves, the cameras are connected to the same trigger source and the rotation angle parameters are input synchronously, realizing the function of rotating the remaining cameras synchronously while one camera is controlled to rotate.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the shooting space is divided equally through the vertical planes, the relative position of the center of each vertical plane is simulated to be provided with a camera, and historical data are acquired: the method comprises the steps of collecting the times of target information appearing in a camera picture when a target moves, removing the invalid setting positions of the cameras by combining the proportion of the imaging picture of the camera to a vertical plane and the proportion of the times appearing in the shooting pictures of different cameras, reasonably distributing the number and the positions of the cameras, improving the effectiveness of shooting the target information, and fully utilizing the camera resources, thereby saving the equipment cost; by collecting historical data and analyzing the change data of the information position and the change data of the rotation angle of the camera to be collected when the target moves through big data, a data corresponding relation is formed, and a mapping model is generated: when the movement of the target is monitored actually, the camera rotation angle can be confirmed according to the target movement data, the camera is rotated rapidly, and the speed of shooting the information to be collected is improved; under the condition that a plurality of cameras shoot, when a target moves, the cameras are connected to the same trigger source, and the rotation angle parameters are synchronously input, so that the function of synchronously rotating the other cameras while controlling one camera to rotate is realized, and the image acquisition efficiency is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a block diagram of an optical camera acquisition control system of the present invention;
fig. 2 is a step diagram of an optical camera acquisition control method according to the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Referring to fig. 1-2, the present invention provides a technical solution: an optical camera acquisition control system comprising an initial information acquisition module S1, a data processing center S2, an acquisition resource planning module S3, an image acquisition planning module S4, an image acquisition control module S5 and an image communication module S6;
acquiring, through the initial information acquisition module S1, movement data of the target whose information is to be acquired in the space, together with shooting data of the optical cameras;
storing and managing all the collected data through a data processing center S2;
target movement data are called from the data processing center S2 and analyzed through the acquisition resource planning module S3, and an appropriate number of optical cameras are allocated to acquire images at appropriate positions according to the analysis result;
positioning the target and the optical camera through an image acquisition planning module S4, simulating the optical camera to shoot the target when the position of the target is changed, confirming the relative rotation angle of the camera when the information required to be acquired is shot, and generating a mapping model according to the confirmation result;
monitoring the target through an image acquisition control module S5, setting the same trigger source for the camera when the target moves, synchronously rotating and adjusting the camera according to the mapping model, and acquiring image data after adjustment;
the acquired image data is wirelessly transmitted to the receiving terminal through the image communication module S6.
The initial information acquisition module S1 comprises a target data acquisition unit and a shooting information acquisition unit, wherein the target data acquisition unit is used for acquiring the distribution data of a target in the space; the shooting information acquisition unit is used for acquiring the shooting range and shooting angle data of the optical camera and transmitting all the acquired data to the data processing center S2;
the acquisition resource planning module S3 comprises a target data analysis unit and a resource allocation unit, wherein the target data analysis unit is used for analyzing the distribution data of the target in the space and confirming the target moving range; and the resource allocation unit is used for allocating a proper number of cameras to acquire the target images according to the confirmed target movement range.
The image acquisition planning module S4 comprises a shooting space modeling unit, a GPS positioning unit, an adjustment parameter analysis unit and a mapping model generation unit, wherein the shooting space modeling unit is used for carrying out three-dimensional modeling on a shooting space; the GPS positioning unit is used for positioning the target and the distributed cameras and transmitting the positioning information to the adjustment parameter analysis unit; the adjustment parameter analysis unit is used for analyzing the relative rotation angle change data of the camera when the position of the acquired target information changes; the mapping model generating unit is used for analyzing the mapping relation between the relative rotation angle change between the cameras and the position change data of the target acquired information, and generating a camera relative rotation angle adjusting model according to the analysis result.
The image acquisition control module S5 comprises a target monitoring unit, a synchronous rotation adjusting unit, a synchronous trigger control unit and a target image acquisition unit, wherein the target monitoring unit is used for monitoring a target in real time, and when the movement of the target is monitored, the synchronous rotation adjusting unit is used for synchronously adjusting the angle of the camera according to a mapping model; the synchronous trigger control unit is used for setting the same trigger source for the cameras, controlling the synchronous trigger of the rest cameras when the rotation angle of a random camera is adjusted, and synchronously adjusting the rotation angles of the rest cameras; the target image capturing unit is used for capturing a target image after the adjustment is completed, and transmitting the captured image to the image communication module S6.
S01: collecting distribution data of a target needing to be collected information when the target moves in a space, and distributing and setting an optical camera;
S02: calling historical data, analyzing the position change of the information to be acquired when the target moves and the rotation data of the set optical cameras, and generating a mapping model relating target movement data to camera rotation data;
S03: monitoring the target in real time, and synchronously triggering and controlling the cameras to rotate when the monitored target moves.
In step S01: with n cameras, the maximum shooting angle that can be collected is denoted θmax. The ground of the shooting space is set as the reference plane, and the shooting space is divided equally by m vertical planes perpendicular to the reference plane; adjacent vertical planes intersect, and the included angle between every two adjacent vertical planes is denoted α. Each vertical plane has area s. An optical camera is simulated on the perpendicular line through the center of each vertical plane, and the shooting angles of the m cameras are uniformly adjusted to the same value. The imaging area of a camera is S. When the acquired target moves, the set of counts with which the information to be acquired appears on each imaging plane is k = {k1, k2, …, km}, and the effective coefficient Wi of the image acquired by the optical camera on the perpendicular of a given vertical plane is calculated from ki, S and s, wherein ki represents the number of times the information to be acquired appears on a given imaging plane. The set of effective coefficients of the images acquired by optical cameras on the perpendiculars of all vertical planes is W = {W1, W2, …, Wm}. A threshold w is set for the effective coefficient; each Wi is compared with w, the coefficients exceeding the threshold are screened out, and optical cameras are arranged, at the same height, on the perpendiculars of the corresponding vertical planes. This improves the effectiveness of shooting the target information while making full use of camera resources; and because the effective coefficients are calculated taking into account the cases in which the target information appears outside the imaging picture, the fault tolerance of the calculation result is improved.
In step S02: after the optical cameras are set, three-dimensional modeling is performed on the shooting space, the target and the optical cameras are positioned, and the set of camera position coordinates is obtained as (x, y, z) = {(x1, y1, z1), (x2, y2, z1), …, (xp, yp, z1)}, where p represents the number of cameras set and all cameras share the same height z1. The corresponding cameras are numbered, the number set being {1, 2, …, p}. The cameras are simulated shooting the target and historical data are collected: for one random movement of the target, the movement vector of the position of the information to be acquired has coordinates (X, Y, Z); the distance between the positions of the information to be acquired before and after that movement is di; and the included angle between the movement vector and the x axis is denoted βi. The set of rotation angles of the cameras is recorded, and the set of average differences between the rotation angle of each camera and that of camera number 1 over the movements is obtained. Data for M target movements are acquired, and the rotation angles of camera number 1 for those movements are collected as a set. A mapping model between the rotation angle of camera number 1 and the target data is established, with d = {d1, d2, …, di, …, dM}, where A and B represent model coefficients calculated from the M collected samples. The mapping model with confirmed coefficients A and B is generated; the camera rotation angle is then confirmed from the target movement data when the target moves, the camera is rotated quickly, and the speed of shooting the information to be acquired is improved.
In step S03: the target is monitored in real time. When the monitored target moves, the included angle between the movement vector of the position of the information to be acquired and the x axis is β, and the distance between the positions of the information to be acquired is D. β is matched against the recorded angles; on a match, D is substituted into the mapping model to obtain the rotation angle of camera number 1 for the current movement, and the set of rotation angles of the remaining cameras is obtained by adding the stored average rotation-angle differences to that angle. The same trigger source is set for the p cameras and the rotation angle parameters are input, so that the remaining cameras rotate synchronously while camera number 1 is controlled to rotate; after the rotation is completed, the target image is acquired and transmitted wirelessly to the receiving terminal. Connecting the cameras to the same trigger source and inputting the rotation angle parameters synchronously realizes the function of rotating the remaining cameras synchronously while one camera is controlled to rotate, and reduces the working difficulty of controlling the rotation of multiple cameras.
The first embodiment is as follows: the ground of the shooting space is set as the reference plane, and the shooting space is divided equally by m = 3 vertical planes perpendicular to the reference plane; adjacent vertical planes intersect, and the included angle between every two adjacent vertical planes is denoted α. Each vertical plane has area s = 8 square meters. An optical camera is placed on the perpendicular line through the center of each vertical plane, and the shooting angles of the 3 cameras are uniformly adjusted to the same value. The imaging area of a camera is S = 7. When the acquired target moves, the set of counts with which the information to be acquired appears on each imaging plane is k = {k1, k2, k3} = {20, 5, 15}. The set of effective coefficients of the images acquired by the optical cameras on the perpendiculars of all vertical planes is obtained as W = {W1, W2, W3} = {0.42, 0.01, 0.28}. The effective-coefficient threshold is set to w = 0.24; comparing each Wi with w, the coefficients exceeding the threshold, 0.42 and 0.28, are screened out, and optical cameras are arranged, at the same height, on the perpendiculars of the vertical planes corresponding to 0.42 and 0.28;
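Under the numbers of this embodiment, the screening step reduces to a threshold comparison over W = {0.42, 0.01, 0.28} with w = 0.24; a minimal sketch:

```python
# Threshold screening from the first embodiment: keep only the vertical
# planes whose effective coefficient exceeds the threshold w = 0.24.
W = [0.42, 0.01, 0.28]
w = 0.24
kept = [wi for wi in W if wi > w]
print(kept)  # cameras are placed on the planes of these coefficients
```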
example two: after the optical camera is set, three-dimensional modeling is carried out on a shooting space, the target and the optical camera are positioned, the set of camera position coordinates is obtained to be (x, y, z) = { (x 1, y1, z 1), (x 2, y2, z 1) }, p =2, the corresponding camera is numbered, the number set is {1, 2}, the target is shot by the simulation camera, in the historical data, when the target moves at random once, the moving vector coordinates of the position of the information needing to be collected are confirmed, the distance between the positions of the information needing to be collected before and after the target moves at random once is collected to be di =3, and the included angle between the moving vector and the x axis is collected to be d =3The average value of the difference between the camera rotation angle numbered 2 and the camera rotation angle numbered 1 is obtained asAcquiring data of M =3 times of target movement, and collecting the rotation angles of the cameras numbered 1 when the target movement is acquired asEstablishing a mapping model of the camera rotation angle with the number of 1 and the target data:wherein d = { d1, d2, d3} = {5, 3, 4}, according to the formulaAndcalculating A and B:,and generating a mapping model of the confirmation coefficients A and B:monitoring the target in real time, and when the monitored target moves, determining that the included angle between the movement vector of the position of the information to be acquired and the x axis isThe distance between the positions of the information to be collected is D =6, and the matching is carried outAnd:substituting D into the mapping model to obtain the rotation angle of the camera with the number of 1 when the current target moves as follows:and the camera with the number of 2 rotates by an angle ofSetting the same trigger source for 2 cameras, and inputting rotation angle parameters:andand synchronously rotating the camera with the number 2 when 
controlling the camera with the number 1 to rotate, acquiring a target image after the rotation is finished, and wirelessly transmitting the target image to a receiving terminal.
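The coefficient-fitting step of this embodiment can be sketched as below, under the assumption that the mapping model is the linear fit θ = A·d + B estimated by ordinary least squares over the M recorded movements. The original expressions for A and B are not reproduced in the text, so this exact form, and the rotation-angle values used, are assumptions for illustration.

```python
def fit_mapping(d, theta):
    """Least-squares fit of camera-1 rotation angle against movement distance."""
    M = len(d)
    d_mean = sum(d) / M
    t_mean = sum(theta) / M
    A = sum((di - d_mean) * (ti - t_mean) for di, ti in zip(d, theta)) \
        / sum((di - d_mean) ** 2 for di in d)
    B = t_mean - A * d_mean
    return A, B

# Historical data: distances d come from the embodiment; the camera-1
# rotation angles are illustrative (the originals are image-only).
d = [5, 3, 4]
theta = [25.0, 15.0, 20.0]

A, B = fit_mapping(d, theta)
D = 6                  # distance for the newly monitored movement
theta_1 = A * D + B    # predicted rotation angle for camera number 1
print(A, B, theta_1)   # → 5.0 0.0 30.0
```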
Finally, it should be noted that although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
Claims (8)
1. An optical camera acquisition control system, characterized by: the system comprises: an initial information acquisition module (S1), a data processing center (S2), an acquisition resource planning module (S3), an image acquisition planning module (S4), an image acquisition control module (S5) and an image communication module (S6);
collecting moving data of a target needing to be collected in space and shooting data of an optical camera through the initial information collecting module (S1);
storing and managing all the collected data by the data processing center (S2);
retrieving the target movement data from the data processing center (S2) through the acquisition resource planning module (S3), analyzing the data, and allocating a proper number of optical cameras at proper positions to acquire images according to the analysis result;
positioning the target and the optical cameras through the image acquisition planning module (S4), simulating the optical cameras shooting the target as the target's position changes, confirming the relative rotation angles between the cameras when the information to be collected is shot, and generating, from these confirmed angles, a mapping model between the relative rotation angles between the cameras and the position-change data of the collected target information;
monitoring the target through the image acquisition control module (S5), setting the same trigger source for the camera when the target moves, synchronously rotating and adjusting the camera according to the mapping model, and acquiring image data after adjustment;
wirelessly transmitting the acquired image data to a receiving terminal through the image communication module (S6).
2. An optical camera acquisition control system according to claim 1, characterized in that: the initial information acquisition module (S1) comprises a target data acquisition unit and a shooting information acquisition unit, wherein the target data acquisition unit is used for acquiring distribution data of a target in a space; the shooting information acquisition unit is used for acquiring the shooting range and shooting angle data of the optical camera and transmitting all the acquired data to the data processing center (S2);
the acquisition resource planning module (S3) comprises a target data analysis unit and a resource allocation unit, wherein the target data analysis unit is used for analyzing the distribution data of the target in the space and confirming the target moving range; and the resource allocation unit is used for allocating a proper number of cameras to acquire the target images according to the confirmed target movement range.
3. An optical camera acquisition control system according to claim 1, characterized in that: the image acquisition planning module (S4) comprises a shooting space modeling unit, a GPS positioning unit, an adjustment parameter analysis unit and a mapping model generation unit, wherein the shooting space modeling unit is used for carrying out three-dimensional modeling on a shooting space; the GPS positioning unit is used for positioning a target and the distributed cameras and transmitting positioning information to the adjustment parameter analysis unit; the adjustment parameter analysis unit is used for analyzing the relative rotation angle change data of the camera when the position of the acquired target information changes; the mapping model generating unit is used for analyzing the mapping relation between the relative rotation angle change between the cameras and the position change data of the target acquired information, and generating a camera relative rotation angle adjusting model according to the analysis result.
4. An optical camera acquisition control system according to claim 1, characterized in that: the image acquisition control module (S5) comprises a target monitoring unit, a synchronous rotation adjusting unit, a synchronous trigger control unit and a target image acquisition unit, wherein the target monitoring unit is used for monitoring a target in real time, and when the movement of the target is monitored, the synchronous rotation adjusting unit is used for synchronously adjusting the angle of the camera according to a mapping model; the synchronous trigger control unit is used for setting the same trigger source for the cameras, controlling the synchronous triggering of the rest cameras when the rotation angle of one random camera is adjusted, and synchronously adjusting the rotation angles of the rest cameras; the target image acquisition unit is used for shooting a target image after the adjustment is finished and transmitting the shot image to the image communication module (S6).
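A minimal sketch of the same-trigger-source behavior described in this claim: one trigger event fans out to every attached camera, each receiving camera 1's angle plus its own mean rotation-angle offset, so all cameras adjust synchronously. All class and method names, and the numeric angles, are illustrative, not from the patent.

```python
class Camera:
    def __init__(self, number):
        self.number = number
        self.angle = 0.0

    def rotate_to(self, angle):
        # In a real system this would drive the pan/tilt hardware.
        self.angle = angle

class TriggerSource:
    """Shared trigger: firing once adjusts every attached camera together."""
    def __init__(self):
        self._cameras = []

    def attach(self, camera):
        self._cameras.append(camera)

    def fire(self, base_angle, offsets):
        # offsets[i] is the mean angle difference between camera i+1 and
        # camera number 1; camera number 1 itself has offset 0.
        for camera, offset in zip(self._cameras, offsets):
            camera.rotate_to(base_angle + offset)

trigger = TriggerSource()
cams = [Camera(i) for i in (1, 2)]
for c in cams:
    trigger.attach(c)
trigger.fire(base_angle=30.0, offsets=[0.0, 12.5])  # angles are illustrative
```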
5. An optical camera acquisition control method is characterized in that: the method comprises the following steps:
S01: collecting distribution data of the positions where target information must be collected as the target moves in the space, retrieving and analyzing the target movement data, and allocating a proper number of optical cameras at proper positions to acquire images according to the analyzed target movement data;
S02: retrieving historical data, analyzing the position changes of the information to be collected as the target moves together with the relative rotation-angle changes among the installed optical cameras, and generating a mapping model between the target movement data and the relative rotation-angle changes among the cameras;
S03: monitoring the target in real time; when target movement is detected, setting the same trigger source for the cameras and synchronously rotating and adjusting the cameras according to the mapping model.
6. The optical camera acquisition control method according to claim 5, characterized in that: in step S01: the maximum shooting angle of a camera is 360°/n, where n represents the number of cameras. The ground of the shooting space is set as the reference plane, and the shooting space is divided equally by m vertical planes perpendicular to the reference plane; two adjacent vertical planes intersect, and the included angle between every two adjacent vertical planes is 360°/m. Each vertical plane has an area of s. Optical cameras are simulated on the perpendicular line through the center of each vertical plane, and the shooting angles of the m cameras are uniformly adjusted to 360°/m. The imaging area of each camera is acquired as S, and the numbers of times information must be collected when the target appears on each imaging plane during target movement are collected as k = {k1, k2, …, km}. The effective coefficient Wi of the image collected by the optical camera placed on the perpendicular of vertical plane i is calculated according to the following formula:
where ki represents the number of times information must be collected when the target appears on one random imaging plane. The set of effective coefficients of images collected by optical cameras placed on the perpendiculars of all the vertical planes is W = {W1, W2, …, Wm}. The effective-coefficient threshold w is set, each Wi is compared with w, the effective coefficients exceeding the threshold are screened out, and optical cameras are installed, at the same height, on the perpendiculars of the vertical planes corresponding to the effective coefficients exceeding the threshold.
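Since the effective-coefficient formula survives only as an image in the source, the expression below, Wi = ki·S/(s·Σk), is an assumption chosen to use exactly the quantities claim 6 defines (ki, S, s); it is not guaranteed to be the patented formula, and indeed it does not exactly reproduce embodiment one's values {0.42, 0.01, 0.28}.

```python
def effective_coefficients(k, S, s):
    """Assumed form W_i = k_i * S / (s * sum(k)) -- an illustration only."""
    total = sum(k)
    return [ki * S / (s * total) for ki in k]

# Quantities named in claim 6: per-plane collection counts k, camera
# imaging area S, vertical-plane area s (numbers from embodiment one).
W = effective_coefficients(k=[20, 5, 15], S=7, s=8)
print(W)  # → [0.4375, 0.109375, 0.328125]
```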
7. The optical camera acquisition control method according to claim 5, characterized in that: in step S02: after the optical cameras are installed, the shooting space is modeled in three dimensions and the target and the optical cameras are positioned. The set of camera position coordinates obtained is (x, y, z) = {(x1, y1, z1), (x2, y2, z1), …, (xp, yp, z1)}, where p represents the number of installed cameras, all mounted at the same height z1, and the corresponding cameras are numbered {1, 2, …, p}. The cameras are simulated shooting the target and historical data are collected. For one random movement of the target, the movement-vector coordinates of the position where information must be collected are (X, Y, Z); the distance between the information-collection positions before and after the movement is di, and the included angle between the movement vector and the x axis is recorded. The rotation angles of the cameras are collected as a set, each rotation angle being a clockwise rotation angle, and for each target movement the set of mean differences between each camera's rotation angle and that of camera number 1 is obtained. Data for M target movements are acquired, together with the rotation angles of camera number 1 during each movement, and a mapping model between camera number 1's rotation angle and the target data is established, where d = {d1, d2, …, di, …, dM} and A and B represent model coefficients, A and B being calculated according to the following formulas:
where the remaining symbols denote the rotation angles of camera number 1 for each random target movement. The mapping model with confirmed coefficients A and B is then generated, the target data referring to the distance data between the positions of the information to be collected before and after the target moves.
8. The optical camera acquisition control method according to claim 7, characterized in that: in step S03: the target is monitored in real time. When target movement is detected, the included angle between the movement vector of the information-collection position and the x axis is determined, and the distance between the information-collection positions is D. The determined angle is matched against the recorded angle; if they match, D is substituted into the mapping model to obtain the rotation angle of camera number 1 for the current movement, and the set of rotation angles of the remaining cameras is derived from their mean angle differences. The same trigger source is set for the p cameras and the rotation-angle parameters are input, so that the remaining cameras rotate synchronously while camera number 1 is controlled to rotate. After the rotation is finished, the target images are acquired and wirelessly transmitted to the receiving terminal.
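The runtime step of this claim can be sketched as below: match the observed movement direction against the recorded one, map distance D to camera number 1's rotation angle, and derive the remaining cameras' angles from their mean offsets. The linear form of the mapping model, the tolerance, and all numeric values are assumptions for illustration.

```python
def plan_rotations(alpha, alpha_ref, D, A, B, offsets, tol=1e-6):
    """Return per-camera rotation angles, or None if the direction does not match."""
    if abs(alpha - alpha_ref) > tol:
        return None                    # movement direction differs from the history
    theta_1 = A * D + B                # mapping model for camera number 1
    return [theta_1 + off for off in offsets]

# Illustrative values: matching angle, D = 6, assumed model coefficients,
# and mean offsets for cameras 1 and 2 (camera 1 has offset 0).
angles = plan_rotations(alpha=45.0, alpha_ref=45.0, D=6,
                        A=5.0, B=0.0, offsets=[0.0, 12.5])
print(angles)  # → [30.0, 42.5]
```

The returned angles would then be written to the shared trigger source so all cameras rotate together.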
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210174168.9A CN114245024B (en) | 2022-02-25 | 2022-02-25 | Optical camera acquisition control system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114245024A CN114245024A (en) | 2022-03-25 |
CN114245024B true CN114245024B (en) | 2022-05-27 |
Family
ID=80748139
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210174168.9A Active CN114245024B (en) | 2022-02-25 | 2022-02-25 | Optical camera acquisition control system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114245024B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114900671B (en) * | 2022-07-12 | 2022-09-23 | 深圳市盈润科技有限公司 | Comprehensive management service system and method based on agricultural Internet of things |
CN116095279B (en) * | 2023-04-11 | 2023-09-22 | 广东广宇科技发展有限公司 | Intelligent security resource investment method and system |
CN117934722B (en) * | 2024-01-26 | 2024-09-06 | 武汉海德斯路科技有限公司 | Inspection point image acquisition method and device based on three-dimensional model |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101751677B (en) * | 2008-12-17 | 2013-01-02 | 中国科学院自动化研究所 | Target continuous tracking method based on multi-camera |
JP2011199556A (en) * | 2010-03-19 | 2011-10-06 | Nippon Hoso Kyokai <Nhk> | Stereoscopy imaging device, stereoscopic signal generator, and program |
JP5325251B2 (en) * | 2011-03-28 | 2013-10-23 | 株式会社日立製作所 | Camera installation support method, image recognition method |
JP2012249117A (en) * | 2011-05-30 | 2012-12-13 | Hitachi Ltd | Monitoring camera system |
CN107666601A (en) * | 2017-10-23 | 2018-02-06 | 合肥时代智慧高新投资管理有限公司 | A kind of multiple-camera automatic tracking method based on electronic location technology |
CN108259828A (en) * | 2018-01-19 | 2018-07-06 | 长沙全度影像科技有限公司 | A kind of system and its image pickup method for the shooting of bullet time |
CN113395449B (en) * | 2021-06-16 | 2022-06-24 | 国网山东省电力公司武城县供电公司 | Abnormity monitoring system with self-adaptive resource adjustment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |