CN112351210A - Active vision acquisition system - Google Patents
- Publication number: CN112351210A
- Authority
- CN
- China
- Prior art keywords
- video
- active
- control module
- data
- video acquisition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/617—Upgrading or updating of programs or applications for camera control
- H04N23/66—Remote control of cameras or camera parts, e.g. by remote control devices
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
Abstract
The invention discloses an active vision acquisition system comprising one active vision control module and n video acquisition sensors. An active vision control algorithm is embedded in the control module, which sends control instructions to the n video acquisition sensors wirelessly, so that the sensors act autonomously when collecting video data and can automatically adjust their direction angle, pitch angle, and focal length as required. The traditional passive video acquisition mode is thus converted into an anthropomorphic active vision model: once a target of interest is found, the active vision control module dispatches the full-field video sensors to acquire higher-quality video data from different angles and focal lengths for analysis and use.
Description
Technical Field
The invention relates to an acquisition system, in particular to an active vision acquisition system.
Background
Video capture converts an analog video signal into digital video and saves it in a digital video file format. In video acquisition, the video signals output by analog cameras, video recorders, LD video disc players, and television sets are converted into binary digital information by dedicated analog-to-digital conversion equipment.
The traditional passive video acquisition mode locks onto a target relatively slowly and cannot acquire high-quality video from multiple angles.
Disclosure of Invention
The invention aims to provide an active vision acquisition system that converts the traditional passive video acquisition mode into an anthropomorphic active vision model: once a target of interest is found, the active vision control module dispatches the full-field video sensors to acquire higher-quality video data from different angles and focal lengths for analysis and use, thereby solving the problems described in the background above.
To achieve this purpose, the invention provides the following technical solution:
the utility model provides an initiative vision acquisition system, includes 1 initiative vision control module and n video acquisition sensors, the initiative vision control module is embedded to have initiative vision control algorithm, sends control command to n video acquisition sensors through wireless mode for video acquisition module has the initiative when gathering video data, can automatic adjustment direction angle, every single move angle and focus as required.
Furthermore, the active vision control module can, as needed, direct the n video acquisition sensors to work independently, or coordinate several of them to acquire video data of the same target of interest from different angles and heights in order to obtain the best target feature data.
Furthermore, the active vision control module can be networked with a data processing center: it can upload data as a component of the center's global planning, and it can receive control strategies downloaded from the center to update the video sensor scheduling method for the current scene.
Furthermore, each video acquisition sensor is mounted on a 3D pan-tilt head and has an adjustable focal length; video data can be uploaded over a wired or wireless link depending on the actual conditions.
Further, the active vision control algorithm comprises structured traffic scene modeling, a moving-target trajectory tracking model within the structured scene, and a vision sensor scheduling model.
Compared with the prior art, the invention has the following beneficial effect:
The traditional passive video acquisition mode is converted into an anthropomorphic active vision model: once a target of interest is found, the active vision control module dispatches the full-field video sensors to acquire higher-quality video data from different angles and focal lengths for analysis and use.
Drawings
FIG. 1 is a block diagram of the system components of the present invention;
FIG. 2 is a schematic diagram of a roundabout and its structured scene model according to the present invention;
FIG. 3 is a diagram of the trajectory of a motor vehicle entering the roundabout according to the present invention;
FIG. 4 is a diagram illustrating simulation experiment results of shooting angles according to the present invention;
FIG. 5 is a structured scene model of three sensors of the present invention;
FIG. 6 is a graph showing the experimental results of FIG. 5 according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An active vision acquisition system comprises one active vision control module and n video acquisition sensors. An active vision control algorithm is embedded in the control module, which sends control instructions to the n video acquisition sensors wirelessly, so that the sensors act autonomously when collecting video data and can automatically adjust their direction angle, pitch angle, and focal length as required, as shown in FIG. 1.
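As a concrete illustration of this one-controller/n-sensor arrangement, the sketch below models the control module dispatching direction-angle, pitch-angle, and focal-length commands. All class and field names are hypothetical, and an in-memory list stands in for the wireless link described above; this is a sketch of the idea, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class PTZCommand:
    """One control instruction: direction (pan) angle, pitch (tilt)
    angle, and focal length for a single acquisition sensor."""
    sensor_id: int
    pan_deg: float
    tilt_deg: float
    focal_mm: float

class ActiveVisionController:
    """One active vision control module commanding n video sensors."""
    def __init__(self, n_sensors: int):
        self.n_sensors = n_sensors
        self.sent = []  # stands in for the wireless downlink

    def dispatch(self, cmd: PTZCommand) -> bool:
        """Send a command to one sensor; reject unknown sensor ids."""
        if not (0 <= cmd.sensor_id < self.n_sensors):
            return False
        self.sent.append(cmd)
        return True

controller = ActiveVisionController(n_sensors=3)
ok = controller.dispatch(PTZCommand(sensor_id=1, pan_deg=45.0,
                                    tilt_deg=-10.0, focal_mm=35.0))
```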
The active vision control module can, as needed, direct the n video acquisition sensors to work independently, or coordinate several of them to acquire video data of the same target of interest from different angles and heights in order to obtain the best target feature data.
The active vision control module can be networked with a data processing center: it can upload data as a component of the center's global planning, and it can receive control strategies downloaded from the center to update the video sensor scheduling method for the current scene.
Each video acquisition sensor is mounted on a 3D pan-tilt head and has an adjustable focal length; video data can be uploaded over a wired or wireless link depending on the actual conditions.
The active vision control algorithm comprises structured traffic scene modeling, a moving-target trajectory tracking model within the structured scene, and a vision sensor scheduling model.
1. Structured traffic scene modeling
The traffic parameters of the target scene are highly abstracted, and a structured traffic scene model is constructed in combination with actual engineering practice, providing a highly digitized scene model for the subsequent video acquisition sensors.
Taking a roundabout as an example (see FIG. 2), a structured scene model for active vision is established. In building the roundabout model, quantities from the national road traffic standards, such as motor-vehicle lane width, vehicle speed, and the traffic rules for entering and leaving the roundabout, are used as model parameters.
The core (island) area of the roundabout is assumed to have the coordinate range (x - p_r)² + (y - q_r)² ≤ r_i², and the road area has the coordinate range r_i² ≤ (x - p_r)² + (y - q_r)² ≤ r_o², where r_o is the outer-circle radius of the roundabout, r_i is the inner-circle radius, and p_r, q_r are the distances from the circle center to the x and y coordinate axes, respectively.
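The two coordinate ranges above can be checked directly. The following sketch classifies a point as core island, road annulus, or outside; the function name and the tie-breaking at the shared inner boundary are illustrative choices, not from the patent.

```python
def roundabout_region(x: float, y: float,
                      pr: float, qr: float,
                      ri: float, ro: float) -> str:
    """Classify (x, y) against the structured roundabout model:
    core island:  (x-pr)^2 + (y-qr)^2 <= ri^2
    road annulus: ri^2 <= (x-pr)^2 + (y-qr)^2 <= ro^2
    Anything farther than ro from the center is outside the scene."""
    d2 = (x - pr) ** 2 + (y - qr) ** 2
    if d2 <= ri ** 2:
        return "core"
    if d2 <= ro ** 2:
        return "road"
    return "outside"
```

For example, with center (0, 0), inner radius 5, and outer radius 10, the origin falls in the core and a point 7 units out falls on the road annulus.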
2. Active tracking model for moving target track in structured scene
The set of expected coordinates of the moving target is established to provide preset coordinate parameters for the active vision scheduling algorithm.
When a motor vehicle enters the roundabout, the active vision model determines the tracking priority according to the urgency of the event and actively tracks the trajectory of the target of interest. According to the traffic regulations, vehicles enter the roundabout as shown in FIG. 3.
The trajectory model in the structured scene for a moving target that has just driven into the roundabout is then given by formula (1):
in the above formula (alpha)e,βe) Is the center of the ring island, pe、qeThe projection of the trajectory equation on the x and y axes and the initial point coordinate of the target vehicle entering the rotary island are shown. After entering the roundabout, the motion trajectory of the target vehicle can be predicted according to the model of fig. 2 as shown in equation (2):
(x - p_x)² + (y - p_y)² = r_t²,   r_i + d_r < x < r_o - d_r,   d_r < y < 2r_o - d_r    (2)
where d_r is the vehicle width of the moving target and (p_x, p_y) is the center of the trajectory circle. The expected coordinate set CR of the entire trajectory of the moving vehicle target of interest inside the roundabout is obtained from formulas (1) and (2), providing the data basis for the next step, the active vision scheduling model.
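As a sketch of how the expected coordinate set CR could be built from formula (2), the code below samples the predicted trajectory circle and keeps only the points that satisfy the road constraints. The sampling resolution and function name are illustrative assumptions, not part of the patent.

```python
import math

def expected_coordinates(px, py, rt, ri, ro, dr, n=360):
    """Sample the predicted circle (x-px)^2 + (y-py)^2 = rt^2 and keep
    the points obeying the formula (2) constraints:
    ri + dr < x < ro - dr  and  dr < y < 2*ro - dr."""
    cr = []
    for k in range(n):
        t = 2.0 * math.pi * k / n
        x = px + rt * math.cos(t)
        y = py + rt * math.sin(t)
        if ri + dr < x < ro - dr and dr < y < 2.0 * ro - dr:
            cr.append((x, y))
    return cr
```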
The results of the video-acquisition shooting-angle simulation experiment are shown in FIG. 4.
TABLE 1  Shooting-angle variation trend
3. Video acquisition sensor scheduling model based on a belief rule base
Each constraint condition and each parameter applicability condition in the structured scene is described by a confidence (belief) rule, as in formula (3); multiple such rules form a rule base, from which the scheduling model of the video acquisition sensors is established:
The active vision scheduling model for the structured scene is optimized on the acquired training sample set, with the mean squared error used as the objective function during optimization:
and a complete scheduling model is obtained after training, and when an event (such as violation, traffic accident and the like) occurs, a plurality of video acquisition sensors are scheduled to actively acquire data with higher quality and capture an accident scene, so that the duty assignment efficiency is improved, and the rescue time is shortened.
The experimental results are as follows.
Taking three video acquisition sensors as an example, 480 moving targets (including vehicles entering, leaving, and driving around the island at the four approach intersections) were scheduled, and the results were compared and analyzed against labeled scheduling data using an accuracy index. The acquisition field and the complete structured scene are shown in FIG. 5; the video acquisition sensors are located at the southwest, southeast, and northeast corners, respectively.
Experiments were performed with the K-means method, a fuzzy system, artificial neural networks, radial basis function networks, and the proposed belief-rule-base method, using the labeled scheduling data (True Set) as the reference for comparison.
The experimental results are shown in FIG. 6. They were generated by comparing the output of each active vision scheduling algorithm against the labeled scheduling data (True Set) as ground truth. Judging from the results (each method's fitted curve compared with the labeled scheduling-data curve), the proposed method is superior to the other methods in stability and comparable to them in accuracy.
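The accuracy index is not defined in this text. A simple stand-in, assuming it is the fraction of moving targets whose scheduled sensor matches the labelled (True Set) schedule, would be:

```python
def scheduling_accuracy(predicted, labeled):
    """Fraction of targets whose scheduled sensor matches the label.
    A hypothetical stand-in for the accuracy index; the patent does
    not give its exact form."""
    if len(predicted) != len(labeled):
        raise ValueError("schedules must cover the same targets")
    hits = sum(p == t for p, t in zip(predicted, labeled))
    return hits / len(labeled)
```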
The above description covers only a preferred embodiment of the present invention, but the protection scope of the invention is not limited thereto; any modification or equivalent made by a person skilled in the art within the technical scope disclosed herein, according to the technical solution and inventive concept of the invention, shall fall within the protection scope of the invention.
Claims (5)
1. An active vision acquisition system, comprising one active vision control module and n video acquisition sensors, characterized in that an active vision control algorithm is embedded in the active vision control module, which sends control instructions to the n video acquisition sensors wirelessly, so that the sensors act autonomously when collecting video data and can automatically adjust their direction angle, pitch angle, and focal length as required.
2. The active vision acquisition system of claim 1, characterized in that the active vision control module can, as needed, direct the n video acquisition sensors to work independently, or coordinate several of them to acquire video data of the same target of interest from different angles and heights in order to obtain the best target feature data.
3. The active vision acquisition system of claim 1, characterized in that the active vision control module can be networked with a data processing center, can upload data as a component of the center's global planning, and can receive control strategies downloaded from the center to update the video sensor scheduling method for the current scene.
4. The active vision acquisition system of claim 1, characterized in that each video acquisition sensor is mounted on a 3D pan-tilt head and has an adjustable focal length, and video data can be uploaded over a wired or wireless link depending on the actual conditions.
5. The active vision acquisition system of claim 1, characterized in that the active vision control algorithm comprises structured traffic scene modeling, a moving-target trajectory tracking model in the structured scene, and a vision sensor scheduling model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011228762.9A CN112351210A (en) | 2020-11-06 | 2020-11-06 | Active vision acquisition system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112351210A true CN112351210A (en) | 2021-02-09 |
Family
ID=74428389
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011228762.9A Pending CN112351210A (en) | 2020-11-06 | 2020-11-06 | Active vision acquisition system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112351210A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103051887A (en) * | 2013-01-23 | 2013-04-17 | 河海大学常州校区 | Eagle eye-imitated intelligent visual sensing node and work method thereof |
CN109887040A (en) * | 2019-02-18 | 2019-06-14 | 北京航空航天大学 | The moving target actively perceive method and system of facing video monitoring |
- 2020-11-06 CN CN202011228762.9A patent/CN112351210A/en active Pending
Non-Patent Citations (1)
| Title |
|---|
| Zhao Zhenguo, Zhu Hailong, Liu Jingyu, et al.: "A Structured Road Modeling Method" (一种结构化道路建模方法), Intelligent Computer and Applications (《智能计算机与应用》) * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20210209 |