CN109636763B - Intelligent compound eye monitoring system - Google Patents


Info

Publication number
CN109636763B
CN109636763B (application CN201710927624.1A)
Authority
CN
China
Prior art keywords
image
close
data
resolution
compound eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710927624.1A
Other languages
Chinese (zh)
Other versions
CN109636763A (en)
Inventor
金虹辛
贾伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaoyuan Perception Wuxi Technology Co ltd
Original Assignee
Xiaoyuan Perception Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaoyuan Perception Beijing Technology Co ltd
Priority: CN201710927624.1A
Publication of CN109636763A
Application granted
Publication of CN109636763B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention discloses an intelligent compound eye monitoring system capable of outputting a panoramic image and a close-up image simultaneously. As shown in figure 1, the intelligent monitoring system comprises a compound eye imaging unit 1, a compound eye image processing unit 2, a data input unit 3, a data analysis unit 4 and a data output unit 5. The compound eye image processing unit 2 reconstructs the compound eye image to generate a panoramic image and a close-up image; the data input unit 3 acquires stored data, manual data, Internet of Things sensor data and upper-level AI analysis data; the data analysis unit 4 analyzes the data acquired by the data input unit 3 together with the panoramic image and the close-up image generated by the compound eye image processing unit 2 and generates corresponding control information; the data output unit 5 outputs the panoramic image, the close-up image, the control information and the intelligent analysis result.

Description

Intelligent compound eye monitoring system
Technical Field
The invention relates to the technical field of video monitoring and pattern recognition, in particular to an intelligent monitoring system capable of simultaneously outputting panoramic images and close-up images.
Background
With the progress of society and the development of science and technology, conventional video monitoring systems can no longer meet current monitoring requirements. A traditional system typically monitors a fixed area with a single camera; when the monitoring distance is long and the area is large, it can only capture general information about the scene and cannot acquire detail such as human faces and license plates. Suspicious events caught on camera therefore cannot be clearly recorded and displayed for monitoring personnel, and the monitoring system cannot fulfil its intended role.
One existing solution monitors a large area with a combination of many fixed cameras, each covering a specific local area, and stitches their videos into a monitoring video of the whole area using video-stitching technology. This solves high-definition monitoring of a distant large scene, but it requires a large number of cameras, the architecture is very difficult to commission successfully, long-term stable stitching is hard to guarantee, the user experience is poor and the cost is high.
Another solution combines a fixed camera with a camera that can rotate horizontally and vertically and zoom (a PTZ camera) for linked capture: the fixed camera monitors the large area but cannot acquire detail, while the PTZ camera is operated manually or inspects regions of interest along a fixed cruise route. Such a scheme cannot provide simultaneous multi-point detail monitoring, cruising along a fixed route is likely to miss important monitoring information, and full-time backtracking is impossible.
The above makes clear that combining a fixed camera with a PTZ camera is not sufficient, and the prior art that controls a PTZ camera from the fixed camera's monitoring information still has the following defects: 1. when multiple cameras are combined, the viewing angle of the close-up image provided by the close-up camera is fixed, so a close-up at an arbitrary angle cannot be provided to the user; 2. with a PTZ camera, a close-up can be observed only if the user adjusts the camera angle and focal length in real time, a close-up cannot be extracted when video is replayed, and the close-up image and the panoramic image cannot be output simultaneously on the monitoring display device.
Disclosure of Invention
In view of the above defects in the prior art, the compound eye monitoring camera of the invention realizes full-time (full-range, full-detail, full-period) monitoring of a distant large scene without dead angles, simultaneously provides video output of a panoramic image and a close-up image, and can also provide a full-resolution image. It can further output intelligently analyzed data and alarm information, and store them in a video-structured form convenient for retrieval. It can also integrate Internet of Things sensors, combine their sensing data into the comprehensive intelligent analysis and alarm output, or merge the sensing output into the structured data of the video for output and storage.
To solve the above technical problem, the present invention provides an intelligent compound eye monitoring system capable of outputting multiple video streams simultaneously. As shown in fig. 1, the intelligent monitoring system comprises a compound eye imaging unit 1, a compound eye image processing unit 2, a data input unit 3, a data analysis unit 4 and a data output unit 5, wherein the compound eye imaging unit 1 is formed by arranging two or more imaging subunits according to a certain rule;
the compound eye image processing unit 2 is used for reconstructing a compound eye image to generate a panoramic image and a close-up image;
the data input unit 3 is used for acquiring stored data, artificial data, sensor data of the internet of things and superior AI analysis data;
the data analysis unit 4 combines the data acquired by the data input unit 3 with the panoramic image and the close-up image generated by the compound eye image processing unit 2 for analysis, and generates corresponding control information;
the data output unit 5 is used for outputting the panoramic image, the close-up image, the control information, and the intelligent analysis result.
In one embodiment, the data input unit 3 is a data input interface with standard electrical/physical interfaces such as USB, 485/CAN bus, RJ45 network and SATA/eSATA, and software protocols such as TCP/IP and ModBus; it completes the acquisition of all data other than image data and transmits the acquired data to the data analysis unit 4. It implements the following data acquisition instructions:
Stored data acquisition: acquiring data information from a storage medium or database, such as close-up image generation/switching plans and close-up image names.
Manual data acquisition: manually inputting control instructions to the system, such as changing the center position of the close-up image, forcibly triggering an alarm signal, or manually calibrating area names and coordinate information.
Internet of Things sensor data acquisition: acquiring various types of information, such as ambient temperature and GPS information, through sensors.
Upper-level AI data acquisition: acquiring valid data information through system cascading, such as vehicle color and license plate number from a vehicle identification system, or face feature information from a portrait identification system.
In one embodiment, the data analysis unit 4 aggregates the data, analyzes it together with the compound eye imagery, combines the mass video data with the data from the data input unit 3 for comprehensive analysis, structures the result as big data and outputs it externally. It completes the binding of video to data information, converts the analysis result into instructions for controlling devices, and transmits the instructions to the data output unit 5.
In one embodiment, the data output unit 5 performs data output and linkage control. It implements the following action instructions:
Panoramic image video stream output: outputting the panoramic image and global data information within the monitoring range according to the required display resolution, display mode, etc.
Close-up image video stream output: outputting the close-up image video stream and its related information according to application requirements.
Control signal output: outputting drive control signals that drive related devices such as switches, loudspeakers, illumination and flying striking devices to complete linkage control.
Lower-level AI data output: outputting the system's/device's data analysis results to other intelligent analysis systems in a corresponding data format, for example transmitting the close-up image stream of a certain area with its attached geographic coordinate information to a crowd density analysis system for passenger flow statistics.
In one embodiment, during the imaging process of the compound eye system, the target pixel point or series of points of the panoramic image and the close-up image are preset.
In one embodiment, during the imaging process of the compound eye system, the target pixel points of the panoramic image and the close-up image are dynamically updated and designated through real-time control by the basic input unit or through comprehensive analysis of the plan, and the image-related information and control signals are output.
In one embodiment, when the compound eye system images in combination with input information, the target pixel points of the panoramic image and the close-up image are dynamically updated and designated under real-time control by the AI input unit combined with comprehensive plan analysis, and the image-related information, control signals and structured data are output and transmitted to the next-level AI.
Another aspect of the invention provides an intelligent compound eye monitoring method that realizes intelligent control of a monitored area. The intelligent compound eye monitoring method comprises the following instructions:
panoramic image generation: stitching the images acquired by the compound eye imaging subunits to obtain the panoramic image;
close-up image generation: generating a close-up image of the target area according to intelligent recognition analysis of the panoramic image or Internet of Things sensing signals;
control information generation: generating drive control signals for controlling related devices according to analysis of the panoramic image, the close-up image and the Internet of Things sensing signals;
data output: outputting the panoramic image, the close-up image, the control information and the intelligent recognition analysis result to external devices.
In one embodiment, during the imaging process of the compound eye system, the target pixel points of the panoramic image and the close-up image are dynamically updated and designated through real-time control by the basic input unit or through comprehensive analysis of the plan, and the image-related information and control signals are output.
In one embodiment, when the compound eye system images in combination with input information, the target pixel points of the panoramic image and the close-up image are dynamically updated and designated under real-time control by the AI input unit combined with comprehensive plan analysis, and the image-related information, control signals and structured data are output and transmitted to the next-level AI.
The compound eye imaging unit feeds a back-end processing subsystem of the compound eye monitoring camera that reconstructs the compound eye image/video: it acquires a complete full-resolution image/video under a unified coordinate system, a panoramic image/video reduced in pixel size, a close-up image/video of a designated close-up area, or a combined panorama-plus-close-up image/video. It realizes ultra-high-resolution compound eye imaging under a unified coordinate system by applying, to each front-end camera stream, key technical steps including but not limited to image/video acquisition from the different front-end cameras, transformation into the unified coordinate system, image stitching, image fusion, image cropping, video stream generation and video stream output.
Acquisition of close-up image control parameters refers to an interface and information-distribution module for front-end control parameters coming from the operator, the back-end platform and the intelligent analysis module, such as control information for the compound eye image/video processing module, power on/off control, temperature regulation control, Internet of Things sensor control signals, back-end-to-front-end actuator control, and the number, positions and magnification of close-up areas given by the operator or by intelligent analysis results.
Artificial intelligence analysis is a module that comprehensively and intelligently analyzes the images/videos acquired by the compound eye camera and the various information sensed by the Internet of Things, and produces in-scene analysis results such as license plate numbers, vehicle types, head counts, vehicle behavior, personnel behavior, face recognition data and alarms, object motion data and threshold violations, and temperature data and over-temperature. The Internet of Things sensor/actuator module integrates various sensors and/or actuators and provides interfaces to them and to the operator or back-end platform.
Image data structuring refers to a processing unit based on video content information extraction. Using space-time segmentation, feature extraction and object recognition, supported by big data, deep learning and artificial intelligence together with methods such as manual intervention and association with Internet of Things data, it organizes the video content by semantic relations into structured data that both computers and humans can understand. It thereby extracts unstructured video data to the greatest extent, converts the useful information into structured or semi-structured form understandable by humans and machines, and provides standard, convenient information data to other professional fields through an external interface, turning video data toward informatization and intelligence and achieving the goal of perceiving the world through video.
Image conversion and compression means that the output videos of the compound eye monitoring camera (including but not limited to full-resolution video, panoramic video reduced to resolutions such as 4K, 1080p, 720p, D1 or CIF, and close-up-area video at controlled resolutions such as 4K, 1080p, 720p, D1 or CIF) can be output with video compression; the module performing this compressed output is the video compression module.
The compound-eye imaging technique and the image processing technique used in the present invention are described in further detail below:
< Compound eye imaging Unit >
The compound eye imaging unit includes but is not limited to an integral shooting device formed by two or more camera units in a certain arrangement; it can capture a scene by dividing the field of view, or by multi-scale division of the field of view, while ensuring the scene has no missing area. A compound eye imaging algorithm then realizes an imaging system equivalent to the full-frame or local high-resolution imaging of a traditional monocular camera; the lenses used by the camera units may be of equal or unequal focal length.
For example, an M × N narrow-field tele imaging subsystem array is composed of M rows and N columns of narrow-field tele imaging subunits. The fields of view of adjacent subsystems overlap, and the main optical axes converge at a point, or within a neighborhood of that point, which is the optical center of the wide-field ultra-high-resolution imaging system; M and N are both natural numbers greater than or equal to 1 and at least one of them is greater than 1. The horizontal field angle of each narrow-field tele optical imaging subsystem is ω_h + 2Δω_h and its vertical field angle is ω_v + 2Δω_v. The horizontal field angle HFOV of the wide-field ultra-high-resolution imaging system, i.e. of the M × N array, is Nω_h + 2Δω_h, and its vertical field angle VFOV is Mω_v + 2Δω_v, where 180° > ω_h > 0°, 90° > Δω_h > 0°, 180° > ω_v > 0°, 90° > Δω_v > 0°. Here ω_h is the angle between the main optical axes of horizontally adjacent subsystems and ω_v the angle between the main optical axes of vertically adjacent subsystems; Δω_h, the horizontal overlapping field angle, is the angle subtended at the subsystems' optical centers by the horizontal field-overlap region at infinite object distance, in the plane formed by the main optical axes of horizontally adjacent subsystems; Δω_v, the vertical overlapping field angle, is defined analogously for vertically adjacent subsystems.
< Complex eye image reconstruction Algorithm >
Assume an array of M rows and N columns of narrow-field tele imaging subunits (NFLFS) in which the fields of view of adjacent subunits overlap and the main optical axis of each subunit converges at a point, or within a neighborhood of that point, which is the optical center of the compound eye imaging unit; M and N are natural numbers greater than or equal to 1, and at least one of M and N is greater than 1 and not equal to 2. Let the horizontal field angle of the subunit in row i, column j be ω_h + Δω_h + ε_hij and its vertical field angle ω_v + Δω_v + ε_vij; let the horizontal field angle of the compound eye imaging unit be HFOV = Nω_h + Δω_h + ε_h and its vertical field angle VFOV = Mω_v + Δω_v + ε_v, where 180° > ω_h > 0°, 90° > Δω_h > 0°, 180° > ω_v > 0°, 90° > Δω_v > 0°. Here ω_h is the design angle between the main optical axes of horizontally adjacent subunits, and ω_v the design angle between those of vertically adjacent subunits; Δω_h is the field overlap angle of horizontally adjacent subunits, approximately the angle subtended at the system optical center by the edge of the horizontal field-overlap region at infinite object distance, and Δω_v is the corresponding vertical field overlap angle; ε_hij and ε_vij are the horizontal and vertical field angle errors of the subunit in row i, column j, and ε_h, ε_v are the horizontal and vertical field angle errors of the compound eye imaging unit.
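For orientation, a minimal sketch of this field-of-view bookkeeping (the per-subunit angle ω + 2Δω follows the subsystem description above, the array totals follow this section's HFOV/VFOV formulas with the error terms ε ignored; all numeric values are illustrative assumptions, not parameters taken from the patent):

    def subunit_fov(omega, d_omega):
        """Field angle of one narrow-field tele subunit along one axis (degrees)."""
        return omega + 2 * d_omega

    def array_fov(count, omega, d_omega):
        """HFOV/VFOV along one axis for `count` subunits: count*omega + d_omega."""
        return count * omega + d_omega

    M, N = 9, 9                      # rows and columns of subunits
    omega_h, d_omega_h = 10.0, 1.0   # assumed axis spacing and overlap angle
    print(subunit_fov(omega_h, d_omega_h))   # 12.0 degrees per subunit
    print(array_fov(N, omega_h, d_omega_h))  # 91.0 degrees HFOV for the row of 9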
The array images with adjacent-overlap characteristics captured by the narrow-field tele imaging subunit array are processed to obtain a wide-field ultra-high-resolution image. The invention can adopt two image processing methods: a whole-process computation mode and a data-mapping mode.
The whole-process computation mode comprises image projection, determination of feature points in image overlap areas, image registration and stitching, image fusion, and image cropping.
First, from the relative positions of the feature point pairs in adjacent image overlap regions obtained from the current shot, the projection matrix of each array image's sensor coordinate system relative to the system coordinate system is calculated, yielding the projection images.
Then one of two adjacent projection images is designated the registration template and the other the registered image. A starting search point for valid local extreme feature points is selected near the boundary of the registration template's overlap area, and two judgment auxiliary points are determined, at a fixed step in each direction, on an arbitrary set of cross lines through the search point (including but not limited to a cross of two lines). For each line of the cross, the absolute value of the sum of the pixel value differences between the auxiliary points and the search point is calculated; the absolute values for the two lines are summed, and a search point whose sum exceeds a preset condition threshold is taken as a current regional extreme feature point and placed in the extreme feature point list. The search point is then moved to traverse the whole overlap area, collecting every search point whose sum exceeds the threshold, which yields the regional extreme feature point list of the overlap area.
The obtained regional extreme feature points are sorted by their sum values from largest to smallest, and the first K points are selected as candidate regional extreme feature points; the regional extreme feature point list of the overlap area is updated accordingly.
The calculation formula of the regional extreme characteristic point is as follows:
diffx = | 2·f(i, j) − f(i − step1, j − step2) − f(i + step3, j + step4) |
diffy = | 2·f(i, j) − f(i + step5, j − step6) − f(i − step7, j + step8) |
P(i, j) is taken as a regional extreme feature point when diffx + diffy > T
wherein:
f(i, j) is the pixel value at the search point P(i, j), with i, j positive real numbers;
diffx is the absolute sum of the pixel value differences between the auxiliary judgment points P(i − step1, j − step2), P(i + step3, j + step4) and the search point P(i, j) on the first line of the cross;
diffy is the absolute sum of the pixel value differences between the auxiliary judgment points P(i + step5, j − step6), P(i − step7, j + step8) and the search point P(i, j) on the second line of the cross;
T is the preset condition threshold;
step1, step3, step5 and step7 are sampling step intervals of the auxiliary judgment points and search points along the image abscissa, and step2, step4, step6 and step8 are the corresponding step intervals along the image ordinate.
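A compact sketch of this detector (a hypothetical implementation: the axis convention treating i as the row index, and the handling of ties when sorting, are our assumptions; the caller must keep the scanned region far enough from the image border for the step offsets):

    import numpy as np

    def regional_extreme_points(img, region, steps, T, K):
        """Collect regional extreme feature points in an overlap region:
        sum the absolute cross-line pixel differences at each search point,
        keep points whose sum exceeds T, then keep the K largest.
        region = (i0, i1, j0, j1); steps = (step1, ..., step8)."""
        s1, s2, s3, s4, s5, s6, s7, s8 = steps
        f = img.astype(np.int64)
        found = []
        for i in range(region[0], region[1]):
            for j in range(region[2], region[3]):
                # first line of the cross: P(i-step1, j-step2), P(i+step3, j+step4)
                diffx = abs(2 * f[i, j] - f[i - s1, j - s2] - f[i + s3, j + s4])
                # second line of the cross: P(i+step5, j-step6), P(i-step7, j+step8)
                diffy = abs(2 * f[i, j] - f[i + s5, j - s6] - f[i - s7, j + s8])
                if diffx + diffy > T:
                    found.append((diffx + diffy, i, j))
        found.sort(reverse=True)                  # largest summed difference first
        return [(i, j) for _, i, j in found[:K]]  # the K candidate extreme points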
Next, the largest possible registration area is selected according to the structural and object-distance parameters of the product design. Taking the cross lines of the selected regional extreme feature points of the registration template as the registration cross lines for MSAD registration, the registration area of the registered image is traversed, the SAD of each search point is calculated, and the minimum-SAD point is taken as the candidate registration point pair for the current regional extreme feature point. The first K regional extreme feature points are traversed in this way to find K candidate registration point pairs. For each candidate pair, a rationality screening integral is obtained with a distance-difference integral algorithm; the integrals are sorted from smallest to largest and the candidate registration point pairs corresponding to the first N_check integrals are selected. These N_check candidate registration point pairs serve as the registration coordinate relation between the adjacent projection images, from which the coordinates of all pixels of each projection image in the system coordinate system are obtained.
Wherein the candidate registration point calculation formula of the current feature point of the overlapping region is as follows:
SAD_Q(ii, jj) = Σ_{k = −n..n} ( |P(i + k·step9, j + k·step10) − Q(ii + k·step9, jj + k·step10)| + |P(i + k·step11, j − k·step12) − Q(ii + k·step11, jj − k·step12)| )
MSAD = min over (ii, jj) of SAD_Q(ii, jj)
The formula takes a cross of two lines as an example; a cross of more lines is equally applicable. In the formula:
step9, step10, step11 and step12 are registration step lengths;
n is the number of registration steps taken around the current extreme feature point;
P(i, j) is an extreme feature point of the registration template;
Q(ii, jj) is the candidate registration point in the registered image corresponding to that extreme feature point.
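A sketch of the MSAD search matching the reconstruction above (the exact sampling pattern along the two registration cross lines is inferred from the step and n definitions, so treat it as illustrative rather than the patent's literal procedure):

    import numpy as np

    def msad_candidate(template, registered, p, search, steps, n):
        """For extreme feature point p = (i, j) of the registration template,
        traverse search = (ii0, ii1, jj0, jj1) in the registered image and
        return the point Q(ii, jj) minimizing the cross-line SAD."""
        s9, s10, s11, s12 = steps
        i, j = p
        f = template.astype(np.int64)
        g = registered.astype(np.int64)
        # sample offsets along both cross lines through the feature point
        offs = [(k * s9, k * s10) for k in range(-n, n + 1)]
        offs += [(k * s11, -k * s12) for k in range(-n, n + 1) if k != 0]
        best_sad, best_q = None, None
        for ii in range(search[0], search[1]):
            for jj in range(search[2], search[3]):
                sad = sum(abs(f[i + di, j + dj] - g[ii + di, jj + dj])
                          for di, dj in offs)
                if best_sad is None or sad < best_sad:
                    best_sad, best_q = sad, (ii, jj)
        return best_q, best_sad  # candidate registration point and its MSAD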
Wherein the obtaining of the rationality screening integral of the current candidate registration point pair comprises:
based on the current NcheckA registration point pair (P)k1(i,j),Qk1(ii, jj)), taking any set of registration point pairs (P)k2(i,j),Qk2(ii, jj)), the distances P are calculated respectivelyk1Pk2,Qk1Qk2Will | Pk1Pk2-Qk1Qk2L is used as the current integral value and is accumulated to the rationality screening integral S of the current registration point pairk1The method comprises the following steps:
S_k1 = Σ_{k2 = 1..K, k2 ≠ k1} | |P_k1 P_k2| − |Q_k1 Q_k2| |
The rationality screening integrals of the candidate registration point pairs are sorted from smallest to largest, and the pairs corresponding to the first N_check integrals are selected. If no feature point can be found in the registration area during image stitching, the data of the last accurate registration is used as the current registration coordinate data. The images are then stitched in the system coordinate system according to the relative positions between the sensors' projection images computed from the valid registration point pairs.
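The distance-difference integral itself is compact enough to show directly (a sketch; the pair representation is our choice):

    import math

    def rationality_integrals(pairs):
        """pairs: list of candidate registration point pairs ((Pi, Pj), (Qii, Qjj)).
        Returns S_k1 for every pair: the accumulated |d(P_k1,P_k2) - d(Q_k1,Q_k2)|
        over all other pairs. Geometrically consistent pairs score low."""
        def d(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])
        return [sum(abs(d(p1, p2) - d(q1, q2))
                    for k2, (p2, q2) in enumerate(pairs) if k2 != k1)
                for k1, (p1, q1) in enumerate(pairs)]

    # Keep the N_check most consistent pairs:
    # scores = rationality_integrals(pairs)
    # kept = sorted(range(len(pairs)), key=scores.__getitem__)[:N_check]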
The hue, brightness and saturation of adjacent stitched images are adjusted to achieve a smooth transition between them, giving the fused image. An inner quadrilateral of the fused image is taken and everything outside it is cropped away; the resulting image output is called the full-resolution image.
Mapping and generation of the compound eye imaging unit image means finding the relation between the pixel values of the large-resolution image output by the compound eye imaging unit and the pixel values of each sensor's array image, expressed as:
R(i, j) = Σ_P k_P · f_P(x_P, y_P)
wherein:
R(i, j) is the pixel value at coordinate (i, j) of the high-resolution image;
P indexes the sensor images that affect the pixel value at (i, j);
(x_P, y_P) is the corresponding coordinate within sensor image P;
f_P(x_P, y_P) is the pixel value of sensor image P at that coordinate;
k_P is a weighting factor.
When the actual object distance deviates to some degree from the designed object distance, the offset this causes in the relative positions of adjacent projection images is determined, and the pixel mapping stored in the pixel mapping module is corrected accordingly to obtain an updated pixel mapping relation.
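In code, the data-mapping mode reduces to a table lookup per output pixel. The sketch below assumes a particular table layout, (sensor index, x, y, weight) entries per panorama coordinate, which the text does not spell out:

    def render_mapped_pixel(i, j, mapping, sensor_images):
        """Evaluate R(i, j) = sum_P k_P * f_P(x_P, y_P) from a precomputed
        pixel-mapping table. mapping[(i, j)] holds (P, x_P, y_P, k_P) entries,
        one per sensor image contributing to panorama pixel (i, j)."""
        return sum(k * float(sensor_images[p][y, x])
                   for p, x, y, k in mapping[(i, j)])

Once such a table is built, and corrected for object-distance changes as described above, producing an output frame needs no per-frame registration, which is the practical advantage of the data-mapping mode over the whole-process computation mode.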
< close-up image Generation Algorithm >
When a close-up image is generated, a close-up image center pixel coordinate point f(i, j) in the pixel coordinate system of the compressed-resolution panoramic image, the close-up image magnification α, and the output resolution A × B of the full-resolution panoramic image must first be determined. The position of the close-up center pixel coordinate point can be obtained in three ways: preset according to the panoramic image; specified in real time by an operator while browsing the image; or given by the intelligent recognition unit. The close-up magnification α may likewise be predetermined or specified by the operator in real time while viewing the image.
The close-up center point f(i, j) is converted into the corresponding close-up image center pixel coordinate point position f(i', j') in the full-resolution panoramic image coordinate system; the pixel range of the full-resolution close-up image is then determined from the magnification α and the full-resolution panoramic output resolution A × B: starting from f(i', j'), the upper, lower, left and right boundary pixel numbers of the full-resolution close-up are j' − A/2α, j' + A/2α, i' − B/2α and i' + B/2α respectively.
And determining all narrow-field long-focus imaging subunits involved in the range according to the pixel range of the full-resolution close-up image, and performing compound eye image reconstruction on images formed by all the narrow-field long-focus imaging subunits to obtain the full-resolution image of the close-up image.
The predetermined target pixel point of the invention may instead be the pixel coordinate point g(i, j) of the top-left corner of the close-up image in the compressed panoramic image coordinate system; from that coordinate point the remaining boundary pixel numbers of the full-resolution close-up are i' + A/α and j' + B/α respectively. Similarly, the predetermined target pixel point may be the pixel coordinate of the bottom-right, bottom-left or top-right corner.
Let the compressed image output resolution, i.e. the resolution at which the full-resolution image formed by the image reconstruction unit is compressed for output to the system's monitoring display device, be a × b, for example 4K, 1080P or 720P. The close-up center point f(i, j) cannot be selected beyond the range defined by a × b; that is, in the full-resolution close-up center position f(i', j'), i' must be at least a/2 and j' at least b/2.
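Putting the boundary formulas and the a × b constraint together (a sketch: the compressed-to-full scale factors sx and sy, and the literal i/j-to-axis assignment, mirror the text's notation rather than a verified implementation):

    def closeup_pixel_range(i_c, j_c, alpha, A, B, a, b, sx, sy):
        """From a close-up center f(i, j) picked in the compressed panorama,
        return the full-resolution close-up boundaries (upper, lower, left,
        right) per the formulas above."""
        # convert to full resolution, enforcing i' >= a/2 and j' >= b/2
        i_f = max(round(i_c * sx), a // 2)
        j_f = max(round(j_c * sy), b // 2)
        half_a = int(A / (2 * alpha))
        half_b = int(B / (2 * alpha))
        return (j_f - half_a,   # upper:  j' - A/2α
                j_f + half_a,   # lower:  j' + A/2α
                i_f - half_b,   # left:   i' - B/2α
                i_f + half_b)   # right:  i' + B/2α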
< Artificial Intelligence analysis Algorithm and image data structuring Algorithm >
The invention comprehensively and intelligently analyzes the images/videos acquired by the compound eye imaging unit together with the various information sensed by the Internet of Things, using existing technologies such as big data analysis, machine learning and pattern recognition to track and analyze the hot targets in the images/videos and obtain analysis results, for example license plate numbers, vehicle types, head counts, vehicle behavior, personnel behavior, face recognition data and alarms, object motion data and range violations, and temperature data and over-temperature in a scene. At the same time, video content information processing is used: through space-time segmentation, feature extraction and object recognition, supported by big data, deep learning and artificial intelligence together with manual intervention and association with Internet of Things data, the video content is organized by semantic relations into structured data that both computers and humans can understand. Unstructured video data is thus extracted to the greatest extent, useful information is converted into structured or semi-structured form understandable by humans and machines, and standard, convenient information data is provided to other professional fields through an external interface, turning video data toward informatization and intelligence and achieving the goal of perceiving the world through video.
Additional features and advantages of the invention will be set forth in the detailed description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a system configuration diagram according to a first embodiment of the present invention;
FIG. 1a is a flow chart of a method according to a first embodiment of the present invention;
FIG. 2 is a system configuration diagram according to a second embodiment of the present invention;
FIG. 2a is a flow chart of a method according to a second embodiment of the present invention;
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings. It should be further noted that, each unit module described in the illustrated embodiments of the present invention may be a specific device module, or may be a program module that is stored in a storage medium and is convenient for a processor to execute, and can achieve a corresponding technical effect.
First embodiment
Fig. 1 is a system configuration diagram according to a first embodiment of the present invention. Fig. 1a is a flowchart of a panoramic image and close-up image output method according to a first embodiment of the present invention. The method is described below with reference to fig. 1 and 1 a.
The intelligent monitoring system of this embodiment monitors a scene for the outbreak of fire. The compound eye imaging unit 1 adopts an array of 9 × 9 narrow-field tele imaging subunits 10; each subunit has a focal length of 75 mm, a field angle of 25°, and outputs images at 1080P. The overall bandwidth of the compound eye imaging unit is 36 Mbps. The 81 channels of 1080p main and sub code streams captured by the 9 × 9 subunits 10 are input into the compound eye image reconstruction unit 20 and, through storage, decoding, coordinate transformation, scale transformation, stitching, fusion, cropping, encoding and output, yield a panoramic image at full resolution, 16200 × 7500. The full-resolution panoramic image is input into the image conversion and compression unit 23, which outputs a compressed-resolution panoramic image in a predetermined format; in this embodiment the full-resolution panorama is converted into a 1080P@25fps/30fps image.
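As a rough consistency sketch (the uniform per-seam overlap below is our assumption, not a figure from the embodiment), the 16200 × 7500 panorama follows from a 9 × 9 grid of 1920 × 1080 subunit images once the overlap cropped at each seam is accounted for:

    cols = rows = 9
    sub_w, sub_h = 1920, 1080      # each subunit outputs 1080p
    pano_w, pano_h = 16200, 7500   # full-resolution panorama above
    seam_x = (cols * sub_w - pano_w) / (cols - 1)  # pixels lost per vertical seam
    seam_y = (rows * sub_h - pano_h) / (rows - 1)  # pixels lost per horizontal seam
    print(seam_x, seam_y)          # 135.0 277.5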
The compound eye imaging unit 1 shoots continuously; the panoramic video stream processed by the compound eye image processing unit 2 is output to the video analysis AI module 41 of the data analysis unit 4, which identifies red scenery moving in the scene. The control instruction module 43 of the data analysis unit 4 then generates an image acquisition instruction and corresponding imaging parameters and transmits them to the compound eye image processing unit 2.
In this embodiment the close-up center pixel coordinate point is the image center of the selected red scene, and the close-up magnification is a preset value. Resolution conversion is performed from the close-up center point f(i_1, j_1) to obtain the corresponding close-up center point f(i'_1, j'_1) at full resolution; from f(i'_1, j'_1) and the image magnification α the viewing range of the close-up image is obtained, e.g. all pixels in the rectangle with top-left corner (i'_1 − 960, j'_1 − 540) and bottom-right corner (i'_1 + 960, j'_1 + 540), and the narrow-field tele imaging subunits 10 corresponding to this pixel range are:
{ L_{x,y} : the pixel region of subunit L_{x,y} intersects the rectangle from (i'_1 − 960, j'_1 − 540) to (i'_1 + 960, j'_1 + 540) }
where L_{x,y} is the subunit 10 in row x, column y of the compound eye imaging unit 1. The images formed by all subunits 10 involved in the above set are input into the compound eye image processing unit 2 for stitching and cropping, giving a compressed-resolution close-up image in a predetermined format; in this embodiment the full-resolution close-up is converted into a 1080P@25fps/30fps image. Several close-up views may be output simultaneously, and their number may be predetermined.
The intelligent monitoring system can now output the panoramic image and the close-up image simultaneously, both at 1080P@25fps/30fps high definition, with a total input data bandwidth of 36 Mbps and an output bandwidth of 8 Mbps. Unlike other close-up acquisition methods in the prior art, this embodiment directly stitches the images of the subunits involved in the close-up within the compound eye imaging unit, which markedly improves image processing efficiency.
The compound eye image processing unit 2 extracts a close-up image video data stream according to the coordinates of the red scene and transmits it again to the video analysis AI module 41 of the data analysis unit 4, which analyzes it. If image recognition indicates a suspected fire, the control instruction module 43 generates an Internet of Things sensor start signal. Temperature readings from the radiation temperature sensor at the red moving object's position reach the Internet of Things intelligent analysis module 42 via the Internet of Things sensor data 33 of the data input unit 3. Module 42 performs comprehensive logic judgment on the red alarm information from the close-up stream and the temperature information from the Internet of Things temperature sensor, and issues a fire judgment signal for the red scene. The control instruction module 43 then directs the data output unit 5 to output the panoramic video stream, the close-up video stream and the fire judgment signal to the AI real-time analysis and event response system 6.
Second embodiment
Fig. 2 is a system configuration diagram according to a second embodiment of the present invention. Fig. 2a is a flowchart of a panoramic image and close-up image output method according to the second embodiment of the present invention. The method is described below with reference to figs. 2 and 2a.
The second embodiment of the invention is applied to vehicle monitoring. As in the first embodiment, the compound eye imaging unit 1 adopts an array of 9 × 9 narrow-field tele imaging subunits 10, each with a focal length of 75 mm, a field angle of 25° and 1080P output. The overall bandwidth of the compound eye imaging unit is 36 Mbps. The 81 channels of 1080p main and sub code streams captured by the 9 × 9 subunits 10 are input into the compound eye image reconstruction unit 20 and, through storage, decoding, coordinate transformation, scale transformation, stitching, fusion, cropping, encoding and output, yield a panoramic image at full resolution, 16200 × 7500. The full-resolution panorama is input into the image conversion and compression unit 23, which outputs a compressed-resolution panoramic image in a predetermined format; in this embodiment it is converted into a 1080P@25fps/30fps image.
First, the Internet of Things sensor data 33 in the data input unit 3 transmits the vehicle passing signal detected by the ground induction coil to the data analysis unit 4, and the control instruction module 43 directs the compound eye imaging unit 1 to start video capture.
The control instruction module 43 directs the compound eye image processing unit 2 to extract the compound-eye close-up image video data stream according to the lane in which the vehicle passing signal occurred. The module 43 sets the identified vehicle image center coordinate point as the close-up center pixel coordinate point f(i_id, j_id). Resolution conversion is performed from f(i_id, j_id) to obtain the corresponding close-up center point f(i'_id, j'_id) at full resolution; from f(i'_id, j'_id) and the image magnification α, the narrow-field tele imaging subunits 10 covering the close-up image are obtained as:
{ L_{x,y} : the pixel region of subunit L_{x,y} intersects the close-up pixel range centered at f(i'_id, j'_id) }
where L_{x,y} is the subunit 10 in row x, column y of the compound eye imaging unit 1. The images formed by all subunits 10 involved in the above set are input into the compound eye image processing unit 2 for stitching and cropping, giving a compressed-resolution close-up image in a predetermined format; in this embodiment the full-resolution close-up is converted into a 1080P@25fps/30fps image.
The close-up video is transmitted to the video analysis AI module 41, which analyzes it to obtain structured data including the license plate number, vehicle logo, color, vehicle type, direction of travel, and whether the vehicle ran a red light, and passes the video and structured data to the data output unit 5. The data transmission unit 55 transmits the panoramic video stream, close-up video stream and structured data to the AI real-time analysis and event response system 6. If system 6 finds the same license plate appearing at both location A and location B, it judges that a suspected counterfeit-plate vehicle has appeared. According to the structured data of the close-up streams at A and B plus the vehicle's direction of travel, it then sends an instruction to the upper-level AI analysis data 34 of the data input unit 3 of the next compound eye intelligent monitoring system downstream of A and B, instructing that system to confirm the suspected counterfeit-plate vehicle. Confirmation is obtained there by the same procedure and output to the AI real-time analysis and event response system 6, which transmits the confirmed information and related data (such as violation information, vehicle type, license plate number and body color) to the next one or more traffic inspection stations after A and B, where the suspected counterfeit-plate vehicle is inspected manually.
The above are some embodiments based on the inventive concept. Those skilled in the art will readily understand that other embodiments based on the inventive concept, including but not limited to combinations of the listed embodiments, also fall within the protection scope of the present invention.

Claims (10)

1. An intelligent compound eye monitoring system capable of outputting multiple video streams simultaneously, the intelligent compound eye monitoring system comprising: a compound eye imaging unit 1, a compound eye image processing unit 2, a data input unit 3, a data analysis unit 4 and a data output unit 5, the compound eye imaging unit 1 being formed by arranging two or more imaging subunits according to a certain rule, characterized in that:
the compound eye image processing unit 2 is used for reconstructing a compound eye image to generate a panoramic image and a close-up image;
the close-up image acquisition process comprises the following steps: determining a close-up image center pixel coordinate point f(i, j) and a close-up image magnification α in the pixel coordinate system of the compressed-resolution panoramic image, and the output resolution A × B of the full-resolution panoramic image;
converting the close-up image center pixel coordinate point f(i, j) into the corresponding close-up image center pixel coordinate point position f(i', j') in the full-resolution panoramic image coordinate system, and then determining the pixel range of the full-resolution close-up image according to the close-up image magnification α and the full-resolution panoramic image output resolution A × B, wherein the pixel range of the full-resolution close-up image is given by the pixel numbers of the four boundaries, upper, lower, left and right, of the full-resolution close-up image starting from the position f(i', j'), namely j' − A/2α, j' + A/2α, i' − B/2α and i' + B/2α respectively;
determining all narrow-field long-focus imaging subunits related in the range according to the pixel range of the full-resolution close-up image, and carrying out compound eye image reconstruction on images formed by all the narrow-field long-focus imaging subunits to obtain the full-resolution image of the close-up image;
the data input unit 3 is used for acquiring stored data, artificial data, sensor data of the internet of things and superior AI analysis data;
the data analysis unit 4 combines the data acquired by the data input unit 3 with the panoramic image and the close-up image generated by the compound eye image processing unit 2 for analysis, and generates corresponding control information;
the data output unit 5 is used for outputting the panoramic image, the close-up image, the control information, and the intelligent analysis result.
2. The intelligent compound eye monitoring system according to claim 1, wherein the compound eye image processing unit 2 is configured to implement the following instructions:
reconstructing a compound eye image, and carrying out image splicing on the images acquired by the imaging subunit so as to acquire a full-resolution image;
acquiring close-up image control parameters, and determining a target pixel point of the close-up image and the close-up image magnification factor;
generating a close-up image, determining an imaging subunit related to the close-up image by using the close-up image control parameters, and reconstructing a compound eye image according to the images formed by the imaging subunits so as to obtain the close-up image under the full resolution;
image conversion compression, which converts and compresses the full-resolution image to output a target-resolution image; the target resolution image includes a panoramic image and a close-up image.
3. The intelligent compound eye monitoring system according to claim 1, wherein the storage data acquisition by the data input unit 3 refers to acquiring data information from a storage medium or a database, including a close-up image generation/switching plan and a close-up image name;
the manual data acquisition refers to manually inputting a control instruction to the system, and comprises changing the center position of the close-up image, forcibly triggering an alarm signal, and manually calibrating the name and the coordinate information of the area;
the data acquisition of the sensor of the Internet of things refers to the acquisition of various types of information including environment temperature information or GPS information through the sensor;
the acquisition of the upper-level AI analysis data refers to the acquisition of effective data information in a manner of assisting system cascade, wherein the effective data information comprises the information of identifying the color of a vehicle, the number of a license plate or the face characteristics.
4. The intelligent compound eye monitoring system according to claim 1, wherein the data output unit 5 performs data output and linkage control, including implementing the following instructions:
outputting a panoramic image video stream, and outputting panoramic images and global data information in a monitoring range according to different display resolutions, display modes and the like;
outputting the close-up image video stream, and outputting the close-up image video stream and the related information thereof according to the application requirement;
the control signal is output, and a driving control signal is output to drive related devices such as a switch, a loudspeaker, illumination, a flying striking device and the like to complete linkage control;
and (4) outputting the AI data of the lower level, and outputting the data analysis result of the system/device to other intelligent analysis systems in a corresponding data format, such as transmitting the close-up image flow and the attached geographic coordinate information of a certain area to a crowd density analysis system for passenger flow statistics and the like.
5. The intelligent compound eye monitoring system according to claim 1, wherein the compound eye image processing unit is configured to preset a target pixel point or a series of target pixel points of the panoramic image and the close-up image during image reconstruction.
6. The intelligent compound eye monitoring system according to claim 1, wherein, when the compound eye image processing unit performs image reconstruction, the target pixel points of the panoramic image and the close-up image are dynamically updated and designated through real-time control by the basic input unit or through comprehensive analysis combined with a plan, and image-related information and control signals are output.
7. The intelligent compound eye monitoring system according to claim 1, wherein, when the compound eye image processing unit performs image reconstruction, the target pixel points of the panoramic image and the close-up image are dynamically updated and designated through real-time control by the AI input unit combined with comprehensive plan analysis, and image-related information, control signals, and structured data are output and transmitted to the next-level AI (a precedence sketch follows claim 7).
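Claims 6 and 7 leave the arbitration between real-time control and the plan unspecified; the sketch below assumes one plausible precedence (manual input over AI input over plan), purely for illustration:

    def resolve_target_point(plan_point, manual_point=None, ai_point=None):
        # Assumed precedence: real-time manual control overrides the AI input
        # unit, which overrides the pre-set plan. The patent only states that
        # the inputs are comprehensively analysed; this ordering is an assumption.
        if manual_point is not None:
            return manual_point
        if ai_point is not None:
            return ai_point
        return plan_point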
8. An intelligent compound eye monitoring method capable of realizing intelligent control over a monitored area, characterized in that the intelligent compound eye monitoring method comprises the following instructions:
generating a panoramic image, namely performing image stitching on the images acquired by the compound eye imaging subunits to obtain the panoramic image;
generating a close-up image, namely generating a close-up image of the target area according to intelligent recognition analysis of the panoramic image or Internet-of-Things sensing signals;
the close-up image acquisition process comprises the following steps: determining the close-up image center pixel coordinate point f(i, j) and the close-up image magnification factor α in the pixel coordinate system of the compressed-resolution panoramic image, the full-resolution panoramic image being output at resolution A×B;
converting the close-up image center pixel coordinate point f(i, j) into the corresponding close-up image center pixel coordinate point f(i', j') in the full-resolution panoramic image coordinate system, and then determining the pixel range of the full-resolution close-up image from the close-up image magnification factor α and the full-resolution panoramic output resolution A×B; measured from the center point f(i', j'), the four boundaries of the full-resolution close-up image lie at the pixel numbers j' - A/(2α) and j' + A/(2α) (left and right boundaries) and i' - B/(2α) and i' + B/(2α) (upper and lower boundaries);
determining all the narrow-field, long-focus imaging subunits involved in that pixel range, and performing compound eye image reconstruction on the images formed by those subunits to obtain the full-resolution close-up image;
generating control information, namely generating driving control signals for controlling related devices according to analysis of the panoramic image, the close-up image, and the Internet-of-Things sensing signals;
outputting the data, namely outputting the panoramic image, the close-up image, the control information, and the intelligent recognition analysis result to an external device (a worked numerical example follows the claim).
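The boundary formulas of claim 8 can be checked numerically. The sketch below assumes the compressed panorama has resolution a×b (the claim implies this scaling step but does not name the compressed resolution) and uses illustrative figures:

    def close_up_pixel_range(i, j, alpha, A, B, a, b):
        # (i, j) : close-up centre (row, column) in the compressed a-wide, b-high panorama
        # alpha  : close-up magnification factor
        # A, B   : full-resolution panorama width and height
        j_full = j * A / a              # column scales with the width ratio
        i_full = i * B / b              # row scales with the height ratio
        left, right = j_full - A / (2 * alpha), j_full + A / (2 * alpha)
        upper, lower = i_full - B / (2 * alpha), i_full + B / (2 * alpha)
        return left, right, upper, lower

    # Worked example: a 1920x1080 compressed view of a 19200x10800 panorama,
    # centre f(540, 960), magnification alpha = 4.
    # f(i', j') = (5400, 9600); A/(2*alpha) = 2400, B/(2*alpha) = 1350.
    print(close_up_pixel_range(540, 960, 4, 19200, 10800, 1920, 1080))
    # -> (7200.0, 12000.0, 4050.0, 6750.0)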
9. The intelligent compound eye monitoring method according to claim 8, wherein the target pixel points of the panoramic image and the close-up image are dynamically updated and designated through real-time control by the basic input unit or through comprehensive analysis combined with a plan, and image-related information and control signals are output.
10. The intelligent compound eye monitoring method according to claim 8, wherein the target pixel points of the panoramic image and the close-up image are dynamically updated and designated through real-time control by the AI input unit combined with comprehensive plan analysis, and image-related information, control signals, and structured data are output and transmitted to the next-level AI.
CN201710927624.1A 2017-10-09 2017-10-09 Intelligent compound eye monitoring system Active CN109636763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710927624.1A CN109636763B (en) 2017-10-09 2017-10-09 Intelligent compound eye monitoring system

Publications (2)

Publication Number Publication Date
CN109636763A CN109636763A (en) 2019-04-16
CN109636763B true CN109636763B (en) 2022-04-01

Family

ID=66050928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710927624.1A Active CN109636763B (en) 2017-10-09 2017-10-09 Intelligent compound eye monitoring system

Country Status (1)

Country Link
CN (1) CN109636763B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160105A (en) * 2019-12-03 2020-05-15 北京文香信息技术有限公司 Video image monitoring method, device, equipment and storage medium
CN112911366B (en) * 2019-12-03 2023-10-27 海信视像科技股份有限公司 Saturation adjustment method and device and display equipment
CN113114923B (en) * 2020-01-10 2022-11-25 三赢科技(深圳)有限公司 Panoramic camera
WO2023280273A1 (en) * 2021-07-08 2023-01-12 云丁网络技术(北京)有限公司 Control method and system
CN113971782B (en) * 2021-12-21 2022-04-19 云丁网络技术(北京)有限公司 Comprehensive monitoring information management method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2237939C (en) * 1998-06-29 1999-09-21 Steve Mann Personal imaging system with viewfinder and annotation means
CN101119482B (en) * 2007-09-28 2011-07-20 北京智安邦科技有限公司 Overall view monitoring method and apparatus
JP5979458B1 (en) * 2015-11-06 2016-08-24 パナソニックIpマネジメント株式会社 Unmanned air vehicle detection system and unmanned air vehicle detection method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1725266A * 2004-07-21 2006-01-25 上海高德威智能交通系统有限公司 Vehicle intelligent monitoring recording system and method based on video trigger and speed measuring
CN101692310A (en) * 2009-09-23 2010-04-07 德瑞视(北京)科技发展有限公司 Comprehensive video monitoring system of intelligent traffic
CN102821238A (en) * 2012-03-19 2012-12-12 北京泰邦天地科技有限公司 Wide-field ultra-high-resolution imaging system
CN204117361U * 2014-08-07 2015-01-21 南昌市数朗科技发展有限公司 Intelligent parking violation monitoring system based on 360-degree panoramic cameras
CN104539896A (en) * 2014-12-25 2015-04-22 桂林远望智能通信科技有限公司 Intelligent panoramic monitoring and hotspot close-up monitoring system and method
CN204948235U (en) * 2015-08-07 2016-01-06 富盛科技股份有限公司 A kind of panorama close-up image linkage positioning device based on dynamically splicing large scene
CN106534789A (en) * 2016-11-22 2017-03-22 深圳全景威视科技有限公司 Integrated intelligent security and protection video monitoring system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100097 10C, block C, Jinyuan times business center, No.2, lantianchang East Road, Haidian District, Beijing

Applicant after: Xiaoyuan perception (Beijing) Technology Co.,Ltd.

Address before: 100097 10C, block C, Jinyuan times business center, No.2, lantianchang East Road, Haidian District, Beijing

Applicant before: TYPONTEQ Co.,Ltd.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230329

Address after: Room 1319, Science and Technology Building, Building 683, No. 5, Zhongguancun South Street, Haidian District, Beijing

Patentee after: Xiaoyuan Perception (Wuxi) Technology Co.,Ltd.

Address before: 100097 10C, block C, Jinyuan times business center, No.2, lantianchang East Road, Haidian District, Beijing

Patentee before: Xiaoyuan perception (Beijing) Technology Co.,Ltd.